Keywords: Biogas, Hydrogen sulfide, Reactive absorption, Adsorption on activated carbon, Aspen Plus®, Hydrodynamics, Low temperatures, BioGNVAL pilot plant, Vehicle fuel …
General introduction
Recent decades have brought economic growth and prosperity for humanity. This growth, fueled by oil and natural gas production, has been accompanied by environmental pollution that may trigger irreversible changes in the environment, with catastrophic consequences for humans. Moreover, issues related to the depletion of fossil reserves remain relevant, and global primary energy demand keeps increasing, pushing the international community to pursue the development of renewable energies.
The law on energy transition plans to reduce the share of fossil fuel consumption in France to 30 % by 2030, while the share of renewable energy should rise to 23 % in 2020 and 32 % in 2030, up from 13.1 % in 2012, as shown in Fig. 1 [1].
Fig. 1: Distribution of final energy consumption in France [1]
Renewable energy is derived directly from natural phenomena. It takes many forms: sunlight, wind, wood heat, water power … The main renewable energies are wind power, wave power, geothermal energy, solar energy, biomass, firewood and hydropower. According to the French Ministry of Energy [1], primary production of renewable energy reached 22.4 million tonnes of oil equivalent (Mtoe) in 2012. The distribution of renewable energy production by sector is depicted in Fig. 2.
Among renewable energies, biogas stands out: it is storable, transportable, non-intermittent and substitutable for fossil energies. These strengths justify consolidating this emerging sector and preparing its future development through ambitious public policies. European primary energy production from biogas reached 5901 kilotonnes of oil equivalent (ktoe) in 2007, 7500 ktoe in 2008, 10086 ktoe in 2011 and 13379 ktoe in 2013.
Fig. 2: Distribution of renewable energy production by sector [1]
Biogas production is at the heart of the policy priorities for 2020: the European Union aims for a production that would cover 50 % of transport sector needs. Biogas is a growing sector, and Fig. 3 provides a quick overview of biogas production in Europe in 2013 [Eurobserv]. This gas, composed mainly of methane and carbon dioxide, also contains other compounds such as water, ammonia, volatile organic compounds and hydrogen sulfide.
Biogas can be valorized in several applications such as production of heat and/or electricity, feed for fuel cells, injection into the natural gas grid and production of liquefied biomethane. This last application presents environmental and economic benefits. Along with hydrogen, liquefied biomethane is one of the best fuels for reducing carbon dioxide emissions, with a reduction potential of up to 97 % compared to Diesel. It can effectively reduce emissions of greenhouse gases and the pollution responsible for 42000 premature deaths annually in France [Benkimoun].

Fig. 3: Biogas primary production in 2013 [Eurobserv]

Liquefied biomethane requires very low temperatures, which may lead to solidification of impurities and thus to facility malfunctions. These impurities must be separated from the biogas, which implies implementing a purification process that removes all unwanted substances from the raw biogas in order to maximize its methane content. In particular, the complete desulfurization of biogas is imperative in order to ensure optimal operation and a high purity of the other compounds to be valorized, such as carbon dioxide. Moreover, the hydrogen sulfide present in the wet biogas is a poison for installations. In case of leakage, hydrogen sulfide, characterized by a rotten egg smell, can be dangerous for operators working on the site. Hence the need to remove all traces of this compound. Several technologies are used for H2S removal, such as adsorption on microporous solids, membrane technology, biological processes and absorption by means of liquid solvents.
The choice of the technique to implement depends on various parameters, the most important being the biogas flow rate and the hydrogen sulfide concentrations to be treated. The solution must also meet economic, environmental and energy constraints: the cost must remain reasonable and the discharge thresholds must be respected in an energy-efficient process. There is therefore no universal treatment method.
In this thesis, the choice fell on a cryogenic method in order to combine biogas upgrading and biomethane liquefaction. The removal of H2S will be performed upstream of the process, either by reactive absorption using an aqueous solution of sodium hydroxide (NaOH) in a structured packed column, or by adsorption on activated carbon. These two technologies will be tested and compared in order to choose the most effective and the most suitable one for the process.
Experiments were performed on an industrial pilot plant developed by the Cryo Pur® company, called "BioGNVAL". This pilot plant treats 85 Nm3/h of biogas from a wastewater treatment plant, containing around 20 - 100 ppm of H2S.
For the absorption technology, the hydrodynamics of flows in structured packing columns was studied in order to develop a model able to realistically predict the key hydrodynamic parameters, such as pressure drop, liquid holdup and transition points, but also the effective interfacial area and the global mass transfer coefficients.
The remainder of the study is based on simulations using Aspen Plus® V8.0 to realistically assess the effectiveness of a structured packed column using sodium hydroxide as a chemical solvent for the selective removal of hydrogen sulfide from biogas. Results were compared with data from the BioGNVAL pilot plant.
Finally, the dynamic aspect of the adsorption phenomenon is modeled by predicting the breakthrough curve of an adsorption column used for H2S removal.
Chapter 1: From biogas to biomethane
Summary:

After a general introduction setting the context of biogas production in the global energy landscape, this first chapter introduces the problem of obtaining biogas and of using it in refined form as biomethane.

Biogas is produced by anaerobic digestion, when organic matter is decomposed under well-defined conditions and in the absence of oxygen. It consists mainly of methane and carbon dioxide, but also of variable amounts of water vapor, hydrogen sulfide and other polluting compounds. For biogas to be used as a vehicle fuel, it must be purified by separating the carbon dioxide and the other contaminating compounds in order to maximize its methane content.

Various technologies are used in the field of gas treatment. The most widely used are absorption, adsorption, membrane technology, cryogenics, biological treatment and oxidation. Today, other technologies are being developed, such as biocatalysis, photocatalysis and cold plasma. For the moment, the lack of information about them limits their industrial application [Biard].
Introduction
Anaerobic digestion may be defined as the natural process of degradation of organic matter by microorganisms under controlled conditions and in the absence of oxygen. This degradation results in the production of a gas mixture, saturated with water at the digester outlet, called biogas.
Anaerobic digestion is the result of four biochemical steps in which large carbon chains are converted into fatty acids and alcohols. These four steps are hydrolysis, acidogenesis, acetogenesis and methanogenesis [De La Farge].
Hydrolysis
It takes place at the beginning of the fermentation and makes use of exo-enzymes in order to decompose the organic matter into simple substances.
Acidogenesis
During this step, volatile fatty acids are formed. Carbon dioxide and hydrogen are also formed; they are consumed by microorganisms during methane production, according to the chemical Reaction (R.1) shown below. The reaction enthalpy is about -567 kJ.mol-1 [Zdanevitch].
CO2 + 4 H2 → CH4 + 2 H2O (R.1)
Acetogenesis
This stage involves the production of acetate, an indispensable substrate for the synthesis of methane.
Methanogenesis
This last step results in the production of methane. It is carried out by methanogenic bacteria, which can only use a limited number of carbon compounds, including acetate, responsible for 70 % of methane production according to the chemical Reaction (R.2) shown below. The reaction enthalpy is about -130 kJ.mol-1 [Zdanevitch].
CH3COOH + H2O → CH4 + H2CO3 (R.2)
The biogas then needs to be purified and upgraded, which means that impurities are removed or valorized in order to produce a gas rich in methane called biomethane. A number of technologies are available for this purpose, such as water scrubbing, membranes and pressure swing adsorption.
Biogas utilization
Biogas can be used in several applications. It can sometimes be used raw, but it almost always has to be upgraded or, at a minimum, cleaned of its H2S content, because the presence of this compound, even at very low concentrations, can damage installations.
Biogas can be used in all applications designed for natural gas, such as production of heat and electricity in combined heat and power (CHP) plants, production of chemicals and/or proteins, and fueling of internal combustion engines and fuel cells. It may also be used as a vehicle fuel or injected into natural gas grids.
Direct combustion
The simplest method of biogas utilization is direct combustion. Biogas burners can be installed in heaters for production of hot water and hot air, in dryers for various materials, and in boilers producing steam for process heat or power generation [Walsh]. Another, more limited, application involves absorption heating and cooling to provide chilled water for refrigeration and hot water for industrial processes. These applications based on direct combustion do not require a high gas quality.
Biogas can also be used to fuel internal combustion engines supplying electric power for pumps, blowers, elevators and conveyors, heat pumps and air conditioners [Walsh].
Combined heat and power
Another application for biogas is CHP, which involves the production of two or more forms of energy, generally electricity and thermal energy. This is currently the most common use, but it can be problematic because heat demand varies with the season; during summer, for example, unused biogas is flared.
Injection into the natural gas grid
Upgrading biogas to biomethane with a gas quality similar to natural gas, and injecting it into the natural gas grid, is an efficient way to integrate biogas into the energy sector. It allows the transport of large volumes of biomethane and its utilization in wide areas where population is concentrated.
Vehicle fuel
Today, there is great interest in using biogas as a vehicle fuel. For this use, the raw biogas must be purified, meaning that contaminants are removed, and upgraded, meaning that CO2 is eliminated, which raises the energy content of the gas. Finally, the biomethane needs to be liquefied by chilling it to very low temperatures (≈ -161 °C at atmospheric pressure). In order to prevent corrosion and solid formation, some requirements on component concentrations must be met before liquefaction; they are presented in Table 1. The H2S content must be lower than 4 ppm. This limit is not directly related to corrosion or solidification: it ensures a high quality of biomethane and avoids odor problems caused by hydrogen sulfide.
Table 1. Concentration requirements before biogas liquefaction [8]

Compound | Maximum concentration
CO2 | 25 ppm
H2S | 4 ppm
H2O | 1 ppm

Fig. 5 shows an example of biogas use after purification and upgrading with the Cryo Pur® system developed by the Cryo Pur® Company.
Fig. 5: Example of biogas utilization by Cryo Pur® Company
Biogas composition
The composition of biogas depends mainly on the type of substrate. Substrates are commonly classified by origin:
• Household and industrial waste.
• Sludge from sewage water treatment plants.
• Agricultural and agro-industrial waste.
Household and industrial waste
This type of waste is treated by burial in landfill sites, where the anaerobic conditions are sufficient to induce methanogenesis. Biogas production from this type of waste is characterized by variations in flow rate and composition due to the variation of the feedstock. Production also depends on the progress of waste degradation, moisture and temperature. These parameters are not uniform over the entire degradation zone, leading to variations in the composition of biogas produced from the same landfill. This variability also affects the methane concentration, which varies by around 15 % over a year [Rasi]. Two factors explain this: the high biological activity in summer and the rise in temperature. These conditions complicate the valorization of this type of biogas.
Household waste sorting improves the yield of biogas production. This process, by which waste is separated into different fractions, achieves higher yields compared to unsorted waste; the difference may reach 100 m3/t [Görisch]. Household and industrial wastes are not all fermentable: the fermentable share represents at most 45 % [Deublein]. Table 2 shows the principal compounds present in household waste.
Sludge from sewage water treatment plants
Biological treatment of urban wastewater is a widespread process that generates significant amounts of activated sludge. To stabilize this sludge, anaerobic degradation is used; it provides a solution to sludge storage and treatment. A small part of the produced biogas covers 100 % of the energy needs of the wastewater treatment plant. The by-products of anaerobic digestion, such as the digestate, can be valorized through fertilization and amendment of farmland.
Today, the injection into the natural gas grid of biomethane produced from wastewater treatment sludge is subject to strong demand from local authorities. According to the French Ministry of Ecology, Sustainable Development and Energy, by 2020 more than 60 wastewater treatment plants could be equipped with the facilities necessary for energy recovery from waste, allowing the injection of 500 GWh/year of biomethane into the national gas grid, equivalent to the annual consumption of more than 40000 households [1].
Unlike the treatment of household and industrial waste, anaerobic digestion of sludge from wastewater treatment plants is a controlled process conducted under optimal conditions for biogas generation.
Agricultural and agro-industrial waste
According to the European renewable energies Observatory [Eurobserv], France has about 60 million tons of organic material that can be valorized as biogas. These agricultural wastes are divided into two groups: liquid effluents and solid waste. For liquid effluents, the dry matter and dry organic matter contents influence biogas production, as shown in Table 3. Digesters used for biogas production from agricultural and agro-industrial wastes are optimized, regulated and stable systems. According to Boulinguiez [Boulinguiez], the composition of biogas from digesters varies with an amplitude of ± 5 % over the year. Table 3 shows that shortening (alimentary fat) is the substrate with the highest methanogenic potential.
Assuming complete reaction without formation of by-products, the Buswell Equation (1) shows that the theoretical yield of methane production may be estimated from the elemental composition of a substrate [Buswell].
$$C_c H_h O_o N_n S_s + y\,H_2O \rightarrow x\,CH_4 + (c - x)\,CO_2 + n\,NH_3 + s\,H_2S$$
$$x = \frac{1}{8}(4c + h - 2o - 3n - 2s), \qquad y = \frac{1}{4}(4c - h - 2o + 3n + 3s) \qquad (1)$$
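As a quick consistency check of Equation (1), consider glucose, $C_6H_{12}O_6$ ($c = 6$, $h = 12$, $o = 6$, $n = s = 0$):

$$x = \tfrac{1}{8}(24 + 12 - 12) = 3, \qquad y = \tfrac{1}{4}(24 - 12 - 12) = 0$$
$$C_6H_{12}O_6 \rightarrow 3\,CH_4 + 3\,CO_2$$

recovering the classical result that carbohydrates yield an equimolar CH4/CO2 biogas.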
The composition of biogas shown in Table 4 is defined according to the main types of substrates previously presented.
Environmental and economic issues
Biogas is a very interesting energy source from an ecological and economic point of view.
The environmental interest of reducing CO2 emissions aligns with the approaches adopted for the production and consumption of renewable energies. The environmental benefit of producing and valorizing biogas is easily demonstrated.
From an economic point of view, digesters allow all the types of substrates discussed in the previous section to be valorized at attractive cost. In addition, the digestate recovered at the outlet of the digester can be valorized as fertilizer.
Environmental issues
Anaerobic digestion is a natural phenomenon that releases methane, a greenhouse gas 23 times more potent than carbon dioxide [IPCC]. The simple conversion of CH4 into CO2 by combustion reduces to 8 % the initial greenhouse gas potential of biogas emitted directly into the atmosphere. The carbon dioxide footprint can be further improved when the biogas is purified and upgraded. Moreover, the fossil energy massively used today releases large amounts of CO2 and is not an inexhaustible resource. The valorization of biogas is therefore interesting for environmental protection, both saving fossil fuels and avoiding methane emissions into the atmosphere. A medium agricultural digester allows a reduction of 1000 tons of CO2 each year [1].
Economic issues
The economic feasibility of a biogas sector depends mainly on the composition of the biogas.
The methane concentration in biogas produced from household and industrial waste is low, as seen in Table 4: CH4 concentrations do not exceed 55 %. Moreover, the biogas produced can vary over time and contains large amounts of minor compounds, complicating its valorization. Biogas production from this type of waste is expected to be limited in the coming years in Europe, as restrictions and European standards become more stringent, making this source unprofitable [Deublein]. The most advanced biogas applications, such as vehicle fuel production or injection into the natural gas grid, are hardly possible from this source, which requires advanced purification treatments. For an anaerobic digestion unit treating 50000 t/year of household waste, total costs vary between 50 and 95 €/t, while incomes do not exceed 30 €/t [Boulinguiez].
The economic balance of biogas production within wastewater treatment plants depends on the plant size and on the savings made on sludge disposal. This stable and controlled anaerobic digestion process is financially viable. The wastewater treatment plant "Aquapole" in Grenoble, France, treats 88,000,000 m3 of wastewater each year, equivalent to 8000 dry tons of sludge processed per year. Part of the produced biogas is used for the internal energy needs of the plant (8 GWh/year). The other part (14 GWh/year) will be injected into the natural gas grid after purification and compression [Leclerc].
The establishment of a biogas purification process for agricultural waste is particularly attractive due to the reliability of the resource. The introduction of energy crops among the substrates improves the performance of anaerobic digestion, giving a new economic dimension to biogas production.
From biogas to liquid biomethane
Advanced purification of biogas achieves the quality threshold required for use as a vehicle fuel. The major step of this treatment is the separation of CO2, in addition to dehumidification, desulfurization, oxygen reduction and the removal of trace compounds. The qualities and impurity tolerances required for the production of liquid biomethane for vehicle fuel use are listed in Table 5.
Table 5. Tolerances in impurities for the use of liquid biomethane as vehicle fuel [Boulinguiez]

Compound | Unit | Liquid biomethane used as vehicle fuel
Methane (CH4) | wt. % | > 96
Carbon dioxide (CO2) | wt. % | < 3
Oxygen (O2) | wt. % | < 3
Water (H2O) | mg.m-3 | < 30
Hydrogen sulfide (H2S) | mg.m-3 | < 5
Total sulfur | mg.m-3 | < 120
Organosulfur compounds | mg.m-3 | < 15
Hydrocarbons | mg.m-3 | < 200
Critical size of particles | µm | < 1
Liquid biogas can be produced using a cryogenic upgrading technology, based on the differences in condensation temperature of the different compounds. It can also be produced by means of a conventional upgrading technology connected to a small-scale liquefaction plant. With the first method, the carbon dioxide is obtained as a by-product that can be used in external applications, bringing extra income to the biogas upgrading unit.
The environmental benefits of switching from Diesel to biomethane are impressive. Used as a fuel, Bio-LNG enables a considerable reduction of polluting emissions and represents a genuine alternative to Diesel:
• Zero emission of fine particles, responsible for 42000 premature deaths annually in France.
• -70 % NOx emissions.
• -90 % CO2 emissions.
• -99 % hydrocarbon emissions.
• -50 % noise pollution.
Bio-LNG is a renewable energy produced from waste which makes it neutral in terms of greenhouse gas emissions. Its use allows the decarbonization of the energy mix. Bio-LNG is the only sustainable solution for long-distance haulage operated by heavy goods vehicles.
Conclusion
Biogas is produced when organic material is decomposed under anaerobic conditions. The main constituents are methane and carbon dioxide. To be able to use the raw biogas as a vehicle fuel, it must be purified and upgraded, which means that impurities and CO2, respectively, are separated. There are a number of available upgrading technologies, and the most commonly used are:
• Absorption
• Adsorption
• Membranes
• Cryogenic technology
Other technologies exist, such as oxidation and biological treatment; these techniques are well known in the field of gas treatment. Today, other technologies are being developed, such as biocatalysis, photocatalysis and cold plasma. For the moment, the lack of information about them limits their industrial application [Biard].
The choice of the technology to be used to purify, upgrade and liquefy the biogas requires knowledge of the thermodynamic properties of biogas and the representation of phase equilibria. These thermodynamic aspects of biogas are discussed in the next chapter.
Chapter 2: Thermodynamic aspects of biogas
Summary:

The choice of the technology to be used to purify and liquefy biogas requires knowledge of its thermodynamic properties.

This chapter reviews the thermodynamic properties of the fluids that are essential for designing and optimizing biogas purification technologies. It also shows that the thermophysical properties of biogas depend strongly on its composition, in particular on the methane and carbon dioxide concentrations. The presence of compounds at low concentrations, such as nitrogen or hydrogen sulfide, can modify the physical properties of biogas. For example, landfill gases contain small amounts of nitrogen and oxygen that affect the phase behavior of the CH4 - CO2 system.
Introduction
Thermodynamics is a powerful tool for designing and evaluating processes used for the purification and upgrading of biogas. This study therefore focuses on the thermodynamic investigation of biogas. It presents the thermodynamic properties of pure compounds and of the gas mixture (biogas) at a pressure of 1.103 bar. This pressure is considered because the biogas treated in the experimental part (Chapter 4) leaves the wastewater treatment plant at a pressure slightly above atmospheric, in order to avoid air infiltration into the biogas pipe.
Thermodynamic properties of pure components present in biogas
Biogas refers to a mixture of different gases, as seen in Table 4. The thermodynamic properties of the main components of biogas are presented in this section.
Hydrogen sulfide
Hydrogen sulfide is a colorless gas with the characteristic foul odor of rotten eggs. It is very poisonous, corrosive, flammable and explosive. Its olfactory threshold varies between 0.7 and 200 µg.m-3, depending on the sensitivity of each individual. The olfactory sensation is not proportional to the H2S concentration in air: a smell perceived at very low concentrations may be attenuated or disappear at high concentrations.
Hydrogen sulfide is created by the bacterial decomposition of organic matter in the absence of oxygen, such as in swamps and sewers. It also appears in volcanic gases and hot springs. Other sources of hydrogen sulfide are the industrial processes used in the oil and natural gas sectors, sewage treatment plants and factories producing pulp and paper … The thermo-physical properties of hydrogen sulfide are listed in Table 6.
Carbon dioxide
Carbon dioxide is a colorless and odorless gas which is naturally present in the Earth's atmosphere. The concentration of carbon dioxide in the atmosphere reached 405 ppm at the end of 2016, against only 283 ppm in 1839.
Carbon dioxide is produced by all aerobic organisms when they metabolize carbohydrates and lipids to produce energy by respiration [Bosak]. It is also produced by burning fossil fuels such as coal, natural gas and oil. Significant amounts of CO2 are also released by volcanoes. The thermo-physical properties of carbon dioxide are listed in Table 7.

Methane

Huge amounts of methane are buried in the Earth's crust in the form of natural gas and on the ocean floor in the form of methane hydrates. Moreover, mud volcanoes, landfills and animal digestion release methane. The thermo-physical properties of methane are listed in Table 8. The vapor pressure is the basis of all equilibrium calculations. The vapor pressure curves for the three molecules of interest are shown in Fig. 6.
Fig. 6: Vapor pressure of the main components present in biogas. (■) H2S critical point; (▲) CO2 critical point; (•) CH4 critical point; (□) H2S triple point; (Δ) CO2 triple point; (○) CH4 triple point; (----) H2S; (____) CO2; (……) CH4
The vapor pressures are calculated using the Antoine Equation (2). The carbon dioxide vapor pressures above -76.36 °C were retrieved from the works of Kidnay [Kidnay], Yarym-Agaev [Yarym-Agaev], Miller [Miller] and Del Rio [Del Rio]. As shown in Fig. 6, the vapor pressure curve of CO2 continues below the triple point temperature (-56.56 °C), where it passes from Vapor - Liquid Equilibrium to Vapor - Solid Equilibrium. The constants used in the Antoine Equation (2) are listed in Table 9 for each component.
$$\log P = A - \frac{B}{T + C} \qquad (2)$$
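As a small numerical illustration, Equation (2) is straightforward to evaluate. The constants below are placeholders for demonstration only (the thesis lists its fitted constants in Table 9, which is not reproduced here); the function, not the numbers, is the point:

```python
def antoine_pressure(T, A, B, C):
    """Vapor pressure from the Antoine equation (Eq. 2):
    log10(P) = A - B / (T + C).
    Units of P and T follow the convention used to fit A, B and C."""
    return 10.0 ** (A - B / (T + C))

# Hypothetical constants -- NOT the Table 9 values.
A, B, C = 4.0, 800.0, -10.0
for T in (150.0, 200.0, 250.0):  # K
    print(f"T = {T:.0f} K: P = {antoine_pressure(T, A, B, C):.4g}")
```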
Thermodynamic properties of the gas mixture (biogas)

Phase equilibrium behavior of biogas
In this section, the gas mixture is assumed to consist of methane (1) and carbon dioxide (2), because all the other impurities are present at very low concentrations, depending on the type of substrate, as shown in Table 4. Furthermore, the upgrading process generally takes place after purification, which consists in eliminating pollutants such as H2S, H2O and siloxanes.
The liquefaction of biomethane requires very low temperatures; therefore, a cryogenic technology for biogas upgrading can be envisaged. This technology requires a correct design of heat exchangers to achieve maximal purity of biomethane, which is why knowledge of the pressure - temperature behavior of the binary mixture methane - carbon dioxide is essential [Riva]. Fig. 7 presents the phase equilibrium behavior of the methane - carbon dioxide system.

Fig. 7: Pressure - Temperature equilibrium behavior for the CH4 - CO2 system [Riva]. (■) CH4 triple point; (•) CH4 critical point; (□) CO2 triple point; (○) CO2 critical point; (▲) mixture quadruple point; (--) vapor-liquid critical locus; (---) three-phase loci; (-) pure compound phase equilibria
In the context of CO2 separation by solidification, the authors of [Riva] focused on the solid phase: thanks to the high triple point of carbon dioxide (see Table 7), CO2 can be separated from methane directly from the gas phase into a solid phase at pressures close to atmospheric.
In order to understand the phase equilibrium behavior of the CH4 (1) - H2S (2) system, Langè et al. [Langè] studied it for temperatures from 70 K up to the critical temperature of H2S and pressures up to 250 MPa, as seen in Fig. 8.
Density and dynamic viscosity of biogas
All the thermophysical properties discussed in this section are calculated using the software REFPROP V9.0, which computes the thermodynamic and transport properties of industrially important fluids and their mixtures. The equation of state used for these properties is the GERG (European Gas Research Group) 2008 equation [Kunz].
Depending on its composition, biogas has characteristics that are interesting to investigate, such as density and viscosity.
At a pressure of 1.103 bar, slightly above atmospheric pressure, the variation of density as a function of temperature is shown in Fig. 9.
Fig. 9: Biogas density as a function of temperature. (____) Air; (……) Biogas with 40 mol% of CO2; (----) Biogas with 35 mol% of CO2
As seen in Fig. 9, biogas is lighter than air. Moreover, its density depends on the carbon dioxide content: the density of a biogas rich in carbon dioxide is greater than that of a biogas with a lower CO2 concentration. This is due to the molecular weight of CO2, which is greater than that of methane, as seen in Tables 7 and 8.
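The thesis relies on REFPROP with the GERG-2008 equation of state. As a hedged, open-source stand-in, the CoolProp library exposes a similar property interface and reproduces the qualitative trend of Fig. 9; the sketch below is illustrative, and the values it prints are not the thesis figures:

```python
from CoolProp.CoolProp import PropsSI

P = 1.103e5  # Pa, the pressure considered throughout this chapter

# Density of two biogas compositions versus temperature (cf. Fig. 9).
for x_co2 in (0.40, 0.35):
    mix = f"HEOS::Methane[{1 - x_co2}]&CarbonDioxide[{x_co2}]"
    for T in (273.15, 293.15, 313.15):  # K
        rho = PropsSI("D", "T", T, "P", P, mix)  # kg/m3
        print(f"x_CO2 = {x_co2:.2f}, T = {T:.2f} K: rho = {rho:.3f} kg/m3")
```

Running this shows the two expected trends: density decreases with temperature and increases with the CO2 fraction.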
At a pressure of 1.103 bar, the evolution of biogas viscosity as a function of temperature is depicted in Fig. 10. Like the density, the biogas viscosity increases with the CO2 concentration. This is due to the higher viscosity of carbon dioxide compared to methane at equal pressure and temperature, as shown in Tables 7 and 8.
Thermal conductivity of biogas
Thermal conductivity is a physical property that describes the ability of gases to conduct heat. The phenomenon of heat conduction in gases is explained by the kinetic theory of gases, which treats the collisions between molecules. The thermal conductivity depends on the heat capacity at constant volume and on the viscosity [Shao]. At a pressure of 1.103 bar, the variation of the thermal conductivity of biogas as a function of temperature is shown in Fig. 11: the thermal conductivity increases with temperature. This is because collisions between biogas molecules become more frequent as temperature rises, resulting in an increased transfer of thermal energy.
Thermal capacities
The constant-pressure and constant-volume heat capacities are defined by the following thermodynamic Equations (3) and (4) respectively.
$$C_P = \left(\frac{\partial H}{\partial T}\right)_P \qquad (3)$$
$$C_V = \left(\frac{\partial U}{\partial T}\right)_V \qquad (4)$$
At a pressure of 1.103 bar, the variations of the heat capacities of biogas as a function of temperature are shown in Figs. 12 and 13. These figures show that the biogas specific heat capacities strongly depend on the methane concentration, unlike the density, which is inversely proportional to the methane concentration. According to Vardanjan et al. [Vardanjan], an increase of the methane concentration from 50 % to 75 % results in a 17 % increase in heat capacity. In Figs. 12 and 13, the heat capacities increase by 5 % when the methane concentration passes from 60 to 65 %.
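This composition dependence is easy to reproduce with the same open-source stand-in used above for the density (illustrative CoolProp values, not the thesis data):

```python
from CoolProp.CoolProp import PropsSI

# Constant-pressure heat capacity of biogas at 1.103 bar for two CH4
# contents, illustrating the trend discussed around Figs. 12 and 13.
P = 1.103e5  # Pa
for x_ch4 in (0.60, 0.65):
    mix = f"HEOS::Methane[{x_ch4}]&CarbonDioxide[{1 - x_ch4}]"
    cp = PropsSI("CPMASS", "T", 293.15, "P", P, mix)  # J/(kg.K)
    print(f"x_CH4 = {x_ch4:.2f}: cp = {cp:.1f} J/(kg.K)")
```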
Energy content in biogas
An important property of biogas is the lower heating value (LHV), which measures its energy value. The LHV of biogas is proportional to its methane content. For example, a biogas containing 70 mol% of methane at 15 °C and atmospheric pressure has a LHV of 6.6 kWh/m3 [Gélin].
The energy value of biogas can also be evaluated by the Wobbe index. Compared to the calorific value of biogas, which has been treated by many authors, only a few studies have been conducted on the Wobbe index of biogas. According to the Danish Technological Institute (DTI) [35], a biogas containing 60 mol% of methane, 38 mol% of carbon dioxide and 2 mol% of other compounds has a Wobbe index of 19.5 MJ/m3.
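For reference, two standard relations (textbook definitions, not derived in the thesis) tie these numbers together: the Wobbe index divides the heating value by the square root of the gas relative density $d$ with respect to air, and the LHV scales with the methane fraction:

$$W = \frac{HV}{\sqrt{d}}, \qquad LHV_{biogas} \approx x_{CH_4}\, LHV_{CH_4}$$

Taking $LHV_{CH_4} \approx 9.4$ kWh/m³ at 15 °C (a standard value, assumed here), a 70 mol% CH4 biogas gives $0.70 \times 9.4 \approx 6.6$ kWh/m³, consistent with the value quoted above.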
Conclusion
This chapter has identified the thermophysical properties of biogas. These properties are essential for designing and optimizing biogas purification and upgrading technologies.
The thermophysical properties of biogas discussed in this chapter strongly depend on the composition of biogas, especially the concentrations of methane and carbon dioxide.
The presence of minor compounds such as H2O, N2, O2 and H2S could change the physical properties of biogas. For example, landfill gas includes small amounts of nitrogen and oxygen which affect the phase behavior of the CH4 -CO2 system.
The technologies used for the purification and upgrading of biogas are discussed in the following chapter, with particular attention paid to the elimination of hydrogen sulfide.
Chapter 3: From molecules to the process
Summary:

Valorizing biogas requires a purification process that removes all undesirable substances from the raw biogas in order to maximize its methane content.

This chapter presents a comparative study of the known methods for separating H2S from biogas. The separation method must reduce the residual H2S content to less than 1 ppm.

The first separation technique studied is absorption in gas-liquid contactors. The focus is on the fundamentals of absorption with physical and chemical solvents, and on their efficiency in removing H2S.

The second H2S treatment technique studied is adsorption on microporous solids such as activated carbon and zeolites.

The third process discussed is membrane separation, whose performance depends on two parameters: membrane permeability and selectivity. In general, highly permeable membranes exhibit low selectivity, and vice versa. This method has several drawbacks, such as the risk of rupture due to the pressure gradient that constitutes the driving force, exposure to certain solvents that can damage or clog the membrane, the high price of membranes, and methane losses that can be considerable.

The last separation technology studied in this chapter is cryogenic condensation, based on the thermodynamics of phase equilibria.

This literature review allowed the different processes to be compared according to several criteria, such as separation efficiency, environmental impact, and investment and operating costs.

Finally, the following two technologies were selected for hydrogen sulfide removal:

• Chemical absorption in a structured packed column using sodium hydroxide (NaOH) as solvent.
• Adsorption in a fixed bed of activated carbon.
Introduction
Biogas purification requires the removal of minor compounds despite their low amounts relative to methane. Among these compounds, the removal of water vapor and ammonia is sufficiently documented in the literature and does not constitute a technical difficulty. However, the treatment of hydrogen sulfide present at low concentration is a challenge. This gas is formed when organic material containing sulfur is decomposed under anaerobic conditions. It is very corrosive to most metals, which negatively affects the operation and lifetime of equipment, especially pumps, heat exchangers and pipes. In the following sections, conventional purification and upgrading technologies are described.
Absorption technology
In chemistry, absorption is an operation by which a substance in one phase is transferred into another substance in a different phase. The most frequent use of absorption is the separation of a gas mixture by absorbing part of the mixture in a solvent. The two phases are brought into contact in an absorption column and exchange mass and energy across their common interface. The flux of H2S transferred is calculated using Equation (5), by applying the two-film theory presented in Fig. 14. This theory assumes that the mass transfer resistance is located in the boundary layers on the gas side and the liquid side respectively. The mass transfer between the liquid and vapor phases depends heavily on the effective interfacial area.
$$N_{H_2S,z} = \frac{k_G}{RT}\,\bigl(p_{H_2S} - p_{H_2S}^{*}\bigr) = k_L\,\bigl(C_{H_2S}^{*} - C_{H_2S}\bigr) \qquad (5)$$
where the gas-phase and liquid-phase mass transfer coefficients are calculated using Equations (6) and (7) respectively:
$$k_G = \frac{D_{H_2S,G}}{\delta_G} \qquad (6)$$
$$k_L = \frac{D_{H_2S,L}}{\delta_L} \qquad (7)$$

where $N_{H_2S,z}$ [mol.s-1.m-2] is the molar flux of H2S transferred, $D_{H_2S,G}$ and $D_{H_2S,L}$ [m2.s-1] are the diffusion coefficients of H2S in the gas and liquid films, and $\delta_G$ and $\delta_L$ [m] are the gas- and liquid-film thicknesses. The two-film theory also assumes that thermodynamic equilibrium is reached at the interface. Since the H2S concentration is low, Henry's law, given in Equation (8), is applicable:

$$p_{H_2S}^{*} = H_{H_2S}\, x_{H_2S} \qquad (8)$$
Fig. 14: Two-film theory [Kohl]

There are two types of absorption processes, physical and chemical, depending on whether a chemical reaction occurs between the pollutant and the absorbent.
For example, when water absorbs oxygen from the air, a mass of the gas moves into the liquid, and no significant chemical reactions occur between the solvent and the solute. In this case, the process is commonly referred to as physical absorption.
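To make Equations (5) to (8) concrete for such purely physical absorption, the sketch below evaluates the film-model H2S flux. It eliminates the unknown interface values by combining Equation (5) with Henry's law; note that Henry's constant is used here in the pressure/concentration form $p^{*} = H_{cc}\,C$, a unit-consistent rearrangement of Equation (8), which the thesis writes with a mole fraction. All numerical values are illustrative assumptions, not measured data:

```python
R = 8.314  # J/(mol.K), universal gas constant

def h2s_flux(p_h2s, c_h2s_liq, T, D_G, delta_G, D_L, delta_L, H_cc):
    """Two-film flux of H2S [mol/(m2.s)] for purely physical absorption.

    k_G and k_L follow Eqs. (6) and (7); eliminating the interface
    values from Eq. (5) with p_i = H_cc * C_i gives a
    series-resistance expression.
    """
    k_G = D_G / delta_G  # Eq. (6), m/s
    k_L = D_L / delta_L  # Eq. (7), m/s
    return (p_h2s - H_cc * c_h2s_liq) / (R * T / k_G + H_cc / k_L)

# Illustrative orders of magnitude (assumed values):
flux = h2s_flux(p_h2s=10.0,              # Pa, ~100 ppm of H2S near 1 bar
                c_h2s_liq=0.0,           # mol/m3, fresh solvent
                T=293.15,                # K
                D_G=1.5e-5, delta_G=1e-4,  # m2/s, m (gas film)
                D_L=1.5e-9, delta_L=3e-5,  # m2/s, m (liquid film)
                H_cc=1000.0)             # Pa.m3/mol, order of H2S in water
print(f"N_H2S = {flux:.3e} mol/(m2.s)")
```

With these values the liquid-side resistance dominates, which is the usual situation for sparingly soluble gases and motivates the chemical enhancement discussed next.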
Chemical absorption occurs when a chemical reaction takes place in the liquid phase to consume the dissolved compound to be removed, thus enhancing the efficiency of the process. An example of chemical absorption is the absorption of CO2 and/or H2S with an aqueous solution of sodium hydroxide. Chemical solvents are favored over physical solvents when pollutants at low partial pressure have to be reduced to very low levels [Mokhatab]. However, if the impurity is present in the feed gas at high partial pressure, physical solvents may be preferred [Burr]. In the presence of a chemical reaction, the rate of absorption increases. The flux of H2S transferred then depends on an acceleration factor E, as expressed in Equation (9).
$$N_{H_2S} = E\, k_L\, C_{H_2S}^{*} \qquad (9)$$
The acceleration factor E is the ratio between the flux of H2S transferred in the presence of a chemical reaction and the flux transferred in its absence. This factor characterizes the contribution of the chemical reaction to the transfer compared to pure diffusion. The addition of a chemical solvent such as sodium hydroxide for hydrogen sulfide removal significantly increases the acceleration factor, and the transfer becomes controlled by the reaction rate. It also causes a temperature increase because of the exothermicity of the reaction; for low concentrations, this increase remains small.
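As an order-of-magnitude guide (a classical film-theory result, stated here as an assumption rather than taken from the thesis), for a fast pseudo-first-order reaction between dissolved H2S and hydroxide ions the acceleration factor approaches the Hatta number:

$$E \approx Ha = \frac{\sqrt{k_2\, C_{OH^-}\, D_{H_2S,L}}}{k_L}$$

where $k_2$ is the second-order rate constant; the approximation holds when $3 < Ha \ll E_i$, the instantaneous-reaction limit.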
Physical solvents
The first solvent studied and used in absorption processes is water. Hydrophilic compounds present in biogas, such as CO2 and H2S, are absorbed better in water than hydrophobic and non-polar compounds such as methane.
Table 10 shows the solubility of some biogas compounds in water. In order to improve the effectiveness of absorption processes, other solvents have been tested for the purification and upgrading of biogas. They allow a reduction in column size, in the energy used for the treatment and in the solvent volumes involved.
Table 10. Solubility of the main compounds of biogas in water [39]
Selexol® is a physical absorption process developed by Allied Chemical Corporation, then improved by Norton. Today it is owned by Universal Oil Products (UOP). The solvent is made up of dimethyl ethers of polyethylene glycol (DMEPG), with a molecular weight of about 272 g.mol-1.
Rectisol® is one of the oldest physical absorption processes. It was developed by Linde and Lurgi to separate the acid gases present in syngas from coal gasification. It has also been used for CO2 separation in the syngas of hydrogen and ammonia production units. This process uses a methanol-based solvent (MeOH, CH3OH).
These solvents exhibit a high selectivity for H2S removal compared to the other compounds, in particular CO2. Other processes exist, such as Purisol®, based on N-Methyl-2-Pyrrolidone (NMP), and Morphysorb®, intended for the separation of acid gases at high concentrations, using N-Formyl-Morpholine (NFM). The main physical solvents are summarized in Table 11, where selected physical properties are compared.

The solubility of a compound in a physical solvent is often expressed as the volume of gas absorbed per volume of solvent. The absorption capacity for H2S in physical solvents is frequently higher than that for CO2. As shown in Table 12, the solubility of H2S in NMP is very high; moreover, this physical solvent has a high selectivity for H2S. Table 12 also shows a very high solubility of water in some physical solvents. This can be harmful, because water present in the gas phase will accumulate in the solvent and reduce its absorption capacity.

Operation at ambient temperature or higher should be avoided with some physical solvents, because it leads to solvent losses by volatility. A chiller is necessary because the solubility of acid gases in physical solvents is favored by low temperatures. The two most volatile physical solvents are methanol and NMP; they are used in absorption columns at temperatures of -30 °C and -5 °C respectively, to prevent evaporation of the product and improve the solubilization of the acid gases. DMEPG, on the other hand, shows minimal evaporation losses at ambient temperature due to its very low vapor pressure, but it becomes viscous at low temperatures. Physical solvents such as DMEPG and NFM must therefore be used at ambient or higher temperatures.
Chemical solvents
Alkanolamines are the solvents most commonly used in acid gas absorption processes. Their molecular structure contains at least one amino (-N) and one hydroxyl (-OH) functional group. The hydroxyl group increases the solubility of acid gases in water and reduces the solvent vapor pressure. The amino group provides the necessary alkalinity in aqueous solution to ensure the absorption of acid gases.
At equilibrium, in aqueous solution, the reactions between alkanolamines (R1R2R3N) and acid gases, particularly CO2 and H2S, are described by the following chemical equilibria [Archane]:
Self-ionization of water:
2 H2O ↔ H3O+ + OH- (R.3)
Protonation of alkanolamine:
R1R2R3N + H3O+ ↔ R1R2R3NH+ + H2O (R.4)
Hydrolysis of hydrogen sulfide:
H2S + H2O ↔ HS- + H3O+ (R.5)
Bisulfide ion dissociation:
HS- + H2O ↔ S2- + H3O+ (R.6)
Carbon dioxide hydrolysis:
CO2 + 2 H2O ↔ HCO3- + H3O+ (R.7)
Bicarbonate ion dissociation:
HCO3- + H2O ↔ CO3 2- + H3O+ (R.8)
Carbamate hydrolysis for primary and secondary amines:
R1R2NH + HCO3- ↔ R1R2NCOO- + H2O (R.9)
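The hydrolysis equilibria R.5 and R.6 explain why a high pH drives H2S into solution. A minimal sketch (using textbook dissociation constants at 25 °C, assumed rather than taken from the thesis) computes the sulfide speciation as a function of pH:

```python
# Approximate dissociation constants of the H2S/HS-/S2- system at 25 °C
# (common textbook values; pKa2 in particular is uncertain in the literature).
KA1 = 10.0 ** -7.0    # H2S + H2O <-> HS- + H3O+   (R.5)
KA2 = 10.0 ** -12.9   # HS- + H2O <-> S2- + H3O+   (R.6)

def sulfide_fractions(pH):
    """Equilibrium mole fractions of (H2S, HS-, S2-) in solution."""
    h = 10.0 ** -pH
    denom = h * h + h * KA1 + KA1 * KA2
    return h * h / denom, h * KA1 / denom, KA1 * KA2 / denom

for pH in (7.0, 10.0, 13.0):
    f_h2s, f_hs, f_s = sulfide_fractions(pH)
    print(f"pH {pH:4.1f}: H2S {f_h2s:6.1%}  HS- {f_hs:6.1%}  S2- {f_s:6.1%}")
```

At the pH imposed by a NaOH solution, virtually all absorbed H2S is converted to HS-, which keeps the interfacial H2S concentration low and sustains the driving force for absorption.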
The most used amines in industry are:
• Primary amines: monoethanolamine (MEA) and diglycolamine (DGA).
• Secondary amines: diethanolamine (DEA) and diisopropanolamine (DIPA).
• Tertiary amines: methyldiethanolamine (MDEA) and triethanolamine (TEA).
Triethanolamine was the first amine used in the gas processing industry. Table 13 lists the physical properties of some of these chemical solvents. Fig. 15 shows the chemical structure of some alkanolamines; primary, secondary and tertiary amines are distinguished according to the degree of substitution of the nitrogen atom.

Fig. 15: Chemical structure of some alkanolamines [Lecomte]

Primary amines such as monoethanolamine (MEA) are very reactive with H2S and CO2, but they also react with other impurities present in the biogas, which leads to a significant energy requirement for regeneration. They are also susceptible to degradation and corrosion.
Secondary and tertiary amines require less energy for regeneration and are less susceptible to degradation. However, they are less reactive than primary amines and are therefore used for less demanding objectives in terms of purity.
Bottoms [Bottoms] was the first, in 1930, to study amines in the gas processing industry. He filed the first patent for a process absorbing acid gases with ethanolamines.
In 1997, Pani et al. [Pani] experimentally studied the absorption of hydrogen sulfide by an MDEA solution in a temperature range between 296 and 343 K. They developed a device to determine the kinetics of acid gas absorption by alkanolamine solutions; the H2S loadings used varied between 0 and 0.44 moles of gas per mole of amine. A mass transfer model incorporating a reversible reaction was used to fit the experimental absorbed flux and to determine the diffusion coefficient of MDEA.
In 1984, Blauwhoff et al. [Blauwhoff] investigated the selective absorption of hydrogen sulfide and showed that it significantly reduces the cost of gas treatment by reducing the CO2 flux transferred.
It is important to note that the solvents presented in this section are more suitable for the elimination of the high H2S concentrations found in natural gas. Other chemical solvents, more suitable for biogas treatment, are available for the absorption of hydrogen sulfide:
• Aqueous solutions of potassium carbonate (K2CO3), to which additives such as amines are added. This type of chemical solvent is used in many gas treatment processes, such as the Flexsorb HP and Catacarb processes, developed by ExxonMobil® and Eickmeyer® respectively.
• Aqueous solution of sodium hydroxide (NaOH), also known as caustic soda.
• Iron chelate solutions, Fe(III), according to Reaction (R.10) below.
H2S + 2 Fe3+ → S0 + 2 Fe2+ + 2 H+ (R.10)
According to Neumann and Lynn [Neumann], absorption of hydrogen sulfide by iron chelate solutions is advantageous for achieving high reaction rates (99.99 % H2S removal).
Hybrid solvents
Some processes combine physical and chemical absorption, implementing aqueous mixtures containing water, an amine and an organic solvent. This type of process has been developed to treat gases containing significant fractions of acid gases. An example is the Hi-Pure process, where the gas to be treated is contacted first with an aqueous solution of potassium carbonate, then with an aqueous solution of amine.
Patented by Shell, the Sulfinol process uses a hybrid solvent containing a physical solvent called sulfolane, water and a chemical solvent. The chemical solvent determines the name of the mixture: Sulfinol-D when DIPA is used, Sulfinol-M when MDEA is used. Sulfinol-M is used for the selective removal of H2S in the presence of CO2.
Amisol is another process used for the selective removal of H2S in the presence of CO2. It was developed by Bratzler and Doerges in 1974. The mixture is composed of methanol as physical solvent, and DEA or MEA as chemical solvent.
Gas-liquid contactors
The principle of absorption is applied by contacting the gas and liquid phases in a gas-liquid contactor. This device, also called an absorber, aims to achieve the best possible mass exchange between the two phases in contact. The efficiency of a gas-liquid contactor depends on the phenomena involved in the absorption process:
• Transfer laws in the vicinity of the interfaces, in particular the transfer coefficients and the interfacial area.
• Transport laws, in particular diffusion coefficients.
• The thermodynamic equilibrium at the interface, especially the solubility of acid gases in the solvent.
• The chemical reaction kinetics: the reaction schemes, the kinetic constants and the reaction orders.
Film thickness, residence time and flow regime also all have a very important impact on the efficiency of the contactor.
The most common concept used to evaluate the separation efficiency of packed columns is the Height Equivalent to a Theoretical Plate (HETP).
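For reference, the standard definition (not spelled out in the thesis) relates the packed height $Z$ to the number of theoretical stages $N_t$ achieving the same separation:

$$\mathrm{HETP} = \frac{Z}{N_t}$$

so the lower the HETP, the more efficient the packing.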
Industry uses a large number of gas-liquid contactors for mass and heat transfer between the two phases, as seen in Fig. 16. Generally, the gas and the liquid flow counter-currently in order to obtain significant concentration gradients and better absorption rates.
Gas-liquid contactors can be classified according to the dispersion mode of the phases. With a few exceptions, the liquid phase is naturally the dispersed phase in gas treatment applications. The choice of the absorber is mainly related to the physicochemical properties of the gas to be treated, to the chemical reactions involved, and to the gas and liquid flow rates implemented. Table 14 ranks the gas-liquid contactors according to the continuous phase, the fluid inclusion type and the main associated applications (see Fig. 16).
To promote mass transfer, absorbers are usually equipped with internal devices that generate the largest possible interfacial area, in order to achieve better mass exchange between the two phases in contact.
In prior years, plate columns were heavily favored over packed columns, but nowadays packed columns are the most used in gas absorption applications. Only a few specific applications with special design requirements lead to different choices, as in the case of very large flow rates or very soluble compounds, where it is preferable to use plate or spray columns.

Fig. 16: Main gas-liquid contactors [Roquet]

In a packed column, the gas and liquid normally flow counter-currently, as seen in Fig. 17. The liquid is sprayed from the top of the column and flows by gravity over the packing, forming a large-area liquid film. The liquid comes into contact with the gas injected from the bottom of the column. The liquid flow must be sufficient to ensure uniform wetting of the packing, and must not exceed a certain threshold in order to avoid flooding of the column.
The selection of the packing type and material is a very important issue in packed column design. The material should meet certain requirements regarding weight, pressure drop and especially corrosion resistance. Two types of packing exist: packing elements placed in a random arrangement, called random packing, and corrugated sheets arranged in an orderly manner, called structured packing. Fig. 18 shows the two types of packing produced by Sulzer®.
Fig. 17: Schematic representation of a packed column
Today, structured packings are much more widely used than random packings, as they ensure a better transfer with a minimal pressure drop.
Adsorption technology
Adsorption is a growing process, increasingly used in biogas purification. It removes water vapor, odors and other impurities, such as hydrogen sulfide, from biogas streams. Adsorption is a surface phenomenon that occurs between a vapor or liquid phase and a solid. The molecules, ions or atoms forming a solid surface are subjected to asymmetric forces that result in an attractive force field. This attractive force has a limited range, but sufficient to attract gas or liquid molecules located in the immediate vicinity of the interface. These forces cause the fixing of molecules on the surface.
Adsorption is classified according to the nature of the interactions that allow the fixing of the adsorbate on the surface of the adsorbent. There are two types of adsorption, according to the nature of the interactions: physical and chemical adsorption.
Physical adsorption is a reversible phenomenon characterized by weak interaction forces as Van der Waals' forces, while chemical adsorption is usually irreversible involving strong binding energies.
In contrast to physical adsorption, chemisorption is favored at high temperatures and causes the formation of a chemical compound on the surface of the adsorbent. Table 15 lists the criteria used to differentiate between physical and chemical adsorption.
Mechanism of adsorption
During adsorption process, the fluid molecules bind to the surface of a solid following three steps, describing the mass transfer from the fluid phase to the solid surface.
During the first step, called external diffusion, the molecules of the fluid phase migrate to the vicinity of the outer surface of the solid particles. To model the transfer of the fluid phase towards the outer surface of the solid phase, Equation (10) is often used:

- dC_t/dt = k_f (a_ads / V) (C_t - C_e)    (10)

Where C_t [mol.m-3] is the bulk concentration at time t, C_e [mol.m-3] is the concentration at equilibrium, k_f [m.s-1] is the external mass transfer coefficient, a_ads [m2] is the external surface area of the adsorbent and V [m3] is the volume of the fluid phase.

The second step is called internal diffusion. It results from the transfer of fluid phase particles from the outside of the solid surface into the pores. To simplify the problem to one spatial dimension, the pore is assumed spherical. The flux transferred into the pore is expressed by Equation (11):

J = - D_p (ε_p / τ) ∂C_p/∂r    (11)

Where J [mol.m-2.s-1] is the transferred flux, D_p [m2.s-1] is the pore diffusivity, ε_p [-] is the porosity of the particle, τ [-] is the tortuosity, C_p [mol.m-3] is the concentration in the pore and r [m] is the radial coordinate.

The last step of the adsorption mechanism is surface diffusion. It corresponds to the attachment of the vapor phase particles on the surface of the adsorbent. This step is very quick and independent of the overall process. The flux is then defined by Equation (12):

J = - D_s (ε_p / τ) ∂q/∂r    (12)

Where D_s [m2.s-1] is the surface diffusivity and q [mol.kg-1] is the amount adsorbed.
Fig. 19 shows the different steps of the adsorption mechanism.
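To make the external-diffusion step more concrete, the short sketch below evaluates the analytical solution of Equation (10) for a constant equilibrium concentration. All numerical values (k_f, a_ads, V and the two concentrations) are illustrative assumptions, not measurements from this work.

```python
# Minimal sketch of the external-diffusion step, Equation (10):
#   dC_t/dt = -k_f * (a_ads / V) * (C_t - C_e)
# For constant C_e the solution is C_t = C_e + (C0 - C_e) * exp(-k_f a_ads t / V).
import math

k_f   = 1e-4   # external mass-transfer coefficient [m/s]    (assumed)
a_ads = 50.0   # external surface area of the adsorbent [m2] (assumed)
V     = 1.0    # volume of the fluid phase [m3]              (assumed)
C0    = 1.0    # initial bulk concentration [mol/m3]         (assumed)
C_e   = 0.1    # equilibrium concentration [mol/m3]          (assumed)

def bulk_concentration(t):
    """Analytical solution of Equation (10) for constant C_e."""
    return C_e + (C0 - C_e) * math.exp(-k_f * a_ads / V * t)

for t in (0.0, 600.0, 3600.0):  # seconds
    print(f"t = {t:7.0f} s  ->  C_t = {bulk_concentration(t):.4f} mol/m3")
```

The exponential decay towards C_e illustrates why the external film resistance controls only the early part of the uptake curve.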
Materials used for H2S adsorption
The adsorbents are microporous solids, characterized by high surface area per unit weight, from 100 to over 2000 m 2 .g -1 . The classification of the International Union of Pure and Applied Chemistry (IUPAC) defines three kinds of pores by their size: Microporous materials have pore diameters of less than 2 nm. Mesoporous materials have pore diameters between 2 and 50 nm. Macroporous materials have pore diameters of greater than 50 nm.
The most widely used adsorbents in industry remain activated carbon, zeolites, silica gel and activated alumina. Other materials rarely used today, such as red mud, may offer good adsorption performance for H2S removal.
Activated carbon
Activated carbon is characterized by a high degree of microporosity. This structure gives activated carbon its very large surface area, an essential feature that makes it by far the most widely used adsorbent in industry. Pore sizes between 0.5 and 1 nm were found by Yan et al. [START_REF] Yan | Influence of Surface Properties on the Mechanism of H2S Removal by Alkaline Activated Carbons[END_REF] to give the best adsorption capacity. Activated carbon is characterized by a non-polar surface, allowing it to preferentially adsorb non-polar compounds. Activated carbon can be impregnated with potassium hydroxide (KOH) or sodium hydroxide (NaOH), which act as catalysts to remove H2S; non-impregnated activated carbon is a weak catalyst and removes hydrogen sulfide at a much slower rate.
Bandosz [START_REF] Bandosz | On the adsorption/oxidation of hydrogen sulfide on activated carbons at ambient temperatures[END_REF] has shown that using low hydrogen sulfide concentrations, with a sufficient time in laboratory tests leads to comparable removal capacities of both impregnated and non-impregnated activated carbons. But in on-site applications, removal capacities vary greatly because of the presence of other constituents such as volatile organic compounds (VOCs) which may inhibit the removal capacity. According to Abatzoglou and Boivin [START_REF] Abatzoglou | A review of biogas purification processes[END_REF], the typical H2S adsorption capacities for respectively impregnated and non-impregnated activated carbons are 150 and 20 mg H2S/g of activated carbon.
Activated carbon is produced in different shapes and sizes depending on the application for which it is used:
Extruded activated carbon (EAC).
Granular activated carbon (GAC).
Powder activated carbon (PAC).
Fig. 20 shows the different shapes of commercial activated carbons.
Fig. 20: Shapes of commercial activated carbons [53]
Zeolites
Zeolites are microporous adsorbent materials. The size of the pores can be adjusted by ion exchange to catalyze selective reactions. According to the International Zeolite Association (IZA), in 2007 there were 176 crystal structures identified by a three-letter code. The vast majority of these structures are synthetic, while the rest exist in nature. Zeolites are particularly effective for removing polar compounds such as water and H2S from non-polar gas streams such as methane. Zeolites are low-capacity adsorbents, with a surface area not exceeding 900 m2.g-1, but they have a good selectivity. Compared to activated carbon, they are less sensitive to heat.
The adsorption of H2S present in biogas is one of the applications envisaged for zeolites. The measured adsorption capacities remain well below the purification yields obtainable with other systems. Yasyerli et al. [START_REF] Yasyerli | Removal of hydrogen sulfide by clinoptilolite in a fixed bed adsorber[END_REF] determined an adsorption capacity of 30 mg.g-1 with clinoptilolite, a natural zeolite, during the treatment of a real biogas. Cosoli et al. [START_REF] Cosoli | Hydrogen sulphide removal from biogas by zeolite adsorption : Part i. GCMC molecular simulations[END_REF] reported an adsorption capacity of 40 mg.g-1 on synthetic zeolites, for an H2S concentration of 1000 mg.m-3.
Factors affecting the adsorption
Literature concerning the factors affecting adsorption dates back to 1914 [START_REF] Johns | The simulation of gold adsorption by carbon using a film diffusion model[END_REF]. The main factors affecting the adsorption rate are the temperature, the surface area and the porosity of the adsorbent, the competition between species, and the polarity of the adsorbent and the adsorbate.
Temperature
Fig. 21 shows that during adsorption processes, the adsorbed amount increases as the temperature decreases. Moreover, physisorption releases heat; like any exothermic process, it is favored by low temperatures.
Fig. 21: Effect of temperature on some adsorbents [START_REF] Uop | UOP Molecular Sieves[END_REF]
Adsorbents: (───) Molecular sieves ; ( ─ ─ ─ ) Activated alumina ; ( ……. ) Silica gel
Unlike physical adsorption, chemical adsorption requires higher temperatures, because it is an activated phenomenon involving an activation energy barrier.
Specific surface area
Adsorption performance increases with the specific surface area of the adsorbent. This proportionality was demonstrated by Bouchemal and Achour [START_REF] Bouchemal | Essais d'adsorption de la tyrosine sur charbon actif en poudre et en grain[END_REF] in a study of tyrosine adsorption on activated carbon.
Selectivity
The concept of selectivity is crucial in the design of adsorption processes. The presence of competing species at the surface of the adsorbent decreases the capacity of each species to be adsorbed. The higher the selectivity, the easier the separation.
Pore size distribution
Adsorption is a surface phenomenon, hence the interest of porous structures. The porosity of the adsorbent material is therefore an important physical property. For example, microporous activated carbon has a better adsorption capacity than mesoporous activated carbon in the case of macromolecules. Thermal regeneration and impregnation can modify the pore volume of the adsorbent. Solving Equation (13) gives access to the pore size distribution [START_REF] Evans | Capillary condensation and adsorption in cylindrical and slit-like pores[END_REF].
N(p/p0) = ∫ from w_min to w_max of N(p/p0, w) f(w) dw    (13)
Where:
N(p/p0) [-] is the experimental adsorption isotherm.
N(p/p0, w) [-] is the local isotherm in a pore of size w.
f(w) [-] is the pore distribution function.
w [nm] is the opening dimension of a pore.
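As an illustration of Equation (13), the sketch below evaluates the forward problem: given an assumed pore size distribution f(w) and a toy local-isotherm kernel N(p/p0, w), the global isotherm is obtained by quadrature over the pore widths. Both the kernel and the distribution shape are illustrative assumptions; in practice f(w) is recovered by regularized inversion of measured isotherms.

```python
# Forward evaluation of Equation (13) by trapezoidal quadrature.
import numpy as np

def trapz(y, x):
    """Plain trapezoidal quadrature (avoids version-specific numpy helpers)."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

w = np.linspace(0.4, 50.0, 500)              # pore widths [nm]
f_w = np.exp(-(np.log(w / 2.0)) ** 2 / 0.5)  # assumed log-normal-like distribution
f_w /= trapz(f_w, w)                         # normalise to unit area

def local_isotherm(p_rel, w):
    """Toy Langmuir-like kernel: narrower pores fill at lower p/p0 (assumed)."""
    k = 50.0 / w                             # assumed affinity versus pore width
    return k * p_rel / (1.0 + k * p_rel)

p_rel = 0.3
N_global = trapz(local_isotherm(p_rel, w) * f_w, w)  # right-hand side of Eq. (13)
print(f"N(p/p0 = {p_rel}) = {N_global:.3f}")
```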
Molecular weight and structure
Particles with a low molecular weight are light and move faster than those with a high molecular weight, so their probability of being adsorbed is much greater.
If the molecular structure of the particles is large, the pores fill rapidly with low yields at saturation, reducing the number of free sites available for other molecules.
Polarity
For more affinity between the adsorbent and the adsorbate, they must have the same polarity [START_REF] Lesage | Etude d'un procédé hybride Adsorption / Bioréacteur à membranes pour le traitement des effluents industriels[END_REF]. For example, the structure of activated carbons is non-polar and therefore promotes the adsorption of nonpolar molecules. Hydrogen sulfide is a polar gas, it is adsorbed on the polar surfaces in the absence of water vapor. In the presence of water vapor in the gas, there is competitive adsorption to the advantage of water vapor which has a much higher partial pressure and which is much more polar than hydrogen sulfide.
Adsorption isotherms
In order to model the binding of a gas over a bed of adsorbent, it is necessary to choose a model to represent interactions between the gas and the solid. An adsorption isotherm is the curve presenting the static adsorption capacity of an adsorbate/adsorbent system at a given temperature. The curve presents the specific amount adsorbed, Na as a function of the relative pressure, P/P0 as seen in Fig. 22. According to the classification of IUPAC based on the one established by Brunauer [START_REF] Brunauer | [END_REF], there are six different isotherms profiles represented in Fig. 22.
The adsorption isotherm type I, is distinguished by the existence of a horizontal line, which results in the saturation of the adsorbent. This isotherm is characteristic of adsorbent having micropores, which are filled at low relative pressures. This is essentially a monolayer adsorption, often described by the Langmuir isotherm, where there may be strong interactions involved. The equation that describes the Langmuir isotherm is presented in Table 16.
Type II adsorption isotherms are widespread for non-porous or macroporous solids. The absence of a clearly identifiable inflection point on the curve corresponding to the completion of a monolayer, together with the continuous increase of the amount adsorbed, is indicative of the energetic heterogeneity of the surface regarding the adsorbate / adsorbent interactions.
Fig. 22: Classification of adsorption isotherms [IUPAC]
The adsorption isotherm type III corresponds to non-porous or macroporous solids. This isotherm is characterized by weak interactions adsorbate / adsorbent.
The adsorption isotherms type IV and V are characterized by a filling of mesopores, and a capillary condensation in the pores. The interactions adsorbate / adsorbent for type V isotherm are weaker than those of type IV.
The adsorption isotherm type VI is very rare. It is encountered in the case of very homogeneous surfaces.
Modeling of adsorption isotherms
Several models have been proposed to describe the experimental adsorption isotherms. Despite their common interest, the assumptions defined for each model are different such as those concerning interactions that hold the fluid molecules on the surface of the adsorbent. Most of these models are described in the literature such as: Freundlich model, Elovich model, Temkin model, Toth model and Langmuir model. The latter is the best known, and probably the most widely used to describe the adsorption isotherm. It was developed by Irving Langmuir in 1916 [START_REF] Langmuir | The constitution and fundamental properties of solids and liquids[END_REF], sixteen years before obtaining the Nobel Prize in chemistry. The Langmuir model assumes uniform energies of adsorption onto the surface and no transmigration of adsorbate in the plane of the surface [START_REF] Langmuir | The constitution and fundamental properties of solids and liquids[END_REF].
The Freundlich model assumes that as the adsorbate concentration increases, the concentration of adsorbate on the adsorbent surface also increases [START_REF] Freundlich | Uber die adsorption in losungen[END_REF]. This proportionality is explained by the Freundlich expression which is an exponential equation.
The Temkin model assumes that in adsorption, the binding energies are distributed uniformly, and that due to interactions between the adsorbent and the adsorbate, the heat of adsorption of all the molecules in the layer decreases linearly with coverage [START_REF] Temkin | Adsorption equilibrium and the kinetics of processes on nonhomogeneous surfaces and in the interaction between adsorbed molecules[END_REF].
The Elovich model is derived from a kinetic principle assuming that the adsorption sites increase exponentially with adsorption, implying a multilayer adsorption [START_REF] Elovich | Theory of adsorption from solutions of non electrolytes on solid (I) equation adsorption from solutions and the analysis of its simplest form, (II) verification of the equation of adsorption isotherm from solutions[END_REF].
The Toth model [START_REF] Toth | Calculation of the BET-compatible surface area from any type I isotherms measured above the critical temperature[END_REF] was developed based on an improvement of the Langmuir model to reduce the error between experimental and predicted data. This model is applied in the case of multilayer adsorption.
The equations defining the main models are shown in Table 16, where ΔQ [kJ.mol-1] is the variation of adsorption energy, K0 and K1 [m3.g-1] are the Temkin and Hill-de Boer equilibrium constants respectively, K2 [kJ.mol-1] is the energetic constant of the interaction between adsorbed molecules, KFG [m3.mol-1] is the Fowler-Guggenheim equilibrium constant and W [kJ.mol-1] is the interaction energy between adsorbed molecules.
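Since the Langmuir model is the one most commonly used for H2S adsorption data, the sketch below evaluates its standard form, q = q_m K P / (1 + K P), and fits it to synthetic "measurements" by least squares. The parameter values and the noise level are illustrative assumptions, not fitted H2S / activated carbon data.

```python
# Sketch of the Langmuir isotherm and a least-squares fit to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, q_m, K):
    """Standard Langmuir form: monolayer capacity q_m, equilibrium constant K."""
    return q_m * K * P / (1.0 + K * P)

P = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])  # pressure [bar] (assumed grid)
rng = np.random.default_rng(0)
q = langmuir(P, 2.0, 3.0) * (1.0 + 0.02 * rng.standard_normal(P.size))

(q_m, K), _ = curve_fit(langmuir, P, q, p0=(1.0, 1.0))
print(f"fitted q_m = {q_m:.3f} mol/kg, K = {K:.3f} 1/bar")
```

The same fitting procedure applies to the Freundlich, Temkin or Toth expressions of Table 16 by swapping the model function.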
Adsorption processes
There are two major classes of adsorption processes: the temperature swing adsorption processes (TSA) and the pressure swing adsorption processes (PSA).
Temperature swing adsorption processes
The temperature swing adsorption is the oldest cyclic adsorption process. It consists of two main phases: the adsorption phase and the desorption phase, during which the adsorber is heated. A precooling phase is therefore commonly added to bring the temperature to a level similar to that desired for adsorption.
The main advantage of temperature swing adsorption process compared to pressure swing adsorption is to desorb more easily species strongly adsorbed. For this reason, the temperature swing adsorption processes are used, for example, to capture volatile organic compounds present in many effluents.
Temperature swing adsorption is used whenever the energy required to regenerate a bed is sufficiently large that long, high-temperature cycles are needed, given the strength of the bond between the adsorbate and the adsorbent; for example, the adsorption of H2O on zeolites.
However, the significant time required to heat and cool the adsorber prevents the use of the temperature swing adsorption process in fast cycles. Moreover, the adsorption columns used for temperature swing adsorption cycles are large, which has an impact on the cost of the installation. On the other hand, unlike pressure swing adsorption processes, which use mechanical energy, temperature swing adsorption processes can use residual heat, which reduces their operating cost.
Pressure swing adsorption processes
The pressure swing adsorption process was initially introduced as an alternative to temperature swing adsorption process. It is mainly used in separation of some gas species from a mixture of gases, this is another option that complements the traditional separation processes as absorption and cryogenic distillation. In a pressure swing adsorption process, the feed pressure is generally greater than atmospheric pressure. The regeneration pressure may be less than the atmospheric pressure, in this case the process is called vacuum swing adsorption (VSA).
Since the adsorption step is performed at a higher pressure than the pressure of desorption step, intermediate steps are necessary: a compression is required to move from the low to the high pressure at the end of the regeneration step. A decompression step is also necessary to reduce the pressure at the end of the adsorption step. These four steps are the elements of a basic cycle called the Skarstrom cycle. Steps which constitute this cycle, and the variation of the pressure as a function of the different phases, are shown in Fig. 23.
Fig. 23: Skarstrom cycle stages and pressure variations [68]
To ensure continuous production, the pressure swing adsorption process must have at least two separation columns. These columns undergo the four steps mentioned above, but with a temporal phase shift, leading to a cyclic operation of the pressure swing adsorption process in which one of the columns is regenerated while the gas mixture is separated in the other one.
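The two-bed phase shift can be illustrated with the short scheduling sketch below. The step names follow the Skarstrom cycle described above; the equal step durations and the 240 s cycle period are illustrative assumptions.

```python
# Minimal sketch of the two-bed phase shift in a Skarstrom cycle.
STEPS = ["pressurisation", "adsorption", "blowdown", "purge"]

def bed_step(t, offset, period=240.0):
    """Return the step of a bed at time t [s]; each step lasts period/4 (assumed)."""
    phase = (t + offset) % period
    return STEPS[int(phase // (period / 4))]

for t in range(0, 241, 60):
    # bed 2 is shifted by half a cycle so that one bed is always adsorbing
    print(f"t = {t:3d} s | bed 1: {bed_step(t, 0):15s} | bed 2: {bed_step(t, 120)}")
```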
Currently, there are several hundred thousand pressure swing adsorption processes installed worldwide. Their size varies from 6 l.hr -1 , often for the production of medical grade oxygen with a purity of 90 % to 2000 m 3 .h -1 typically for the production of pure hydrogen at 99.999 % [START_REF] Shivaji | Pressure Swing Adsorption[END_REF].
For purification, temperature swing adsorption is generally the process of choice. For bulk separation, pressure swing adsorption is more suitable.
Membranes technology
A membrane can be defined as a physical barrier for the selective transport of chemical species. As seen in Fig. 24, it allows the restricted passage of one or more constituents. The flux passing through the membrane is called the permeate, while the retained fraction is called the retentate or concentrate. One or the other of these two flows may be advantageous according to the intended application, and can therefore be used as the final product [START_REF] Soni | A general model for membranebased separation processes[END_REF].
The driving force in a membrane can be a pressure gradient, a concentration gradient, a temperature gradient, or an electrochemical gradient. Thus, the membranes include a wide variety of materials and structures. Table 17 shows the main materials used by membranes manufacturers.
Fig. 24: Schematic representation of a membrane [71]
The membrane performance is evaluated by its ability to separate different species from a gas mixture and to transport a maximum quantity of gas at high speed. These two criteria are generally in competition: a membrane is more efficient when it presents the best flux / selectivity compromise.
A membrane can be gaseous, liquid, solid or a combination thereof [START_REF] Lakshminarayanaiah | Equations of Membrane[END_REF]. It covers a wide range of applications such as ultrafiltration, microfiltration, reverse osmosis, pervaporation, electrodialysis and gas separation. In addition, other medical applications such as blood oxygenators and artificial kidneys require the use of membrane technology. Of the applications listed in Table 18, reverse osmosis and ultrafiltration are the most widely used industrially. The two essential parameters in the operation of a membrane are permeability and selectivity. These parameters provide information on the membrane and describe its performance regarding the transfer of material through the barrier and its ability to separate one or more chemical species in a gas mixture.
The selectivity expressed most often by separation factor is defined as the ratio of the compositions of components i and j in the permeate relative to the composition ratio of these components in the retentate.
S_f(i,j) = (x_i / x_j)_permeate / (x_i / x_j)_retentate    (14)
The permeability is used to indicate the ability of the membrane to feed the permeate. In order to ensure an attractive performance, the permeability Π_i [mol.m-2.Pa-1.s-1] must be high, so as to provide a large transmembrane flux.

Π_i = J_i / ΔP_i    (15)
Where Ji [mol.m -2 .s -1 ] is the transferred flux and ΔPi [Pa] is the partial pressure difference of the constituent i through the membrane.
Generally, the membranes with a high permeability have a low selectivity and vice versa.
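The sketch below applies Equations (14) and (15) to a CO2 / CH4 separation: the separation factor is computed from permeate and retentate compositions, and the permeability from the flux and the partial-pressure difference. All numerical values are illustrative assumptions.

```python
# Applying Equations (14) and (15) with assumed compositions and flux.
x_perm = {"CO2": 0.80, "CH4": 0.20}   # mole fractions in the permeate  (assumed)
x_ret  = {"CO2": 0.10, "CH4": 0.90}   # mole fractions in the retentate (assumed)

# Equation (14): separation factor CO2 / CH4
S_f = (x_perm["CO2"] / x_perm["CH4"]) / (x_ret["CO2"] / x_ret["CH4"])

J_CO2  = 2.0e-3   # transferred flux [mol.m-2.s-1]   (assumed)
dP_CO2 = 4.0e5    # partial-pressure difference [Pa] (assumed)
Pi_CO2 = J_CO2 / dP_CO2   # Equation (15)

print(f"separation factor CO2/CH4 : {S_f:.1f}")
print(f"CO2 permeability          : {Pi_CO2:.2e} mol.m-2.Pa-1.s-1")
```

With these assumed values the separation factor is 36, a reminder that selectivity is a ratio of composition ratios, not a single-component property.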
Membranes for gas separation
The membranes can be classified according to different viewpoints. They can be divided according to the nature: biological or synthetic, according to the morphology, or according to the structure.
Several scientific journals classify membranes into porous and non-porous membranes depending on the structure of the material. In general, a membrane may be thick or thin, and its structure may be homogeneous or heterogeneous with a transfer mechanism which may be active or passive [START_REF] Chen | Développement de nouvelles membranes à base de polyimide pour la séparation CO2 / CH4[END_REF]. Fig. 25 shows the main types of membrane. The figure shows a thin interface that forms the membrane. This interface can be homogeneous at the molecular level, that is to say, completely uniform in composition and structure. It can also be heterogeneous, comprising for example pores.
Fig. 25: Diagram of the main types of membranes [75]
The main types of membranes used today for gas separation are membranes with dense polymeric materials, where transfers follow a solute solubility and diffusion mechanism based on Fick's law.
J_i = - D_ij ∂c_i/∂x    (16)
The structure of a porous membrane is like a sponge, it is also very similar to a conventional filter [START_REF] Chen | Développement de nouvelles membranes à base de polyimide pour la séparation CO2 / CH4[END_REF]. Most of the materials used are characterized by tortuous and interconnected pores, whose precise geometry is inaccessible. Table 19 shows the different categories of membranes according to the size of their pores provided by the International Union of Pure and Applied Chemistry.
Geometric configuration of membranes
The geometric design is an essential step in any membrane process, as this factor defines the active area of the module. The first membranes used were flat sheet and tubular membranes. Today, these systems are still available, but their use has declined because of their low efficiency and high cost. They have mostly been replaced by spiral wound and hollow fiber membranes. Fig. 26 shows the four membrane contactors mentioned above.
The flat sheet membranes are the oldest and simplest to use. They have planar configuration and are mainly rectangular, though other geometries exist for membrane modules designed to rotate. Modules may be stacked to provide a double deck.
Tubular membranes consist of tubes having an inner diameter between 4 and 25 mm [START_REF] Boucif | Modélisation et simulation de contacteurs membranaires pour les procédés d'absorption de gaz acides par solvant chimique[END_REF]. They are based on a simple technology, easy to use and clean, but they are large energy consumers for a low exchange surface area per unit volume.
Spiral wound membranes consist of a flat sheet membrane coiled on itself around a perforated tube, which collects the residue. As seen in Fig. 26.c, the feed flows axially in the channels, while the permeate flows along a spiral path towards the porous tube [START_REF] Boucif | Modélisation et simulation de contacteurs membranaires pour les procédés d'absorption de gaz acides par solvant chimique[END_REF]. The hollow fiber modules consist of a bundle of hollow fibers of outer diameter less than 1 mm. The main advantage of hollow fiber membranes is their compactness, due to the membrane's high packing density. They can also be operated at very high pressures, due to the absence of a membrane support. One disadvantage of hollow fiber membranes is the pressure drop, hence the importance of the fiber length criterion in the design of a separation unit.
Overall, the selection of a given configuration should be addressed individually, based on the membrane properties and the throughput rates desired [START_REF] Ohya | Polyimide membranes: applications, fabrications and properties[END_REF]. Table 20 shows the characteristics of the different geometries of membranes.
Biogas purification by membrane processes
In the case of biogas, separation of compounds is often limited to three species: methane, carbon dioxide and hydrogen sulfide. Polyurethane membranes show significant selectivity between methane and hydrogen sulfide compared to that of methane and carbon dioxide. The other types of membranes such as polyimide, polyamide, polysulfone and cellulose acetate membranes show a significant selectivity between methane and carbon dioxide, at the expense of selectivity between methane and hydrogen sulfide. Today, despite the limited studies and few results, it is possible to obtain methane concentrations above 95 % in the retentate. However, this residue may contain significant hydrogen sulfide concentrations. But the main disadvantage of the membrane process remains the methane loss, with methane weight percentages which can reach 15 % in the permeate. In addition, the membrane processes require treatment upstream, to separate the volatile organic compounds and water vapor. The membrane's resistance to breaking due to the pressure gradient is one important technical limitation. Exposure to certain solvents and materials causes the membrane to get either damaged or blocked up. These limitations are of great importance since membranes usually are expensive.
Cryogenic technology
The term cryogenic refers to the science of very low temperatures. The cryogenic separation process consists in passing the pollutant from the gas phase to the liquid or solid phase by lowering the temperature, in order to separate it from the carrier gas, which is methane. The pollutant is recovered and will then be destroyed or valorized for possible use. This technology is based on the thermodynamics of phase equilibria. The thermodynamic equilibrium between the different phases results in a graph called a phase diagram, which generally uses pressure and temperature as variables.
To purify and upgrade biogas with the cryogenic technology, the gas is chilled and the differences in condensation or solidification temperatures of the different compounds are used to separate impurities and carbon dioxide from the biogas, as can be seen in Table 21. The technology can be used to upgrade biogas by cooling it at atmospheric pressure in order to separate carbon dioxide at temperatures related to the CO2 partial pressures upstream and downstream of the refrigeration unit, typically from -90 °C to -120 °C. Then, the biogas is chilled to produce liquid biogas (LBG) at temperatures which depend on the pressure: -120 °C at 1.5 MPa to -162 °C at atmospheric pressure.
To ensure a high purity of the products and an optimal operation, all traces of hydrogen sulfide should be removed upstream of the process using one of the conventional technologies presented above.
Depending on the temperature of the process, different purity grades can be reached: a lower temperature results in a higher removal efficiency. A study was performed to condense the volatile organic silicon compounds (VOSiC) contained in biogas. This process involves cooling the biogas at different temperatures, in order to evaluate the effect of cold on the removal of volatile organic silicon compounds. Table 22 shows the results obtained by different authors using this process at different temperatures. Production of liquid biogas is a suitable upgrading technology for landfill gas, which usually contains a significant amount of nitrogen, hard to separate from methane with conventional technologies. However, when the methane is liquefied, nitrogen can be separated due to its lower condensation temperature [START_REF] Benjaminsson | Nya renings -och uppgraderingstekniker för biogas[END_REF].
Table 22. Effect of temperature on the abatement of volatile organic silicon compounds
Cooling biogas to very low temperatures is energy-intensive, but in some cases the product is more valuable. If the biogas production plant is situated in the countryside, far from the end users, it is more space-efficient to transport biogas in its liquid state. Today, pressurised (200 bar) gas is delivered in vessels stored on a mobile compressed biogas (CBG) carrier, meaning that a large share of the transported weight is steel rather than gas [START_REF] Pettersson | LCNG-stuidemöjligheter med LNG i fordonsgasförsörjningen i Sverige[END_REF].
Producing liquid biogas also provides a renewable fuel for heavy duty vehicles. The fuel can be stored as liquid biogas on the vehicle, which increases the driving distance per tank. The requirement is that the vehicle runs frequently; otherwise the liquid biogas will vaporize and the methane will be vented to the atmosphere [START_REF] Johansson | Production of liquid biogas, LBG, with cryogenic and conventional upgrading technology. Description of systems and evaluations of energy balances[END_REF].
An advantage of the cryogenic technology is that it does not need any water or solvent to function, although it requires external cooling equipment such as a refrigeration system.
Choice of the separation process
Various technologies are available in order to purify and upgrade biogas. Water scrubbing and pressure swing adsorption dominated the market until 2008. But lately, membrane separation units and chemical scrubbers have increased their market share as seen in Fig. 27.
Fig. 27: Most technologies used for the purification and upgrading of biogas [International Energy Agency]
The choice of the technology to be used depends on multiple parameters, such as the final use, the incentives, the flow rate, the nature and diversity of species present, and the concentrations to treat. The solution chosen has to meet different requirements, both technical and economic. Other criteria can sometimes be decisive: environment, maintenance, temperature and pressure … There is therefore, no universal treatment technology.
Table 23 provides an overview of the different biogas purification and upgrading techniques. It indicates the methane concentration in the purified gas, loss of methane and the substances used in the process such as water and chemicals solvents.
The most important criterion for environmental impact of technology is methane losses. The portion of methane which slips away from raw biogas because of the separation technology itself, contributes to global warming, so regulations of most European countries require that the methane slip has to be burnt. The two main selling points of a biogas treatment unit, are its efficiency and cost. Table 24 compares the main purification and upgrading technologies according to these two criteria. Operating and investment costs are very variable depending on the technology, for abatement performance, often above 90 %.
Generally, the most expensive technologies, both in investment and in operation, are the oxidation ones, but during the work of this thesis, only anaerobic processes are studied. The entries of Table 24 can be summarized as follows:
Absorption: advantages are the possibility of pollutant recovery; disadvantages are high operating costs related to the liquid phase in general, the generation of a polluted aqueous effluent, and the possible need for additional separation operations.
Adsorption on activated carbon (80 - 90 / 7 - 55 / 0.7 - 2.4): advantages are ease of use, tolerance to flow variations and possible pollutant recovery; the disadvantage is the operating cost associated with the regeneration of the adsorbent.
Condensation (50 - 90 / 5 - 37 / 1.4 - 8.2): the advantage is possible pollutant recovery; the disadvantage is the possibility of icing.
The investment costs of the different technologies do not differ greatly, especially at high flow rates. They are presented in Fig. 28 [START_REF] Bauer | Biogas upgrading -Review of commercial technologies[END_REF].
Fig. 28: Comparison of investment costs of different biogas purification and upgrading technologies
Separation technologies: ( __ Δ __ ) Chemical absorption (Amines) ; (--□--) Water wash ; (-. ○-. ) Membranes ; ( … + … ) Adsorption (PSA)
Chemical absorption using amines as chemical solvent is slightly more expensive in terms of investment, and the membrane separation process is less costly for low flow rates. This investment cost criterion converges for all technologies at higher flow rates.
Conclusion
This chapter has introduced the various technologies used for the purification and upgrading of biogas. These separation methods were compared according to several criteria such as separation efficiency, the environmental impact and investment and operating costs.
The work of this thesis is part of the project led by the company Cryo Pur®, which aims to create an innovative biogas cryogenic purification and upgrading process for the production of a renewable fuel and liquid carbon dioxide.
For carbon dioxide capture, the Cryo Pur® company has developed a new technology which consists of anti-sublimating the carbon dioxide on a low temperature surface (from -90 °C to -120 °C), thus transforming CO2 directly from its gaseous phase into a solid phase frosted on the cold surface [START_REF] Clodic | CO2 capture by anti-sublimation -Thermo-economic process evaluation[END_REF]. Given the need to liquefy the biogas at low temperatures for it to be used as vehicle fuel, the best alternative to the conventional technologies is therefore to upgrade biogas with the cryogenic technology.
Biogas upgrading depends on the concentrations of hydrogen sulfide present. Indeed, this compound must be completely eliminated upstream of the process to ensure high quality products and to prevent corrosion of equipment such as heat exchangers used in the cryogenic process. Hence the need to use a process with a very high efficiency, to be able to eliminate all the hydrogen sulfide present in the biogas. On the whole, the lowest methane losses are indicated for chemical absorption and adsorption processes, and the highest one relates to membranes and water wash. Finally, the choice was focused on the following two technologies used for the removal of hydrogen sulfide:
Chemical absorption in a structured packed column using sodium hydroxide (NaOH) as solvent.
Adsorption in a fixed bed using activated carbon.
The next chapter will present the technology developed by Cryo Pur® Company for purification and upgrading of biogas.
Introduction
Experiments were performed on the industrial demonstrator "BioGNVAL" treating 85 Nm 3 /h of biogas from the Valenton water treatment plant, the second biggest in France run by the SIAAP (Public society serving the Paris region). The demonstrator shown in Fig. 29 was developed by the Cryo Pur® Company. It was built in partnership with SUEZ as part of the BioGNVAL project, and partially funded by the 'Invest in the Future' program run by the ADEME (French Environment and Energy Management Agency). GNVert (Engie) and IVECO are also partners in the BioGNVal project, providing the Bio-LNG distribution station and the heavy goods vehicle Flex Fuel gas / Bio-LNG respectively.
Fig. 29: BioGNVAL demonstrator located at Valenton water treatment plant [9]
The BioGNVAL pilot plant uses a cryogenic method to purify and liquefy biogas efficiently without loss of methane and without emitting greenhouse gases. The system generates two products from biogas: liquid bio-methane and bioCO2 at purity level greater than 99.995 % respecting EIGA (European Industrial Gases Association) specifications [START_REF]Carbon Dioxide Source Certification, Qualify Standards and Verification[END_REF].
The general principle of operation of the BioGNVAL pilot plant is depicted in Fig. 30. It takes place in three main stages:
Pretreatment or purification, which removes trace compounds present in the biogas such as hydrogen sulfide, water vapor and siloxanes.
CO2 capture or biogas upgrading, which consists in separating carbon dioxide from the biogas. The content of carbon dioxide in biogas is typically greater than 30 %.
Liquefaction of the biomethane after purification and upgrading of the biogas.
Fig. 30: Schematic representation of Cryo Pur® system [9]
This chapter is divided into two sections: the first part consists of a general presentation of the pilot plant; the second part is dedicated to the operating principle of the subsystems.

The raw biogas, whose conditions and composition are presented in Table 25, comes from the anaerobic digester through the biogas line. The latter distributes the biogas to all sub-systems, starting with the desulfurization subsystem, up to the liquefaction subsystem. Conditions at the outputs of each sub-system in terms of composition and temperature are defined. If one of the conditions is not met, the biogas is routed through the biogas treatment line to the flare. These conditions are shown in Table 26. Once the full treatment is performed, the liquefied biogas is stored in a mobile container presented in Fig. 31.

The operating principle of reactive absorption is simple. The system mainly comprises a structured packed column, a water circuit and two heat exchangers, used to cool the biogas and the liquid phase. Fig. 32 shows the apparatus setup for the desulfurization process.
General presentation of the BioGNVAL pilot plant
Table 25. Conditions and composition of the raw biogas treated by BioGNVAL pilot plant
Table 26. Conditions of passage from one subsystem to another [9]
The biogas is saturated with water vapor at the input of the pilot plant. It first passes through a heat exchanger (green line) to cool the gas phase and condense a portion of the water vapor contained therein. A phase separator recovers the condensed water vapor and sends it to a drainage tank. Thereafter, the biogas enters the bottom of the absorption column, where it is contacted counter-currently with the washing water sprayed from the top (violet line). This water is neutralized by an aqueous solution of sodium hydroxide. The injected quantity is controlled by a pH meter placed on the tank TK-230-04. This tank, shown in Fig. 32, provides the column with the liquid phase. The liquid phase is recirculated by a pump and is cooled by an exchanger to a temperature slightly higher than 2 °C to prevent freezing.
The biogas finally exits from the top of the column at a temperature of about 5 °C. The cooling duty is provided by the liquid phase on the surface of the packing, in direct contact within the column. Highly efficient mass and heat transfer between the liquid and the biogas is achieved thanks to the packing surface. In order to test the adsorption technology, the absorption process can be bypassed. In that case, the biogas passes through two fixed bed adsorption columns placed in series, as seen in Fig. 33.
Fig. 33: Piping and instrumentation diagram of adsorption subsystem for the removal of hydrogen sulfide [9] Equipment: CV (Control Valve) ; FV (Flow Valve) ; MV (Manual Valve) ; PSV (Pressure Safety Valve) ; BL (Blower) ; F (Filter) ; HX (Heat Exchanger) ; SP (Separator) ; CU (Condensing Unit) ; TK (Tank). Instrumentation: FT (Flow Transmitter) ; TE (Temperature Element) ; AT (Analyzer Transmitter) ; XY (Limit Switch)
These two columns, packed with impregnated activated carbon, provide a continuous treatment of hydrogen sulfide. To further improve the performance of the activated carbon, a small amount of oxygen is added to oxidize the hydrogen sulfide, forming larger molecules that remain blocked in the pores.
Dehumidification and siloxanes icing subsystem
The biogas leaves the desulfurization subsystem at 5 °C with an H2S content lower than 1 ppm. Dehumidification and cooling of the biogas continue down to -40 °C. At this temperature, only 125 ppm of water vapor remains in the biogas and the heaviest siloxanes are removed.
Biogas cooling continues down to a temperature of -87 °C. This eliminates the siloxanes as well as the water vapor, whose content is reduced to less than 1 ppm at the outlet of the subsystem.
To ensure a continuous operation and to prevent the accumulation of ice that could block the passage of biogas, the chillers used contain two evaporators placed in parallel to alternate operation in frosting and defrosting mode.
Carbon dioxide capture subsystem
After having been purified of hydrogen sulfide, water vapor and siloxanes, the biogas is fed into the CO2 capture subsystem to be upgraded. The biogas is now composed of methane and carbon dioxide at atmospheric pressure, that is to say, at a carbon dioxide partial pressure below its triple-point pressure. After cooling the biogas to -120 °C in this subsystem, the carbon dioxide undergoes the phenomenon of anti-sublimation, which means that it is transformed directly from its gaseous phase into a solid phase frosted on the cold surface of the heat exchanger, as seen in Fig. 34.
To allow continuous operation of the upgrading system, two heat exchangers are used. While the first is frosting the carbon dioxide, the second operates in defrost mode. Thus carbon dioxide is recovered in the liquid phase and then stored in a cryogenic vessel. This strategy allows recovery of CO2 at a very high purity level (99.995 %), which could be used for industrial and food applications.
Biogas liquefaction subsystem
After purification and upgrading of the biogas, the biomethane is sent to the liquefaction subsystem, which is composed of a compression unit, a cooling and liquefaction unit and a storage unit for the liquefied biomethane.
At the outlet of the upgrading unit, a gas analyser ensures that the biomethane produced contains less than 2.5 % of residual carbon dioxide. Once the required biomethane quality is reached, it is supplied to the liquefaction subsystem to be compressed and liquefied. Compression of the biomethane increases its liquid-vapor saturation temperature, which allows its liquefaction at a high enough temperature level, reducing the electrical consumption of the refrigeration machine.
Fig. 34: Carbon dioxide anti-sublimation [9]
Experimental results concerning the removal of hydrogen sulfide by chemical absorption using sodium hydroxide
The main objectives of the experiments on the absorption column are to maintain the concentration of hydrogen sulfide between 0 and 1 ppm at the outlet of the column throughout the testing period, for a content at the entrance equal to 20 ppm. The biogas temperature at the outlet of the column should be slightly greater than 2 °C whatever its temperature at the inlet.
These objectives have been reached on the demonstrator. The hydrogen sulfide content at the outlet of the absorption column was maintained between 0 and 1 ppm as seen in Fig. 35. The biogas temperature at the outlet of the packing column was maintained between 5 and 6 °C even when the biogas temperature at the inlet reaches 35 °C as shown in Fig. 36.
The concentration measurements are made using a biogas analyser, model "Gas 3200 R Biogas", bought from the "Gas Engineering and Instrumentation Technologies Europe" (GEIT®) company. The measurements of H2S concentration are made with a 3-electrode electrochemical cell designed for biogas applications with several measuring ranges: from 0 - 50 ppm to 0 - 9999 ppm. The sensitivities of the measurement tools used in the experiments are shown in Table 27.

Sodium hydroxide was used as chemical solvent for the removal of hydrogen sulfide. The weight percentage of NaOH in water is equal to 30.5 wt%. Averaged over all experiments, the sodium hydroxide consumption was assessed at 6 l/h (solution containing 30.5 wt% of NaOH and 69.5 wt% of H2O). This consumption was calculated with an indicator allowing the NaOH level measurement in the tank TK-240-02 (see Fig. 32). This consumption far exceeds the theoretical consumption of 1.35 l/h of commercial caustic soda. This overconsumption is explained by desorption of hydrogen sulfide, because the reaction between hydrogen sulfide and sodium hydroxide is reversible (R.11). It leads to the formation of sodium sulfide (Na2S), which is unstable in water:

2 NaOH + H2S ⇌ Na2S + 2 H2O    (R.11)

Fig. 37 shows the desorption phenomenon observed during the experiments. Indeed, stopping the injection of sodium hydroxide at 13:45:00 (see Fig. 37) shows that the hydrogen sulfide content at the outlet of the column becomes greater than its content at the inlet. This explains the overconsumption of the solvent. To overcome this problem, the injection of sodium hypochlorite (NaOCl) has been proposed to prevent the regeneration of hydrogen sulfide and to obtain by-products soluble in water (Na2SO4 and NaCl), as seen in Reaction 2:

Na2S + 4 NaOCl → Na2SO4 + 4 NaCl    (R.2)

The injections of sodium hydroxide and sodium hypochlorite will respectively be controlled by pH and conductivity measurements. This solution will reduce the consumption of the solvents.
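As an order-of-magnitude check, the sketch below computes the NaOH solution demand that the 2:1 stoichiometry of (R.11) implies for the H2S load alone, using the biogas flow and H2S content of the experiments (85 Nm3/h, 20 ppm). The solution density is an assumption, and the calculation deliberately neglects the NaOH consumed by co-absorbed CO2, which is expected to dominate the overall demand and explains why the theoretical figure of 1.35 l/h quoted above is much larger.

```python
# NaOH demand for H2S alone, assuming 2 NaOH + H2S -> Na2S + 2 H2O (R.11).
Q_biogas = 85.0        # biogas flow [Nm3/h]
y_H2S    = 20e-6       # H2S mole fraction [-]
Vm       = 0.022414    # molar volume at normal conditions [Nm3/mol]
M_NaOH   = 0.040       # molar mass of NaOH [kg/mol]
w_NaOH   = 0.305       # NaOH mass fraction of the commercial solution
rho_sol  = 1330.0      # solution density [kg/m3] (assumed)

n_H2S  = Q_biogas * y_H2S / Vm        # mol/h of H2S entering the column
m_NaOH = 2.0 * n_H2S * M_NaOH         # kg/h of pure NaOH (2:1 stoichiometry)
V_sol  = m_NaOH / (w_NaOH * rho_sol)  # m3/h of commercial solution

print(f"H2S load            : {n_H2S:.3f} mol/h")
print(f"NaOH solution demand: {V_sol * 1000:.3f} l/h (H2S only)")
```

With these inputs the H2S-only demand is about 0.015 l/h, roughly two orders of magnitude below the measured 6 l/h, consistent with both the desorption phenomenon and CO2 co-absorption described above.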
Conclusion
This section has presented two different processes (absorption and adsorption) used for the removal of H2S in order to reduce its content to less than 1 ppm throughout the operation.
The other undesirable components present in the biogas such as H2O, siloxanes and CO2 are captured through cryo-condensation, freezing each component.
The CO2 is retrieved in liquid form at a high level of purity enabling revalorization. Once purified, the biomethane fulfills the characteristics necessary to be used as fuel for heavy goods vehicles (HGV).
To allow the absorption column to operate at full capacity, a hydrodynamic study is necessary. This study will optimise the flow rates involved and ensure a better mass transfer in the packed column, with a lower consumption of the liquid phase and a lower pressure drop. This study, which will be presented in the next chapter, is also of great interest for the design of packed columns.

This chapter compares three existing models used for the prediction of hydrodynamic parameters in structured packing columns.
These models are used to evaluate pressure drop, liquid holdup, effective interfacial area, mass transfer coefficients and transition points. The results of these models are compared with experimental data in order to select the one with the best fit.
The comparisons were carried out using two systems, Air - Water and Air - Kerosol 200, and a Flexipac 350Y structured packing.
The selected model is based on semi-empirical correlations containing constants and exponents defined from experimental measurements. To make the model more representative of the system of interest (biogas containing H2S / aqueous sodium hydroxide solution), these constants were modified and some exponents were adjusted as functions of the superficial liquid velocity and density.
Once the model was modified, the pressure drop results were compared with experimental data obtained on the BioGNVAL demonstrator. The results obtained are in good agreement, but it should be noted that this model may lose accuracy for other applications.
This model is therefore well suited to accurately predicting the three operating regions of a small-scale structured packing column used for biogas or natural gas applications.
Introduction
Today, in modern absorption columns, structured packings are widely used, thanks to their higher capacity and lower pressure drop compared to random packings. Structured packings were used for the first time in 1950 [START_REF] Strigle | Packed tower design and applications, Random and structured packings[END_REF]. They are in continuous development to expand their use and improve their efficiency. They provide a large surface area for the liquid and gas phases to be in direct contact within the column. High efficient mass transfer between the two phases is achieved thanks to the packing surface. This work compares three existing models used for the prediction of hydrodynamic parameters in structured packing columns. These models are used to evaluate pressure drop, liquid holdup, effective interfacial area, mass transfer coefficients and transition points. The results obtained with these models are compared to experimental data in order to choose the one with the best fit. Comparisons were made using Flexipac 350Y structured packing and two systems: Air -Water and Air -Kerosol 200. The model chosen is based on semi-empirical correlations using constants and exponents defined according to experimental measurements. To adapt the model to biogas application and to make it more representative of the system of interest, these constants were optimized and some exponents have been adjusted. Once the model modified, the results of pressure drop were compared to data from BioGNVAL pilot plant.
Theoretical principles
In a packed column, hydrodynamics and mass transfer processes occur simultaneously. They are correlated, and the linking parameter is the liquid holdup hL, defined as the volume of liquid per unit volume of the column. Equation (17), defined by Chan and Fair [START_REF] Chan | Industrial and Engineering Chemistry Process Design and Development[END_REF] for sieve trays, illustrates the relation between the two processes.
k_V a_e = 316 D_V^0.5 (1030 f + 867 f^2) / h_L^0.5    (17)
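As a quick numerical illustration of Equation (17), the sketch below evaluates k_V a_e for assumed values of the gas-phase diffusivity, the fractional approach to flooding f and the liquid holdup; the inputs are illustrative, not taken from the cited experiments.

```python
# Evaluating the Chan and Fair correlation, Equation (17).
D_V = 1.5e-5   # gas-phase diffusivity (assumed)
f   = 0.6      # fractional approach to flooding [-] (assumed)
h_L = 0.05     # liquid holdup [-] (assumed)

kV_ae = 316.0 * D_V ** 0.5 * (1030.0 * f + 867.0 * f ** 2) / h_L ** 0.5
print(f"k_V * a_e = {kV_ae:.0f} (in the units of the correlation)")
```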
Regarding the hydrodynamic analysis, increasing the velocity of liquid and gas, results in an increase of the liquid holdup and the thickness of the liquid film which leads to an increase in pressure drop.
About mass transfer analysis, increasing liquid holdup causes the enlargement of the interfacial area leading to higher mass transfer rates.
The curve which represents the evolution of the pressure drop or the liquid holdup as a function of the gas capacity factor Fc is divided by two points (loading and flooding points) into three operating regions as seen in Fig. 38.
The liquid flow rate is not influenced by the counter-current flow of the gas in the preloading region, as can be seen in Fig. 38.b, which presents the evolution of the liquid holdup as a function of the gas capacity factor defined by Equation (18).
F_c = u_V √ρ_V    (18)
The loading point represented by the line AA in Fig. 38 is reached when the slope of the liquid holdup curve starts to increase, or when the wet pressure drop curve starts to deviate from the pressure drop in a dry column. The flooding point is represented by the line BB in Fig. 38. It is the point where the slope of the pressure drop and liquid holdup curves tends toward infinity. It is therefore necessary to predict the transition points accurately, because they characterize the capacity of a packed column. According to Paquet [START_REF] Friedrich | Establishing a facility to measure the efficiency of structured packing under total reflux[END_REF], under-predicting the flooding point will prevent the column from operating at its optimal conditions, and its capacity could be very low. However, over-predicting the flooding point may lead to a higher pressure drop, which could be problematic.
Billet and Schultes model
As reported by Paquet [START_REF] Friedrich | Establishing a facility to measure the efficiency of structured packing under total reflux[END_REF], the main disadvantage of this model is that it requires six specific constants for each type of packing. The ones needed for Flexipac 350Y and for some other types of packing are presented in Table 28. Equations (19), (20) and (21), illustrated in Table 29, are used by Billet and Schultes to calculate the effective interfacial area at the loading point, in the loading region and at the flooding point, respectively.
Table 29. Effective interfacial area in packing columns using Billet and Schultes model [98]
Parameter Correlation
Effective interfacial area at loading point
(a_e / a)_lp = 1.5 (a d_h)^-0.5 Re_L^-0.2 We_L^0.75 Fr_L^-0.45    (19)
The Billet and Schultes model is composed of several correlations that describe liquid holdup and pressure drop in the preloading, loading and flooding regions. Velocities and liquid holdup at loading and flooding points are calculated using the equations listed in Table 30.
Table 31 presents the correlations used by Billet and Schultes to calculate the liquid holdup in the loading region. This property depends on the liquid holdup in the preloading region and at the flooding point. The first one is theoretically derived from a force balance, while the second is purely empirical. The liquid holdup in the preloading region does not depend on the gas properties. It is only a function of the liquid properties and the liquid velocity, as seen in Equation (22). As stated in the thesis of Paquet [START_REF] Friedrich | Establishing a facility to measure the efficiency of structured packing under total reflux[END_REF], the hydraulic area of the packing accounts for the surfaces that are not completely wetted by the liquid flow.
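Since Equation (22) is not reproduced here, the sketch below uses the classical Billet and Schultes form of the preloading holdup, h_L = (12 μ_L u_L a^2 / (g ρ_L))^(1/3), assumed to be the expression referred to; it indeed depends only on the liquid properties and velocity. The input values are illustrative for a Flexipac 350Y-type packing.

```python
# Sketch of the Billet and Schultes preloading liquid holdup (assumed form).
g     = 9.81
a     = 350.0    # specific packing surface [m2/m3]
mu_L  = 1.3e-3   # liquid viscosity [Pa.s] (water near 10 C, assumed)
rho_L = 1000.0   # liquid density [kg/m3] (assumed)
u_L   = 0.005    # superficial liquid velocity [m/s] (assumed)

h_L = (12.0 * mu_L * u_L * a ** 2 / (g * rho_L)) ** (1.0 / 3.0)
print(f"preloading liquid holdup h_L = {h_L:.4f} m3/m3")
```

With these inputs h_L is close to 0.1 m3/m3, a typical order of magnitude for structured packings below the loading point.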
The equations used to calculate pressure drop are listed in Table 32.
f(s) = (h_L / h_L,lp)^0.3 exp(Re_L / 200)    (44)
The expression of dry pressure drop is obtained by applying a force balance. The wall factor K is used to take into account the free spaces more available at the wall. The constant Cp used to calculate the resistance coefficient ψ0 characterizes the geometry of the packing.
For the wetted packing column, Equation ( 43) used to calculate pressure drop replaces the void fraction (ε) by an effective void fraction (ε -hL) which depends on liquid holdup, reducing the volume available for the gas flow. This equation introduces a wetting factor fw to account for any change in the surface of the packing caused by the wetting action [START_REF] Friedrich | Establishing a facility to measure the efficiency of structured packing under total reflux[END_REF].
SRP model
The SRP (Separations Research Program) model [START_REF] Fair | A comprehensive model for the performance of columns containing structured packings[END_REF] was developed at the University of Texas [100]. The latest version of this model was published in the work of Fair et al. in 2000 [96]. According to Paquet [START_REF] Friedrich | Establishing a facility to measure the efficiency of structured packing under total reflux[END_REF], the SRP model considers the void fraction as a series of wet columns where the gas flow passes through. Unlike the Billet and Schultes model, the geometry depends on the angle and dimensions of corrugations.
To calculate liquid holdup and effective interfacial area, the SRP model uses a correction factor that takes into account the packing surface that is not completely wetted by the liquid flow.
The prediction of the effective interfacial area is based on a simple equation that depends on the liquid holdup correction factor and a surface enhancement factor as seen in Table 33. The surface enhancement factor is equal to 0.35 for stainless steel sheet metal packing [100].
Table 33. Effective interfacial area in packing columns using SRP model [START_REF] Fair | A comprehensive model for the performance of columns containing structured packings[END_REF] Parameter Correlation
Reynolds number
𝑅𝑒 𝐿 = 𝑠 𝑢 𝐿 𝜌 𝐿 𝜇 𝐿 (46)
Froude number
𝐹𝑟 𝐿 = 𝑢 𝐿 2 𝑠 𝑔 (47)
Weber number
𝑊𝑒 𝐿 = 𝑠 𝜌 𝐿 𝑢 𝐿 2 𝑔 𝜎 𝐿 (48)
Solid-liquid film contact angle
For σ_L ≤ 0.055 N.m-1: cos γ = 0.9
For σ_L > 0.055 N.m-1: cos γ = 5.211 × 10^(-16.835 σ_L)    (49)
Correction factor
F_t = 29.12 (We_L Fr_L)^0.15 s^0.359 / [Re_L^0.2 ε^0.6 (1 - 0.93 cos γ) (sin ϴ)^0.3]    (50)
Effective interfacial area
𝑎 𝑒 𝑎 = 𝐹 𝑡 𝐹 𝑆𝐸 (51)
The SRP model uses the effective gravity which takes into account forces that oppose the flow of the liquid film over the packing. These forces are caused by the pressure gradient, buoyancy and shear stress in the gas phase [START_REF] Friedrich | Establishing a facility to measure the efficiency of structured packing under total reflux[END_REF]. An iterative approach exploiting this effective gravity is used to calculate liquid holdup. The calculation steps followed for predicting liquid holdup in a packing column are shown in Table 34.
Initial condition for the iterative approach:
$\left( \dfrac{\Delta P}{\Delta z} \right)_{iter} = \left( \dfrac{\Delta P}{\Delta z} \right)_d$ (53)

Iterative approach:
$h_L = \left( \dfrac{4\, F_t}{s} \right)^{2/3} \left[ \dfrac{3\, \mu_L\, u_L}{\rho_L\, \varepsilon\, g_{eff}\, \sin\theta} \right]^{1/3}$, $\quad \dfrac{\Delta P}{\Delta z} = \dfrac{(\Delta P / \Delta z)_d}{\left[ 1 - h_L\, (71.35\, s + 0.614) \right]^5}$ (54)

Convergence:
If $\dfrac{\Delta P}{\Delta z} \neq \left( \dfrac{\Delta P}{\Delta z} \right)_{iter}$, set $\left( \dfrac{\Delta P}{\Delta z} \right)_{iter} = \dfrac{\Delta P}{\Delta z}$ and restart from (52); if $\dfrac{\Delta P}{\Delta z} \approx \left( \dfrac{\Delta P}{\Delta z} \right)_{iter}$, convergence is reached (55)
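The iteration of Table 34 can be transcribed almost directly. The sketch below assumes the effective-gravity expression of the published SRP II model, $g_{eff} = g\,(\rho_L - \rho_V)/\rho_L \left( 1 - (\Delta P/\Delta z)/(\Delta P/\Delta z)_{flood} \right)$ with a flooding gradient of about 1025 Pa/m, since Equation (52) is not reproduced in this excerpt.

```python
import math

def srp_liquid_holdup(u_L, mu_L, rho_L, rho_V, s, eps, theta, F_t, dPdz_dry,
                      g=9.81, dPdz_flood=1025.0, tol=1e-8, max_iter=200):
    """Fixed-point iteration of Equations (53)-(55) for the SRP liquid holdup.
    theta is the corrugation angle (rad); the effective-gravity form and the
    flooding gradient of ~1025 Pa/m are assumptions from the published model."""
    dPdz = dPdz_dry                                  # initial condition, Eq. (53)
    for _ in range(max_iter):
        # assumed effective gravity: buoyancy reduced by the pressure gradient
        g_eff = g * (rho_L - rho_V) / rho_L * (1.0 - dPdz / dPdz_flood)
        h_L = (4.0 * F_t / s) ** (2.0 / 3.0) * (
            3.0 * mu_L * u_L / (rho_L * eps * g_eff * math.sin(theta))
        ) ** (1.0 / 3.0)                             # holdup update, Eq. (54)
        dPdz_new = dPdz_dry / (1.0 - h_L * (71.35 * s + 0.614)) ** 5
        if abs(dPdz_new - dPdz) < tol:               # convergence test, Eq. (55)
            return h_L, dPdz_new
        dPdz = dPdz_new
    raise RuntimeError("SRP holdup iteration did not converge")
```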
The constants A and B used to calculate the pressure drop in a dry column depend on the type of the packing. For metal structured packings, A and B are equal to 0.177 and 88.77 respectively [101]. Table 35 presents the equations used for the prediction of pressure drop in preloading and loading regions.
Table 35. Pressure drop in packing columns using SRP model [96]

Liquid film thickness:
$\delta = \left( \dfrac{3\, \mu_L\, u_L}{a\, g\, \rho_L\, \sin\theta} \right)^{1/3}$ (56)

Gas flow channel diameter:
$d_{hV} = \dfrac{(bh - 2s\delta)^2}{bh \left[ \left( \dfrac{bh - 2s\delta}{2h} \right)^2 + \left( \dfrac{bh - 2s\delta}{b} \right)^2 \right]^{0.5} + \dfrac{bh - 2s\delta}{2h}}$ (57)

Gas capacity factor at loading point:
$F_{c,lp} = \left[ 0.053\, g\, d_{hV}\, \varepsilon^2\, (\sin\theta)^{1.15}\, (\rho_L - \rho_V) \left( \dfrac{u_L}{u_V} \sqrt{\dfrac{\rho_L}{\rho_V}} \right)^{-0.25} \right]^{0.5}$ (58)

Pressure drop enhancement factor:
$F_l = 3.8 \left( \dfrac{F_c}{F_{c,lp}} \right)^{2/\sin\theta} \left( \dfrac{u_L^2}{g\, d_{hV}\, \varepsilon^2} \right)^{0.13}$ (59)

Pressure drop in preloading region:
$\left( \dfrac{\Delta P}{\Delta z} \right)_{pl} = \left( \dfrac{\Delta P}{\Delta z} \right)_d \dfrac{1}{\left[ 1 - h_L\, (71.35\, s + 0.614) \right]^5}$ (60)

Pressure drop in loading region:
$\dfrac{\Delta P}{\Delta z} = F_l \left( \dfrac{\Delta P}{\Delta z} \right)_{pl}$ (61)
Delft model
The Delft model [97] was developed in a joint academic project between the Montz Company and the Delft University of Technology. The Delft model considers that the entire packing surface area is wetted by the liquid film. The prediction of the effective interfacial area with the Delft model is based on the empirical correlation presented in Equation (62).
𝑎 𝑒 = 𝑎 (1 -𝛺) (1 + 𝐴 𝑢 𝐿 𝐵 ) (62)
According to Paquet, Ω is equal to 0.1 for Montz packing and for most packings with holes, such as Flexipac and Mellapak. A and B are constants specific to the type and size of the packing; for example, these two constants are respectively equal to 2.143 × 10⁻⁶ and 1.5 for Montz® packing B1-250 [100].
The Delft model introduces a new expression to define the effective liquid flow angle, as seen in Equation (63).
𝛼 𝐿 = 𝑎𝑟𝑐𝑡𝑎𝑛 [ 𝑐𝑜𝑠(90 -𝜃) 𝑠𝑖𝑛(90 -𝜃) 𝑐𝑜𝑠 [𝑎𝑟𝑐𝑡𝑎𝑛 ( 𝑏 2 ℎ )] ] (63)
This model uses a simple function for predicting the liquid holdup, consisting of the product of the specific surface area of the packing and the thickness of the liquid film.
$h_L = \delta\, a$ (64)

The expression for the liquid film thickness is the same as that adopted by the SRP model, except that it uses the effective liquid flow angle.
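A minimal transcription of Equations (62)-(64) is sketched below; the constants Ω, A and B default to the Montz B1-250 values quoted above, and the film thickness uses the SRP expression of Equation (56) with the effective liquid flow angle.

```python
import math

def delft_hydraulics(a, u_L, mu_L, rho_L, theta_deg, b, h,
                     omega=0.1, A=2.143e-6, B=1.5, g=9.81):
    """Delft effective interfacial area and liquid holdup, Eqs. (62)-(64).
    a : packing specific area (m2/m3); b, h : corrugation base and height
    (m); theta_deg : corrugation angle (deg).  Defaults are the Montz
    B1-250 constants quoted in the text."""
    theta = math.radians(theta_deg)
    # effective liquid flow angle, Eq. (63)
    alpha_L = math.atan(
        math.cos(math.pi / 2 - theta)
        / (math.sin(math.pi / 2 - theta) * math.cos(math.atan(b / (2 * h))))
    )
    # liquid film thickness: SRP form (Eq. 56) with alpha_L instead of theta
    delta = (3.0 * mu_L * u_L / (rho_L * g * a * math.sin(alpha_L))) ** (1 / 3)
    h_L = delta * a                                    # Eq. (64)
    a_e = a * (1.0 - omega) * (1.0 + A * u_L ** B)     # Eq. (62) as printed
    return a_e, h_L, alpha_L
```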
For the prediction of the pressure drop, the Delft model uses the same equations as the SRP model; the only difference lies in the preloading region. As reported by Paquet, the Delft model assumes that the gas flows in a regular zigzag pattern through the packed column. It uses three parameters which contribute to the calculation of the pressure drop in the preloading region. The details of the calculation of the pressure drop in the preloading region are summarized in Table 36.
Table 36. Pressure drop in preloading region using Delft model [97]
Parameter Correlation
Effective gas velocity:
$u_{V,e} = \dfrac{u_V}{\varepsilon\, (1 - h_L)\, \sin\theta}$ (65)

Effective liquid velocity:
$u_{L,e} = \dfrac{u_L}{h_L\, \varepsilon\, \sin\alpha_L}$ (66)

Relative Reynolds number for the gas phase:
$Re_{Vr} = \dfrac{\rho_V\, d_{hV}\, (u_{V,e} + u_{L,e})}{\mu_V}$ (67)

Effective Reynolds number for the gas phase:
$Re_{Ve} = \dfrac{\rho_V\, d_{hV}\, u_{V,e}}{\mu_V}$ (68)

Fraction of the flow channel occupied by the liquid phase:
$\varphi = \dfrac{2s}{2s + b}$ (69)

Fraction of the channels ending at the column wall:
$\psi = \dfrac{2}{\pi} \arcsin\left( \dfrac{h_{pe}}{d_c\, \tan\theta} \right) + \dfrac{2\, h_{pe}}{\pi\, d_c^2\, \tan\theta} \left( d_c^2 - \dfrac{h_{pe}^2}{\tan^2\theta} \right)^{0.5}$ (70)

Gas/liquid friction coefficient:
$\xi_{GL} = \left[ -2 \log_{10} \left( \dfrac{\delta}{3.7\, d_{hV}} - \dfrac{5.02}{Re_{Vr}} \log_{10} \left( \dfrac{\delta}{3.7\, d_{hV}} + \dfrac{14.5}{Re_{Vr}} \right) \right) \right]^{-2}$ (71)

Gas/gas friction coefficient:
$\xi_{GG} = 0.722\, (\cos\theta)^{3.14}$ (72)

Direction change factor for the bulk zone:
$\xi_{bulk} = 1.76\, (\cos\theta)^{1.63}$ (73)

Direction change coefficient for the wall zone:
$\xi_{wall} = 34.19\, u_L^{0.44}\, (\cos\theta)^{0.779} + \dfrac{4092\, u_L^{0.31} + 4715\, (\cos\theta)^{0.445}}{Re_{Ve}}$ (74)

Coefficient for gas/liquid friction losses:
$\varsigma_{GL} = \xi_{GL}\, \varphi\, \dfrac{h_{pb}}{d_{hV}\, \sin\theta}$ (75)

Coefficient for gas/gas friction losses:
$\varsigma_{GG} = \xi_{GG}\, (1 - \varphi)\, \dfrac{h_{pb}}{d_{hV}\, \sin\theta}$ (76)

Coefficient for losses caused by direction change:
$\varsigma_{DC} = \dfrac{h_{pb}}{h_{pe}} \left( \xi_{bulk} + \psi\, \xi_{wall} \right)$ (77)

Pressure drop in preloading region:
$\Delta P_{pl} = \Delta P_{GG} + \Delta P_{GL} + \Delta P_{DC} = \dfrac{1}{2}\, \rho_V\, u_{V,e}^2\, (\varsigma_{GG} + \varsigma_{GL} + \varsigma_{DC})$ (78)
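The chain of Equations (65)-(78) lends itself to a direct transcription. In the sketch below, all geometric inputs (corrugation side s, base b, packed bed height h_pb, packing element height h_pe, column diameter d_c, hydraulic diameter d_hV) and the film thickness δ are assumed known; angles are in radians.

```python
import math

def delft_preloading_dp(u_V, u_L, rho_V, mu_V, eps, h_L, theta,
                        alpha_L, delta, d_hV, s, b, h_pb, h_pe, d_c):
    """Preloading-region pressure drop, Equation (78): the sum of gas/gas
    friction, gas/liquid friction and direction-change losses along the
    zigzag gas flow path assumed by the Delft model."""
    u_Ve = u_V / (eps * (1.0 - h_L) * math.sin(theta))           # Eq. (65)
    u_Le = u_L / (h_L * eps * math.sin(alpha_L))                 # Eq. (66)
    Re_Vr = rho_V * d_hV * (u_Ve + u_Le) / mu_V                  # Eq. (67)
    Re_Ve = rho_V * d_hV * u_Ve / mu_V                           # Eq. (68)
    phi = 2.0 * s / (2.0 * s + b)                                # Eq. (69)
    psi = (2.0 / math.pi * math.asin(h_pe / (d_c * math.tan(theta)))
           + 2.0 * h_pe / (math.pi * d_c**2 * math.tan(theta))
           * math.sqrt(d_c**2 - (h_pe / math.tan(theta))**2))    # Eq. (70)
    xi_GL = (-2.0 * math.log10(
        delta / (3.7 * d_hV)
        - 5.02 / Re_Vr * math.log10(delta / (3.7 * d_hV) + 14.5 / Re_Vr)
    )) ** -2                                                     # Eq. (71)
    xi_GG = 0.722 * math.cos(theta) ** 3.14                      # Eq. (72)
    xi_bulk = 1.76 * math.cos(theta) ** 1.63                     # Eq. (73)
    xi_wall = (34.19 * u_L**0.44 * math.cos(theta)**0.779
               + (4092.0 * u_L**0.31 + 4715.0 * math.cos(theta)**0.445)
               / Re_Ve)                                          # Eq. (74)
    zeta_GL = xi_GL * phi * h_pb / (d_hV * math.sin(theta))      # Eq. (75)
    zeta_GG = xi_GG * (1.0 - phi) * h_pb / (d_hV * math.sin(theta))  # Eq. (76)
    zeta_DC = h_pb / h_pe * (xi_bulk + psi * xi_wall)            # Eq. (77)
    return 0.5 * rho_V * u_Ve**2 * (zeta_GG + zeta_GL + zeta_DC)  # Eq. (78)
```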
Models evaluation
The three models introduced in the previous section are evaluated and compared in order to choose the most effective in the prediction of hydrodynamic properties. To achieve this, the models are compared using two systems: Air / Water and Air / Kerosol 200. These systems have been chosen because of the lack of experimental data in the open literature concerning the system of interest (biogas with H2S / aqueous solution of sodium hydroxide). Kerosol is a paraffin, characterized by a low surface tension and high viscosity as seen in Table 37. "200" refers to its boiling point (200 °C).
The differences in liquid surface tension, density and viscosity between water and Kerosol 200 allow the models to be compared under different conditions, highlighting the effects on pressure drop and liquid holdup. The experimental data were retrieved from the work of Erasmus [100]. The type of packing used for this comparison is Flexipac® 350Y. This packing is different from the one used in the BioGNVAL pilot plant (Montz® B1-420), but no literature data are available for the latter. The dimensions of the Flexipac® 350Y packing are outlined in Table 38, and the relative constants used by Billet and Schultes are shown in Table 28.
Pressure drop and liquid holdup
In Fig. 39, the experimentally determined pressure drop and liquid holdup over Flexipac® 350Y [100] are compared to the results obtained with the models using an Air -Water system. Fig. 39.a shows that SRP and Billet and Schultes models are accurate in predicting the pressure drop in preloading region (Fc < 1.9). The Delft model predicts the correct shape of the pressure drop curve, but compared to experimental data, the results obtained are not realistic.
Although the results are not accurate, Fig. 39.b shows that the model by Billet and Schultes is the best in predicting liquid holdup in a structured packed column. The Delft model assumes that the liquid holdup is not influenced by the gas velocity, which explains the constant shape of the curve. The modified Billet and Schultes model shown in Fig. 39 will be presented in section 5.4. The average absolute deviations between predictive models (Billet & Schultes, SRP and Delft) and experimental results for pressure drop and liquid holdup are shown in Table 39.
Table 39. Deviation between predictive models and experimental data
Effective interfacial area
In a packed column, the gas and the liquid phases are brought into contact and exchange mass and energy across their common interfacial area. The effective interfacial area excludes the dead area that does not actively take part in the mass transfer process.
Fig. 40 shows the effective interfacial area obtained with the three models, compared to experimental data. Most models overpredict the effective interfacial area. The Delft model assumes that the liquid load does not influence the effective interfacial area, which represents 90 % of the overall specific area of Flexipac® 350Y, as seen in Fig. 40.c.
Compared to the Delft model, the SRP model predicts the right slope of the curve. However, for liquid loads above 16 m.h -1 , the predicted effective interfacial area becomes larger than the packing specific surface.
The Billet and Schultes model is accurate in predicting the effective interfacial area.
The evaluation of the three models shows that the Billet and Schultes model predicts hydrodynamic parameters more accurately than SRP and Delft models. Therefore, the model by Billet and Schultes is retained for the further study.
Changes made to Billet and Schultes model and results
The Billet and Schultes model was developed for random packings, then it was extended to cover a limited number of commercially available structured packings.
To make this model more realistic and more accurate in predicting hydrodynamic parameters for structured packings, some constants and exponents used in the correlations, originally defined from experimental observations, were modified as functions of liquid load and density. The constants and exponents to be modified were selected following a sensitivity analysis, and their values were optimized by minimizing an objective function based on the deviations between modelling and experimental results. The modifications made to Equations (31), (36), (43) and (44) are shown in Tables 40, 41 and 42 for liquid holdup and pressure drop. These equations are recalled below, with the modified constants highlighted in bold.
$h_{L,pl} = \mathbf{C_1} \left( \dfrac{\mu_L\, a^2\, u_L}{\rho_L\, g} \right)^{1/3} \left( \dfrac{a_h}{a} \right)^{2/3}$ (79)

$\dfrac{dP}{dz} = \mathbf{C_3}\, \psi_L\, f_w\, \dfrac{a}{(\varepsilon - h_L)^3}\, \dfrac{F_c^2}{2}\, \dfrac{1}{K}$ (81)
In order to improve predictions, Equations (31), (36), (43) and (44) were slightly modified based on the experimental results of Erasmus [100], using only three values of liquid load (uL = 6 m/h, uL = 20.5 m/h and uL = 35.5 m/h) for the regression set. Statistical deviations between experimental data and the modified model results are presented in Table 43 for both systems. After validation, the modified model was used to predict the pressure drop in a real structured packing column used for the removal of H2S from biogas. The comparison between experimental data obtained from the BioGNVAL pilot plant and the refined model is shown in Table 44. The specific constant Cp for pressure drop over Montz® B1-420 packing was set to 0.14 by fitting to experimental data. The difference between the two results could be explained by the pressure drop in the piping, which does not contain packing.
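The regression described above can be reproduced schematically as follows; the holdup values and geometric parameters in this sketch are placeholders, not the data of Erasmus [100], and only the constant C1 of Equation (79) is fitted.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical holdup data at the three regression liquid loads (m3/m3)
u_L = np.array([6.0, 20.5, 35.5]) / 3600.0        # liquid loads, m/s
h_L_exp = np.array([0.040, 0.062, 0.075])         # placeholder measurements

a, ah_over_a = 350.0, 0.85                        # placeholder geometry
mu_L, rho_L, g = 1.0e-3, 998.0, 9.81              # water-like properties

def h_L_model(C1, u):
    # Equation (79): preloading holdup with adjustable constant C1
    return C1 * (mu_L * a**2 * u / (rho_L * g)) ** (1 / 3) * ah_over_a ** (2 / 3)

# Minimize the deviation between model and (placeholder) experiments
res = least_squares(lambda C: h_L_model(C[0], u_L) - h_L_exp, x0=[0.5])
print("fitted C1 =", res.x[0])
```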
Conclusion
This chapter evaluated three semi-empirical models for the prediction of hydrodynamic parameters in an industrial application concerning biogas purification: Billet and Schultes, SRP and Delft. The Flexipac® 350Y structured packing was considered here. The capacity of a packed column is closely related to its hydrodynamic and mass transfer characteristics; that is why the performances of these hydrodynamic models were investigated and compared based on existing experimental data, and the choice finally fell on the model of Billet and Schultes. The correlations of this model were improved in order to develop an accurate prediction of hydrodynamic parameters in a structured packing column. The resulting model precisely predicts the key hydrodynamic parameters: liquid holdup, pressure drop, effective interfacial area and especially the two transition points, the loading and flooding points.
The pressure drop results of the modified model were compared to those obtained on the BioGNVAL pilot plant, and good agreement was obtained with the experimental data. It is worth noting that this model may lose generality across varying applications, but for the activities of interest it gains precision. This predictive model is therefore well suited to accurately predicting the three operating regions of a small-scale structured packing column used for biogas or natural gas applications. It would allow the design of structured packing columns without the need for experimental data collected on a pilot plant. The operating conditions of existing columns could also be optimized using the modified model, so that they operate at full capacity.
After the hydrodynamic study, it is interesting to investigate the thermodynamic properties in order to accurately predict the efficiency of H2S separation from biogas. This study, based on simulations using the chemical process optimization software Aspen Plus® V8.0, is presented in the following chapter.
Chapter 6: Comparison of experimental and simulation results for the removal of H2S from biogas by means of sodium hydroxide in structured packed columns
Summary:

This work is based on simulations using the process simulator Aspen Plus V8.0, and the results are compared with experimental data from the literature and from the BioGNVAL demonstrator. The rate-based model was considered in the simulations in order to determine the separation efficiency under different operating conditions. This model was adopted because the experimental results showed a constant NaOH concentration in the liquid phase over time. A γ/φ approach is employed to describe the vapor-liquid equilibrium: the Electrolyte Non-Random Two-Liquid (ENRTL) model is used to represent the non-idealities of the liquid phase, while the Redlich-Kwong equation of state is used to calculate the properties of the vapor phase. In order to realistically study the efficiency of packed columns using sodium hydroxide as a solvent for the selective removal of H2S, the thermodynamic model was verified and validated against various available experimental data. For a rigorous estimation of the solubility of the gases in the solvent, the Henry constants obtained were compared with experimental data from research report RR-48 of the Gas Processors Association [105] as well as with a semi-empirical equation proposed by Harvey. Physical properties such as the heat capacities of the pure components, used in the heat and mass transfer calculations, were also verified. The equation developed by Aly and Lee was adopted to calculate the heat capacities, and the results obtained were compared with the experimental data provided by Elliott and Lira. The liquid phase density in Aspen Plus was calculated as a function of the NaOH mass fraction in the aqueous solution at 25 °C, and the results obtained were verified against the experimental data of Herrington et al. [109]. Concerning the liquid phase viscosity, a corrective model adapted to electrolytes, known as the Jones-Dole model [110], was applied in Aspen Plus. The parameters of this corrective model were optimized for the ions HCO3−, Na+ and CO3 2− using, respectively, experimental viscosity data for the systems KHCO3 – H2O [Palaty], NaOH – H2O [Vargaftik] and K2CO3 – H2O [114]. This model is replaced by that of Breslau and Miller [111] when the electrolyte concentration exceeds 0.1 mol/l. The results of these models were compared with the experimental data of Klochko and Godneva. For the calculation of the liquid phase surface tension, the Onsager-Samaras model was considered in Aspen Plus, and its results were verified against the experimental data presented in the work of Gel'perin et al.

The chemical reactions involved were specified. They are all at chemical equilibrium, except for the reaction between CO2 and the hydroxide ion (OH−), which is kinetically controlled. For the equilibrium reactions, the equilibrium constants were calculated from the data of Edwards et al. [118], except for the reactions between H2S and OH− and between HS− and OH−, for which the equilibrium constants were calculated using the Gibbs free energy, owing to the lack of data in the literature. For the kinetic reaction between CO2 and OH−, the Arrhenius equation was considered in Aspen Plus, with the frequency factor and the activation energy taken from the work of Pinsent et al. The rate constants obtained as a function of temperature were verified against the experimental results of Faurholt [120]. After validation of the physicochemical parameters, the simulations were carried out using the same conditions as those employed in the BioGNVAL demonstrator, as presented in the following table.
Introduction
This chapter was carried out during an international research stay within the "Group on Advanced Separation Processes & GAS Processing" at Politecnico di Milano. This group is led by Professor Laura Pellegrini, an internationally recognized expert in process simulation and in the use of the software Aspen Plus®.
Purification of biogas particularly requires the removal of hydrogen sulfide, which negatively affects the operation and viability of equipment, especially pumps, heat exchangers and pipes, by causing their corrosion. Several methods described in chapter 2 are available to eliminate hydrogen sulfide from biogas. Herein, chemical absorption into aqueous sodium hydroxide solutions in a structured packed column is considered. This study is based on simulations using Aspen Plus™ V8.0, and comparisons are made with data from the BioGNVAL pilot plant treating 85 Nm3/h of biogas containing about 30 ppm of hydrogen sulfide. The rate-based modeling approach has been used for the simulations in order to determine the separation efficiencies for different operating conditions. To describe the vapor-liquid equilibrium, a γ/ϕ approach has been considered: the Electrolyte Non-Random Two-Liquid (ENRTL) model has been adopted to represent non-idealities in the liquid phase, while the Redlich-Kwong equation of state has been used for the vapor phase. In order to validate the thermodynamic model, the Henry's law constants of each compound in water have been verified against experimental data. Default values available in Aspen Plus™ V8.0 for the properties of pure components, such as heat capacity, density, viscosity and surface tension, have also been verified. The reactions involved in the process have been studied rigorously: the equilibrium constants of the equilibrium reactions and the rate constant of the kinetically controlled reaction between carbon dioxide and the hydroxide ion have been checked. Results of simulations of the pilot plant purification section show the influence of low temperatures, of the sodium hydroxide concentration and of hydrodynamic parameters on the selective absorption of hydrogen sulfide.
Aspen Plus® simulations
The aim of these simulations in Aspen Plus™ V8.0 is to study realistically the effectiveness of a structured packed column which uses sodium hydroxide as a chemical solvent for the selective removal of hydrogen sulfide.
Fig. 43: Flowsheet of the absorption process simulated using Aspen Plus®
Unlike amines, sodium hydroxide is not regenerable, but it is very effective in removing low contents of H2S [Kohl and Nielsen].
Although the liquid solution is recycled, experimental data showed that the NaOH concentration in the liquid phase remains nearly constant over time. This assumption justifies the use of the rate-based model for this study.
The rate-based modeling approach is realistic compared to the traditional equilibrium-stage modeling approach that has been employed extensively in the process industries over the decades. Rate-based models assume that separation is caused by mass transfer between the contacting phases, and use the Maxwell-Stefan theory to calculate mass transfer rates [Taylor et al.]. Conversely, equilibrium-stage models assume that the contacting phases are in equilibrium with each other, which is an inherent approximation because the contacting phases are never in equilibrium in a real column. The rate-based modeling approach has many advantages over the equilibrium-stage approach: it represents a higher-fidelity, more realistic modeling approach, and the simulation results are more accurate than those attainable from equilibrium-stage models [Chen].
The Electrolyte Non-Random Two-Liquid model proposed by Chen and Song is used for calculating the liquid phase properties, while the Redlich-Kwong equation of state is used to calculate the vapor phase properties. This model is verified and validated using various experimental data from the literature.
Coefficients aij, bij, cij, dij and eij are summarized in Table 45 for each system.
Where Ps,j is the vapor pressure of the component j and T* is the reduced temperature; they are calculated using Expressions (85) and (86), respectively.
Where Tc,water is the critical temperature of water.
Coefficients aij, bij and cij used by Harvey in Equation (84) are summarized in Table 46 for each system. The results obtained are similar, with no major differences. Fig. 44, 45 and 46 show the adequacy of the results for the systems CH4 – H2O, CO2 – H2O and H2S – H2O, respectively. The average absolute deviation is equal to 1.2 % for the CH4 – H2O system, 1.9 % for the CO2 – H2O system and 7.8 % for the H2S – H2O system.
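Since Equation (84) is not reproduced in full here, the sketch below assumes the published form of Harvey's correlation, ln(H/ps) = A/T* + B (1 − T*)^0.355 / T* + C exp(1 − T*) (T*)^−0.41, with the coefficients of Table 46 passed in as arguments.

```python
import math

T_C_WATER = 647.096  # K, critical temperature of water

def henry_harvey(T, p_s_water, A, B, C):
    """Harvey's semi-empirical Henry constant of a solute in water
    (Equation (84), assumed published form).  p_s_water is the water
    vapour pressure at T (Pa); A, B, C are the solute-specific
    coefficients of Table 46.  Returns H in the unit of p_s_water."""
    Tr = T / T_C_WATER  # reduced temperature, Eq. (86)
    ln_H_over_ps = (A / Tr
                    + B * (1.0 - Tr) ** 0.355 / Tr
                    + C * math.exp(1.0 - Tr) * Tr ** -0.41)
    return p_s_water * math.exp(ln_H_over_ps)
```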
Validation of heat capacity for carbon dioxide
Some physical properties, such as the heat capacities of pure components used in heat and mass transfer modelling, were also checked. The ideal-gas heat capacity Equation (87), developed by Aly and Lee, is used for the Aspen Plus® simulations. Fig. 47 shows, as an example, the comparison of the results obtained with the adopted model in Aspen Plus® against experimental data [Elliott and Lira].
$C_p^{*,ig} = C_{1i} + C_{2i} \left[ \dfrac{C_{3i}/T}{\sinh(C_{3i}/T)} \right]^2 + C_{4i} \left[ \dfrac{C_{5i}/T}{\cosh(C_{5i}/T)} \right]^2$ (87)
The values of the constants of Equation (87) used to calculate the heat capacity of carbon dioxide are listed in Table 47 [Elliott and Lira]. The deviations between experimental and calculated results for the heat capacity of carbon dioxide are presented in Fig. 48.
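Equation (87) translates directly into code. The CO2 constants below are illustrative DIPPR-style values, not necessarily those of Table 47.

```python
import math

def cp_ideal_gas(T, C1, C2, C3, C4, C5):
    """Aly-Lee ideal-gas heat capacity, Equation (87), in J/(kmol.K)."""
    return (C1
            + C2 * (C3 / T / math.sinh(C3 / T)) ** 2
            + C4 * (C5 / T / math.cosh(C5 / T)) ** 2)

# Illustrative constants for CO2 (J/kmol.K and K) -- an assumption,
# since Table 47 is not reproduced in this excerpt.
CO2 = (2.937e4, 3.454e4, 1.428e3, 2.64e4, 588.0)
print(cp_ideal_gas(298.15, *CO2) / 1000.0, "J/(mol.K)")   # about 37.3
```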
Fig. 49: Comparison between model and experimental data for liquid density of NaOH – H2O as a function of NaOH mass fraction ( ____ ) Aspen Plus® ; (♦) Experimental values
The deviations between experimental and calculated results of liquid density are depicted in Fig. 50.
Fig. 50: Deviation between experimental and calculated results of liquid density
Validation of liquid viscosity of NaOH -H2O
For the liquid viscosity, a corrective model for electrolytes called the Jones-Dole model is applied in Aspen Plus®. This model, which uses the mass fraction of the solvent in the liquid phase, is presented in Equation (88) [110].
$\mu^l = \mu_{solv} \left( 1 + \sum_{ca} \Delta\mu_{ca}^l \right)$ (88)
Where μsolv is the viscosity of the liquid solvent mixture, calculated using the Andrade model and Δμ l ca is the contribution to the viscosity correction due to apparent electrolyte ca (cation-anion).
In Aspen Plus®, the ENRTL model calculates the viscosity of the liquid solvent mixture by default using the modified Andrade Equation (89).
$\ln \mu^l = \sum_i f_i \ln \mu_i^{*,l} + \sum_i \sum_j \left( k_{ij}\, f_i\, f_j + m_{ij}\, f_i^2\, f_j^2 \right)$ (89)
Where fi is by default the mole fraction of the component i. kij and mij are binary parameters; they allow accurate representation of complex liquid mixture viscosities and are given by Equations (90) and (91), respectively. When the electrolyte concentration exceeds 0.1 M, Aspen Plus® uses Equation (92) of Breslau and Miller instead of that of Jones and Dole [111].
$k_{ij} = a_{ij} + \dfrac{b_{ij}}{T}$ (90)

$m_{ij} = c_{ij} + \dfrac{d_{ij}}{T}$ (91)

$\Delta\mu_{ca}^l = 2.5\, V_e\, c_{ca}^a + 10.05\, (V_e\, c_{ca}^a)^2$ (92)

Where Ve is the effective volume, given by Equation (93):

$V_e = \dfrac{B_{ca} - 0.002}{2.6}$ for salts involving univalent ions (93.a)

$V_e = \dfrac{B_{ca} - 0.011}{5.06}$ for other salts (93.b)

Where Bca is calculated using Equation (94):

$B_{ca} = (b_{c,1} + b_{c,2}\, T) + (b_{a,1} + b_{a,2}\, T)$ (94)

c_ca^a is the concentration of the apparent electrolyte ca, calculated using Equation (95):

$c_{ca}^a = \dfrac{x_{ca}^a}{V_m^l}$ (95)
Where x_ca^a is the mole fraction of the apparent electrolyte ca and V_m^l is the molar volume of the liquid mixture calculated by the Clarke model.
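A sketch of the viscosity correction is given below. Since Equation (92) is truncated in the source, the published Breslau-Miller form, Δμ = 2.5 Ve c + 10.05 (Ve c)², is assumed.

```python
def breslau_miller_correction(c_ca, B_ca, univalent=True):
    """Viscosity correction of one apparent electrolyte ca when its
    concentration exceeds 0.1 mol/l, Equations (92)-(93).
    c_ca in mol/l; B_ca is the temperature-dependent Jones-Dole B
    coefficient of Equation (94)."""
    if univalent:
        V_e = (B_ca - 0.002) / 2.6       # Eq. (93.a), univalent ions
    else:
        V_e = (B_ca - 0.011) / 5.06      # Eq. (93.b), other salts
    # assumed Breslau-Miller form of Eq. (92)
    return 2.5 * V_e * c_ca + 10.05 * (V_e * c_ca) ** 2

def liquid_viscosity(mu_solv, corrections):
    """Equation (88): solvent viscosity scaled by electrolyte corrections."""
    return mu_solv * (1.0 + sum(corrections))

# Example: a single apparent electrolyte at 1 mol/l with B = 0.1 l/mol
print(liquid_viscosity(0.89e-3, [breslau_miller_correction(1.0, 0.1)]))
```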
The electrolyte correction model parameters were improved for the HCO3− ion using KHCO3 – H2O viscosity data [Palaty]. The regression of parameters for the Na+ ion was performed with viscosity data of the NaOH – H2O system [Vargaftik]. For the CO3 2− ion, parameters were optimized considering experimental data for the K2CO3 – H2O system [114]. For the other ions, values provided by the Aspen Plus® database were used. Fig. 51 shows the fit between experimental data and Aspen Plus® results for the viscosity of the liquid phase as a function of the mass fraction of sodium hydroxide; the data for this comparison are at a temperature of about 25 °C [Klochko and Godneva]. The deviations between experimental and calculated results of liquid viscosity are shown in Fig. 52.
Fig. 51: Comparison between model and experimental data for liquid viscosity of NaOH -H2O ( ____ ) Aspen Plus® ; (♦) Experimental values
Validation of surface tension of NaOH -H2O
To calculate the liquid phase surface tension, Aspen Plus® uses the Onsager-Samaras model, presented in Equation (96). The results obtained were compared to experimental data found in the literature, as shown in Fig. 53 [Gel'perin et al.].
$\sigma = \sigma_{solv} + \sum_{ca} x_{ca}^a\, \Delta\sigma_{ca}$ (96)

Where σsolv is the surface tension of the solvent mixture, calculated using the General Pure Component Liquid Surface Tension Model [Horvath], and Δσca is the contribution to the surface tension correction due to the apparent electrolyte ca, calculated using Equation (97):

$\Delta\sigma_{ca} = \dfrac{80}{\varepsilon_{solv}}\, c_{ca}^a \log\left[ \dfrac{1.13 \times 10^{-13}\, (\varepsilon_{solv}\, T)^3}{c_{ca}^a} \right]$ (97)
Where ɛsolv is the dielectric constant of the solvent mixture. The deviations between experimental and calculated results of surface tension are presented in Fig. 54.
Fig. 53: Comparison between model and experimental data for liquid phase surface tension of 5 wt% NaOH aqueous solution ( ____ ) Aspen Plus® ; (♦) Experimental values
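Equations (96)-(97) can be transcribed as follows; concentrations are assumed to be expressed in mol/l, consistent with Equation (95).

```python
import math

def onsager_samaras_sigma(sigma_solv, eps_solv, T, electrolytes):
    """Surface tension of the electrolyte solution, Equations (96)-(97).
    electrolytes maps each apparent electrolyte ca to a tuple
    (mole fraction x_ca, concentration c_ca in mol/l)."""
    sigma = sigma_solv
    for x_ca, c_ca in electrolytes.values():
        d_sigma = (80.0 / eps_solv) * c_ca * math.log10(
            1.13e-13 * (eps_solv * T) ** 3 / c_ca)     # Eq. (97)
        sigma += x_ca * d_sigma                         # Eq. (96)
    return sigma

# Example: 5 wt% NaOH-like solution, placeholder values
print(onsager_samaras_sigma(0.072, 78.4, 298.15, {"NaOH": (0.023, 1.3)}))
```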
Validation of chemical parameters
All the reactions involved in the process have been specified. These reactions are assumed to be at chemical equilibrium; only the reaction between carbon dioxide and the hydroxide ion is kinetically controlled. The reactions defined in Aspen Plus® are presented in the following expressions.
The calculation of the temperature-dependent equilibrium constants requires the knowledge of coefficients A, B and C, which were taken from the work of Edwards et al. [118]. The coefficients used for Reactions (R.13) to (R.17) are presented in Table 48.
A temperature-dependent expression was proposed in order to define coefficients A, B and C for Reaction (R.18). The values of the defined coefficients are presented in Table 49. The same method has been adopted in order to validate the equilibrium constant for Reaction (R.19).
For the kinetically controlled Reactions (R.24) and (R.25), the power law expression (100) is adopted by Aspen Plus®. The k and E parameters are given in Table 50 for Reactions (R.24) and (R.25), knowing that the concentration is based on molarity. The rate constants for the kinetic reaction between CO2 and OH− have been verified against experimental data [120]; Fig. 56 shows the good agreement between the results. The deviations between the Arrhenius expression (100) and the experimental results are depicted in Fig. 57.
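For a single Arrhenius term, the power-law rate constant of Equation (100) reduces to k = k0 exp(−E/RT). As an illustration, the Pinsent et al. correlation for CO2 + OH−, log10 k = 13.635 − 2895/T, is assumed below; the exact k and E values used in the simulations are those of Table 50.

```python
import math

R = 8.314  # J/(mol.K)

def rate_constant(T, k0, E):
    """Arrhenius form of the power-law rate constant, Equation (100)."""
    return k0 * math.exp(-E / (R * T))

def k_CO2_OH(T):
    """Assumed Pinsent et al. correlation for CO2 + OH-, in l/(mol.s)."""
    return 10.0 ** (13.635 - 2895.0 / T)

print(k_CO2_OH(298.15))   # about 8.4e3 l/(mol.s) at 25 degC
```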
Simulation results
After validation of the physicochemical parameters, Aspen Plus® simulations of the packed absorption column of the pilot plant have been performed using the rate-based model. Details of the column used in the experiments, as well as the description of the gas and liquid inlets, are given in Table 51. The type of packing used for this comparison is Flexipac® 350Y. This packing is different from the one used in the BioGNVAL pilot plant (Montz® B1-420), but no literature data are available for the latter. When designing a packed column, it is desirable to minimize the flow of liquid in order to reduce the consumption of water and the energy needed by the pump for its circulation. However, the flow must still allow the absorption of H2S and reduce its content to less than 1 ppm. This is a very important parameter, since it influences the thermodynamic and hydrodynamic conditions.
In 2007, Sanchez et al. showed that, for a fixed air flow, an increase in liquid flow rate improves the rate of transfer [Sanchez et al.]. Fig. 58, obtained using Aspen Plus®, demonstrates that increasing the liquid flow improves the absorption of H2S. However, a mass flow rate of 240 kg/h is sufficient to remove virtually all of the hydrogen sulfide with minimal pressure loss.
As shown in Fig. 58, chemical conditions strongly influence the transfer percentage. The absorption rate increases with the concentration of sodium hydroxide in the liquid phase.
Fig. 58: Influence of the liquid flow on the absorption of hydrogen sulfide
Sodium hydroxide concentrations: ( ____ ) 0.5 g/l ; (----) 1.9 g/l

Table 52 shows that an increase in liquid flow causes a rise in pressure drop. This rise remains limited, showing that the pressure drop depends only weakly on the liquid flow.
The results obtained on the BioGNVAL pilot plant were compared to those from the modified Billet and Schultes model. These results confirm the precision of this model for the activities of interest. The dependence of the pressure drop on the gas flow rate is much stronger, as shown in Fig. 59, because the gas is the continuous phase in the packed column [BioGNVAL, 2015]. The deviations between experimental and calculated results of pressure drop are presented in Fig. 60. When hydrogen sulfide is absorbed into a sodium hydroxide solution, it reacts directly with hydroxide ions by a proton transfer reaction, as seen in Reaction (R.18). Compared to the diffusion phenomena, this reaction is extremely rapid and can be considered instantaneous. Since hydrogen sulfide is absorbed more rapidly than carbon dioxide by aqueous sodium hydroxide solutions, partial selectivity can be attained when both gases are present, as seen in Fig. 61. Selectivity is favored by short gas-liquid contact times and low temperatures [Kohl and Nielsen].
Fig. 59: Influence of gas flow rate on the pressure drop ( ____ ) Modified Billet and Schultes model ; (♦) Experimental values
Fig. 61: Influence of the concentration of NaOH in the removal of H2S and CO2 Compound: ( ____ ) CO2 ; (----) H2S
A key parameter affecting the overall performance of the absorption unit is the temperature, since it affects physicochemical properties (such as the solubility of acidic compounds in the aqueous phase, according to Henry's law) and the chemical reactions in the liquid phase. Fig. 62 shows a good agreement between the results of the Aspen Plus® simulations and those obtained from the pilot plant. The deviations between experimental and calculated results of the hydrogen sulfide concentration at the outlet of the column are depicted in Fig. 63.
Conclusion
The thermodynamic model used in the simulations (Electrolyte NRTL) was validated with experimental data from the literature. Simulations were performed in order to study the influence of temperatures, chemical and hydrodynamic parameters on H2S absorption. The simulation results were compared to experimental data obtained on the BioGNVAL pilot plant. The comparison was successful and shows that the two results are in good agreement. The model allows predicting realistically the separation efficiencies of H2S in biogas.
The simulation results confirm the observations made on the demonstrator. The NaOH aqueous solution is effective for the removal of H2S in a packing column. The removal efficiency reaches values higher than 99.5 % throughout the operation period of the demonstrator. The use of NaOCl is important to prevent the accumulation of H2S in the aqueous solution by creating an irreversible reaction.
From a practical point of view, the use of hazardous substances (NaOH and NaOCl) requires an operator to handle them, which complicates the commercialization of the technology. Furthermore, salt precipitation may occur and can cause blockage of pumps and heat exchangers.
Despite the advantages of absorption technology, this system will be bypassed in order to test another promising technology that requires less financial means and which does not use hazardous chemical products. This technology is adsorption using activated carbon. It will be discussed in the next chapter.
Chapter 7: Modeling hydrogen sulfide adsorption onto activated carbon
Summary:

The final chapter concerns the modeling of H2S adsorption on activated carbon. A dynamic model was developed to model the breakthrough curve of the H2S – activated carbon system.

The breakthrough curve is used to describe the spatiotemporal evolution of the H2S concentration in the gas phase. This type of modeling of the dynamics of the adsorption column is based on the definition of mass balances: a mass balance for the gas phase, where convective transfer dominates, and a mass balance for the adsorbent particles, where diffusive transfer dominates.

During this study, the different mass transfer coefficients involved in the process were estimated under the same conditions in which the experiments on the BioGNVAL demonstrator took place.

The simulations were performed only for high H2S concentrations, because an industrial adsorption unit treating low H2S concentrations reaches saturation over a period of a few months (≈ 3 months). Simulating this period with a time step of 0.01 s requires enormous computation times, which are not supported by the available workstation; moreover, increasing the time step causes the calculations to diverge.

The results obtained with the developed model should be compared with experimental data in order to adjust the overall mass transfer coefficient and the equilibrium parameters given by the adsorption isotherm.
Introduction
This chapter deals with modeling the breakthrough curve in the case of hydrogen sulfide adsorption onto activated carbon. Physical adsorption using activated carbon is a traditional technology widely used for the removal of H2S. The adsorption efficiency depends on several factors, such as relative humidity, temperature, concentration of H2S in biogas and characteristics of the activated carbon. To improve efficiency, the activated carbon may be impregnated with sodium hydroxide (NaOH), potassium hydroxide (KOH), sodium carbonate (Na2CO3), sodium bicarbonate (NaHCO3), potassium iodide (KI) or potassium permanganate (KMnO4).
Operating conditions of the adsorption column for the removal of hydrogen sulfide
The adsorbent chosen for this study is an activated carbon impregnated with a base (NaOH or KOH), dedicated to the elimination of hydrogen sulfide. The selected commercial adsorbent, called "Airpel Ultra DS", is provided by the Desotec® Company. Its properties are shown in Table 53. An extruded activated carbon was chosen because it causes a lower pressure drop than granular and powdered activated carbons.
The micropore volume is an important parameter for the kinetics of hydrogen sulfide adsorption. Compared to the other commercial adsorbents, the "Airpel Ultra DS" contains a high volume of micropores.
The hydrogen sulfide adsorption capacity is also influenced by other operating conditions. One of the important parameters is the relative humidity. As seen in Fig. 64 presenting the breakthrough curve of H2S, the best hydrogen sulfide adsorption capacities are obtained for values of relative humidity between 55 % and 100 % with an optimal value of 85 %. The breakthrough curve presents the evolution of concentration of the pollutant to be removed (H2S) as a function of time. It predicts the time required for the saturation of the activated carbon in the adsorption bed.
The adsorption using impregnated activated carbon is improved by injection of a small quantity of oxygen. The amount injected is generally of the order of 4 times the amount of hydrogen sulfide present in the biogas. The oxidation of hydrogen sulfide produces larger molecules, which become trapped in the micropores.

$Pe_p = \dfrac{d_p\, u_V}{D_{H_2S-Biogas}}$ (116)

The H2S molecular diffusion coefficient is calculated using Equation (117), proposed by Wilke and Fairbanks; this equation is derived from the theories of Maxwell and Stefan:

$D_{H_2S-Biogas} = \dfrac{1 - y_{H_2S}}{\sum_{j \neq H_2S} y_j / D_{H_2S,j}}$ (117)

Equations (118), (119), (120) and (121) are used to estimate the diffusivities of the different binary systems at low pressure. These equations, developed by Slattery and Bird from a kinetic theory, are shown in Table 56.
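Equation (117) can be evaluated with a few lines of code; the binary diffusivities and the biogas composition below are illustrative placeholders, not the values of Table 56.

```python
def wilke_fairbanks(y, D_binary, solute="H2S"):
    """Equation (117): diffusion coefficient of the solute in the gas
    mixture from its binary diffusivities,
    D_1m = (1 - y_1) / sum_j(y_j / D_1j).
    y: mole fractions of all components; D_binary: binary diffusivities
    of the solute in each other component (m2/s)."""
    y1 = y[solute]
    s = sum(y[j] / D_binary[j] for j in D_binary if j != solute)
    return (1.0 - y1) / s

# Placeholder biogas composition and binary diffusivities (m2/s)
biogas = {"H2S": 3.0e-5, "CH4": 0.60, "CO2": 0.35, "H2O": 0.05}
D = {"CH4": 2.0e-5, "CO2": 1.2e-5, "H2O": 1.8e-5}
print(wilke_fairbanks(biogas, D))   # about 1.6e-5 m2/s
```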
Internal mass transfer coefficient
The hydrogen sulfide molecules are now located on the surface of the activated carbon. The diffusion mechanisms govern the transport of hydrogen sulfide molecules into the pores of activated carbon. In porous materials, diffusion mechanisms are of four types: molecular diffusion caused by collisions between molecules, Knudsen diffusion caused by collisions of the molecules with the walls of the pore, surface diffusion caused by the electrostatic forces exerted by the walls on the molecules and Poiseuille diffusion caused by the difference in total pressure across a particle. The equations used to estimate these diffusivities are listed in Table 57.
The values obtained for the dimensionless numbers and the diffusion coefficients are presented in Table 58. As seen in Table 58, molecular diffusion is the dominant transport mechanism. The Poiseuille diffusivity can be neglected because the pressure drop over a particle is very small [Ruthven].

Knudsen diffusion and surface diffusion contribute to the global internal diffusion. This contribution is very small compared to the contribution of external diffusion; therefore, only external diffusion will be considered. The external mass transfer coefficient calculated using Equation (101) is equal to 0.021 m.s⁻¹.
Modeling of breakthrough curves
The breakthrough curve is used to describe the spatiotemporal evolution of the concentration of H2S in the gas phase. This type of modeling of the dynamics of the adsorption column is based on the definition of mass transfer balances: a mass balance for the gas phase, where transfer by convection dominates, and a mass balance for the adsorbent particles, where transfer by diffusion dominates [Sigot].
Several hypotheses govern the flow of the biogas in the adsorption column: the properties and the superficial velocity of the biogas are assumed constant throughout the adsorption column, and the porosity of the adsorption column is considered uniform [Boulinguiez]. Based on the assumptions outlined, the mass balance of the adsorption column can be written as Equation (126). This equation has been used by several authors, such as Ruthven, Suzuki, Hwang et al., Brosillon et al., Yang, Boulinguiez and Sigot.
$-D_{ax} \dfrac{\partial^2 C}{\partial z^2} + u_L \dfrac{\partial C}{\partial z} + \dfrac{\partial C}{\partial t} + \dfrac{\rho}{\varepsilon} \dfrac{\partial q}{\partial t} = 0$ (126)

Where Dax [m².s⁻¹] is the axial dispersion coefficient, ρ [kg.m⁻³] is the density of the activated carbon, ε [–] is the void fraction of the adsorption column and q [g.kg⁻¹] is the amount of H2S adsorbed per kg of activated carbon.
The mass balance Equation (126) takes into account axial dispersion, convection, accumulation in the gas phase and the overall mass transfer by adsorption [Boulinguiez]. To simplify the problem, the axial dispersion coefficient is initially neglected, and the mass balance of Equation (126) reduces to Equation (127).
$u_L \dfrac{\partial C}{\partial z} + \dfrac{\partial C}{\partial t} + \dfrac{\rho}{\varepsilon} \dfrac{\partial q}{\partial t} = 0$ (127)
Considering a single mass transfer resistance represented by the linear driving force model, the transferred flux may be expressed relative to the solid phase concentration, as shown in Equation (128), or relative to the gas phase concentration, as shown in Equation (129). By combining Equations (127) and (129), Expression (136) is obtained [Sigot].
To simplify the system of Equations (136) and (137), three time constants associated with mass transfer, τ1, τ2 and τ3 [s⁻¹], are introduced; they group the velocity, bed and mass transfer terms of the balance equations [Sigot].
In order to solve the system of equations numerically, it is necessary to discretize it by means of the Euler method, which uses finite-difference quotients. The system of equations thus obtained is given by Expressions (143) and (144); the boundary and initial conditions are given by Equations (145) and (146), respectively [Sigot]. After the construction of the discrete form of the analytical mass balance equations, the input parameters for the simulation of the breakthrough curve are listed in Table 60. The simulation was performed for high hydrogen sulfide concentrations (5, 7.5 and 10 mol%), as seen in Table 60, and the breakthrough curves obtained are presented in Fig. 67, 68 and 69, respectively. An industrial adsorption column with such dimensions (0.8 m in height and 0.8 m in diameter) treating low concentrations of H2S (< 100 ppm) reaches saturation after a few months (≈ 3 months).
The simulation of such a period with a time step of 0.01 s demands enormous computation times that are not supported by the available workstation. In addition, increasing the time step causes the calculations to diverge.
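The explicit Euler scheme of Expressions (143)-(146) can be sketched as follows, assuming a Langmuir isotherm and the linear driving force of Equation (129); all parameter values are placeholders, and the time step must respect the CFL condition u·dt/dz ≤ 1.

```python
import numpy as np

def breakthrough_ldf(u, rho_b, eps, k_ldf, q_m, b, C_in,
                     L=0.8, nz=100, t_end=3600.0, dt=0.01):
    """Explicit Euler solution of Equations (127) + (129) with a linear
    driving force and a Langmuir isotherm -- a sketch of the model
    described above.  Units: C in mol/m3, q in mol/kg, u in m/s.
    Stability requires u*dt/dz <= 1 (CFL) and k_ldf*dt << 1."""
    dz = L / nz
    C = np.zeros(nz)           # gas-phase concentration profile
    q = np.zeros(nz)           # adsorbed-phase loading profile
    outlet = []
    for _ in range(int(t_end / dt)):
        q_star = q_m * b * C / (1.0 + b * C)       # Langmuir equilibrium
        dqdt = k_ldf * (q_star - q)                # linear driving force
        # first-order upwind convection; inlet condition C(0, t) = C_in
        C_up = np.concatenate(([C_in], C[:-1]))
        C = C - dt * (u * (C - C_up) / dz + rho_b / eps * dqdt)
        q = q + dt * dqdt
        outlet.append(C[-1] / C_in)                # normalized outlet conc.
    return np.array(outlet)

# Placeholder run: 5 mol% H2S at 0.1 m/s through a 0.8 m bed
curve = breakthrough_ldf(u=0.1, rho_b=500.0, eps=0.4, k_ldf=1e-3,
                         q_m=2.0, b=0.5, C_in=2.0)
print(curve[-1])
```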
Conclusion
Based on the mass balance equations, a dynamic model has been developed to simulate the breakthrough curve for the system H2S -Activated carbon. To simulate the dynamic behavior of an adsorption column for the removal of low concentrations of H2S, a powerful calculation tool is necessary.
The results obtained with the model developed should be compared to experimental data in order to adjust the overall mass transfer coefficient and equilibrium parameters given by the Langmuir isotherm. The axial dispersion coefficient should be added in the mass balance equation to evaluate its contribution on the overall mass transfer coefficient.
This study has highlighted the analytical difficulties due to the low concentrations of H2S. It has also shown the importance of experimental work for developing an accurate model, which could become a reliable design tool for biogas purification units.
Conclusions and perspectives
This thesis is part of an innovative project led by the Cryo Pur® company, which aims to develop a technology to purify and upgrade biogas so that it can be used as a liquid fuel or injected into the natural gas grid. The purpose of this work is to ensure continuous removal of hydrogen sulfide upstream of the process, because of the risks of toxicity, corrosion and odors. This thesis has led to the following scientific and technical conclusions.
From a technical point of view, two desulfurization technologies were tested: chemical absorption in a packed column using an aqueous sodium hydroxide solution, and adsorption on impregnated activated carbon in a fixed bed. Both techniques were tested on the BioGNVAL demonstrator developed by the Cryo Pur® Company, which treats 85 Nm3/h of real biogas from the sewage treatment plant of Valenton. The general process is characterized by the following conditions:
The concentrations of methane and carbon dioxide are about 60 mol% CH4 and 35 mol% CO2. The process operates at a pressure slightly higher than atmospheric pressure to prevent the infiltration of oxygen. The desulfurization of biogas must be complete to ensure a high quality of CO2 and biomethane. The concentration of H2S in the biogas is low and generally varies between 10 and 100 ppm.
Both technologies have shown satisfactory separation efficiency greater than 99.5 %. The bibliographic study has allowed choosing an efficient absorption process that uses sodium hydroxide as a chemical solvent for the selective separation of hydrogen sulfide. This method is based on contacting the biogas to be treated with the aqueous NaOH in a structured packing column.
Despite the advantages of this process, it also has some drawbacks. For example, in the presence of CO2, the aqueous sodium hydroxide solution cannot be regenerated because a sodium carbonate (Na2CO3) precipitate forms. Moreover, when the NaOH and NaOCl tanks reach their lowest level, they must be refilled by an operator who has to handle hazardous products. This operation is an obstacle to the commercialization of the process, which must be automated without the repeated intervention of an operator. Nevertheless, for high H2S concentrations and for processes operated continuously with personnel on site, this absorption process can be adopted.
To overcome these drawbacks for fully automated biogas purification equipment, the adsorption technology using impregnated activated carbon was tested. It has achieved similar separation efficiencies to the absorption technology without the use of hazardous products and without the need for an operator to ensure continuous operation of the process. Furthermore, the investment and operating costs of the adsorption process are lower than those of the absorption process.
From a scientific point of view, experimental data collected on the BioGNVAL pilot plant were used to develop a new hydrodynamic model which accurately predicts the key hydrodynamic parameters in a structured packing column: liquid holdup, pressure drop, effective interfacial area and the two transition points, the loading and flooding points. It also allows the design and optimization of structured packing columns to operate at full capacity. This model was developed from an existing model, that of Billet and Schultes: some constants and exponents present in its correlations were selected following a sensitivity analysis and then optimized against experimental data, to finally implement a new model adapted to precisely predicting the hydrodynamic parameters of a small-scale structured packing column used for biogas or natural gas applications.
After the hydrodynamic study, simulations using the chemical process optimization software Aspen Plus® V8.0 have been performed in order to determine the efficiencies of separation for different operating conditions. The rate-based model approach has been used. To describe vapor-liquid equilibrium, a γ/ϕ approach has been considered: the Electrolyte Non-Random Two-Liquid (ENRTL) model has been adopted to represent non-idealities in the liquid phase, while the Redlich-Kwong equation of state has been used for the vapor phase.
In order to improve the simulation of absorption, the Henry's law constants of the main biogas components (CH4, CO2 and H2S) in water, the properties of pure components, the equilibrium constants of the equilibrium reactions and the rate constant of the kinetically controlled reaction between carbon dioxide and the hydroxide ion were verified against experimental data. After the verification and modification of the thermodynamic model, the results of the simulations were compared to experimental data obtained on the BioGNVAL demonstrator; the comparison shows that the two sets of results are in good agreement. In connection with this work, it would be interesting to study the possibility of incorporating the hydrodynamic model developed here into the database of Aspen Plus®.
For adsorption technology, a dynamic model has been developed to simulate the breakthrough curve for the system H2S -Activated carbon. During this study, the different mass transfer coefficients involved in the process have been estimated under the same conditions in which the experiments on "BioGNVAL" pilot plant were held.
The results obtained with the developed model should be compared to experimental data in order to adjust the overall mass transfer coefficient and the equilibrium parameters given by the adsorption isotherm. Another perspective is to consider the reaction phenomena due to the impregnation of the activated carbon, in order to obtain a prediction tool able to manage the design of adsorption columns. It will also be interesting to reintroduce the axial dispersion coefficient into the mass balance equation to evaluate its contribution.
57: Deviation between experimental and calculated results of reaction rate constant (R.24) ...................................................................................................................................... Fig. 58: Influence of the liquid flow on the absorption of hydrogen sulfide ......................... Fig. 59: Influence of gas flow rate on the pressure drop ....................................................... Fig. 60: Deviation between experimental and calculated results of pressure drop .............. Fig. 61: Influence of the concentration of NaOH in the removal of H2S and CO2 .................. Fig. 62: Influence of the temperature of liquid in the absorption of hydrogen sulfide ......... Fig. 63: Deviation between experimental and calculated results of H2S concentration leaving the packing column ................................................................................................................ Fig. 64: Influence of relative humidity in the adsorption of hydrogen sulfide using Airpel Ultra DS [53] .................................................................................................................................... Fig. 65: Schematic representation of an adsorption column [122] ....................................... Fig. 66: Influence of Langmuir equilibrium parameters on adsoption sites occupied .......... Fig. 67: Breakthrough curve simulated for H2S molar percentage of 5 % ............................. Fig. 68: Breakthrough curve simulated for H2S molar percentage of 7.5 % .......................... Fig. 69: Breakthrough curve simulated for H2S molar percentage of 10 % ...........................
Fig. 4: Simplified diagram of production of biomethane
The vapor pressure of a pure component can be estimated with the Antoine equation, log10 Ps = A - B/(T + C), where A, B and C [-] are component-specific constants (see Table 9).
Fig. 8: Pressure - Temperature equilibrium behavior for the CH4 - H2S system [30]. (─) three-phase equilibrium boundaries ; (•••) critical curves ; (--) SVE, VLE and SLE of CH4 and H2S ; (■) quadruple point QP2 ; (▲) quadruple point QP1 ; (Δ) Upper Critical EndPoint UCEP1 ; (□) Upper Critical EndPoint UCEP2
Fig. 10: Biogas viscosity as a function of temperature. ( ____ ) Air ; ( …… ) Biogas with 40 mol% of CO2 ; (----) Biogas with 35 mol% of CO2
Fig. 11: Thermal conductivity of biogas as a function of temperature. ( ____ ) Air ; ( …… ) Biogas with 40 mol% of CO2 ; (----) Biogas with 35 mol% of CO2
Fig. 12: Heat capacity at constant pressure of biogas as a function of temperature. ( …… ) Biogas with 60 mol% of CH4 ; (----) Biogas with 65 mol% of CH4
Fig. 13: Heat capacity at constant volume of biogas as a function of temperature
NH2S [mol.m-2.s-1]: flux of H2S transferred by unit area.
pH2S [Pa]: partial pressure of H2S in the gas phase.
p*H2S [Pa]: partial pressure of H2S at the interface.
CH2S [mol.m-3]: concentration of H2S in the liquid phase.
C*H2S [mol.m-3]: concentration of H2S at the interface.
δG and δL [m]: thickness of the stagnant film on the gas side and the liquid side respectively.
DH2S,G and DH2S,L [m2.s-1]: H2S diffusion coefficients in gas phase and liquid phase respectively.
Fig. 18.a: Random packing Nutter Ring [Sulzer]
Fig. 18.b: Structured packing Mellapak [Sulzer]
Where:
Ct [mol.m-3]: the concentration of the compound in the fluid phase.
Ce [mol.m-3]: the concentration of the compound at the surface of the adsorbent.
kf [m.s-1]: the external mass transfer coefficient.
aads [m2]: the useful surface area for external transfer.
V [m3]: volume of the adsorption bed.
φ [mol.s-1]: the transferred flux.
Dp [m2.s-1]: the pore diffusion coefficient.
εp [-]: the porosity.
τ [-]: the tortuosity.
Cp [mol.m-3]: the compound concentration in the pore.
r [m]: radius of the pore.
Fig. 19: Transport mechanism of the adsorbate molecules on the adsorbent surface
Fig. 25: Diagram of the main types of membranes [75]: (a) Flat sheet membrane ; (b) Tubular membrane ; (c) Spiral wound membrane ; (d) Hollow fiber membrane
Fig. 26: The geometric configurations of membrane contactors [76]
Fig. 31 shows the general operating principle of the BioGNVAL demonstrator. It consists of 7 subsystems:
- Chiller;
- Desulfurization subsystem by chemical absorption in a structured packing column using sodium hydroxide;
- Desulfurization subsystem by adsorption onto activated carbon in a fixed bed;
- Biogas dehumidification and siloxanes icing subsystem;
- Subsystem for the capture of carbon dioxide;
- Biogas liquefaction subsystem;
- Biogas treatment line with flaring output.
Fig. 31: Simplified flowsheet of the BioGNVAL pilot plant [9]
Fig. 32: Schematic diagram of the absorption subsystem for elimination of hydrogen sulfide [9]
Equipment: CV (Control Valve) ; HX (Heat Exchanger) ; CU (Condensing Unit) ; TK (Tank) ; P (Pump) ; VP (Vacuum Pump)
Instrumentation: FT (Flow Transmitter) ; TT (Temperature Transmitter) ; TE (Temperature Element) ; PT (Pressure Transmitter) ; PDT (Pressure Difference Transmitter) ; PHT (pH Analyzer Transmitter) ; LT (Level Transmitter) ; GD (Gas Detector)
Fig. 36: Variation of the biogas temperature at the inlet of the demonstrator and at the outlet of the absorption column
Fig. 37: Observation of the desorption phenomenon of H2S
Fig. 39.a: Pressure drop evaluation for liquid load uL = 20.5 m/h
Fig. 39.b: Liquid holdup evaluation for liquid load uL = 20.5 m/h
Models: ( __ Δ __ ) Billet & Schultes ; (--□--) SRP ; (-. ○-. ) Delft ; ( … + … ) Billet & Schultes modified "Section 5" ; Experimental values: (♦) [100]
Fig. 40.a: Prediction of effective interfacial area by Billet and Schultes model for the system Air / Kerosol 200. Models: ( __ Δ __ ) Billet & Schultes ; (♦) Experimental values [100]
Fig. 40.b: Prediction of effective interfacial area by SRP model for the system Air / Kerosol 200
Fig. 40.c: Prediction of effective interfacial area by Delft model for the system Air / Kerosol 200
Fig. 41: Liquid holdup and pressure drop with an Air - Water system using Flexipac® 350Y packing
Fig. 42: Liquid holdup and pressure drop with an Air - Kerosol 200 system using Flexipac® 350Y packing
6.2.1. Validation of the temperature-dependent Henry's constants for CH4 - H2O, CO2 - H2O and H2S - H2O systems

For a rigorous estimation of gas solubility in the solvent, the Henry's constants obtained with Aspen Plus® should be verified against experimental data. The Henry's constants based on the mole fraction scale are taken from the Aspen Plus® databanks for the gaseous components (CH4, CO2 and H2S) with water. The temperature dependence of the Henry's constants used by Aspen Plus® is represented by:

ln Hij = aij + bij/T + cij ln T + dij T + eij/T^2
Fig. 44: Henry coefficients for CH4 - H2O system. ( __ □ __ ) Aspen Plus® ; (--Δ--) Harvey equation [106] ; (♦) Experimental values [105]
Fig. 45: Henry coefficients for CO2 - H2O system
Fig. 46: Henry coefficients for H2S - H2O system
Fig. 48: Deviation between experimental and calculated results of carbon dioxide heat capacity

6.2.3. Validation of liquid density of NaOH - H2O

The calculations of the density of the liquid phase have been verified as a function of the mass fraction of sodium hydroxide at 25 °C. The results obtained with Aspen Plus® are in good agreement with experimental data [109], as shown in Fig. 49.
Fig. 49: Comparison between model and experimental data for liquid density of NaOH - H2O. ( ____ ) Aspen Plus® ; (♦) Experimental values [109]
Fig. 52: Deviation between experimental and calculated results of liquid viscosity
Fig. 55: Comparison of results of equilibrium constant for reaction R.18. ( ____ ) Aspen Plus® ; (----) Equation (99) ; ( …… ) Proposed equation
Fig. 57: Deviation between experimental and calculated results of reaction rate constant (R.24)
Fig. 60: Deviation between experimental and calculated results of pressure drop
Fig. 62: Influence of the temperature of liquid in the absorption of hydrogen sulfide. ( ____ ) Aspen Plus® ; (♦) Experimental values [BioGNVAL, 2015]
Fig. 63: Deviation between experimental and calculated results of H2S concentration leaving the packing column
Fig. 64: Influence of relative humidity in the adsorption of hydrogen sulfide using Airpel Ultra DS [53]. Relative humidity: ( …… ) 0 % ; (-.. -..) 33 % ; (----) 58 % ; (-. -. -) 85 % ; ( ____ ) 100 %
Fig. 67: Breakthrough curve simulated for H2S molar percentage of 5 %
Fig. 68: Breakthrough curve simulated for H2S molar percentage of 7.5 %
Nomenclature

a  Specific geometric packing surface area  [m2/m3]
A  Constant used for the calculation of pressure drop  [-]
ae  Effective interfacial area  [m2/m3]
b  Length of the corrugation base  [m]
B  Constant used for the calculation of pressure drop  [-]
CFl  Specific packing constant for the calculation of hydrodynamic parameters at flooding point  [-]
Ch  Specific packing constant for hydraulic area  [-]
CL  Specific packing constant for mass transfer calculation in the liquid phase  [-]
Clp  Specific packing constant for the calculation of hydrodynamic parameters at loading point  [-]
Cp  Specific packing constant for pressure drop calculation  [-]
CV  Specific packing constant for mass transfer calculation in the gas phase  [-]
d  Packing diameter  [m]
dh  Hydraulic diameter  [m]
dhV  Diameter of the gas flow channel  [m]
dp  Particle diameter  [m]
dP/dz  Pressure drop  [Pa/m]
(dP/dz)d  Dry pressure drop  [Pa/m]
(dP/dz)pl  Pressure drop in the preloading region  [Pa/m]
DV  Gas-phase diffusion coefficient  [m2/s]
f  Approach to flood  [-]
Fc  Gas capacity factor  [(m/s)(kg/m3)^0.5]
Fc,lp  Gas capacity factor at loading point  [(m/s)(kg/m3)^0.5]
Fl  Enhancement factor for pressure drop calculation  [-]
FrL  Liquid Froude number  [-]
FSE  Surface enhancement factor for effective interfacial area calculation  [-]
Ft  Correction factor  [-]
fw  Wetting factor  [-]
g  Gravitational constant  [m/s2]
geff  Effective gravitational constant  [m/s2]
h  Corrugation height  [m]
hL  Liquid holdup  [m3/m3]
hL,Fl  Liquid holdup at flooding point  [m3/m3]
hL,lp  Liquid holdup at loading point  [m3/m3]
hL,pl  Liquid holdup in the preloading region  [m3/m3]
K  Wall factor  [-]
kG  Gas-phase mass transfer coefficient  [m/s]
kL  Liquid-phase mass transfer coefficient  [m/s]
L  Liquid mass flow  [kg/h]
M  Molar mass  [kg/mol]
nFl  Exponent for the calculation of liquid holdup at flooding point  [-]
nlp  Exponent for the calculation of liquid holdup at loading point  [-]
P  Pressure  [Pa]
Pc  Critical pressure  [Pa]
Pe  Peclet number  [-]
ReL  Liquid Reynolds number  [-]
ReV  Gas Reynolds number  [-]
ReV,e  Effective Reynolds number for the gas phase  [-]
ReV,r  Relative Reynolds number for the gas phase  [-]
s  Length of the corrugation side  [m]
Sc  Schmidt number  [-]
Sh  Sherwood number  [-]
T  Temperature  [K]
TB  Boiling temperature  [K]
Tc  Critical temperature  [K]
uL  Superficial liquid velocity  [m/s]
uL,e  Effective liquid velocity  [m/s]
uL,lp  Superficial liquid velocity at loading point  [m/s]
uV  Superficial gas velocity  [m/s]
uV,e  Effective gas velocity  [m/s]
uV,Fl  Superficial gas velocity at flooding point  [m/s]
uV,lp  Superficial gas velocity at loading point  [m/s]
V  Gas mass flow  [kg/h]
WeL  Liquid Weber number  [-]
z  Unit length  [m]

Greek letters

αL  Liquid flow angle  [°]
γ  Solid-liquid film contact angle  [°]
δ  Liquid film thickness  [m]
ε  Void fraction  [-]
ζDC  Coefficient for losses caused by direction change  [-]
ζGG  Coefficient for gas/gas friction losses  [-]
ζGL  Coefficient for gas/liquid friction losses  [-]
θ  Corrugation angle  [°]
µL  Dynamic viscosity of the liquid phase  [kg/m.s]
µV  Dynamic viscosity of the gas phase  [kg/m.s]
νL  Kinematic viscosity of the liquid phase  [m2/s]
ξbulk  Direction change coefficient in the bulk zone  [-]
ξGG  Gas/gas friction coefficient  [-]
ξGL  Gas/liquid friction coefficient  [-]
ξwall  Direction change coefficient near the wall  [-]
ρL  Liquid density  [kg/m3]
ρV  Gas density  [kg/m3]
σL  Surface tension of the liquid phase  [N/m]
σW  Surface tension of water  [N/m]
φ  Fraction of the flow channel occupied by the liquid phase  [-]
ψ0  Resistance coefficient for dry pressure drop calculation  [-]
ψFl  Resistance coefficient for pressure drop calculation at flooding point  [-]
ψL  Resistance coefficient for wet pressure drop calculation  [-]
ψ'L  Resistance coefficient for wet pressure drop calculation  [-]
ψlp  Resistance coefficient for pressure drop calculation at loading point  [-]
Ω  Fraction of the packing surface occupied by holes  [-]
Table 1. Concentration requirements before biogas liquefaction [8]
Table 2. Composition and characterization of household waste [13]
Table 3. Production yields for agricultural and agro-industrial substrates [14]
Table 4. Biogas composition depending on the type of substrate [14]
Table 5. Tolerances in impurities for the use of liquid biomethane as vehicle fuel [14]
Table 6. Thermo-physical properties of hydrogen sulfide [19]
Table 7. Thermo-physical properties of carbon dioxide [19]
Table 8. Thermo-physical properties of methane [19]
Table 9. Constants used by the Antoine equation for the calculation of H2S, CO2 and CH4 vapor pressures
Table 10. Solubility of the main compounds of biogas in water [Lide]
Table 11. Properties of physical solvents [40]
Table 12. Solubility of gases in physical solvents at 25 °C and 0.1 MPa [41]
Table 13. Physical properties of some chemical solvents [43]
Table 14. Classification of main gas-liquid contactors [48]
Table 15. Criteria to differentiate between physical and chemical adsorption [Ruthven]
Table 16. Equations governing adsorption isotherm models, and their linear forms [Hamdaoui]
Table 17. The membrane materials used by manufacturer [71]
Table 18. Classification of membrane separation processes [73]
Table 19. Distribution of membranes according to pore size [IUPAC]
Table 20. Characteristics of the different geometries of membranes [76]
Table 21. Condensation or solidification temperatures, at atmospheric pressure, for the different compounds present in biogas
Table 27. Sensitivities of measurement tools
Table 28. Constants for the Billet and Schultes model [98]
Table 29. Effective interfacial area in packing columns using Billet and Schultes model [98]
Table 30. Liquid holdup and velocities at loading and flooding point [99]
Table 31. Liquid holdup in preloading and loading regions [98]
Table 32. Pressure drop in packing columns using Billet and Schultes model [98]
Table 33. Effective interfacial area in packing columns using SRP model [96]
Table 34. Liquid holdup in packing columns using SRP model [101]
Table 35. Pressure drop in packing columns using SRP model [96]
Table 36. Pressure drop in preloading region using Delft model [97]
Table 37. Physical properties of the systems tested [100]
Table 38. Dimensions of Flexipac® 350Y [100]
Table 39. Deviation between predictive models and experimental data
Table 40. Changes made to calculate liquid holdup
Table 41. Changes made to calculate pressure drop for liquid density less than 900 kg.m-3
Table 42. Changes made to calculate pressure drop for liquid density higher than 900 kg.m-3
Table 43. Statistical deviation between the modified model and experimental data for pressure drop and liquid holdup predictions
Table 44. Comparison between modified correlations and experimental data for the prediction of pressure drop in a structured packing column
Table 45. Coefficients used by Aspen Plus® to calculate Henry's constant
Table 46. Coefficients used by Harvey to calculate Henry's constants
Table 47. Values of the constants used by Equation (87) to calculate the heat capacity of carbon dioxide
Table 48. Coefficients used in the calculation of the equilibrium constant
Table 49. Coefficients used in the calculation of the equilibrium constant of Reaction (R.18)
Table 50. Parameters k and E for kinetic-controlled reactions [119]
Table 51. Details of the simulated process
Table 52. Influence of liquid flow rate on the pressure drop
Table 53. Activated carbon properties used for this study [53]
Table 54. Biogas composition and operating conditions of the adsorption process
Table 55. Sherwood number estimation [123]
Table 56. Estimation of the binary diffusion coefficients [135]
Table 57. Equations used to estimate internal diffusion coefficients
Table 58. Estimation of dimensionless numbers and diffusion coefficients
Table 59. Boundary and initial conditions of the system
Table 60. Input parameters for the simulation of the breakthrough curve
Table 2. Composition and characterization of household waste [13]
Compounds Composition [wt. %] Dry matter [wt. %] Dry organic matter [wt. %]
Putrescible 33.0 44 77
Papers 11.7 68 80
Cardboards 12.0 70 80
Various incombustible 8.5 90 1
Tetra brik 8.5 70 60
Glasses 5.4 98 2
Plastic 4.9 85 90
Green waste 4.5 50 79
Textiles 4.2 74 92
Metals 3.7 90 1
Special waste 2.0 90 1
Various fuels 1.6 85 75
Table 3. Production yields for agricultural and agro-industrial substrates [14]
Substrates  Dry matter [wt. %]  Dry organic matter [wt. %]  Production yield [m3/t of dry organic matter]  Production yield [m3/t of substrate]
Pig manure 6 to 25 75 to 80 300 to 450 22 to 60
Cow manure 8 to 30 80 to 82 350 to 700 21 to 168
Chicken manure 10 to 60 67 to 77 300 to 800 20 to 180
Horse manure 28 25 500 35
sheep manure 15 to 25 80 to 85 350 50 to 74
Grass silage 26 to 80 67 to 98 500 to 600 87 to 440
Hay 86 to 93 83 to 93 500 356 to 432
corn straw 86 72 500 310
Foliage 85 82 400 279
Sorghum 25 93 700 162
Helianthus annuus 35 88 750 231
Distillation residues 12 90 430 77
Brewery waste 15 to 21 66 to 95 500 50 to 100
Vegetable waste 5 to 20 76 to 90 600 23 to 108
oleaginous residues 92 97 600 536
Fruit residues 40 to 50 30 to 93 450 to 500 60 to 232
press cake 88 93 5550 450
slaughterhouse waste 15 80 to 90 450 58
bakery waste 50 80 to 95 450 665
Shortening waste 99 99 1200 1117
Table 4. Biogas composition depending on the type of substrate [14]
Compounds  Unit  Biogas from household and industrial waste  Biogas from wastewater treatment plants  Agricultural biogas
CH4 % mol. 40 -55 65 -75 45 -75
CO2 % mol. 25 -30 20 -35 25 -55
N2 % mol. 10 0 -5 0 -5
O2 % mol. 1 -5 0.5 0 -2
NH3 % mol. Traces Traces 0 -3
VOC mg.Nm -3 < 2500 < 3000 < 1500
H2S mg.Nm -3 < 3000 < 4000 < 10000
Table 6. Thermo-physical properties of hydrogen sulfide [19]
Properties Unit Value
Molar mass g.mol -1 34.08
Auto-ignition temperature °C 270
Solubility in water (1.013 bar and 0 °C) vol/vol 4.67
Solid phase
Melting point °C -85.7
Latent heat of fusion (1.013 bar at melting point) kJ.kg -1 69.73
Liquid phase
Boiling point at 1.013 bar °C -60.3
Vapor pressure at 20 °C bar 17.81
Liquid phase density (1.013 bar at boiling point) kg.m -3 949.2
Latent heat of vaporization (1.013 bar at boiling point) kJ.kg -1 546.41
Gas phase
Gas phase density (1.013 bar and 15 °C) kg.m -3 1.45
Viscosity (1.013 bar and 0 °C) Pa.s 1.13 x 10 -5
Thermal conductivity (1.013 bar and 0 °C) mW.m -1 .K -1 15.61
Specific volume (1.013 bar and 25 °C) m 3 .kg -1 0.7126
Heat capacity at constant pressure (1.013 bar and 25 °C) kJ.mol -1 .K -1 0.0346
Heat capacity at constant volume (1.013 bar and 25 °C) kJ.mol -1 .K -1 0.026
Critical point
Critical temperature °C 99.95
Critical pressure bar 90
Critical density kg.m -3 347.28
Triple point
Triple point temperature °C -85.45
Triple point pressure bar 0.232
Table 7. Thermo-physical properties of carbon dioxide [19]
Properties Unit Value
Molar mass g.mol -1 44.01
Concentration in air Vol % 0.0405
Solubility in water (1.013 bar and 0 °C) vol/vol 1.7163
Solid phase
Melting point °C -56.57
Latent heat of fusion (1.013 bar at melting point) kJ.kg -1 204.93
Solid phase density kg.m -3 1562
Liquid phase
Boiling point °C -78.45
Vapor pressure at 20 °C bar 57.291
Liquid phase density (19.7 bar at -20 °C) kg.m -3 1256.74
Gas phase
Gas phase density (1.013 bar and 15 °C) kg.m -3 1.87
Viscosity (1.013 bar and 0 °C) Pa.s 1.37 x 10 -5
Thermal conductivity (1.013 bar and 0 °C) mW.m -1 .K -1 14.67
Specific volume (1.013 bar and 25 °C) m 3 .kg -1 0.5532
Heat capacity at constant pressure (1.013 bar and 25 °C) kJ.mol -1 .K -1 0.0374
Heat capacity at constant volume (1.013 bar and 25 °C) kJ.mol -1 .K -1 0.0289
Critical point
Critical temperature °C 30.98
Critical pressure bar 73.77
Critical density kg.m -3 467.6
Triple point
Triple point temperature °C -56.56
Triple point pressure bar 5.187
2.2.3. Methane
Methane is a hydrocarbon which is naturally present in the Earth's atmosphere at very low
concentrations (1.82 ppm in 2012) [21].
Table 8. Thermo-physical properties of methane [19]
Properties Unit Value
Molar mass g.mol -1 16.043
Auto-ignition temperature °C 595
Solubility in water (1.013 bar and 2 °C) vol/vol 0.054
Solid phase
Melting point °C -182.46
Latent heat of fusion (1.013 bar at melting point) kJ.kg -1 58.682
Liquid phase
Boiling point at 1.013 bar °C -161.48
Liquid phase density (1.013 bar at boiling point) kg.m -3 422.36
Gas phase
Gas phase density (1.013 bar at boiling point) kg.m -3 1.816
Viscosity (1.013 bar and 0 °C) Pa.s 1.0245 x 10 -5
Thermal conductivity (1.013 bar and 0 °C) mW.m -1 .K -1 30.57
Specific volume (1.013 bar and 25 °C) m 3 .kg -1 1.5227
Heat capacity at constant pressure (1.013 bar and 25 °C) kJ.mol -1 .K -1 0.0358
Heat capacity at constant volume (1.013 bar and 25 °C) kJ.mol -1 .K -1 0.0274
Critical point
Critical temperature °C -82.59
Critical pressure bar 45.99
Critical density kg.m -3 162.7
Triple point
Triple point temperature °C -182.46
Triple point pressure bar 0.117
Table 9. Constants used by the Antoine equation for the calculation of H2S, CO2 and CH4 vapor pressures
Components A B C Temperature range [K] References
H2S 4.43681 829.439 -25.412 138.8 -212.8 [26]
H2S 4.52887 958.587 -0.539 212.8 -349.5 [26]
CO2 6.81228 1301.679 -3.494 154.26 -195.89 [27]
CH4 3.9895 443.028 -0.49 90.99 -189.99 [28]
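As a quick check of Table 9, the short Python sketch below evaluates the Antoine equation for the three components. The convention log10(Ps) = A - B/(T + C), with Ps in bar and T in K, is an assumption, but it is consistent with the tabulated validity ranges (it returns about 1.013 bar for CO2 at its sublimation point of -78.45 °C).

import math

# Antoine constants (A, B, C) and validity ranges [K] from Table 9,
# assuming the convention log10(Ps) = A - B / (T + C), Ps in bar, T in K.
ANTOINE = {
    "H2S (low T)":  (4.43681, 829.439, -25.412, (138.8, 212.8)),
    "H2S (high T)": (4.52887, 958.587, -0.539, (212.8, 349.5)),
    "CO2":          (6.81228, 1301.679, -3.494, (154.26, 195.89)),
    "CH4":          (3.9895, 443.028, -0.49, (90.99, 189.99)),
}

def vapor_pressure_bar(component, T):
    """Vapor pressure [bar] of a pure component at temperature T [K]."""
    A, B, C, (Tmin, Tmax) = ANTOINE[component]
    if not Tmin <= T <= Tmax:
        raise ValueError("T outside the validity range of " + component)
    return 10.0 ** (A - B / (T + C))

# Example: vapor pressures at 190 K, close to CO2 frosting conditions
for comp in ("H2S (low T)", "CO2", "CH4"):
    print(comp, round(vapor_pressure_bar(comp, 190.0), 4), "bar")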
Table 11. Properties of physical solvents [40]
Selexol® Purisol® Rectisol® Morphysorb® Fluor solvent®
Solvent Dimethyl Ethers of Polyethylene Glycol N-Methyl-2-Pyrrolidone Methanol N-Formyl-Morpholine Propylene Carbonate
Chemical formula (CH3O(CH2CH2O)xCH3) 3 ≤ x ≤ 9 C5H9NO CH3OH C5H9NO2 C4H6O3
Maximum temperature [°C] 175 65
Vapor pressure at 25 °C [kPa] 0.097 32 1.3 (at 20 °C) 9 3
Viscosity at 25 °C [Pa.s] 5.8 1.65 0.6 9.5 3
Boiling point [°C] 240 202 110 242 240
Melting point [°C] -23 / -29 -24 -98 21 -48
Molar mass [g.mol -1 ] 280 99 32 115.3 102
Table 12. Solubility of gases in physical solvents at 25 °C and 0.1 MPa [41]
vol.vol -1 DMEPG PC NMP MeOH (at -25° C)
H2 0.013 0.0078 0.0064 0.0054
N2 0.02 0.0084 0.012
O2 0.026 0.035 0.02
CO 0.028 0.021 0.021 0.02
CH4 0.066 0.038 0.072 0.051
CO2 1 1 1 1
NH3 4.8 23.2
H2S 8.82 3.2 10.2 7.06
SO2 92.1 68.6
H2O 730 300 4000
Table 13. Physical properties of some chemical solvents [43]
MEA DGA DEA DIPA TEA
Chemical formula HOC2H4NH2 H(OC2H4)2NH2 (HOC2H4)2NH (HOC3H6)2NH (HOC2H4)3N
Molecular weight 61.08 105.14 105.14 133.19 148.19
Boiling point [°C] 170.50 221.11 269.00 248.72 360.00
Melting point [°C] 10.5 -12.5 28.0 42.0 22.4
Viscosity [cP] 24 (20 °C) 40 (15.6 °C) 350 (20 °C) 870 (30 °C) 1013 (20 °C)
Table 14. Classification of main gas-liquid contactors [48]
Gas-liquid contactors Continuous phase Fluid inclusion type Main applications
Bubble column Liquid Bubbles Oxidation / Chlorination
Gas-liquid agitated vessel Liquid Bubbles Oxidation / Fermentation
Spray tower Gas Drops and liquid films Gas scrubbing dust-laden
Packed column Gas Drops and liquid films Gas scrubbing
Venturi tube Gas Drops and liquid films Gas scrubbing dust-laden
Plate column  Liquid  The flow is stratified, with entrained bubbles in the liquid and a mist or spray of liquid droplets in the vapour
Table 15. Criteria to differentiate between physical and chemical adsorption [Ruthven]
Criteria Physical adsorption Chemical adsorption
Process temperature Low High
Type of bond Van der Waals Chemical
Interaction forces 30 to 40 kJ.mol -1 80 to 800 kJ.mol -1
Layers Monolayers or multilayers Monolayers only
Kinetic Fast and reversible Slow and irreversible
Desorption Easy Difficult
Table 16. Equations governing adsorption isotherm models, and their linear forms [Hamdaoui]
Isotherm | Equation | Linear form
Langmuir | qe = qm b Ce/(1 + b Ce) | 1/Ce = b (qm/qe - 1)
Freundlich | qe = KF Ce^(1/n) | ln qe = ln KF + (1/n) ln Ce
Temkin | θ = (R T/ΔQ) ln(K0 Ce) | θ = (R T/ΔQ)(ln K0 + ln Ce)
Elovich | qe/qm = KE Ce exp(-qe/qm) | ln(qe/Ce) = ln(KE qm) - qe/qm
Kiselev | k1 Ce = θ/[(1 - θ)(1 + kn θ)] | 1/[Ce (1 - θ)] = k1 (kn + 1/θ)
Fowler-Guggenheim | KFG Ce = [θ/(1 - θ)] exp(2 θ W/(R T)) | ln[Ce (1 - θ)/θ] = 2 θ W/(R T) - ln KFG
Hill-de Boer | K1 Ce = [θ/(1 - θ)] exp[θ/(1 - θ) - K2 θ/(R T)] | ln[Ce (1 - θ)/θ] - θ/(1 - θ) = -(ln K1 + K2 θ/(R T))
Toth | qe/qmT = Ce/(1/KT + Ce^mT)^(1/mT) | ln[qe^mT/(qmT^mT - qe^mT)] = mT (ln Ce + ln KT)
Where:
qe [g.kg-1] is the amount of solute adsorbed per unit weight of adsorbent at equilibrium.
qm [g.kg-1] is the maximum adsorption capacity.
qmT [g.kg-1] is the Toth maximal adsorption capacity.
b [m3.g-1] is the Langmuir constant related to the free energy of adsorption.
Ce [g.m-3] is the equilibrium concentration of the solute in the bulk solution.
C0 [g.m-3] is the initial concentration of the solute in the bulk solution.
KF [g^(1-1/n).m^(3/n).kg-1] is the Freundlich constant indicative of the relative adsorption capacity of the adsorbent.
KT [-] is the Toth equilibrium constant.
n [-] is the Freundlich constant indicative of the intensity of the adsorption.
mT [-] is the Toth model exponent.
θ [-] = qe/qm is the surface coverage.
ΔQ [kJ.mol-1] is the variation of adsorption energy (Temkin model).
Table 17. The membrane materials used by manufacturer [71]
Membrane material  Manufacturer
Cellulose acetate Grace
Hydrin C Zeon
Pebax Atochem
Polyacrylate Röhm
Polydimethylsiloxane Wacker, GKSS
Polyhydantoin Bayer
Polyetherimide General Electric
polyethersulfone Bayer, BASF, Monsanto
Table 18. Classification of membrane separation processes [73]
Application Driving force Separation size range
Microfiltration Pressure gradient 10 -0.1 µm
Ultrafiltration Pressure gradient < 0.1 µm -5 nm
Reverse osmosis Pressure gradient < 5 nm
Electrodialysis Electric field gradient < 5 nm
Dialysis Concentration gradient < 5 nm
Table 19. Distribution of membranes according to pore size [IUPAC]
Membrane type Pores size (Å) Physical mechanism Application
Dense Diffusion Reaction, gas separation
Microporous ≤ 20 Micropore diffusion Gas separation
Mesoporous  20 - 500  Knudsen diffusion  Ultrafiltration, Nanofiltration, Gas separation
Macroporous ≥ 500 Molecular sieve Ultrafiltration
Table 20. Characteristics of the different geometries of membranes [76]
Property Flat sheet membrane Tubular membrane Spiral wound membrane Hollow fiber membrane
Interfacial area [m 2 /m 3 ] ≈ 100 ≈ 1000 ≈ 500 5 -10000
Filling density Low High Moderate Very high
Resistance to soiling Good Low Moderate Low
Use at high pressures Difficult Easy Easy Easy
manufacturing cost High Moderate High Moderate
Application limited to certain membranes  No  Yes  No  Yes
Table 21. Condensation or solidification temperatures, at atmospheric pressure, for the different compounds present in biogas
Compound Condensation temperature [°C]
H2S -60.3
CO2 -78.5
CH4 -161.5
N2 -195.8
Table 23. Comparison of the different biogas purification and upgrading technologies
Separation technology  Methane concentration obtained [%]  Methane losses [%]  Process needs
Chemical absorption 95 -98 [83] > 99.5 [84] 0.1 -0.2 Amines or chemical solvent recharges
Water wash at high pressure  96 - 98 [83], > 98 [84]  10 - 20 (the high pressure increases the methane solubility in water)  Large water requirement
Adsorption 95 -98 [83] 98 [84] 2
Membrane separation 76 -95 [83] 90 -93.5 [85] 98 [86] 6.5 -10 [85] 2 [86] Change membranes
Cryogenic technology > 97 [83] Refrigerants
Table 24. Performances, costs, advantages and disadvantages of separation processes [Le Cloirec]
Separation technology  Efficiency [%]  Investment cost [€/m3.h-1]  Operating cost [€/1000 m3]  Advantages  Disadvantages
Absorption  95 - 98  7 - 32  1.7 - 8.2  Simple operation for a wide range of flow rates, concentrations and compounds
Table 27. Sensitivities of measurement tools
Properties to measure Measurement tools Accuracy
Pressure drop in the packing column Differential Pressure Transmitter ± 0.065 % Full Scale (FS)
Pressure Pressure sensor < 0.5 % FS
H2S concentration Biogas analyser ± 3 % FS
Flow rates Flowmeter < 1 % FS
Table 28. Constants for the Billet and Schultes model [98]
Manufacturer  Material  Description  a [m2.m-3]  ε  Clp  CFl  Ch  Cp  CL  CV
Flexipac Metal 350Y 350 0.985 3.157
Mellapak Metal 250Y 250 0.970 3.157 2.464 0.554 0.292 - -
Ralu pak Metal YC-250 250 0.945 3.178 2.558 - 0.191 1.334 0.385
Gempack Metal A2T-304 202 0.977 2.986 2.099 0.678 0.344 - -
Euroform Plastic PN-110 110 0.936 3.075 1.975 0.511 0.250 0.973 0.167
Impulse packing  Metal  250  250  0.975  2.610  1.996  0.431  0.262  0.983  0.270
Impulse packing  Ceramic  100  91.4  0.838  2.664  1.655  1.900  0.417  1.317  0.327
Montz packing  Metal  B1-200  200  0.979  3.116  2.339  0.547  0.355  0.971  0.390
Montz packing  Metal  B2-300  300  0.930  3.098  2.464  0.482  0.295  1.165  0.422
Montz packing  Plastic  C1-200  200  0.954  2.653  -  -  0.453  0.412  0.739
Montz packing  Plastic  C2-200  200  0.900  1.973  -  -  1.006  0.481  -
Correlations (19), (20) and (21), given in Table 29, are used to predict the effective interfacial area of the packing.
Table 30. Liquid holdup and velocities at loading and flooding point [99]

Parameter  Correlation
Gas velocity at loading point:  uV,lp = (g/ψlp)^0.5 [ε/a^(1/6) - a^0.5 (12 µL uL,lp/(g ρL))^(1/3)] (12 µL uL,lp/(g ρL))^(1/6) (ρL/ρV)^0.5   (22)
Liquid velocity at loading point:  uL,lp = (ρV/ρL)(L/V) uV,lp   (23)
Gas velocity at flooding point:  uV,Fl = (2g/ψFl)^0.5 [(ε - hL,Fl)^(3/2)/ε^0.5] (hL,Fl/a)^0.5 (ρL/ρV)^0.5   (24)
Resistance coefficient at loading point:  ψlp = (g/Clp^2) [(L/V)(ρV/ρL)^0.5 (µL/µV)^0.4]^(-2nlp)   (25)
Resistance coefficient at flooding point:  ψFl = (g/CFl^2) [(L/V)(ρV/ρL)^0.5 (µL/µV)^0.2]^(-2nFl)   (26)
Packing specific constant at loading point:  for (L/V)(ρV/ρL)^0.5 ≤ 0.4: nlp = -0.326 and Clp = Clp ; for (L/V)(ρV/ρL)^0.5 > 0.4: nlp = -0.723 and Clp = 0.695 Clp (µL/µV)^0.1588   (27)
Packing specific constant at flooding point:  for (L/V)(ρV/ρL)^0.5 ≤ 0.4: nFl = -0.194 and CFl = CFl ; for (L/V)(ρV/ρL)^0.5 > 0.4: nFl = -0.708 and CFl = 0.6244 CFl (µL/µV)^0.1028   (28)
Liquid holdup at the loading point:  hL,lp = (12 µL uL a^2/(ρL g))^(1/3) (ah/a)^(2/3)   (29)
Liquid holdup at the flooding point:  hL,Fl^3 (3 hL,Fl - ε) = (6/g) a^2 ε (µL/ρL)(ρV/ρL)(L/V) uV,Fl   (30)
Table 31. Liquid holdup in preloading and loading regions [98]

Parameter  Correlation
Liquid holdup in preloading region:  hL,pl = (12 µL a^2 uL/(ρL g))^(1/3) (ah/a)^(2/3)   (31)
Hydraulic area of the packing:  ah/a = Ch ReL^0.15 FrL^0.1 for ReL < 5 ; ah/a = 0.85 Ch ReL^0.25 FrL^0.1 for ReL ≥ 5   (32)
Liquid Reynolds number:  ReL = uL ρL/(a µL)   (33)
Liquid Froude number:  FrL = uL^2 a/g   (34)
Liquid holdup at flooding point:  hL,Fl = 2.2 hL,pl   (35)
Liquid holdup in loading region:  hL = hL,pl + (hL,Fl - hL,pl)(uV/uV,Fl)^13   (36)
Table 32. Pressure drop in packing columns using Billet and Schultes model [98]

Parameter  Correlation
Dry pressure drop:  (dP/dz)d = ψ0 (a/ε^3)(Fc^2/2)(1/K)   (37)
Resistance coefficient:  ψ0 = Cp (64/ReV + 1.8/ReV^0.08)   (38)
Gas capacity factor:  Fc = uV √ρV   (39)
Wall factor:  1/K = 1 + (2/3)(1/(1 - ε))(dp/d)   (40)
Particle diameter:  dp = 6 (1 - ε)/a   (41)
Gas Reynolds number:  ReV = uV dp ρV K/((1 - ε) µV)   (42)
Wet pressure drop:  dP/dz = ψL fw (a/(ε - hL)^3)(Fc^2/2)(1/K)   (43)
Resistance factor:  ψ'L = ψL fw = Cp fs (64/ReV + 1.8/ReV^0.08)((ε - hL)/ε)^1.5   (44)
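As an illustration of Equations (37) to (42), the minimal Python sketch below computes the dry pressure drop; the packing constants are those of Table 28 for Mellapak 250Y, while the gas properties and velocity used in the example are assumed values.

import math

def dry_pressure_drop(u_V, rho_V, mu_V, a=250.0, eps=0.970,
                      d_col=0.15, C_p=0.292):
    """Dry pressure drop [Pa/m] from Billet and Schultes, Eqs. (37)-(42).
    a, eps and C_p are taken from Table 28 (Mellapak 250Y)."""
    d_p = 6.0 * (1.0 - eps) / a                                   # Eq. (41)
    K = 1.0 / (1.0 + (2.0 / 3.0) * d_p / ((1.0 - eps) * d_col))   # Eq. (40)
    Re_V = u_V * d_p * rho_V * K / ((1.0 - eps) * mu_V)           # Eq. (42)
    psi0 = C_p * (64.0 / Re_V + 1.8 / Re_V ** 0.08)               # Eq. (38)
    F_c = u_V * math.sqrt(rho_V)                                  # Eq. (39)
    return psi0 * (a / eps ** 3) * (F_c ** 2 / 2.0) / K           # Eq. (37)

# Assumed example: air at ambient conditions, uV = 1.5 m/s
print(round(dry_pressure_drop(u_V=1.5, rho_V=1.2, mu_V=1.8e-5)), "Pa/m")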
Table 34. Liquid holdup in packing columns using SRP model [101]

Parameter  Correlation
Dry pressure drop:  (ΔP/Δz)d = A (ρV/(s ε^2 sin^2 θ)) uV^2 + B (µV/(s^2 ε sinθ)) uV
Table 37. Physical properties of the systems tested [100]
Component Density [kg/m 3 ] Viscosity [kg/m.s] Surface tension [N/m]
Air 0.81 18 x 10-6 -
Water 1000 0.001 71.2 x 10 -3
Kerosol 200 763 2.31 x 10 -3 23.9 x 10 -3
Table 38. Dimensions of Flexipac® 350Y [100]
Property Value
Void fraction 0.985
Corrugation angle 45 °
Corrugation base 15.5 mm
Corrugation side 11.5 mm
Crimp height 8.4 mm
Height of element 265 mm
Table 40. Changes made to calculate liquid holdup

Equations to modify → New equations

For liquid density > 900 kg/m3:
hL,pl = [628.4 (uL x 3600)^(-0.929)]^(1/3) (µL a^2 uL/(ρL g))^(1/3) (ah/a)^(2/3)   (79)

For liquid density ≤ 900 kg/m3:
hL,pl = Exp(-0.0057 uL x 3600 + 1.2074) (µL a^2 uL/(ρL g))^(1/3) (ah/a)^(2/3)

For liquid density > 900 kg/m3:
hL = hL,pl + (1.3 hL,Fl - hL,pl)(uV/uV,Fl)^10   (80)

For liquid density ≤ 900 kg/m3:
hL = hL,pl + (Exp[-0.0006 (uL x 3600)^2 - 0.031 uL x 3600 + 0.3736] hL,Fl - hL,pl)(uV/uV,Fl)^[-0.0215 (uL x 3600)^2 + 0.8927 uL x 3600 + 5.3358]

(uL being in m/s, uL x 3600 is the liquid load in m/h.)
Table 41. Changes made to calculate pressure drop for liquid density less than 900 kg.m-3

Equations to modify → New equations

Preloading region:
ψ'L = ψL fw = Cp (hL,lp/hL)^0.3 exp(ReL/200) (64/ReV + 1.8/ReV^0.08) ((ε - hL)/ε)^1.5   (81)

Loading and flooding regions:
ψ'L = ψL fw = Cp (hL,lp/hL)^0.6 exp(ReL/200) (64/ReV + 1.8/ReV^0.08) ((ε - hL)/ε)^0.5
Table 43. Statistical deviation between the modified model and experimental data for pressure drop and liquid holdup predictions
Air / Water System
Liquid load [m/h] Pressure drop AAD [%] MAD [%] Liquid holdup AAD [%] MAD [%]
35.6 10 19 7 11
28.8 5 22 2 6
20.5 5 18 6 16
12.9 8 22 4 12
6 9 21 7 12
Air / Kerosol 200 System
Liquid load [m/h] Pressure drop AAD [%] MAD [%] Liquid holdup AAD [%] MAD [%]
35.6 9 26 3 11
28.8 7 20 3 12
20.6 6 13 4 8
12.7 5 12 6 9
6.1 12 23 4 11
Table 44. Comparison between modified correlations and experimental data for the prediction of pressure drop in a structured packing column
No. of point L [kg/h] V [kg/h] (𝚫𝐏) 𝐞𝐱𝐩 [𝐏𝐚] (𝚫𝐏) 𝐦𝐨𝐝𝐢𝐟𝐢𝐞𝐝 𝐦𝐨𝐝𝐞𝐥 [𝐏𝐚] (𝚫𝐏) 𝐨𝐫𝐢𝐠𝐢𝐧𝐚𝐥 𝐦𝐨𝐝𝐞𝐥 [𝐏𝐚] Absolute value of relative deviation between experimental and modified model [%] Absolute value of relative deviation between experiment al and original model [%]
1 818 89.7 304.4 289.5 200.0 4.89 34.29
2 809 90.2 310.4 292.2 201.6 5.88 35.07
3 809 90.9 312.3 296.5 204.3 5.06 34.58
4 809 91.2 312.4 298.8 205.8 4.36 34.13
5 850 91.8 321.2 307.8 211.1 4.18 34.28
6 854 92.6 325.5 313.3 214.5 3.74 34.09
7 870 93.0 323.9 318.6 217.6 1.63 32.81
8 821 93.8 325.6 318.0 217.7 2.33 33.13
9 797 94.9 326.2 322.6 220.8 1.10 32.31
10 855 95.4 345.0 334.0 227.1 3.16 34.15
Table 45. Coefficients used by Aspen Plus® to calculate Henry's constant

The Henry's constants obtained with Aspen Plus® were compared to experimental data from the research report RR-48 of the Gas Processors Association [105]. Results of Henry's constants were also compared to the semi-empirical Equation (84) proposed by Harvey [106].
Component i CH4 CO2 H2S
Component j H2O H2O H2O
Low temperature [°C] 1.85 -0.15 -0.15
High temperature [°C] 79.85 226.85 149.85
aij 183.7811 159.1997 346.625
bij -9111.67 -8477.711 -13236.8
cij -25.0379 -21.957 -55.0551
dij 0.0001434 0.00578 0.05957
eij 0 0 0
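A minimal sketch evaluating this temperature dependence with the Table 45 coefficients; on the mole-fraction scale the constants come out in bar (about 4 x 10^4 bar for CH4 at 25 °C, consistent with published solubility data).

import math

# ln(Hij) = aij + bij/T + cij*ln(T) + dij*T + eij/T**2, coefficients of
# Table 45 (Aspen Plus databanks); H is obtained in bar on the
# mole-fraction scale.
HENRY = {
    "CH4": (183.7811, -9111.67, -25.0379, 0.0001434, 0.0),
    "CO2": (159.1997, -8477.711, -21.957, 0.00578, 0.0),
    "H2S": (346.625, -13236.8, -55.0551, 0.05957, 0.0),
}

def henry_bar(gas, T):
    """Henry's constant [bar] of a gas in water at T [K]."""
    a, b, c, d, e = HENRY[gas]
    return math.exp(a + b / T + c * math.log(T) + d * T + e / T ** 2)

# Solubility ranking at the 4 C absorber feed temperature: the lower H,
# the more soluble the gas; H2S is far more soluble than CH4.
for gas in ("CH4", "CO2", "H2S"):
    print(gas, round(henry_bar(gas, 277.15)), "bar")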
The water vapor pressure and the reduced temperature used in Harvey's correlation are given by Equations (85) and (86):

Ps,water = Exp(73.649 - 7258.2/T - 7.3037 ln T + 4.1653 x 10^-6 T^2)   (85)

T* = T/Tc,water   (86)
Table 46. Coefficients used by Harvey to calculate Henry's constants
Component i CH4 CO2 H2S
Component j H2O H2O H2O
aij 11.01 9.4234 5.7131
bij 4.836 4 5.3727
cij 12.52 10.32 5.4227
Table 47. Values of the constants used by Equation (87) to calculate the heat capacity of carbon dioxide
Constants Values
C1i 29.37
C2i 34.54
C3i 1428
C4i 26.4
C5i 588
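Assuming Equation (87) is the DIPPR-107 ideal-gas heat capacity form, which these constants match, the sketch below reproduces the calculated curve of Fig. 47 and the value of Table 7 (37.4 J/(mol.K) at 25 °C).

import math

# Cp = C1 + C2*((C3/T)/sinh(C3/T))**2 + C4*((C5/T)/cosh(C5/T))**2,
# with the Table 47 constants; result in J/(mol.K), T in K (assumed form).
C1, C2, C3, C4, C5 = 29.37, 34.54, 1428.0, 26.4, 588.0

def cp_co2(T):
    x, y = C3 / T, C5 / T
    return C1 + C2 * (x / math.sinh(x)) ** 2 + C4 * (y / math.cosh(y)) ** 2

print(round(cp_co2(298.15), 2), "J/(mol.K)")  # about 37.2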
Fig. 47: Comparison between model and experimental data for heat capacity of carbon dioxide. ( ____ ) Aspen Plus® ; (♦) Experimental values
Table 48. Coefficients used in the calculation of the equilibrium constant

These coefficients are unavailable in the literature for Reactions (R.18) and (R.19). Therefore, the equilibrium constants of these two reactions were calculated by Aspen Plus® using Gibbs free energies. The results for Reaction (R.18) were verified using the equilibrium constants of Reactions (R.14) and (R.16) through Equation (99). Fig. 55 shows the results obtained.
Reaction A B C
R.13 231.456 -12092.1 -36.7816
R.14 132.899 -13445.9 -22.4773
R.15 216.05 -12431.7 -35.4819
R.16 214.582 -12995.4 -33.55471
R.17 -9.74 -8585.47 0
Keq,R.18 = Keq,R.16 / Keq,R.14   (99)
Table 49. Coefficients used in the calculation of the equilibrium constant of Reaction (R.18)
Reaction  A  B  C
R.18 147 -1930 -21.15
Table 50. Parameters k and E for kinetic-controlled reactions [119]
Reaction k E [cal/mol]
R.24 4.32E+13 13249
R.25 2.83E+17 29451
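These parameters follow the power-law (Arrhenius) form k(T) = k exp(-E/(R T)); since E is given in cal/mol, R = 1.987 cal/(mol.K). A short sketch, assuming R.24 is the CO2 + OH- reaction studied by Pinsent [119]:

import math

R_CAL = 1.987  # gas constant [cal/(mol.K)], E being given in cal/mol

# Arrhenius parameters of Table 50: k(T) = k * exp(-E / (R * T))
REACTIONS = {"R.24": (4.32e13, 13249.0), "R.25": (2.83e17, 29451.0)}

def rate_constant(reaction, T):
    k, E = REACTIONS[reaction]
    return k * math.exp(-E / (R_CAL * T))

# The rate constant of R.24 drops markedly between 25 C and the 4 C
# solvent feed temperature of the absorber
for T in (298.15, 277.15):
    print("T =", T, "K, k(R.24) =", "%.3e" % rate_constant("R.24", T))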
Table 51. Details of the simulated process
Packed column
Diameter [m] 0.15
Type of the packing / Size / Material / Vendor Flexipac / 500Y / Metal / KOCH
Packing height [m] 2.354
Gas inlet
Temperature [°C] 9
Pressure [atm] 1
Mass flow rate [kg/h] 90
Volume flow rate [m 3 /h] 77
Molar composition  CH4 (60 %), CO2 (39.997 %), H2S (30 ppm)
Liquid inlet
Temperature [°C] 4
Pressure [atm] 1
Mass flow rate [kg/h] 420
Composition Water with NaOH (0.5 g/l)
Table 52. Influence of liquid flow rate on the pressure drop
No. of point  L [kg/h]  V [kg/h]  (ΔP)exp [Pa]  (ΔP)modified model [Pa]  Absolute value of relative deviation between experimental and modified model [%]
1 1185 74.9 232 234 0.86
2 1026 77.1 217 229 5.4
3 852 74.5 204 200 1.9
4 766 73.0 180 187 3.8
5 686 75.0 182 191 5.1
6 423 77.0 204 186 8.7
7 329 78.9 211 190 9.8
Table 53. Activated carbon properties used for this study [53]
Properties Unit Value
Shape - Pellet or cylindrical shaped
Particle density kg/m 3 460
Particle size mm 4
BET surface area m 2 /g 1134
Micropore volume cm 3 /g 0.48
Table 56. Estimation of the binary diffusion coefficients [135]

Binary system  Correlation
H2S - CH4:  D(H2S-CH4) = (2.745 x 10^-4/P) (Pc,H2S Pc,CH4)^(1/3) (Tc,H2S Tc,CH4)^(5/12) (1/M(H2S) + 1/M(CH4))^(1/2) (T/(Tc,H2S Tc,CH4)^(1/2))^1.823   (118)
H2S - CO2:  D(H2S-CO2) = (2.745 x 10^-4/P) (Pc,H2S Pc,CO2)^(1/3) (Tc,H2S Tc,CO2)^(5/12) (1/M(H2S) + 1/M(CO2))^(1/2) (T/(Tc,H2S Tc,CO2)^(1/2))^1.823   (119)
H2S - N2:  D(H2S-N2) = (2.745 x 10^-4/P) (Pc,H2S Pc,N2)^(1/3) (Tc,H2S Tc,N2)^(5/12) (1/M(H2S) + 1/M(N2))^(1/2) (T/(Tc,H2S Tc,N2)^(1/2))^1.823   (120.a)
H2S - N2:  D(H2S-N2) = 10^-7 T^1.75 (1/M(H2S) + 1/M(N2))^(1/2) / [P (Vm,H2S^(1/3) + Vm,N2^(1/3))^2]   (120.b)
H2S - H2O:  D(H2S-H2O) = (3.64 x 10^-4/P) (Pc,H2S Pc,H2O)^(1/3) (Tc,H2S Tc,H2O)^(5/12) (1/M(H2S) + 1/M(H2O))^(1/2) (T/(Tc,H2S Tc,H2O)^(1/2))^2.334   (121)
Table 57. Equations used to estimate internal diffusion coefficients

Type of diffusion mechanism  Equation
Knudsen diffusion:  DKnudsen = 97 rpore (T/M(H2S))^(1/2)   (122)
Surface diffusion:  Ds = 1.1 x 10^-8 exp(-5.32 TB,H2S/T)   (123)
Poiseuille diffusion:  DPoiseuille = P rpore^2/(8 µH2S)   (124)
Pore diffusion (global internal diffusion):  1/Dp = 1/Dm + 1/DKnudsen   (125)
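A short sketch combining Equations (122) and (125); the mean pore radius is an assumption chosen to be consistent with the Knudsen diffusivity of Table 58, and M is taken in g/mol, the customary unit of this correlation.

import math

def knudsen_diffusivity(r_pore, T, M):
    """Eq. (122): D_Knudsen = 97 * r_pore * sqrt(T / M),
    with r_pore in m, T in K and M in g/mol (assumed units)."""
    return 97.0 * r_pore * math.sqrt(T / M)

def pore_diffusivity(D_m, D_kn):
    """Eq. (125): 1/Dp = 1/Dm + 1/D_Knudsen (Bosanquet-type combination)."""
    return 1.0 / (1.0 / D_m + 1.0 / D_kn)

D_m = 1.35e-5                                       # molecular diffusivity [m2/s], Table 58
D_kn = knudsen_diffusivity(6.5e-10, 294.15, 34.08)  # assumed 0.65 nm pore radius
print("D_Knudsen =", "%.2e" % D_kn, "m2/s")         # about 1.8e-7, cf. Table 58
print("D_pore =", "%.2e" % pore_diffusivity(D_m, D_kn), "m2/s")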
Table 58. Estimation of dimensionless numbers and diffusion coefficients
Properties Unit Value
Reynolds number, Re - 17
Schmidt number, Sc - 0.81
Sherwood number, Sh - 7.66
Peclet number, Pe - 13.77
Molecular diffusivity m 2 /s 1.35 x 10 -5
Knudsen diffusivity m 2 /s 1.84 x 10 -7
Surface diffusivity m 2 /s 2.34 x 10 -10
Poiseuille diffusivity m 2 /s 4.35 x 10 -10
Pore diffusivity m 2 /s 1.81 x 10 -7
-(DL/L^2) ∂^2C/∂z^2 + (u/L) ∂C/∂z + ∂C/∂t + (ρ/ε) ∂q/∂t = 0   (126)

Where:
DL [m2.s-1] is the axial dispersion coefficient.
C [mol.m-3] is the concentration of H2S in the biogas.
L [m] is the length of the adsorption column.
z [-] is the axial coordinate normalized by the column length.
t [s] is the time.
u [m.s-1] is the superficial velocity of the biogas.
ρ [kg.m-3] is the bulk density of the adsorbent bed.
ε [-] is the bed void fraction.
Neglecting axial dispersion, the gas-phase balance reduces to Equation (136); similarly, Expression (137) is obtained by combining the rate equation (129) with the Langmuir isotherm.

(u/L) ∂C/∂z + ∂C/∂t + ((1 - ε)/ε) kg ap (C - Ce) = 0   (136)

∂Ce/∂t = [kg ap (1 + b Ce)^2/((1 - ε) ρ qm b)] (C - Ce)   (137)
Table 60. Input parameters for the simulation of the breakthrough curve
Parameters Unit Value
Dimensions of the adsorption column
Diameter m 0.8
Height m 0.8
Biogas composition and flow rate
Methane mol% 60
Carbon dioxide mol% 33 / 30.5 / 28
Hydrogen sulfide mol% 5 / 7.5 / 10
Nitrogen mol% 0.89
Water vapor mol% 1.11
Biogas flow rate Nm 3 /h 80
Pressure and temperature
Inlet pressure bar 1.103
Inlet temperature °C 21
Langmuir constants
b m 3 /g 0.9
Activated carbon maximal capacity g/kg 110
Mesh
Number of mesh nodes over the column height - 20
Time step s 0.01
Number of time steps - 2 x 10 6
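The Python sketch below shows how Equations (136) and (137) can be integrated on the Table 60 mesh with an explicit first-order upwind scheme. The bed voidage and the lumped coefficient kg·ap are assumptions to be fitted on the measured curves, as done for Figs. 67 to 69.

import numpy as np

L, D = 0.8, 0.8                   # bed height and diameter [m], Table 60
eps, rho = 0.4, 460.0             # voidage [-] (assumed) and particle density [kg/m3], Table 53
qm, b = 110.0, 0.9                # Langmuir parameters [g/kg, m3/g], Table 60
kg_ap = 5.0                       # lumped mass-transfer coefficient [1/s] (assumed, to be fitted)
Q = 80.0 / 3600.0                 # biogas flow rate [m3/s]
u = Q / (np.pi * (D / 2.0) ** 2)  # superficial velocity [m/s]
C_in = 76.9                       # inlet H2S concentration [g/m3] (5 mol%, 1.103 bar, 21 C)

n, dt, n_steps = 20, 0.01, 2_000_000   # mesh and time stepping of Table 60
dz = 1.0 / (n - 1)                     # dimensionless axial step
C = np.zeros(n)                        # gas-phase H2S concentration [g/m3]
Ce = np.zeros(n)                       # gas concentration in equilibrium with the solid

times, outlet = [], []
for step in range(n_steps):
    # Eq. (137): solid-side accumulation through the Langmuir slope
    dCe = kg_ap * (1.0 + b * Ce) ** 2 / ((1.0 - eps) * rho * qm * b) * (C - Ce)
    # Eq. (136): convection plus interphase transfer (first-order upwind)
    adv = np.zeros(n)
    adv[1:] = (C[1:] - C[:-1]) / dz
    dC = -(u / L) * adv - (1.0 - eps) / eps * kg_ap * (C - Ce)
    C += dt * dC
    Ce += dt * dCe
    C[0] = C_in                        # Dirichlet inlet condition
    if step % 10000 == 0:
        times.append(step * dt / 3600.0)   # [h]
        outlet.append(C[-1] / C_in)        # breakthrough curve C/C_in at z = L

print("u =", round(u, 4), "m/s; final C/C_in at the outlet:", round(outlet[-1], 3))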
Acknowledgements
I am grateful to Prof. Denis Clodic, supervisor of this thesis for the suggestions that helped to shape my research skills and for the responsibility he granted to me. Without him, this dissertation would not have been possible.
Chapter 4: Industrial demonstrator description
Summary:

This chapter presents the industrial process in which the studied desulfurization step takes place: an industrial demonstrator called "BioGNVAL", developed by the Cryo Pur® company in partnership with SUEZ and the French Environment and Energy Management Agency (ADEME). GNVert and IVECO are also partners of the BioGNVAL project.

This pilot plant treats 85 Nm3/h of biogas from the wastewater treatment plant of the Syndicat Interdépartemental pour l'Assainissement de l'Agglomération Parisienne (SIAAP) at Valenton, the second largest in France.

One of the biogas valorization routes is the production of bio-LNG (LNG: Liquefied Natural Gas), a fuel with nearly neutral greenhouse gas emissions and several economic and environmental advantages. The production of this type of fuel requires very low temperatures to liquefy the biomethane, which can cause impurities to solidify and thus lead to operating problems. These impurities must therefore be separated from the biogas; in particular, the removal of hydrogen sulfide is imperative to guarantee optimal operation and a high purity of the other compounds to be valorized, such as carbon dioxide. Moreover, the presence of H2S in biogas is a source of corrosion for equipment such as pumps and heat exchangers.
Because of the lack of predictive models, and because of the imprecision of existing ones to accurately predict the hydrodynamic parameters for some specific applications such as biogas purification, most distillation and packing columns are still being designed based on experimental data from a pilot plant [START_REF] Schultes | Research on mass transfer columns "Old hat or still relevant ?[END_REF].
The objective of this work is to find a model adapted to the representation of the experimental results obtained on the BioGNVAL pilot plant. To this aim, three literature models for the hydrodynamics of structured packing columns have been compared: the Billet and Schultes model, the SRP model and the Delft model. These models were developed from dimensional analysis and experimental data obtained in distillation columns. The first two models are implemented in the process simulator Aspen Plus®. The three models are described in detail in the following section.
Billet and Schultes model
The Billet and Schultes model was originally developed for random packings and later extended to cover structured packings. Based on semi-empirical correlations, this model assumes that the packing void fraction can be represented by vertical tubes in which the liquid is sprayed from the top as a film that meets the gas flowing counter-currently. The angle between the corrugations of the packing is not taken into account by the Billet and Schultes model. The refined model is then compared to an extended range of experimental data, also retrieved from the work of Erasmus [100], in order to validate the new model.
Comparisons to validate the modified model were made at various liquid loads and using two different systems: Air - Water and Air - Kerosol 200. Results of liquid holdup and pressure drop of the two systems over Flexipac® 350Y packing are presented in Figs. 41 and 42.
The same conditions used to evaluate the three models (Type of packing: Flexipac® 350Y, system: Air -Water, Liquid load: 20.5 m/h) are used again in order to evaluate the new model and compare it to the other ones and to the experimental data. In Fig. 39, the experimentally determined pressure drop and liquid holdup [100] are compared to the results obtained with all the models including the new one.
Breakthrough curve modeling
Fig. 65: Schematic representation of an adsorption column [122]

The operating conditions used in the model are the same as those measured on the BioGNVAL pilot plant, in order to adjust the estimated parameters, such as the overall mass transfer coefficient and the maximum adsorption capacity of the activated carbon. These operating conditions include the temperature and pressure of the process, the adsorption column dimensions, the relative humidity of the biogas and its composition. They are presented in Table 54. The adsorption mechanism is governed by different types of mass transfer: external and internal diffusion. That is why, before modeling the breakthrough curve, the mass transfer coefficients should be correctly estimated.
Table 54. Biogas composition and operating conditions of the adsorption process
Estimation of mass transfer coefficients
The estimation of mass transfer coefficients is an essential step in order to simulate the dynamic behavior of adsorption.
External mass transfer coefficient
The evaluation of the external mass transfer coefficient depends on the molecular diffusion coefficient of H2S, the activated carbon particle size and dimensionless numbers, as seen in Equation (101).
The particle size is indicated in Table 53. The Sherwood number (Sh) is estimated from correlations involving the Reynolds (Re) and Schmidt (Sc) numbers. Some correlations defining the Sherwood number, together with their validity ranges (e.g. Equation (105), valid for Re < 15000 [START_REF] Dwivedi | Particle-Fluid Mass Transfer in Fixed and Fluidized Beds[END_REF], and a correlation valid for Re ≥ 500 [START_REF] Onashi | Correlation of Liquid-Side Mass Transfer Coefficient for Single Particle and Fixed-Beds[END_REF]), are presented in Table 55.
Table 55. Sherwood number estimation [123]
Reynolds and Schmidt numbers are calculated using Equations (114) and (115), respectively.
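To make the chain Re -> Sc -> Sh -> k_f concrete, the sketch below uses the Wakao-Funazkri fixed-bed correlation, Sh = 2 + 1.1 Re^0.6 Sc^(1/3), which is a widely used alternative to the correlations of Table 55. The particle diameter, superficial velocity, gas properties and H2S diffusivity are assumed illustrative values, not the Table 53 or Table 54 data.

```python
def external_kf(u, d_p, D_m, rho=1.2, mu=1.2e-5):
    """External (gas-film) mass transfer coefficient from the
    Wakao-Funazkri correlation: Sh = 2 + 1.1 * Re**0.6 * Sc**(1/3).

    u   : superficial gas velocity [m/s]
    d_p : adsorbent particle diameter [m]
    D_m : molecular diffusivity of H2S in the carrier gas [m2/s]
    rho, mu : gas density [kg/m3] and viscosity [Pa.s]
    Returns k_f [m/s].
    """
    re = rho * u * d_p / mu                          # particle Reynolds number
    sc = mu / (rho * D_m)                            # Schmidt number
    sh = 2.0 + 1.1 * re ** 0.6 * sc ** (1.0 / 3.0)   # Sherwood number
    return sh * D_m / d_p

# Illustrative values: 3 mm pellets, 0.2 m/s gas velocity, D_m ~ 1.7e-5 m2/s
print(f"k_f ~ {external_kf(0.2, 3.0e-3, 1.7e-5):.2e} m/s")
```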
In Expression (107), defined by Doytchava et al. [START_REF] Doytchava | Mass Transfer from Solid Particles to Fluid in Granular Bed[END_REF], α is the available surface coefficient, φ is the particle shape factor, τ is the tortuosity and Pep is the Peclet number of the particle, calculated using the corresponding equation [START_REF] Gel'perin | The relation between the surface tension of aqueous solutions of inorganic substances and concentration and temperature[END_REF].
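The symbols defined below belong to the inter-phase mass-transfer rate expression, which is not reproduced in this section. A plausible reconstruction, consistent with those symbols, is the lumped linear-driving-force (LDF) form

$$\frac{\partial q}{\partial t} = k_g \, a_p \, \left( q_e - q \right),$$

with an equivalent gas-side form written in terms of (C - Ce); this should be read as an assumed form rather than the exact equation of the original text.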
Where:
- kg [m.s-1] is the overall mass transfer coefficient;
- ap [m2.m-3] is the external surface area of the adsorbent particle per unit volume of adsorbent;
- qe [g.kg-1] is the amount of H2S adsorbed in equilibrium with the gas-phase concentration;
- Ce [g.m-3] is the H2S concentration in the gas phase in equilibrium with the solid phase.
The adsorption equilibrium between the two phases is described by the type I Langmuir isotherm.
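The type I Langmuir isotherm referred to here takes the standard form

$$q_e = \frac{q_m \, b \, C_e}{1 + b \, C_e}.$$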
Where qm and b are the Langmuir parameters.
Fig. 66 shows that the equilibrium parameters are sensitive: they have a great influence on the simulation results and must therefore be adjusted against experimental data. Boundary and initial conditions are summarized in Table 59.
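To illustrate how the Langmuir equilibrium, the LDF kinetics and the boundary and initial conditions of Table 59 combine into a breakthrough curve, the sketch below integrates an isothermal plug-flow bed model by the method of lines. All numerical values are assumed order-of-magnitude placeholders, not the BioGNVAL operating conditions of Table 54, and axial dispersion is neglected.

```python
import numpy as np
from scipy.integrate import solve_ivp

# --- illustrative parameters (placeholders, to be fitted to pilot data) ---
L, N   = 1.0, 60        # bed length [m], number of grid cells
u      = 0.2            # interstitial gas velocity [m/s]
eps    = 0.4            # bed void fraction [-]
rho_b  = 450.0          # bulk density of the activated carbon bed [kg/m3]
k_ldf  = 1.0e-3         # lumped LDF coefficient kg*ap [1/s]
q_m, b = 20.0, 8.0      # Langmuir parameters [g/kg] and [m3/g]
C_in   = 1.5            # inlet H2S concentration [g/m3] (~1000 ppmv)

dz = L / N

def rhs(t, y):
    C, q = y[:N], y[N:]
    q_star = q_m * b * C / (1.0 + b * C)      # Langmuir equilibrium loading
    dqdt = k_ldf * (q_star - q)               # LDF uptake rate
    C_up = np.concatenate(([C_in], C[:-1]))   # upwind differences, fixed inlet
    dCdt = -u * (C - C_up) / dz - (rho_b / eps) * dqdt
    return np.concatenate((dCdt, dqdt))

sol = solve_ivp(rhs, (0.0, 1.5e5), np.zeros(2 * N), method="BDF",
                t_eval=np.linspace(0.0, 1.5e5, 400))
breakthrough = sol.y[N - 1] / C_in            # outlet C/C_in versus time
```

The outlet history C/C_in is the simulated breakthrough curve; in practice k_ldf, q_m and b would be adjusted until this curve reproduces the measured one, which is the fitting role assigned above to the pilot-plant measurements.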
Abstract
Biogas must be purified to become a renewable fuel. At present, most purification techniques are unsatisfactory because they release hydrogen sulfide (H2S) to the atmosphere. One example of these methods is high-pressure water scrubbing.
The first objective of the thesis is to model the conventional methods for separating H2S from methane. Typical concentrations of H2S in methane range from 200 to 5000 ppm. Separation methods must decrease the concentration of H2S in methane to less than 1 ppm. At the same time, methods for H2S treatment will be studied.
Once the most appropriate separation methods have been selected, tests will be carried out on a pilot plant capable of treating 85 Nm3/h of methane, in which H2S quantities ranging from 1 to 100 ppm will be injected. These tests will allow the modeling of the separation process to be validated.
The thesis work requires simulating the separation process using the software Aspen Plus® or an equivalent one. The effectiveness of different operating conditions will be tested, including temperature. The energy required for the separation will be one of the most important comparison criteria, as will the mass consumption of the different fluids involved in the process.
A system approach is fundamental for evaluating the feedback effect of the H2S valorization method on the separation techniques. The process simulator (Aspen Plus® or equivalent) will enable this system approach. The study will involve both modeling and experimental parts.
The experimental part will be carried out on a semi-industrial-scale test bench, allowing the separation methods to be studied down to -90 °C.
01764957 | en | [sdv.sa.sf, sdv.bv.bot, sde.be] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01764957/file/Valor_et_al_2017.pdf

Teresa Valor (email: [email protected]), Elena Ormeño, Pere Casals
Temporal effects of prescribed burning on terpene production in Mediterranean pines
Keywords: conifers, fire ecology, Pinus halepensis, Pinus nigra, Pinus sylvestris, plant volatiles, prescribed fire, secondary metabolism
Prescribed burning is used to reduce fuel hazard but underburning can damage standing trees. The effect of burning on needle terpene storage, a proxy for secondary metabolism, in fire-damaged pines is poorly understood despite the protection terpenes confer against biotic and abiotic stressors. We investigated variation in needle terpene storage after burning in three Mediterranean pine species featuring different adaptations to fire regimes. In two pure stands of Pinus halepensis Mill. and two mixed stands of Pinus sylvestris L. and Pinus nigra ssp. salzmanni (Dunal) Franco, we compared 24 h and 1 year post-burning concentrations with pre-burning concentrations in 20 trees per species, and evaluated the relative contribution of tree fire severity and physiological condition (δ13C and N concentration) to temporal terpene dynamics (for mono-, sesqui- and diterpenes). Twenty-four hours post-burning, monoterpene concentrations were slightly higher in P. halepensis than at pre-burning, while values were similar in P. sylvestris. By contrast, in the more fire-resistant P. nigra, monoterpene concentrations were lower at 24 h than pre-burning. One year post-burning, concentrations were always lower compared with pre- or 24 h post-burning, regardless of the terpene group. Mono- and sesquiterpene variations were negatively related to pre-burning δ13C, while diterpene variations were associated with fire-induced changes in needle δ13C and N concentration. At both post-burning times, mono- and diterpene concentrations increased significantly with crown scorch volume in all species. Differences in post-burning terpene contents as a function of the pine species' sensitivity to fire suggest that terpenic metabolites could have adaptive importance in fire-prone ecosystems in terms of flammability or defence against biotic agents post-burning. One year post-burning, our results suggest that in a context of fire-induced resource availability, pines likely prioritize primary rather than secondary metabolism. Overall, this study contributes to the assessment of the direct and indirect effects of fire on pine terpene storage, providing valuable information about pine vulnerability to biotic and abiotic stressors over time.
Introduction
Prescribed burning (PB) is the planned use of fire under mild weather conditions to meet defined management objectives [START_REF] Wade | A guide for prescribed fire in southern forests[END_REF]). Prescribed burning is executed mostly for fire risk reduction, but also for forest management, restoring habitats or improving grazing. Generally, prescribed burns are low intensity fires, but certain management objectives require a higher burning intensity to effectively achieve specific goals, such as significantly removing understory or slash. In this case, PB can partially damage trees and affect their vitality in the shortterm. Some studies have analysed the effects of PB on postburning growth [START_REF] Battipaglia | The effects of prescribed burning on Pinus halepensis Mill. as revealed by dendrochronological and isotopic analyses[END_REF][START_REF] Valor | Assessing the impact of prescribed burning on the growth of European pines[END_REF] and tree vitality (see Woolley et al. 2012 for review). Less attention has been dedicated to understanding the effect of PB on secondary metabolites produced by pines [START_REF] Lavoir | Does prescribed burning affect leaf secondary metabolites in pine stands?[END_REF], despite the protection they confer against biotic and abiotic stressors, and their potential to increase plant flammability (Ormeño et al. 2009, Loreto and[START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF].
The quantity and composition of terpenes produced against a stressor can be constrained by the plant's physiological status [START_REF] Sampedro | Costs of constitutive and herbivore-induced chemical defences in pine trees emerge only under low nutrient availability[END_REF]) and genetics [START_REF] Pausas | Secondary compounds enhance flammability in a Mediterranean plant[END_REF], but also by the nature and severity of the stress, and the species affected. The main secondary metabolites biosynthesized in conifers are terpenes and phenols [START_REF] Langenheim | Plant resins: chemistry, evolution, ecology and ethnobotany[END_REF]. In Pinus species, oleoresin is a mixture of terpenes including monoterpenes (volatile metabolites), sesquiterpenes (metabolites with intermediate volatility) and diterpenes (semi-volatile compounds), which are stored in resin ducts of woody and needle tissues [START_REF] Phillips | Resin-based defenses in conifers[END_REF]. Upon stress, plants follow a constitutive or induced strategy to defend themselves from a stressor. Although most Pinus spp. favour the production of constitutive terpenes under stress conditions, they can also synthesize new induced defences [START_REF] Phillips | Resin-based defenses in conifers[END_REF]. The induction timing may be different depending on the chemical groups of terpenes, type of stress, and the species or tissue attacked [START_REF] Lewinsohn | Defense mechanisms of conifers differences in constitutive and wound-induced monoterpene biosynthesis among species[END_REF][START_REF] Achotegui-Castells | Strong induction of minor terpenes in Italian Cypress, Cupressus sempervirens, in response to infection by the fungus Seiridium cardinale[END_REF].
Direct effects of fire such as rising temperatures or heat-induced needle damage can alter terpene production. Increases in air and leaf temperature trigger the emission of volatile terpenes [START_REF] Alessio | Direct and indirect impacts of fire on isoprenoid emissions from Mediterranean vegetation[END_REF], but their synthesis can also be stimulated if the optimal temperature of enzymes is not exceeded [START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF]. Benefits of such stimulation include thermoprotection against heat, since terpene volatiles neutralize the oxidation pressure encountered by chloroplasts under thermal stress [START_REF] Vickers | Isoprene synthesis protects transgenic tobacco plants from oxidative stress[END_REF]. As the emission of volatile terpenes in several Mediterranean pines ceases 24 h after fire [START_REF] Alessio | Direct and indirect impacts of fire on isoprenoid emissions from Mediterranean vegetation[END_REF] or wounding [START_REF] Pasqua | The role of isoprenoid accumulation and oxidation in sealing wounded needles of Mediterranean pines[END_REF], we hypothesized that the accumulation of monoterpenes would be higher 24 h post-burning than before PB.
Indirect effects of fire can affect terpene concentrations by means of increasing resource availability [START_REF] Certini | Effects of fire on properties of forest soils: a review[END_REF]. In turn, terpene variations induced by fire could change needle flammability [START_REF] Ormeño | The relationship between terpenes and flammability of leaf litter[END_REF] and susceptibility to insects [START_REF] Hood | Low-severity fire increases tree defense against bark beetle attacks[END_REF]. The 'growth differentiation balance hypothesis' (GDBH) (Herms and Mattson 1992, Stamp 2003) predicts that under poor water and nutrient availabilities, growth is more limited than photosynthesis. Since carbon assimilation is maintained, the excess of carbohydrates favours the synthesis of carbon-based secondary metabolites. On the contrary, when resource availability is high, the growth of plants is not expected to be limited and plants allocate a greater proportion of assimilates to growth rather than to defence traits (Herms and Mattson 1992, Stamp 2003). Accordingly, a short-term response following PB should be an increasing demand on the plant for chemical defence if trees are damaged but, with time, if trees heal, the increased fertilization and reduced water competition induced by PB [START_REF] Feeney | Influence of thinning and burning restoration treatments on presettlement ponderosa pines at the Gus Pearson Natural Area[END_REF] could favour carbon allocation to growth rather than to chemical defences. Time-course terpene responses to the direct and indirect effects of PB could differ between tree species depending on their fire resistance strategies. In this study, we used pines with contrasting tolerance to surface fires: Pinus halepensis, a fire-sensitive species, Pinus sylvestris, moderately fire-resistant, and the fire-resister Pinus nigra, which is supposed to be less vulnerable to fire tissue damage due to its pyro-resistant traits (e.g. thicker bark, higher crown base height) [START_REF] Fernandes | Fire resistance of European pines[END_REF]. In agreement with these strategies, we previously found that radial growth was reduced the year of PB in the most fire-sensitive species and unaffected in P. nigra, while 1 year post-burning, growth was augmented in P. nigra and P. halepensis, and reduced in P. sylvestris [START_REF] Valor | Assessing the impact of prescribed burning on the growth of European pines[END_REF]. In consequence, we hypothesized that 1 year post-burning, the concentration of terpenes would be, as a whole, lower than before PB, if fire induces a decrease in nutrient and water competition; this reduction would be lower in damaged trees and in pines defined as having lower fire resistance (e.g., P. halepensis and P. sylvestris).
The objectives of this study were to evaluate the effects of relatively high-intensity PB (enough to remove understory and ladder fuels) on mono-, sesqui-and diterpene storage in Pinus spp., comparing 24 h and 1 year post-burning concentrations with pre-burning concentrations. We modelled the relative change of terpene concentrations at two sampling times: (i) 24 h post-burning, as a function of fire severity and pre-burning physiological condition and (ii) 1 year post-burning, as a function of fire severity and PB-induced changes in pine physiological conditions. Additionally, we aimed to identify the most representative terpenes of each sampling time since burning.
Materials and methods
The study was established in three sites situated in the NE Iberian Peninsula (Catalonia, Spain): two plots in mixed-stands of P. nigra ssp. salzmanni (Dunal) Franco and P. sylvestris L. at Miravé and Lloreda localities, situated in the foothills of the Pyrenees; and two other plots in a pure-stand of P. halepensis Mill. at El Perelló locality, in the Southern part of Catalonia. The P. halepensis stand is located in areas of dry Mediterranean climate while the mixed-stands of P. nigra and P. sylvestris are situated in temperate cold sub-Mediterranean climate with milder summers and colder winters (Table 1). In the sub-Mediterranean sites, soils are developed from calcareous colluviums (0.5-1 m deep) and thus classified as Calcaric cambisols (FAO 2006); in the Mediterranean site, they are developed from limestones (0.4-0.5 m deep) and classified as Leptic Regosol (FAO 2006). The understory is dominated by Buxus sempervirens L. and Viburnum lantana L., in the P. nigra and P. sylvestris mixedstands, and by Pistacia lentiscus L. and Quercus coccifera L. in the P. halepensis stand.
Experimental design: tree selection and prescribed burning
A total of four plots (30 × 30 m) were set up: one in each of the mixed stands of P. nigra and P. sylvestris, and two in the pure P. halepensis stand. Each plot was burnt in spring 2013 (Table 2). Prescribed burns were conducted by the Forest Actions Support Group (GRAF) of the Autonomous Government (Generalitat de Catalunya) using a strip headfire ignition pattern. Prescribed burning aimed to decrease fuel hazard by reducing surface and ladder fuel loads. Between 90% and 100% of the surface fuel load was consumed in all plots. Needle terpene concentration, fire features and tree physiological condition were studied in 9/10 dominant or co-dominant pines per species in each plot. Each tree was sampled on three occasions for analysing terpene concentration: 24 h before PB (pre-burning), and 24 h and 1 year after PB (24 h post-burning and 1 year post-burning, respectively). δ13C and N concentrations of 1-year-old needles were also analysed as a proxy of physiological condition in pre-burning and 1 year post-burning samples.
Before PB, selected trees were identified with a metal tag. Their diameter at breast height (DBH), total height and height to live crown base were measured. During fires, the fire residence time (minutes) above 60 °C and the maximum temperature at the base of the trunk were measured for the selected trees with K-thermocouples (4 mm) connected to dataloggers (Testo 175), packed with a fireproof blanket and buried into the soil. Temperatures were recorded every 10 s. The maximum temperatures registered at the soil surface occurred in the P. nigra and P. sylvestris plots, while the highest residence time above 60 °C was recorded in the P. halepensis plots (Table 2). One week after PB, the crown volume scorched was visually estimated to the nearest 5% as an indicator of fire severity. Foliage scorch was defined as a change in needle colour resulting from direct foliage ignition or indirect heating [START_REF] Catry | Post-fire tree mortality in mixed forests of central Portugal[END_REF].
Needle sampling
In each plot, we cut an unscorched branch from the top of the south-facing crown in the 9/10 trees selected per species for each sampling time studied: pre-burning, 24 h and 1 year post-burning. Five twigs with unscorched healthy needles were cut immediately, covered with aluminium foil and stored in a portable refrigerator at 4 °C until being stored at -20 °C in the laboratory for terpene analysis. The time period between the field and the laboratory did not exceed 2 h. Additionally, about five twigs were transported to the laboratory, dried at 60 °C and stored in tins before δ13C and N concentration analysis.
Needle terpene concentration
In the studied pine species, needles reached up to 3 years old. Before terpene extraction, we collected the 1-year-old needles from each twig to control for the effect of age at each sampling time. Needles were cut in small parts (~5 mm) and placed in well-filled, tightly closed amber glass vials to avoid exposure to light and oxygen (Guenther 1949, [START_REF] Farhat | Seasonal changes in the composition of the essential oil extract of East Mediterranean sage (Salvia libanotica) and its toxicity in mice[END_REF]). The extraction method consisted of dissolving 1 g of cut 1-year-old unscorched green needles in 5 ml of organic solvent (cyclohexane + dichloromethane, 1:9), containing a constant amount of undecane, a volatile internal standard used to quantify terpene concentrations and not naturally stored in the needles. Extraction occurred for 20 min, under constant shaking at room temperature, similar to the extractions shown in [START_REF] Ormeño | Plant coexistence alters terpene emission and concentration of Mediterranean species[END_REF]. The extract was stored at -20 °C and then analysed within the following 3 weeks. Analyses were performed on a gas chromatograph (GS-Agilent 7890B, Agilent Technologies, Les Ulis, France) coupled to a mass selective detector (MSD 5977A, Agilent Technologies, Les Ulis, France). Compound separation was achieved on an HP-5MS (Agilent Technologies, Les Ulis, France) capillary column with helium as the carrier gas. After sample injection (1 μl), the start temperature (40 °C for 5 min) was ramped up to 245 °C at a rate of 3 °C min-1, and then to 300 °C at a rate of 7 °C min-1. Terpene identifications were based on the comparison of terpene retention times and mass spectra with those obtained from authentic reference samples (Sigma-Aldrich®, Sigma-Aldrich, Saint-Quentin-Fallavier, France) when available, or from databases (NIST2008, Adams 2007) when samples were unavailable. Also, we calculated the Kovats retention index and compared it with bibliographical data. Terpenes were quantified based on the internal standard undecane (36.6 ng μl-1 of injected solution). Thus, based on calibrations of terpene standards of high purity (97-99%), also prepared using undecane as internal standard, chromatographic peak areas of an extracted terpene were converted into terpene masses based on the relative response factor of each calibrated terpene. Results were expressed on a needle dry mass (DM) basis. The identified terpenes were grouped into mono-, sesqui- and diterpenes. At each post-burning time, we calculated the relative change of terpene concentration as the difference between the pre- and post-burning concentration of each terpene group, expressed as a percentage.
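Internal-standard quantification of this kind commonly reduces to the relation

$$m_i = \frac{A_i}{A_{\mathrm{IS}}} \cdot \frac{m_{\mathrm{IS}}}{\mathrm{RRF}_i},$$

where A_i and A_IS are the peak areas of terpene i and of undecane, m_IS is the injected mass of undecane and RRF_i the relative response factor from the calibrations; the exact expression used in this study is not spelled out, so this should be read as the standard textbook form.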
Tree physiological condition: δ13C and N analysis

δ13C and N analyses were carried out on 1-year-old unscorched needles in pre-burning and 1 year post-burning samples. For δ13C and N, needles were oven-dried at 60 °C for 48 h, ground and analysed at the Stable Isotope Facility of the University of California at Davis (USA) using an ANCA interfaced to a 20-20 Europa® isotope ratio mass spectrometer (Sercon Ltd, Cheshire, UK).
Climatic data before and during sampling years
Monthly precipitation (P) and temperature (T) from March 2012 to August 2014 were downloaded from the three nearest meteorological stations to the sub-Mediterranean and the Mediterranean plots. Monthly potential evapotranspiration (PET) was estimated using the Thornthwaite (1948) method. For each sampling year (t), 2013 and 2014, accumulated values of P and PET of different periods were calculated for each meteorological station. Seven periods of accumulated climate data were compiled: annual, from June before the sampling year (t -1) to May of the sampling year (t); spring, summer, fall and winter before the sampling year (t -1); spring and summer of the sampling year (t). For each period, we calculated the difference between P and PET (P -PET) for each meteorological station and sampling year.
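As an indication of the calculation behind the PET estimates, a minimal sketch of the Thornthwaite (1948) formula is given below. The month- and latitude-specific day-length correction normally applied to each monthly value is omitted for brevity, so the function is illustrative rather than a reproduction of the exact values used here.

```python
def thornthwaite_pet(t_monthly):
    """Uncorrected monthly potential evapotranspiration [mm]
    after Thornthwaite (1948).

    t_monthly : sequence of 12 mean monthly air temperatures [degC];
    assumes a positive annual heat index (not all months below 0 degC).
    """
    heat_index = sum((max(t, 0.0) / 5.0) ** 1.514 for t in t_monthly)
    a = (6.75e-7 * heat_index ** 3 - 7.71e-5 * heat_index ** 2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat_index) ** a
            for t in t_monthly]
```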
Linear mixed models (LMM), considering plot as a random factor, were used to:

(i) analyse potential differences in pre-burning tree physiological condition and fire parameters among pine species;
(ii) test for differences in total terpene and terpene group concentrations (expressed on a needle mass basis and as the percentage of each terpene group in the total) between times since burning for each pine species;
(iii) model the 24 h and 1 year impact of PB on the relative concentration change of mono-, sesqui- and diterpenes with respect to pre-burning concentration. The 24 h and 1 year post-burning models considered pine species as a fixed factor, and pre-burning needle δ13C and N concentration, the proportion of crown scorched and fire residence time above 60 °C as co-variables. In addition, in the 1 year post-burning model, δ13C and N concentration changes were also included (1 year post-burning minus pre-burning levels of δ13C and N concentration). Second-order interactions of pine species with each co-variable were included.
Terpene concentrations were log-transformed to meet the normality requirement. When the relative concentration change of terpenes was modelled, a constant of 100 was added before taking the logarithm. Therefore, log-transformed values higher than 2 indicate higher terpene concentrations than pre-burning, while values lower than 2 indicate lower terpene concentrations. Residuals presented no pattern and highly correlated explanatory variables were avoided. The variance explained by the fixed effects was obtained by comparing the final model with the null model (containing only the random structure). A Tukey post-hoc test was used for multiple comparisons when needed.
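Assuming base-10 logarithms, which is consistent with the stated threshold of 2, the transformation reads

$$y = \log_{10}\left(100 + \Delta_{\%}\right), \qquad \Delta_{\%} = 100 \, \frac{C_{\mathrm{post}} - C_{\mathrm{pre}}}{C_{\mathrm{pre}}},$$

so that y = 2 corresponds exactly to no change, since log10(100) = 2.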
For each pine species, terpene profiles were evaluated using a principal component analysis to show potential qualitative and quantitative variation in needle terpene within and between plots and time since burning. Terpene concentrations were centred and the variance-covariance matrix used to understand how terpene profiles varied. Moreover, for each pine species, we used a multilevel sparse partial least squares discriminant analysis (sPLS-DA) to select the terpenes that best separated each time since burning in terms of their concentration. The sPLS-DA is a supervised technique that takes the class of the sample into account, in this case time since burning, and tries to reduce the dimension while maximizing the separation between classes. To conduct the analysis, we selected those compounds that were present in at least 75% of the sampled trees, resulting in a total of 48, 37 and 35 compounds in P. halepensis, P. nigra and P. sylvestris, respectively. We used the multilevel approach to account for the repeated measures on each tree to highlight the PB effects within trees separately from the biological variation between trees. The classification error rate was estimated with leave-one-out cross validation with respect to the number of selected terpenes on each dimension. Lastly, differences in P -PET between sampling years were tested by a Student's t-test for the Mediterranean and sub-Mediterranean plots. All analyses were conducted with the software R (v. 3.2.1, The R Foundation for Statistical Computing, Vienna, Austria) using the package nlme for linear mixed-effects modelling and the package mixOmics for the sPLS-DA analysis. The model variances explained by fixed effects (marginal R 2 ) and by both fixed and random effects (conditional R 2 ) are provided [START_REF] Nakagawa | A general and simple method for obtaining R2 from generalized linear mixed-effects models[END_REF].
Results
Tree, fire and climate characteristics
The proportion of crown scorched was significantly higher in P. halepensis than in the other species despite the fact that the three pine species presented similar height to live crown base (Table 3). By contrast, no differences in fire residence time above 60 °C were encountered among species (Table 3). Needle δ 13 C decreased significantly 1 year post-burning in the three species while N concentration was similar (Table 3).
This decrease in δ13C contrasted with the drier conditions found 1 year post-burning (P - PET = 200 mm and 135 mm in the Mediterranean and sub-Mediterranean plots, respectively) in comparison with pre-burning (P - PET = 481 mm and 290 mm in the Mediterranean and sub-Mediterranean plots, respectively; see Figure S1 available as Supplementary Data at Tree Physiology Online).
A total of 56, 59 and 49 terpenes were identified and quantified in P. halepensis, P. nigra and P. sylvestris, respectively (see Table S1 available as Supplementary Data at Tree Physiology Online). Pre-burning, P. nigra showed the highest terpene concentration (65.6 ± 7.1 mg g DM-1), followed by P. halepensis and P. sylvestris (41.2 ± 5.8 mg g DM-1 and 21.4 ± 2.6 mg g DM-1, respectively). Before PB, more than 45% of the total terpene concentration was represented by diterpenes in P. halepensis, while sesquiterpenes represented about 59% in P. nigra and monoterpenes 83% in P. sylvestris (see Table S2 available as Supplementary Data at Tree Physiology Online). Considering all sampling times, the diterpene thunbergol in P. halepensis, the sesquiterpene β-caryophyllene in P. nigra and the monoterpene α-pinene in P. sylvestris were the major compounds found, representing an average of 22%, 22% and 40% of the total terpene concentration, respectively (see Figure S2 available as Supplementary Data at Tree Physiology Online). Terpene concentration and composition varied strongly within plots in all species, with no clear differences in terpene composition among plots (see Figures S3a, S4a and S5a available as Supplementary Data at Tree Physiology Online). The variation in terpene concentrations was high within pre- and 24 h post-burning samples, while variation for 1 year post-burning concentrations was much lower (see Figures S3b, S4b and S5b available as Supplementary Data at Tree Physiology Online). In all species, the quantity of dominant compounds in pre- and 24 h post-burning samples clearly differed from that of 1 year post-burning samples (see Figures S3b, S4b and S5b available as Supplementary Data at Tree Physiology Online). For instance, the quantity of α-pinene was higher at pre- and 24 h post-burning times in all pine species, in opposition to 1 year post-burning samples. Limonene was characteristic 24 h post-burning in the needles of P. halepensis and P. nigra, while the quantity of camphene and myrcene was higher in pre- and 24 h post-burning needle samples of P. sylvestris. Differences in total terpene concentration between pre- and 24 h post-burning were only detected in P. nigra, which decreased ∼39% (Figure 1a). When analysing terpene groups, the 24 h post-burning needle concentrations of both mono- and sesquiterpenes were, in comparison with pre-burning, slightly higher in P. halepensis, lower in P. nigra and similar in P. sylvestris (Figure 1b and c). No differences were detected in the diterpene concentration between pre- and 24 h post-burning times (Figure 1d).
One year after burning, total terpene concentration was lower compared with the levels observed pre- and 24 h post-burning in the three species (Figure 1a). In P. halepensis this reduction was similar for each terpene group while, in the two sub-Mediterranean species, it was mostly due to a decrease in the proportion of monoterpenes (see Table S2 available as Supplementary Data at Tree Physiology Online). In contrast, an increase in the relative contribution of the sesquiterpene group to the total terpenes was found 1 year post-burning in both sub-Mediterranean species.
The relative changes of mono- and diterpene concentrations 24 h post-burning were directly related to the proportion of crown scorched (Table 4). However, crown scorch volume interacted with pine species to explain the relative changes in mono- and diterpene concentrations (Table 4). Thus, in both P. halepensis and P. sylvestris, the 24 h post-burning concentration of monoterpenes was higher than pre-burning and increased with crown scorch (Figure 2a.1 and a.3); only individual pines with a low proportion of crown scorched (<15-20%) showed similar or lower concentrations than pre-burning. In contrast, the relative change of monoterpene concentration in P. nigra was generally lower than pre-burning, at least in the range of crown scorch measured (0-50%) (Figure 2a.2). The relationship between the relative concentration change in diterpenes and crown scorch followed a similar trend as in monoterpenes for P. halepensis and P. sylvestris (Figure 2b.1 and b.2), while in P. nigra, the rate of change with crown scorch was higher and shifted from lower to higher concentrations than pre-burning in the middle of the measured crown scorch range (Figure 2b.3).
The relative concentration change of monoterpenes was also directly related to needle N concentration and to the height to live crown base (Table 4). In the case of sesquiterpenes, needle N concentration interacted with pine species (Table 4, Figure 2). Thus, the relative concentration change of sesquiterpenes 24 h post-burning was higher in P. halepensis and P. sylvestris, and augmented as needle N concentration increased (Figure 2c.1 and c.3), whereas it was always lower in P. nigra, decreasing with increasing needle N concentration (Figure 2c.2). Finally, fire residence time above 60 °C directly affected the relative change of sesquiterpene concentration in all species (Table 4).
One year after PB, the relative changes of mono- and sesquiterpene concentrations were always lower than pre-burning and inversely related to the δ13C of pre-burning needles (Table 5, Figure 3a.1). The 1 year post-burning relative concentration change of diterpenes was also lower than pre-burning, but variations were associated with changes in the δ13C or N concentration of needles (Figures 3a.2 and a.3).
Similar to 24 h post-burning, the proportion of crown scorched had a direct effect on the relative concentration change of all terpene groups, although only marginally significant in the mono- and sesquiterpene models (Table 5). This variable interacted with pine species in the case of diterpenes (Figure 3b) and showed that, as crown scorch increased, the relative concentration change in P. nigra was more acute than in the other species (Table 5, Figure 3b.2).
Discriminant terpenes across time since burning for each pine species
The multilevel sPLS-DA in P. halepensis led to optimal selection of six and one terpenes on the first two dimensions with a classification error rate of 0.26 and 0.06, respectively, reflecting a clear separation between times since burning (Figure 4). Among compounds, terpinen-4-ol separated pre-burning (Cluster 2) from both post-burning times; whereas E-β-ocimene and α-thujene discriminated the 24 h post-burning sampling time from the others (Cluster 1). Four sesquiterpenes characterized the 1 year postburning needle samples (Cluster 3).
In P. nigra, we chose three dimensions, and the corresponding numbers of terpenes selected for each were four, one and one (Figure 5). The classification error rates were 0.35, 0.33 and 0.18, respectively, for the first three dimensions. Two clusters were differentiated: pre-burning was discriminated mainly by three sesquiterpenes (Cluster 1), while bornyl acetate and β-springene represented the post-burning samplings (Cluster 2) (Figure 5).

Table 4. Summary of the models characterizing the impact of prescribed burning and tree vitality on the 24 h post-burning relative concentration change of mono-, sesqui- and diterpenes, calculated as the standardized difference between 24 h post-burning and pre-burning concentration expressed as percentage (logarithmically transformed). Only the significant interaction terms are shown. Bold characters indicate significant effects (P < 0.05).

Finally, two dimensions were selected for P. sylvestris (Figure 6) with 11 terpenes on each component. The classification error rates were 0.66 and 0.33. As in P. nigra, two clusters were distinguished: sesquiterpenes characterized the pre-burning sampling time, whereas both post-burning times were characterized mainly by mono- and sesquiterpenes (Figure 6).
Discussion
Pinus nigra is a species considered to be resistant to medium-low fire intensities, P. sylvestris a moderately fire-resistant species and P. halepensis a fire-sensitive species [START_REF] Agee | Fire and pine ecosystems[END_REF][START_REF] Fernandes | Fire resistance of European pines[END_REF]. While the concentration of the semi-volatile diterpenes was not affected 24 h post-burning, the concentration of mono- and sesquiterpenes seemed to decrease in P. nigra, was sustained in P. sylvestris and tended to increase in P. halepensis. Although massive needle terpene emissions have been reported at ambient temperatures often reached during PB [START_REF] Alessio | Direct and indirect impacts of fire on isoprenoid emissions from Mediterranean vegetation[END_REF][START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF][START_REF] Zhao | Terpenoid emissions from heated needles of Pinus sylvestris and their potential influences on forest fires[END_REF], various explanations may justify the different terpene contents observed 24 h post-burning between species. For instance, terpenes stored in needle resin ducts are likely to encounter different resistance to volatilization owing to differences in the specific characteristics of the epistomatal chambers, which are, respectively, unsealed, sealed and buried in needles of P. nigra, P. sylvestris and P. halepensis [START_REF] Hanover | Surface wax deposits on foliage of Picea pungens and other conifers[END_REF][START_REF] Boddi | Structure and ultrastructure of Pinus halepensis primary needles[END_REF][START_REF] Kim | Micromorphology of epicuticular waxes and epistomatal chambers of pine species by electron microscopy and white light scanning interferometry[END_REF]. These differences in needle morphology may contribute to explaining the reduction of terpenes observed 24 h post-burning in P. nigra. Another reason for variable terpene contents may be a different respiration sensitivity between species. As the consumption of assimilates increases relative to the photosynthetic production at high temperatures [START_REF] Farrar | The effects of increased atmospheric carbon dioxide and temperature on carbon partitioning, source-sink relations and respiration[END_REF], this could bring about a decrease in the weight of carbohydrates and, thus, an apparent increase in needle terpene concentrations. If the respiration sensitivity to increasing temperature is higher in P. halepensis than in the other two species, this may explain the slight increase in terpene concentration in this species 24 h post-burning. Alternatively, the increase in monoterpene concentration in unscorched needles of P. halepensis 24 h post-burning may partly reflect systemic induced resistance, triggered by burning needles from lower parts of the canopy, although no data were found in the literature to support this hypothesis.

Figure 2. Measured and predicted (line) relative concentration change (log-transformed) using 24 h post-burning models (see Table 4) of monoterpenes and diterpenes against crown scorched (a and b) and of sesquiterpenes against needle N (c). Before the log-transformation, 100 was summed. The dashed line indicates no changes between pre- and post-burning terpene concentrations: higher values indicate a higher terpene concentration than those of pre-burning, while the opposite is indicated by lower values.
Finally, although we carefully selected only 1-year-old unscorched needles, and from the same part of the crown, we cannot fully exclude that the terpene variation between pre- and post-burning reflects differences in light availability between the sampled needles.
Terpene dynamics within the species were modulated by fire severity. Thus, relative concentration changes of mono- and diterpenes increased with the proportion of crown scorched 24 h post-burning. This trend was still evident 1 year post-burning, suggesting that the damaged pines were still investing in chemical defences. According to the GDBH (Herms and Mattson 1992, Stamp 2003) and the reduction in radial growth detected in P. halepensis and P. sylvestris [START_REF] Valor | Assessing the impact of prescribed burning on the growth of European pines[END_REF], we hypothesized that the increase in monoterpenes in P. halepensis and, to a lesser extent, in P. sylvestris, may constrain primary metabolism. Although the rate of increase in diterpenes post-burning was greater in P. nigra than in the other two species, P. nigra required a greater proportion of scorched crown in order to achieve higher concentrations than those observed pre-burning. Therefore, trees with a greater proportion of scorched crown could be investing in secondary metabolism rather than primary metabolism, although this potential trade-off in carbon investment deserves further research.
Table 5. Summary of the models characterizing the impact of prescribed burning and tree vitality on the 1 year post-burning relative concentration change of mono-, sesqui- and diterpenes, calculated as the standardized difference between 1 year post-burning and pre-burning content expressed as percentage (logarithmically transformed). Only the significant interaction terms are shown. Bold characters indicate significant effects (P < 0.05).

2 HLCB, height to live crown base (m).
3 Change δ13C, change in δ13C (difference between 1 year post-burning and pre-burning δ13C).
4 Change N, change in foliar N content (difference between 1 year post-burning and pre-burning N content).
Needle N concentration was positively associated with the relative concentration change of monoterpenes in the three species, and of sesquiterpenes in the case of P. halepensis and P. sylvestris. As resin canal ducts are limited by N [START_REF] Björkman | Different responses of two carbon-based defences in Scots pine needles to nitrogen fertilization[END_REF], these positive relationships may be explained by an increase in the number and size of the ducts in needles with higher N content. In contrast, we did not detect any effect of pre-burning water status, as estimated by δ13C, on the 24 h post-burning terpene concentration change in individual pines.
In agreement with previous studies, tree-to-tree variation in terpene concentration is known to be naturally high, even over short spatial distances or when plants grow in the same soil in the same geographic area [START_REF] Ormeño | Production and diversity of volatile terpenes from plants on Calcareous and Siliceous soils: effect of soil nutrients[END_REF][START_REF] Kännaste | Highly variable chemical signatures over short spatial distances among Scots pine (Pinus sylvestris) populations[END_REF]. Our study reveals, however, that this variation was reduced 1 year post-burning, within and between plots. One year post-burning, the terpene concentration was lower than pre-burning, while an increase could be expected given the drier meteorological conditions during the year after burning [START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF]. In contrast, lower needle δ13C values, compared with pre-burning, suggest a decrease in water competition 1 year post-burning, an increase in the photosynthetic rate or stomatal conductance [START_REF] Battipaglia | The effects of prescribed burning on Pinus halepensis Mill. as revealed by dendrochronological and isotopic analyses[END_REF], or an improvement in water conditions in the remaining needles of highly scorched trees [START_REF] Wallin | Effects of crown scorch on ponderosa pine resistance to bark beetles in Northern Arizona[END_REF]. A lower terpene concentration 1 year after burning differs from other studies [START_REF] Cannac | Phenolic compounds of Pinus laricio needles: a bioindicator of the effects of prescribed burning in function of season[END_REF][START_REF] Lavoir | Does prescribed burning affect leaf secondary metabolites in pine stands?[END_REF] comparing burned versus unburned plots. These studies concluded that needle terpene concentration returns to normal values 1 year after fire, and they suggested that short-term increases in nutrient availability had minor effects on terpene concentration. The discrepancies with our investigation may be explained by the higher burning intensity in our study, which impacted water availability, as indicated by the δ13C values. In agreement with the GDBH (Herms and Mattson 1992, Stamp 2003), our results showed that the relative concentration change of diterpenes was lower in trees whose physiological condition improved 1 year post-burning, as suggested by the changes in needle δ13C and N concentration. Despite the fact that no relationships were found between mono- or sesquiterpenes and the changes in δ13C or N, the direct relationship between the relative terpene concentration change and the pre-burning δ13C suggested that the decrease in both terpene groups occurred in pines that were more stressed pre-burning.

Figure 3. Measured and predicted (line) relative concentration change (log-transformed) using 1 year post-burning models (see Table 5) for monoterpenes against δ13C (a.1), for diterpenes against change in δ13C (a.2) and change in needle N (a.3), and the interaction between species and crown scorch (b). Before the log-transformation, 100 was summed. The dashed line indicates no changes between pre- and post-burning terpene concentrations: higher values indicate a higher terpene concentration than those of pre-burning, while the opposite is indicated by lower values.
The ecological functions of many mono-, sesqui- and diterpene compounds are still not well understood, although in recent years significant achievements have been made via genetic engineering (Cheng et al. 2007, [START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF]). Likewise, research on terpenes and flammability is generally scarce, though some studies have shown a correlation between the two [START_REF] Owens | Seasonal patterns of plant flammability and monoterpenoid concentration in Juniperus ashei[END_REF][START_REF] Alessio | Direct and indirect impacts of fire on isoprenoid emissions from Mediterranean vegetation[END_REF][START_REF] Ormeño | The relationship between terpenes and flammability of leaf litter[END_REF]. The reduction in terpene concentration 24 h post-burning in the fire-resister P. nigra could imply a reduction of needle flammability with respect to pre-burning, strengthened by a reduction in the highly flammable α-caryophyllene (also known as α-humulene) and the increase in bornyl acetate, which is inversely related to flammability [START_REF] Owens | Seasonal patterns of plant flammability and monoterpenoid concentration in Juniperus ashei[END_REF]. By contrast, increases of mono- and sesquiterpene concentrations in P. halepensis may involve greater flammability, which would favour fire reaching the canopy to effectively open the serotinous cones. Specifically, the sPLS-DA showed E-β-ocimene, which is correlated with flammability [START_REF] Page | Mountain pine beetle attack alters the chemistry and flammability of lodgepole pine foliage[END_REF], as representative of 24 h post-burning samples. In P. sylvestris, the poor terpene discrimination in relation to time since burning limits the interpretation of any compound in terms of flammability.
Fire-damaged trees are more vulnerable to insects, especially bark beetles, and to infections by root fungi, which contribute to the trees' susceptibility to beetle attack [START_REF] Sullivan | Association between severity of prescribed burns and subsequent activity of conifer-infesting beetles in stands of longleaf pine[END_REF][START_REF] Parker | Interactions among fire, insects, and pathogens in coniferous forests of the interior western United States and Canada[END_REF]. The accumulation of high amounts of monoterpenes 24 h post-burning in the less fire-resistant species (P. halepensis and P. sylvestris), when fire partially scorches the crowns, might accomplish several functions, such as effective transport of diterpenes to the affected tissues [START_REF] Phillips | Resin-based defenses in conifers[END_REF], better protection of the photosynthetic apparatus [START_REF] Vickers | Isoprene synthesis protects transgenic tobacco plants from oxidative stress[END_REF] or covering the needs for chemical defence against pathogens [START_REF] Phillips | Resin-based defenses in conifers[END_REF]. In accordance with this last function, E-β-ocimene and α-thujene, which have antifungal activity [START_REF] Bajpai | Chemical composition and antifungal properties of the essential oil and crude extracts of Metasequoia glyptostroboides Miki ex Hu[END_REF][START_REF] Deba | Chemical composition and antioxidant, antibacterial and antifungal activities of the essential oils from Bidens pilosa Linn. var. radiata[END_REF], appear to correctly classify 24 h post-burning needle samples of P. halepensis. Although the discriminant analysis in P. sylvestris showed poor classification power, the presence of E-β-ocimene and γ-terpinene also suggests that trees possess a higher resistance to fungi compared with pre-burning [START_REF] Espinosa-García | Dosedependent effects in vitro of essential oils on the growth of two endophytic fungi in coastal redwood leaves[END_REF]. In the case of the fire-resistant P. nigra, the pre-burning concentration of monoterpenes may be sufficient to cope with the biotic stresses related to medium-intensity fires. Nonetheless, bornyl acetate seems to represent 24 h post-burning samples, conferring resistance to defoliators immediately after fire [START_REF] Zou | Foliage constituents of Douglas fir (Pseudotsuga menziesii (Mirb.) Franco): their seasonal variation and potential role in Douglas fir resistance and silviculture management[END_REF]. The higher accumulation of diterpenes 24 h post-burning in P. nigra, as the proportion of scorched crown increases, with respect to the other species possibly indicates a better chemical protection against xylophagous insects [START_REF] Lafever | Diterpenoid resin acid biosynthesis in conifers: enzymatic cyclization of geranylgeranyl pyrophosphate to abietadiene, the precursor of abietic acid[END_REF]. In P. nigra and P. sylvestris, the fact that the percentage of sesquiterpenes augmented significantly 1 year post-burning with respect to pre-burning, together with the increase in the relative concentration change as crown scorch augmented, might indicate the importance of sesquiterpenes as indirect defences against a wide range of biotic stressors ([START_REF] Phillips | Resin-based defenses in conifers[END_REF], Schnee et al. 2006); as reported in [START_REF] Lavoir | Does prescribed burning affect leaf secondary metabolites in pine stands?[END_REF], they were also representative in repeatedly burned plots.
Similarly, our classification found the sesquiterpenes guaiol, α-muurolene and δ-elemene as being characteristic in 1 year post-burning P. halepensis needle samples. These compounds might have defensive roles in defoliated trees against insects [START_REF] Wallis | Systemic induction of phloem secondary metabolism and its relationship to resistance to a canker pathogen in Austrian pine[END_REF][START_REF] Liu | Guaiol-a naturally occurring insecticidal sesquiterpene[END_REF].
After fire, bark beetles pose a significant threat to trees, especially when a significant amount of the crown has been scorched [START_REF] Lombardero | Effects of fire and mechanical wounding on Pinus resinosa resin defenses, beetle attacks, and pathogens[END_REF]. Several volatile terpenes such as α-pinene, camphene and myrcene can be released during PB and facilitate the attack of bark beetles [START_REF] Coyne | Toxicity of substances in pine oleoresin to southern pine beetles[END_REF]. Twenty-four hours post-burning, P. sylvestris tended to present higher amounts of these terpene compounds, suggesting a higher susceptibility to bark beetle attack with respect to the other species. Finally, limonene, which is highly toxic for several types of beetle [START_REF] Raffa | Interactions among conifer terpenoids and bark beetles across multiple levels of scale: an attempt to understand links between population patterns and physiological processes[END_REF], was present in higher amounts in P. nigra and P. halepensis, suggesting a higher resistance to bark beetle attack for both species 24 h post-burning.
The concentration of mono- and sesquiterpenes 24 h post-burning was similar to the pre-burning concentrations in the more fire-sensitive species (P. halepensis and P. sylvestris) and lower in the fire-resistant P. nigra. Terpene dynamics were modulated within the species by fire severity, as indicated by the direct relation between the proportion of scorched crown and the concentration of terpenes 24 h post-burning. As discussed, a combination of morphological and physiological mechanisms may be operating during and shortly after PB, but no clear conclusions may be stated. However, differences in terpene contents as a function of the pine species' sensitivity to fire suggest that terpenic metabolites could have adaptive importance in fire-prone ecosystems, in terms of flammability and defence against biotic agents in the short term after fire. One year post-burning, in agreement with the GDBH (Herms and Mattson 1992, Stamp 2003), trees may be allocating assimilates to growth rather than to defence, as suggested by the remarkable decrease in terpene concentration and the negative relation between terpene concentration and the change in needle δ13C. This decrease in terpene concentration, in turn, could imply a higher susceptibility to fire-related pathogens and insects.
Figure 1. Concentration (mean ± SE) of total terpenes (a), monoterpenes (b), sesquiterpenes (c) and diterpenes (d) across time since burning (TSB) for each pine species (P. halepensis, n = 20; P. sylvestris, n = 19; P. nigra, n = 19 and n = 18 in 1 year post-burning). Differences in the concentration between TSB within each pine species were tested using LMM considering plot as a random factor. Within each pine species, different letters indicate differences between TSB using a Tukey post-hoc test, where regular letters indicate significant differences at P < 0.05; italic letters represent a marginally significant difference (0.05 < P < 0.1).
Figure 4. Hierarchical clustering for P. halepensis of the seven terpenes selected with multilevel sPLS-DA using terpene content. Samples are represented in columns and terpenes in rows. MHT, monoterpene hydrocarbon; SHT, sesquiterpene hydrocarbon; O, oxygenated compounds; der, derivative compounds.
Figure 5. Hierarchical clustering for P. nigra of the six terpenes selected with multilevel sPLS-DA using terpene content. Samples are represented in columns and terpenes in rows. MT, monoterpene hydrocarbon; SHT, sesquiterpene hydrocarbon; DHT, diterpene hydrocarbon; der, derivative compounds; others, compounds other than terpenes.
Table 1. Topographical and climate characteristics of the study localities.

Locality     Lat. (°)   Long. (°)   Aspect   Slope (%)   Elevation (m a.s.l.)   Annual rainfall (mm)   Mean annual temperature (°C)
Lloreda      42.0569    1.5706      N        30          715                    731.6                  11.7
Miravé       41.9515    1.4494      NE       25          723                    677.3                  11.5
El Perelló   40.9068    0.6816      NW       10          244                    609.9                  15.5

1 Climate variables (annual rainfall and mean annual temperature) were estimated using a georeferenced model (Ninyerola et al. 2000).
Table 2. Characteristics of prescribed burnings and forest experimental units (mean ± std).

1 Wind speed was measured outside the forest.
2 Range of maximum temperatures (Tmax) and residence time above 60 °C (RT60) in 10 trees in each of the Perelló experimental units and in 20 trees in Miravé and Lloreda.
3 Ph, Pinus halepensis; Pn/Ps, P. nigra and P. sylvestris; phytovolume calculated using the cover and height of the understory shrubs; diameter at breast height (DBH), density and basal area of trees with DBH ≥ 7.5 cm.
Table 3. Studied pine trees and fire characteristics (mean ± std) before and after prescribed burnings, grouped by species.

Tree and fire characteristics       P. halepensis   P. nigra        P. sylvestris
n (trees)                           20              19              19 (1)
DBH (cm)                            20.0 ± 6.9a     13.6 ± 5.5b     12.7 ± 5.3b
Total height (m)                    9.1 ± 2.4a      8.3 ± 2.4a      8.6 ± 1.9a
Height to live crown base (m)       5.2 ± 1.0a      4.8 ± 1.3a      6.6 ± 13.2b
Crown scorched (%)                  44.0 ± 32.1a    6.6 ± 13.2b     5.5 ± 9.5b
Fire residence time >60 °C (min)    38.2 ± 54.1a    16.6 ± 6.9a     15.2 ± 6.4a
Needle δ13C (‰)
  Pre-burning                       -25.8 ± 0.5Aa   -26.6 ± 1.0Ab   -26.5 ± 0.6Ab
  1 year post-burning               -27.6 ± 0.9Ba   -28.5 ± 1.4Ba   -28.0 ± 0.8Ba
Needle N content (mg g DM-1)
  Pre-burning                       14.8 ± 1.9Aa    10.1 ± 0.8Ab    12.3 ± 1.6Ac
  1 year post-burning               14.9 ± 3.2Aa    9.1 ± 2.7Ab     11.0 ± 3.1Aa

1 Sample size is 18 for 1 year post-burning data because of the death of one tree. Different small letters within a row indicate statistically significant differences (P < 0.05) among pine species using LMM (fixed factor = species, random factor = plot) followed by a Tukey post-hoc test. Different capital letters within a column indicate statistically significant differences (P < 0.05) between pre-burning and 1 year post-burning for each pine species using LMM (fixed factor = time since burning, random factor = plot) followed by a Tukey post-hoc test.
Acknowledgments
We wish to thank GRAF (Bombers, Generalitat de Catalunya) who kindly executed the PB; the EPAF team, Dani Estruch, Ana I. Ríos and Alba Mora for their technical assistance in the field, and Carol Lecareux and Amelie Saunier for their help in the laboratory. Finally, we would like to thank Miquel De Cáceres for his invaluable comments.
Funding
Ministerio de Economía, Industria y Competitividad (projects AGL2012-40098-CO3, AGL2015-70425R; EEBB-I-15-09703 and BES-2013-065031 to T.V.; RYC2011-09489 to P.C.). CERCA Programme/Generalitat de Catalunya.
Conflict of interest
None declared. |
01636819 | en | ["sdv.bid.spt", "sde.be"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01636819/file/Sellam%20et%20al%202017%20Cystoseira%20michaelae_HAL.pdf |
Cystoseira michaelae Verlaque et al., nom. et stat. nov. (C. granulata C. Agardh var. turneri Montagne)
Louiza-Nesrine Sellam, Aurélie Blanfuné, Charles-François Boudouresque, Thierry Thibaut, Chafika Rebzani-Zahaf, Marc Verlaque
INTRODUCTION
In the Mediterranean Sea, the species of the genus Cystoseira C. Agardh, 1820, nom. cons., are the main forest-forming species of the photophilous rocky substrates from the littoral fringe down to the lower euphotic zone (down to 70-80 m depth in the clearest waters) [START_REF] Giaccone | Le Cistoseire e la vegetazione sommersa del Mediterraneo[END_REF][START_REF] Giaccone | -La vegetazione marina bentonica fotofila del Mediterraneo: II. Infralitorale e Circalitorale. Proposte di aggiornamento[END_REF][START_REF] Blanfuné | Decline and local extinction of Fucales in the French Riviera: the harbinger of future extinctions?[END_REF]Blanfuné et al., 2016a,b;[START_REF] Boudouresque | Where seaweed forests meet animal forests: the examples of macroalgae in coral reefs and the Mediterranean coralligenous ecosystem[END_REF]. Out of 289 taxa of Cystoseira listed worldwide (including homotypic and heterotypic synonyms and names of uncertain taxonomic status), 32 species and more than fifteen infra-specific taxa are currently accepted taxonomically in the Mediterranean Sea [START_REF] Guiry | their discussion is beyond the scope of the present study[END_REF]. However, in spite of their importance as habitat formers, the delimitation and distribution of a number of species are still not well known [START_REF] Roberts | Active speciation in the taxonomy of the genus Cystoseira C. Ag[END_REF][START_REF] Ribera | -Check-list of Mediterranean seaweeds. I. Fucophyceae (Warming, 1884)[END_REF][START_REF] Draisma | -DNA sequence data demonstrate the polyphyly of the genus Cystoseira and other Sargassaceae Genera (Phaeophyceae)[END_REF][START_REF] Cormaci | -Flora marina bentonica del Mediterraneo: Phaeophyceae[END_REF][START_REF] Berov | -Reinstatement of species rank for Cystoseira bosphorica Sauvageau (Sargassaceae, Phaeophyceae)[END_REF][START_REF] Bouafif | -Cystoseira taxa new for the marine flora of Tunisia[END_REF][START_REF] Bouafif | -New contribution to the knowledge of the genus Cystoseira C. Agardh in the Mediterranean Sea, with the reinstatement of species rank for C. schiffneri Hamel[END_REF]. The reason for this is that the species of Cystoseira offer few unambiguous diagnostic characters and that some of these characters are more or less overlapping. Genetic tools will probably help to disentangle their taxonomic value. Montagne (1838) described from Algeria (Cherchell, west of Algiers, Mediterranean Sea) a taxon he regarded as a new variety of the Atlantic species Cystoseira granulata C. Agardh, as C. granulata var. turneri Montagne. This taxon represents one of the least well known Cystoseira taxa; in addition, it became subsequently a source of confusion. Agardh (1842: 47-48), on the basis of distinct specimens from the north-western Mediterranean and the Adriatic Sea, raised Montagne's taxon to species level, under the name of C. montagnei J. Agardh, actually a new species. Cystoseira montagnei was widely recorded in the Mediterranean Sea until several authors expressed doubts regarding its taxonomic value, considering it as a mixture of distinct taxa (e.g. [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF][START_REF] Sauvageau | -A propos des Cystoseira[END_REF][START_REF] Papenfuss | Taxonomic and nomenclatural notes on three species of brown algae. In: Travaux de Biologie végétale dédiés au Professeur P. Dangeard[END_REF][START_REF] Roberts | Active speciation in the taxonomy of the genus Cystoseira C. Ag[END_REF]. 
Following [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF], Mediterranean authors replaced the name 'C. montagnei' by those of C. spinosa Sauvageau and C. adriatica Sauvageau. Cystoseira montagnei is now often treated as a taxon inquirendum in the updated Mediterranean checklists and floras [START_REF] Ribera | -Check-list of Mediterranean seaweeds. I. Fucophyceae (Warming, 1884)[END_REF][START_REF] Cormaci | -Flora marina bentonica del Mediterraneo: Phaeophyceae[END_REF]; but see Perret- [START_REF] Perret-Boudouresque | Inventaire des algues marines benthiques d'Algérie[END_REF].
In 2014 and 2015, we collected specimens corresponding to Montagne's taxon, C. granulata var. turneri, from the regions of Tipaza and Algiers (Algeria) [START_REF] Sellam | -Rediscovery of a forgotten seaweed forest in the Mediterranean Sea, the Cystoseira montagnei (Fucales) forest. Rapports et procès-verbaux de la commission internationale pour l'Exploration[END_REF]. The morphological study of these specimens showed that they belonged to a taxon quite distinct from C. spinosa. The aim of this study was (i) to reassess the status of C. granulata var. turneri, C. montagnei and C. spinosa, made obscure by taxonomic ambiguities and misuses, (ii) to propose the lectotypification of C. granulata var. turneri and of C. montagnei, (iii) to propose a new name for Montagne's C. granulata var. turneri (C. michaelae nom. et stat. nov.) and (iv) to provide information concerning the ecology and distribution range of the latter species.
MATERIAL AND METHODS
Sampling and observations were undertaken using SCUBA diving, between the sea surface and 25 m depth, at different localities in the regions of Tipaza and Algiers (Algeria), from La Corne d'Or to Bounetah Island, between August 2014 and November 2015 (Fig. 1). The populations were studied all year round, at two-week intervals.
Samples were transferred to the laboratory (in Algiers), then rinsed with seawater and cleaned of epiphytes. The samples were either preserved in 4% buffered formalin/seawater or pressed and prepared as herbarium specimens. A subsample of some specimens was preserved in silica gel for further DNA analyses. The material studied has been deposited in HCOM, the Herbarium of the Mediterranean Institute of Oceanography, Aix-Marseille University (Herbarium abbreviation follows [START_REF] Thiers | Index Herbariorum: A global directory of public herbaria and associated staff[END_REF].
Specimens were compared with the almost exhaustive collection of Mediterranean Cystoseira species deposited in the HCOM and with the syntype of C. granulata C. Agardh var. turneri Montagne (deposited in the herbaria of the Muséum National d'Histoire Naturelle, Paris, PC), the syntype of C. montagnei J. Agardh (deposited in the J.G. Agardh herbarium, Botanical Museum, Lund University, LD) and the lectotypes of C. spinosa Sauvageau and C. adriatica Sauvageau (herbaria of the Muséum National d'Histoire Naturelle, Paris, PC). They were also compared with other specimens housed in the herbaria of the Université Montpellier 2 (MPU) and of PC (Table 1). Identification criteria, in the genus Cystoseira, are based on the mode of attachment to the substratum, the number and the form of axes, the aspect of apices and tophules (when present), the phyllotaxy and the morphology of branches, the occurrence and the arrangement of cryptostomata and aerocysts, and the location and the morphology of reproductive structures (see [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF][START_REF] Sauvageau | -A propos des Cystoseira[END_REF][START_REF] Hamel | Phéophycées de France[END_REF][START_REF] Ercegović | -Fauna i Flora Jadrana. Jadranske cistozire. Njihova morfologija, ekologija i razvitak / Fauna et Flora Adriatica. Sur les Cystoseira adriatiques[END_REF]Gómez Garreta et al., 2001;[START_REF] Mannino | Guida all' identificazione delle Cistoseire. Area Marina Protetta "Capo Gallo -Isola delle Femmine[END_REF][START_REF] Cormaci | -Flora marina bentonica del Mediterraneo: Phaeophyceae[END_REF][START_REF] Taşkin | The Mediterranean Cystoseira (with photographs)[END_REF]. The initial nomenclature (before the changes resulting from the present investigation) followed that adopted by [START_REF] Guiry | their discussion is beyond the scope of the present study[END_REF].
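The identification criteria listed above can be organized as a simple character table for side-by-side comparison of specimens. The sketch below is purely illustrative and not part of the authors' workflow; the field names mirror the character list, the filled-in states paraphrase the description of the Algerian material given in the Results, and differing_characters is a hypothetical helper.

```python
# Illustrative data structure for Cystoseira diagnostic characters; states are
# paraphrased from this paper's description of C. michaelae, not a formal key.
from dataclasses import dataclass

@dataclass
class CystoseiraCharacters:
    attachment: str
    axes: str
    tophules: str
    branching: str
    receptacles: str

michaelae = CystoseiraCharacters(
    attachment="robust discoid holdfast",
    axes="single main axis, apices not protruding",
    tophules="spinose when young, smooth-tuberculate when old",
    branching="complanate with inconspicuous rib, or cylindrical with spine-like appendages",
    receptacles="intercalary basal above the tophule, and terminal, diffuse on branchlets",
)

def differing_characters(a: CystoseiraCharacters, b: CystoseiraCharacters) -> list[str]:
    """Names of the characters on which two specimens differ."""
    return [k for k, v in vars(a).items() if v != vars(b)[k]]
```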
Literature dealing with C. granulata var. turneri, C. montagnei, C. spinosa and C. adriatica was exhaustively searched and analyzed.
RESULTS
Hereafter, we describe the morphology, phenology and habitat of the specimens we collected in Algeria.

Morphological description. Plants stiff, up to 30 cm high, not caespitose, yellow-brown in colour when alive and darker when dried, attached to the substrate by a robust discoid holdfast (Figs 2-4); main axis up to 20 cm high, branched, with apices not protruding, surrounded by spinose tophules (Figs 5, 7); spinose tophules ovoid, 5-13 mm × 5-7 mm, becoming smooth-tuberculate when older (Figs 5-7); primary branches, up to 18-19 cm long, of two different types: either slightly complanate, up to 2.5 mm wide, with an inconspicuous rib and irregularly alternately branched in one plane, or cylindrical and branched in all directions, with spaced short simple to bifid spine-like appendages (Figs 8-10). The habit varies with depth: shallow specimens have cylindrical branches while the deeper ones have complanate branches. Cryptostomata scattered along branches; specimens monoecious; receptacles both intercalary basal, compact to more or less loosely arranged, attached just above the tophule (Figs 11-13), and terminal, cylindrical and more or less diffuse on branchlets (Figs 18-19); conceptacles male, female or hermaphroditic, differentiated in the branch and at the base of spine-like appendages (Figs 14-17, 20). No obvious relationship was found between the location of the conceptacles, their type (male, female, hermaphroditic) and the receptacles.

Phenology. The annual growth cycle is similar to that of other lower infralittoral species of Cystoseira. The plant grows from the early spring to the summer. While terminal receptacles were observed in late summer and autumn, basal receptacles were present all year round. Plants shed their primary branches in late autumn and are almost devoid of primary branches in winter (Fig. 4).

Habitat. The species thrives in the lower infralittoral zone (sensu [START_REF] Pérès | Major benthic assemblages[END_REF]), between 10 m and 25 m depth (limit of our investigations), and on sub-horizontal to gently sloping photophilous rocky substrates (0 to 45°). It is always heavily covered with epiphytes such as other macroalgae, bryozoans, hydroids and sponges.

The specimens we collected from the region of Algiers correspond well with Montagne's description and with his herbarium specimens (syntype) housed in the Muséum National d'Histoire Naturelle (Paris: PC) (Table 1). The taxon is very easily distinguishable from all other Cystoseira species through a panel of characters: (i) a single axis with spinose (when young) to smooth-tuberculate (when old) tophules; (ii) primary branches either slightly compressed, with an inconspicuous rib, and irregularly alternately branched in one plane, or cylindrical and branched in all directions, with spaced short spine-like appendages; (iii) receptacles either intercalary basal, just above the tophule, or terminal, cylindrical and diffuse on branchlets.
The fate of Cystoseira granulata var. turneri and the confusing story of C. montagnei and C. spinosa

[START_REF] Meneghini | Alghe italiane e dalmatiche[END_REF] reported C. granulata var. turneri from Naples, Toulon, the northern Adriatic Sea and Dalmatia. J.G. [START_REF] Agardh | Algae maris Mediterranei et Adriatici, observationes in diagnosin specierum et dispositionem generum[END_REF], receiving several specimens of Cystoseira with spinose tophules from France (Cette, now Sète, and Marseille) and the northern Adriatic Sea (Trieste, Italy), concluded that they all belonged to Montagne's taxon. Considering C. granulata var. turneri to be quite distinct from C. granulata C. Agardh [currently C. usneoides (Linnaeus) M. Roberts [START_REF] Roberts | -Taxonomic and nomenclatural notes on the genus Cystoseira C[END_REF][START_REF] Spencer | -Typification of Linnaean names relevant to algal nomenclature[END_REF]] and from all the other species known at that time ('Species distinctissima, a Montagne primum bene descripta, sed cum C. granulata male confusa': 'a very distinct species, well described for the first time by Montagne, but badly confused with C. granulata'), J.G. Agardh raised the var. turneri to species rank under the name C. montagnei J. Agardh. However, it is worth noting that, in his description, J.G. Agardh mentioned the spinose tophules and the compressed branches but omitted the major diagnostic character of Montagne's taxon, i.e. the basal intercalary receptacles. Montagne (1846) followed J.G. Agardh and re-described his alga from Algeria under the name C. montagnei J. Agardh [here: C. michaelae]. At the same time, he excluded all the records of [START_REF] Meneghini | Alghe italiane e dalmatiche[END_REF]. J.G. [START_REF] Agardh | Species genera et ordines algarum, seu descriptiones succinctae specierum, generum et ordinum, quibus algarum regnum constituitur[END_REF] considered Montagne's illustrations of C. montagnei [here: C. michaelae] as excellent ('Montagne (1846) p.43. tab. IV.2 (eximie !)'), and completed the distribution of the species ('Hab. In mari mediterraneo ad littora Occitaniae et galloprovinciae (ipse! = J.G. Agardh), ad Algeriam (Montagne!); in Adriatico ad Trieste (Biasoletto! et C. Agardh!) et Venetiam (Martens!); e Gadibus = Cadix (Cabrera!)'), but always without any mention of basal intercalary receptacles. This shows that (i) J.G. Agardh probably never saw any genuine specimen of Montagne's taxon, (ii) the J.G. Agardh concept of C. montagnei is much broader than that of Montagne, and (iii) C. montagnei cannot be treated as a replacement name for C. granulata var. turneri (according to Art. 6.11 of ICN; McNeill et al., 2012), but as a new species based upon the Sète, Marseille and Trieste specimens (syntype housed in the Lund herbarium, LD). Kützing (1849) transferred C. montagnei to the genus Phyllacantha Kützing (currently a junior synonym of Cystoseira). Later on, he published the illustrations of P. montagnei and P. montagnei var. cirrosa Kützing on the basis of Algerian specimens sent by Montagne (Kützing, 1860) [here: C. michaelae]. Hauck (1885) recorded C. montagnei (with Phyllacantha gracilis Kützing, P. pinnata Kützing and P. affinis Kützing as synonyms) from the Adriatic Sea, with no mention of basal intercalary receptacles; the illustrations of these species of Phyllacantha in Kützing (1860) agree more with the Sauvageau (1912) C. spinosa [here: C. montagnei] than with Montagne's taxon [here: C. michaelae]. In his 'Catalogue des algues du Maroc, de l'Algérie & de la Tunisie', Debray (1897) recorded C. montagnei [here: C. michaelae] only from Algeria, close to Algiers [Cherchell, Matifou and Saint Eugène (now Bologhine)]. When [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF] published his impressive revision of the genus Cystoseira, with descriptions of new species with spinose tophules (C. adriatica Sauvageau and C. spinosa Sauvageau), the first taxonomic questions about the real identity of C. montagnei J. Agardh were raised. Considering all the previous records of C. montagnei with, when possible, re-examination of samples, [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF][START_REF] Sauvageau | -A propos des Cystoseira[END_REF] concluded that C. montagnei J. Agardh differed from C. montagnei sensu Montagne and was a mixture of species. Except for specimens from Algeria that possessed all the characteristics of C. montagnei Montagne non J. Agardh (sic) [here: C. michaelae], all the other records from France, Corsica, Sardinia, Italy and the Adriatic Sea were doubtful because devoid of basal intercalary receptacles: C. montagnei sensu Valiante from Naples would probably be C. spinosa, and C. montagnei sensu Hauck from the Adriatic Sea would probably be C. adriatica (subsequently synonymised with C. spinosa; see [START_REF] Cormaci | Observations taxonomiques et biogéographiques sur quelques espèces du genre Cystoseira C.Agardh[END_REF]). In the Adriatic Sea, [START_REF] Ercegović | -Fauna i Flora Jadrana. Jadranske cistozire. Njihova morfologija, ekologija i razvitak / Fauna et Flora Adriatica. Sur les Cystoseira adriatiques[END_REF] followed the conclusions of [START_REF] Sauvageau | A propos des Cystoseira de Banyuls et de Guéthary[END_REF][START_REF] Sauvageau | -A propos des Cystoseira[END_REF] and treated (i) C. montagnei J. Agardh sensu Hauck as a synonym of C. adriatica, (ii) 'C. montagnei J. Agardh (ex parte)' (sic) as a synonym of C. platyramosa Ercegović [currently C. spinosa var. compressa (Ercegović) Cormaci et al.], and (iii) 'C. montagnei J. Agardh (pro parte)' (sic) and C. montagnei J. Agardh sensu Valiante as synonyms of C. spinosa. At Naples, [START_REF] Funk | Beiträge zur Kenntnis der Meeresalgen von Neapel: Zugleich mikrophotographischer Atlas[END_REF] did the opposite and recorded C. montagnei J. Agardh, with C. spinosa Sauvageau as a synonym. According to [START_REF] Papenfuss | Taxonomic and nomenclatural notes on three species of brown algae. In: Travaux de Biologie végétale dédiés au Professeur P. Dangeard[END_REF], 'until J. G. Agardh's material has been examined and C. montagnei lectotypified, it will not be possible to settle the status of the species'. Our re-examination of the syntype of C. montagnei J. Agardh dating from before 1842 and deposited in the J.G. Agardh Herbarium (LD) showed that no specimen originates from Algeria and confirmed the conclusions of Sauvageau and Ercegović: all the specimens of J.G. Agardh do not differ from C. spinosa Sauvageau (synonyms: C. adriatica Sauvageau and C. platyramosa Ercegović) [here: C. montagnei]. In Spain, C. montagnei was recorded from the Balearic Islands before being excluded from the flora ([START_REF] Gallardo | -A preliminary checklist of Iberian benthic marine algae[END_REF]; Ribera Siguan & Gómez Garreta, 1985; Gómez Garreta et al., 2001). The species was never recorded from Morocco (see [START_REF] Benhissoune | -A checklist of the seaweeds of the Mediterranean and Atlantic coasts of Morocco. II. Phaeophyceae[END_REF]), from Libya, in spite of extensive research on the genus Cystoseira [START_REF] Nizamuddin | Cystoseira gerloffi, a new species from the coast of Libya[END_REF][START_REF] Nizamuddin | -A new species of Cystoseira C. Ag. (Phaeophyta) from the Eastern part of Libya[END_REF][START_REF] Nizamuddin | A caespitose-tophulose Cystoseira species from Tripoli, Libya[END_REF], or from Egypt [START_REF] Aleem | Marine algae of Alexandria, Egypt[END_REF]. Subsequently, C. montagnei was definitively considered a taxon inquirendum [START_REF] Ribera | -Check-list of Mediterranean seaweeds. I. Fucophyceae (Warming, 1884)[END_REF][START_REF] Furnari | Catalogue of the benthic marine macroalgae of the Italian coast of the Adriatic Sea[END_REF][START_REF] Furnari | -Biodiversità marina delle coste italiane: catalogo del macrofitobenthos[END_REF][START_REF] Giaccone | -Biodiversità vegetale marina dell'arcipelago 'Isole Eolie[END_REF][START_REF] Cormaci | -Flora marina bentonica del Mediterraneo: Phaeophyceae[END_REF][START_REF] Taşkin | The Mediterranean Cystoseira (with photographs)[END_REF][START_REF] Tsiamis | -Seaweeds of the Greek coasts. I. Phaeophyceae[END_REF], including in Algeria [START_REF] Ould- | -Checklist of the benthic marine macroalgae from Algeria. I. Phaeophyceae[END_REF]. See Table 2 for further records of C. montagnei, C. spinosa and C. adriatica.

Currently, C. michaelae seems to be an endemic species restricted to Algeria and northern Tunisia (Cyrine Bouafif, pers. com.).
Cystoseira forests are highly impacted due to the cumulative effects of increasing human pressure (e.g. destruction of habitats, pollution, non-indigenous species, overfishing, coastal aquaculture and global warming). Losses have been reported throughout the Mediterranean Sea caused by habitat destruction, eutrophication and overgrazing by herbivores (fish, sea urchins), leading to a shift to lesser structural complexity, such as turf-forming seaweed assemblages or barren grounds where sea urchins are the drivers of habitat homogenization [START_REF] Pinedo | -Long-term decline of the populations of Fucales (Cystoseira, Sargassum) in the Albères coast (northwestern Mediterranean)[END_REF][START_REF] Blanfuné | Decline and local extinction of Fucales in the French Riviera: the harbinger of future extinctions?[END_REF]Blanfuné et al., 2016a,b). Protective measures should be taken so that the C. michaelae forests do not suffer the same decline as many Cystoseira forests of the Mediterranean Sea.
Specimens studied: H8287-8288 - Cap Caxine (36° 49' 4" N & 2° 57' 19" E), September 2014, 17 m depth, rocky substrates; H8289-8290 - Aïn Benian, close to the harbour (36° 48' 45" N & 2° 53' 29" E), August 2014, 19 m depth, rocky substrates; H8291-8294 and H8300-8301 - Tipaza, La Corne d'Or (36° 35' 45" N & 2° 26' 44" E), September 2014, 16 m depth, rocky substrates; H8295 - Aïn Benian, close to the harbour, September 2014, 18 m depth, rocky substrates; H8296-8297 - Aïn Benian, close to the harbour, April 2015, 14 m depth, rocky substrates; H8298 - Cap Caxine, April 2015, 16 m depth, rocky substrates; H8299 - Islets of Tipaza (36° 35' 52" N & 2° 27' 40" E), March 2015, 11 m depth, rocky substrates; H8302 - Islets of Tipaza, August 2015, 10 m depth, rocky substrates; H8303 - Bounetah Island (36° 47' 46" N & 3° 21' 19" E), August 2015, 14 m depth, rocky substrates; H8304 - Cap Caxine, August 2015, 13 m depth, rocky substrates; H8305 - Islets of Tipaza, October 2015, 12 m depth, rocky substrates; H8306 - Cap Caxine, November 2015, 13 m depth, rocky substrates (Fig. 1).
The species forms sparse algal forests (<5 individuals m⁻²), in association with other large macroalgae such as Cystoseira zosteroides (Turner) C. Agardh, Dictyopteris lucida M.A. Ribera Siguán et al., Dictyota cyanoloma Tronholm et al., Dictyota spp., Flabellia petiolata (Turra) Nizamuddin, Phyllariopsis sp., Sargassum sp., Zonaria tournefortii (J.V. Lamouroux) Montagne, and a rich sessile fauna dominated by Eunicella singularis (Esper, 1791) and large species of sponges, bryozoans and hydroids.

DISCUSSION AND CONCLUSIONS

A good fit between Cystoseira granulata C. Agardh var. turneri Montagne and the specimens collected in the Algiers region

Montagne (1838: 340-342) described from Algeria, 'prope Juliam Caesaream' (now Cherchell, ~80 km west of Algiers), a taxon he regarded as a variety of the Atlantic species Cystoseira granulata C. Agardh, as C. granulata var. turneri Montagne. Montagne (1846: 13-14, plate 4) re-described and nicely illustrated it under the name C. montagnei J. Agardh (reproduced here as Fig. 21).
Fig. 1. Locations with collection dates of Cystoseira michaelae Verlaque et al., nom. et stat. nov. (C. granulata C. Agardh var. turneri Montagne) in Algeria (Tipaza and Algiers regions). Historical data: herbarium specimens and references (light circles) and newly collected specimens (this work) (dark circles). *: Debray (1897): specimens not found.
Figs 2-4. Cystoseira michaelae Verlaque et al., nom. et stat. nov. from Algeria (newly collected specimens, this work). 2-3. Habit of specimens H8300 and H8291, respectively, from Tipaza, La Corne d'Or, September 2014. 4. Habit of an old individual, specimen H8299 from Islets of Tipaza, March 2015. Bars = 5 cm.
Figs 5-10. Cystoseira michaelae Verlaque et al., nom. et stat. nov. from Algeria (newly collected specimens, this work). 5-6. Apical views of axes showing spinose (black arrows) and smooth-tuberculate (white arrows) tophules (specimen H8300); bars = 5 mm. 7. Spinose and smooth-tuberculate tophules, specimen H8288 from Cap Caxine, September 2014; bar = 1 cm. 8. Complanate branch with inconspicuous midrib, specimen H8300; bar = 1 cm. 9. Complanate to cylindrical branch with short spine-like appendages, specimen H8291; bar = 1 cm. 10. Detail of a complanate branch with inconspicuous midrib, specimen H8300; bar = 1 cm.
Figs 11-20. Cystoseira michaelae Verlaque et al., nom. et stat. nov. from Algeria (newly collected specimens, this work). 11-12. Compact tuberculate-spinose basal intercalary receptacles (arrows) close to the tophule (arrow heads), specimen H8300; bars = 5 mm. 13. Diffuse spinose basal intercalary receptacles (arrows) close to the tophule (arrow head), specimen H8290 from Aïn Benian, August 2014; bar = 5 mm. 14. Transverse section of a female basal receptacle, specimen H8291; bar = 200 µm. 15. Transverse section of a male basal receptacle, specimen H8300; bar = 200 µm. 16. Transverse section of a female basal conceptacle, specimen H8291; bar = 100 µm. 17. Transverse section of a male basal conceptacle, specimen H8300; bar = 100 µm. 18-19. Diffuse spinose terminal receptacles, specimen H8291; bars = 5 mm. 20. Transverse section of a hermaphroditic terminal conceptacle, specimen H8291; bar = 100 µm.
Fig. 21. Illustration of Cystoseira michaelae Verlaque et al., nom. et stat. nov., as C. montagnei J. Agardh (sensu Montagne), in Montagne (1846, plate 4, figs 2a-h). a: Habit. b: Lower part of a branch with intercalary receptacles. c: Terminal receptacle. d: Detail of a terminal receptacle. e: Transverse section of a terminal receptacle showing the conceptacles. f: Oogonia. The original numbering of figures within the plate has been changed, but the original numbers have not been erased and can be seen in very small print.
Fig. 22. Lectotype of Cystoseira granulata C. Agardh var. turneri Montagne (here: C. michaelae Verlaque et al., nom. et stat. nov.), Algiers [Algeria], PC (Herbarium Montagne), barcode PC0043663, by courtesy of the MNHN-Paris ©; collection C. and P. Monnard; labelled 'Cystoseira granulata L. Turner var. turneri Montagne - Alger n°397 - Com. Class. Monnard'. The lectotype is the top left specimen. Isolated branches (top right and bottom) possibly belong to the same individual.
Figs 23-26. Lectotype of Cystoseira montagnei J. Agardh, Cette (now Sète), France, May 1837, Botanical Museum, Lund University (Herbarium J. Agardh), barcode LD528, by courtesy of Lund University ©. 23. Habit; bar = 10 cm. 24. Detail of the label. 25. Detail of spinose tophules; bar = 5 mm. 26. Detail of the upper part of branchlets with receptacles; bar = 5 mm.
Figs 27-29. Lectotype of Cystoseira spinosa Sauvageau, Banyuls-sur-Mer, Pyrénées-Orientales, France, 6 May 1907, PC (Herbarium Général, collection C. Sauvageau), barcode PC0525446, by courtesy of the MNHN-Paris ©. 27. Habit; bar = 2 cm. 28. Detail of spinose tophules; bar = 5 mm. 29. Detail of the upper part of branchlets with receptacles; bar = 5 mm.
Table 1. Major herbarium specimens of Cystoseira examined. The syntypes of C. michaelae Verlaque et al. nom. et stat. nov. (C. granulata var. turneri Montagne) and of C. montagnei J. Agardh are indicated with an asterisk. coll.: collection; m.d.: missing data. Correct names according to the authors of the present study.
Table 2. Records of Cystoseira species with spinose tophules, referred to as C. spinosa, C. adriatica and C. montagnei, in the Mediterranean Sea (in addition to the records mentioned within the text), and probable correspondence with the taxonomic treatment in the present study (Cystoseira montagnei J. Agardh and C. michaelae Verlaque et al. nom. et stat. nov.).

Reference | Location | Name(s) and authority used by the author(s) | Correct name(s) according to the present treatment | Comments
Ardissone & Strafforello (1877) | Liguria (Italy) | C. montagnei J. Ag. | C. montagnei ? | No description or illustration
Piccone (1879) | La Galite (Tunisia) | C. montagnei J. Agardh | C. montagnei or C. michaelae ? | No description or illustration
Valiante (1883) | Gulf of Naples (Italy) | C. montagnei J. Ag. | C. montagnei | Description and illustrations corresponding well to C. spinosa Sauvageau [here C. montagnei]
Piccone (1884) | Sardinia (Italy) | C. montagnei J. Ag. | C. montagnei ? | No description or illustration
Rodríguez y Femenías (1889) | Balearic Islands (Spain) | C. montagnei J. Ag. | C. montagnei ? | No description or illustration
Petersen (1918) | La Galite (Tunisia) | C. montagnei Montagne and C. spinosa Sauvageau | C. michaelae and C. montagnei, respectively ? | As the 2 taxa are cited, the possibility that they could actually refer to C. michaelae and C. montagnei must be considered
Acknowledgements. The authors wish to thank the Herbarium LD and Dr Patrik Froden, Assistant Curator at the Botanical Museum of Lund University, for sending photographs of J.G. Agardh specimens of Cystoseira montagnei (syntype); Dr V. Bourgade of the Université Montpellier 2 and Prof. Bruno de Reviers and Dr B. Dennetière of the Muséum National d'Histoire Naturelle, Paris, for permission to consult collections and notebooks of C. Montagne, and for permission to reproduce the photographs of the lectotypes of C. michaelae (C. granulata var. turneri), C. adriatica and C. spinosa; Michèle Perret-Boudouresque for documentation assistance; Oussalah Adel for diving assistance; and Michael Paul for revising the English text. Many thanks are due to the anonymous reviewers for their comments and constructive criticism of the manuscript.
Algérie / Cystoseira michaelae / Cystoseira montagnei / Cystoseira spinosa /
(7 specimens that predate the protologue) (Table 1), because it is the most similar to the fertile specimen illustrated by Montagne (1846: Plate 4, Fig. 2a-h) (reproduced here as Fig. 21). Type locality: on the sheet of the lectotype: Algiers (Algeria). However, in the protologue, Montagne (1838) mentions 'propè Juliam Caesaream' (near Cherchell, ~80 km west of Algiers). We therefore consider that the type locality is Cherchell. Illustrations: Montagne (1846, Algiers, Plate 4, Fig. 2a-h).
Cystoseira montagnei J. Agardh |
00611620 | en | ["sdv.imm"] | 2024/03/05 22:32:13 | 2011 | https://hal.science/hal-00611620/file/article.pdf |
Natural killer cell-based therapies
François Romagné (email: [email protected]), Eric Vivier
Allotransplantation of natural killer (NK) cells has been shown to be a key factor in the control and cure of at least some hematologic diseases, such as acute myeloid leukemia or pediatric acute lymphocytic leukemia. These results support the idea that stimulation of NK cells could be an important therapeutic tool in many diseases, and several such approaches are now in clinical trials, sometimes with conflicting results. In parallel, recent advances in the understanding of the molecular mechanisms governing NK-cell maturation and activity show that NK-cell effector functions are controlled by complex mechanisms that must be taken into account for optimal design of therapeutic protocols. We review here innovative protocols based on allotransplantation, use of NK-cell therapies, and use of newly available drug candidates targeting NK-cell receptors, in the light of fundamental new data on NK-cell biology.
Introduction
Natural killer (NK) cells are the front-line troops of the immune system that help to keep you alive while your body marshals a specific response to viruses or malignant cells. They constitute about 10% of circulating lymphocytes [START_REF] Vivier | Innate or adaptive immunity? The example of natural killer cells[END_REF] and are on patrol constantly, always on the lookout for virus-infected or tumor cells, and when detected, they lock onto their targets and destroy them by inducing apoptosis while signaling danger by releasing inflammatory cytokines. By using NK cells that do not need prior exposure to their target, the innate immune system buys time for the adaptive immune system (T cells and B cells) to build up a specific response to the virus or tumor. Recent advances in understanding this process have led to the hope that NK cells could be harnessed as a therapy for cancers and other diseases, and we shall outline recent progress in understanding NK-cell biology that brings this approach into the realm of clinical trials.
Considerable advances have been made in understanding the molecular mechanisms governing NK-cell activation, which are assessed by the cells' ability to lyse different targets and/or secrete inflammatory cytokines such as interferon gamma (IFN-g) when in their presence. NK-cell activation is the result of a switch in the balance between the positive and negative signals provided by two main types of receptors. The receptors NKG2D, NKp46, NKp30, NKp44, the activating form of KIR (killer cell immunoglobulin-like receptor), known as KIR-S, and CD16 provide positive signals, triggering toxicity and production of cytokines. Although some of the ligands of these receptors remain unknown, the discovery of NKG2D ligands (MICA and the RAET1 family) and the NKp30 ligand (B7H6) suggests that such receptors recognize molecules that are seldom present on normal cells but are induced during infection or carcinogenesis. It is worth noting that CD16 recognizes antibody-coated target cells through their Fc portion, the receptor that mediates antibody-dependent cellular cytotoxicity, an important mechanism of action of therapeutic monoclonal antibodies (mAbs). The function of KIR-S, a family of activating receptors with a lot of homology with inhibitory KIRs (KIR-L) including the sharing of some ligands, remains largely unknown.
In the normal state of affairs, there are checks and balances to keep NK cells from attacking normal cells: activating ligands are rare on normal cells and there are inhibitory receptors on NK cells (Figure 1). The most studied inhibitory receptors are a family of immunoglobulin (Ig)-like receptors with two (KIR2DL1 and KIR2DL2/3) or three (KIR3DL1) Ig-like domains, and immunoreceptor tyrosine-based inhibition intracellular motifs (ITIMs), which transduce negative signals [START_REF] Vivier | Natural killer cell signaling pathways[END_REF]. The ligands of these receptors are well characterized and consist of large families of major histocompatibility complex (MHC) class I gene variants (alleles) sharing structural determinants. KIR2DL1 recognizes MHC-C alleles with a lysine at position 80 (collectively termed C2 alleles), whereas KIR2DL2/3 recognizes MHC-C alleles with an asparagine at position 80 (C1 alleles), and KIR3DL1 recognizes MHC-B alleles sharing a Bw4 epitope, representing about half of the overall MHC-B alleles. Another receptor, NKG2A, recognizes HLA-E, an MHC class I-like molecule, loaded mostly with peptides derived from other class I molecules [START_REF] Parham | MHC class I molecules and KIRs in human history, health and survival[END_REF]. The expression of these molecules is variegated, and an individual NK cell will express either one or several inhibitory receptors. In combination, these receptors are sensors of the presence of MHC class I molecules on target cells and inhibitors of NK function. An integrated, although simplified, view of NK-cell activation is that NK cells quantitatively integrate the positive and negative signals provided by cancer cells or infected cells, which express NK-stimulatory ligands de novo while often down-modulating MHC class I to avoid detection by T cells.

Figure 1. NK cells sense interacting cells via their activating and inhibitory receptors. The density of ligands for these receptors dictates whether or not this interaction will lead to NK-cell activation and hence cytotoxicity and/or cytokine secretion. MHC, major histocompatibility complex; KIR, killer cell immunoglobulin-like receptor.
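As a toy numerical illustration of this integrated view (not a published model), the sketch below sums activating and inhibitory ligand densities on a target cell and triggers a response only when the net signal crosses a threshold; the ligand names come from the text, but the densities and the threshold are arbitrary.

```python
# Toy model of NK-cell signal integration; values are arbitrary illustrations.
def nk_response(activating: dict[str, float],
                inhibitory: dict[str, float],
                threshold: float = 1.0) -> bool:
    """Return True if the net (activating - inhibitory) signal exceeds the threshold."""
    net = sum(activating.values()) - sum(inhibitory.values())
    return net > threshold

# Healthy cell: few activating ligands, normal MHC class I -> spared
print(nk_response({"B7H6": 0.1}, {"HLA-C": 2.0}))               # False
# Tumor cell: stress ligands induced de novo, MHC class I down-modulated -> lysed
print(nk_response({"B7H6": 1.5, "MICA": 1.0}, {"HLA-C": 0.5}))  # True
```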
There has been considerable interest in stimulation of NK-cell activity in recent years because of genetic studies, in both preclinical and clinical settings, showing that it can increase tumor immunosurveillance and eradicate established hematological diseases such as acute myeloid leukemia (AML), as well as some viruses [START_REF] Terme | Natural killer cell-directed therapies: moving from unexpected results to successful strategies[END_REF]. In mouse models, the expression of NK-stimulatory NKG2D ligands not only induces short-term rejection of tumors, but also induces a protective adaptive immune response [START_REF] Diefenbach | Rae1 and H60 ligands of the NKG2D receptor stimulate tumour immunity[END_REF]. Similarly, mice genetically deficient in NKG2D are more susceptible to spontaneous cancer than wild-type mice [START_REF] Guerra | NKG2Ddeficient mice are defective in tumor surveillance in models of spontaneous malignancy[END_REF]. In humans, the development of allotransplantation, a clinical procedure involving transplantation of genetically nonidentical cells (routinely used in AML), shed light on the role of NK cells and particularly the role of inhibitory receptors in this process. For certain donor-recipient pairs, genetic differences in MHC class I genes between the donor and the recipient cause the KIR-expressing cells
from the donor to not recognize their inhibitory MHC class I ligands in the recipient, leaving a subpopulation of donor NK cells free from inhibition, referred to as "alloreactive" NK cells. For example, a donor NK-cell subpopulation expressing only KIR2DL1 transplanted into C1/C1 homozygotes, or KIR2DL2/3 NK cells transplanted into C2/C2 individuals, do not find their cognate inhibitory ligands and become alloreactive. In haploidentical MHC-mismatched hematopoietic stem cell transplantation (HSCT), a situation where one MHC haplotype is similar between donor and recipient whereas the other is fully mismatched, the absence of inhibition due to KIR-MHC incompatibility results in major differences in the clinical outcome [START_REF] Ruggeri | Effectiveness of donor natural killer cell alloreactivity in mismatched hematopoietic transplants[END_REF][START_REF] Ruggeri | Donor natural killer cell allorecognition of missing self in haploidentical hematopoietic transplantation for acute myeloid leukemia: challenging its predictive value[END_REF][START_REF] Ruggeri | Role of natural killer cell alloreactivity in HLA-mismatched hematopoietic stem cell transplantation[END_REF]. Clinical benefit correlates with the presence in the recipient of these disinhibited alloreactive NK cells from the donor, which are effective against recipient tumor cells. In viral infections, particular combinations of NK-activating receptors or KIR and their ligands are protective. Presence of the activating receptor KIR3DS1 and its putative ligand HLA-Bw4-I80 has been shown to be a key factor in preventing HIV infection from leading to full-blown AIDS [START_REF] Alter | Differential natural killer cell-mediated inhibition of HIV-1 replication based on distinct KIR/HLA subtypes[END_REF][START_REF] Alter | HLA class I subtype-dependent expansion of KIR3DS1+ and KIR3DL1+ NK cells during acute human immunodeficiency virus type 1 infection[END_REF][START_REF] Carrington | KIR-HLA intercourse in HIV disease[END_REF]. In hepatitis C, KIR2DL3 homozygosity and HLA-C1 homozygosity are beneficial in both early eradication of infection and response to standard treatment (type I IFN + ribavirin) [START_REF] Khakoo | HLA and NK cell inhibitory receptor genes in resolving hepatitis C virus infection[END_REF][START_REF] Vidal-Castiñeira | Effect of killer immunoglobulin-like receptors in the response to combined treatment in patients with chronic hepatitis C virus infection[END_REF]. Homozygosity for KIR2DL3 and HLA-C1 alleles has been reported to lead to lower levels of NK inhibition than other KIR-ligand combinations [START_REF] Ahlenstiel | Distinct KIR/HLA compound genotypes affect the kinetics of human antiviral natural killer cell responses[END_REF][START_REF] Moesta | Synergistic polymorphism at two positions distal to the ligand-binding site makes KIR2DL2 a stronger receptor for HLA-C than KIR2DL3[END_REF], suggesting that this underlies the enhanced response to hepatitis C. However, as KIR can also be expressed by some T-cell subsets, it remains to be firmly established whether NK cells are responsible for these effects. Nevertheless, the results of these studies suggest that we should extend the design of NK cell-based therapies to diseases other than cancer, such as infections and inflammation.
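The receptor-ligand rules above (KIR2DL1 with C2, KIR2DL2/3 with C1, KIR3DL1 with Bw4) amount to a small lookup table, and enumerating donor inhibitory KIRs with no cognate ligand in the recipient is one common way to flag potentially alloreactive NK subsets. The sketch below is illustrative only; as discussed later, KIR mismatch flags potential alloreactivity, and functional education of the cells must still be verified.

```python
# Sketch of the KIR-ligand mismatch rule from the text; "potentially
# alloreactive" here means the inhibitory-KIR ligand is absent in the recipient.
KIR_LIGAND = {"KIR2DL1": "C2", "KIR2DL2": "C1", "KIR2DL3": "C1", "KIR3DL1": "Bw4"}

def potentially_alloreactive_kirs(donor_kirs: set[str],
                                  recipient_ligands: set[str]) -> set[str]:
    return {kir for kir in donor_kirs
            if kir in KIR_LIGAND and KIR_LIGAND[kir] not in recipient_ligands}

# Example from the text: KIR2DL1-only donor NK cells meet a C1/C1 recipient
print(potentially_alloreactive_kirs({"KIR2DL1", "KIR2DL3"}, {"C1"}))  # {'KIR2DL1'}
```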
We will review here the recent advances that could help with the design of proper protocols and therapies and advance the use of NK cells in the clinic, starting with allotransplantation (transplantation between genetically different individuals of the same species). This will be followed by a discussion of the cell therapy procedures that are being developed, and the pharmacological agents that are currently or could be used in clinical trials to take advantage of the activity of NK cells.
Lessons from transplantation
Since the initial data from haploidentical HSCT, a number of retrospective studies of allotransplantation have been published, sometimes leading to differing clinical outcomes [START_REF] Witt | The influence of NK alloreactivity on matched unrelated donor and HLA identical sibling haematopoietic stem cell transplantation[END_REF]. These conflicting results may be explained in the light of new findings in NK-cell physiology and maturation.
Initially, alloreactive NK cells were simply defined as having only KIRs that were incompatible with the host MHC, and several studies have identified such alloreactive NK cells that are effective against AML blasts. However, it has been shown in normal mice that NK cells with only inhibitory receptors incompatible with self MHC class I alleles do arise physiologically (i.e., not after transplantation) but are partially functionally disabled [START_REF] Raulet | Self-tolerance of natural killer cells[END_REF]. Hence, NK cells undergo a complex maturation process that necessitates the interaction of their inhibitory receptors with their ligands in order for them to become fully functional against class I-negative cells (recognition of missing self; see [START_REF] Raulet | Self-tolerance of natural killer cells[END_REF] for review). The precise molecular mechanisms and localization of this process remain largely unknown in mice, but it was shown to be dynamic and reversible [START_REF] Elliott | MHC class I-deficient natural killer cells acquire a licensed phenotype after transfer into an MHC class I-sufficient environment[END_REF][START_REF] Joncker | Mature natural killer cells reset their responsiveness when exposed to an altered MHC environment[END_REF]. It has since been confirmed that NK cells expressing only MHC-incompatible KIRs do exist in normal human individuals but, as in mice, are partially functionally disabled [START_REF] Anfossi | Human NK cell education by inhibitory receptors for MHC class I[END_REF][START_REF] Cooley | A subpopulation of human peripheral blood NK cells that lacks inhibitory receptors for self-MHC is developmentally immature[END_REF], indicating that human NK cells also undergo education much like mouse NK cells. This leads to a revision of the concept of alloreactivity: KIR mismatch is necessary to induce activity against MHC-positive cells (we will refer to these cells as potentially alloreactive) but not entirely sufficient, as the cells must also have undergone an education process. It follows that functional assays must be performed to demonstrate activity and define truly alloreactive cells.
These new findings may lead to reconciliation of the conflicting data from allogeneic HSCT. Allogeneic HSCT (from a nonidentical donor) is a complex clinical procedure, with considerable differences in the nature and origin of the graft, as well as in pregraft treatments (conducted to remove recipient hematopoietic cells and thereby allow the graft to implant) and postgraft treatments (to prevent graft-versus-host disease [GVHD] caused by donor T cells). Generally, there are two main scenarios. In the first, haploidentical grafts consisting of high doses of highly purified donor CD34-positive hematopoietic stem cells, with very few mature cells, are injected after very intense conditioning regimens of the host to avoid graft rejection (there is virtually no postgraft treatment as the graft is highly T cell-depleted) (Figure 2). Truly alloreactive NK cells have been consistently found ex vivo following such transplantation, in an activated state resulting from missing-self recognition, and this scenario is associated with an improved clinical outcome [START_REF] Moretta | Killer Ig-like receptor-mediated control of natural killer cell alloreactivity in haploidentical hematopoietic stem cell transplantation[END_REF]. Unfortunately, such haploidentical procedures also require profound immunosuppression of the host, and the treatment-related morbidities caused by infection are high, so such procedures are not used widely. In the second scenario, allogeneic HSCT can be matched except at a given HLA-B or HLA-C allele, and requires much less conditioning pregraft but more immunosuppressive treatment postgraft to avoid GVHD. Such protocols vary widely depending on the laboratories, both in terms of pregraft and postgraft treatments and in cell content of the graft (mature cell content and origin of the graft, consisting of either bone marrow cells or mobilized peripheral cells). Not surprisingly, such protocols vary widely in the KIR-mismatch effect, with outcomes to match: beneficial, neutral, or even pejorative. Taking into account the new findings on NK-cell physiology, the current prevailing hypothesis is that in haploidentical HSCT, the harsh conditioning regimen and high CD34-positive cell content allow the donor NK cells to mature with a recognition of the "self" MHC type on the donor hematopoietic cells, and therefore become truly alloreactive against residual recipient blast cells, whereas normal host tissues are spared because of lack of NK-stimulatory ligand expression [START_REF] Pende | Anti-leukemia activity of alloreactive NK cells in KIR ligand-mismatched haploidentical HSCT for pediatric patients: evaluation of the functional role of activating KIR and redefinition of inhibitory KIR specificity[END_REF][START_REF] Haas | NK-cell education is shaped by donor HLA genotype after unrelated allogeneic hematopoietic stemcell transplantation[END_REF]. In nonhaploidentical situations, education of NK cells on donor HLA may be lacking in some graft preparations and pregraft regimens, which might account for the neutral effects seen (cells remain potentially alloreactive). Conflicting results in nonhaploidentical situations [START_REF] Giebel | Survival advantage with KIR ligand incompatibility in hematopoietic stem cell transplantation from unrelated donors[END_REF][START_REF] Davies | Evaluation of KIR ligand incompatibility in mismatched unrelated donor hematopoietic transplants. Killer immunoglobulin-like receptor[END_REF] may also be explained by different treatments resulting in different T-cell levels in grafts and consequently different levels of GVHD [START_REF] Cooley | Donors with group B KIR haplotypes improve relapse-free survival after unrelated hematopoietic cell transplantation for acute myelogenous leukemia[END_REF]. This hypothesis is further supported by protocols where the graft origin is cord blood, a situation with few mature T cells in the graft, which results in a beneficial outcome [START_REF] Willemze | Eurocord-Netcord and Acute Leukaemia Working Party of the EBMT: KIR-ligand incompatibility in the graft-versus-host direction improves outcomes after umbilical cord blood transplantation for acute leukemia[END_REF].
In truly matched transplantation, there are no obvious reasons for alloreactive cells to develop as the MHC of donor and recipient are the same and the maturation of NK cells should spare all host cells (either normal or NK-stimulating ligand-expressing cells). Surprisingly, even in completely MHC-matched transplantation, in particular in T-depleted grafts, functionally alloreactive NK cells have been reported, with an improved outcome for patients homozygous for HLA-C1 or HLA-C2, for example [START_REF] Yu | Breaking tolerance to self, circulating natural killer cells expressing inhibitory KIR for non-self HLA exhibit effector function after T cell-depleted allogeneic hematopoietic cell transplantation[END_REF][START_REF] Sobecks | Survival of AML patients receiving HLA-matched sibling donor allogeneic bone marrow transplantation correlates with HLA-Cw ligand groups for killer immunoglobulin-like receptors[END_REF]. In the same vein, it has been recently demonstrated in a large retrospective study that the KIR genotype alone influences clinical outcome, with the presence of KIR2DL3 and/or absence of KIR2DL2 and KIR2DS2 being less favorable, opening the way for the selection of donors based on KIR genotype in matched allotransplantation [START_REF] Cooley | Donor selection for natural killer cell receptor genes leads to superior survival after unrelated transplantation for acute myelogenous leukemia[END_REF]. The functional basis of such observations is still incompletely understood: during NK-cell reconstitution from stem cells, KIR expression is variegated, and potentially alloreactive cells appear, but, as mentioned above, if such reconstitution was equivalent to normal NK-cell maturation, they should be functionally impaired and tolerant to self. It is possible that during hematopoietic reconstitution and in certain allograft protocols, the cytokine milieu, strength of inhibitory interaction and presence of different activating genes (depending on KIR genotype), and absence of T-cell interaction (T cell-depleted grafts) favor the maturation of truly alloreactive NK cells despite the presence of matched inhibitory receptors. Indeed, new studies describing NK-cell maturation point to the fact that the hyporesponsiveness of NK cells is very subtle and malleable, influenced by cytokines and probably genotype, and reversible [START_REF] Elliott | MHC class I-deficient natural killer cells acquire a licensed phenotype after transfer into an MHC class I-sufficient environment[END_REF][START_REF] Joncker | Mature natural killer cells reset their responsiveness when exposed to an altered MHC environment[END_REF]. In summary, although many studies strongly suggest the efficacy of KIR-mismatched NK cells, definitive studies are needed to optimize the clinical settings. We need a better understanding of NK-cell development and function after matched allogeneic transplantation, depending on the specific allotransplantation protocol used, to take full advantage of the alloreactive potential of NK cells.
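A hedged sketch of how the genotype association just described could be phrased as a donor-ranking rule (KIR2DL2/KIR2DS2 presence favorable, KIR2DL3 without them less favorable). This is a deliberate simplification for illustration, not the validated selection algorithm of the cited study.

```python
# Simplified donor ranking from KIR genotype; illustrative only.
def donor_kir_rank(kir_genes: set[str]) -> str:
    if {"KIR2DL2", "KIR2DS2"} & kir_genes:
        return "preferred"        # carries KIR2DL2 and/or KIR2DS2
    if "KIR2DL3" in kir_genes:
        return "less favorable"   # KIR2DL3 without KIR2DL2/KIR2DS2
    return "indeterminate"

print(donor_kir_rank({"KIR2DL3"}))             # less favorable
print(donor_kir_rank({"KIR2DL2", "KIR2DS2"}))  # preferred
```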
Cell therapy protocols in development
The current view arising from the results of allotransplantation studies is that NK cells, and particularly allogeneic KIR-mismatched NK cells, are effective, at least in adult AML and pediatric acute lymphocytic leukemia, but that the effect may depend on NK-cell maturation/ activation state. One way to better control NK-cell functional status (as well as the ratio of NK cells to target cells) would be to generate large quantities of these cells in vitro and inject them either as a therapeutic regimen alone or after allotransplantation.
Historically, crude, short-term (1-2 days) interleukin (IL)-2-activated cells were used as graft material (lymphokine-activated killer [LAK] cells), and although this material was enriched in NK cells, the LAK cells were mostly T cells, and the cellular content was poorly defined and variable (see [START_REF] Suck | Emerging natural killer cell immunotherapies: large-scale ex vivo production of highly potent anticancer effectors[END_REF] for review). Initial attempts to work with purified preparations of NK cells led to promising results, although with a limited number of patients. Autologous HSCT followed by injection of purified, short-term IL-2-stimulated, KIR-mismatched NK cells in multiple myeloma patients destroyed multiple myeloma blasts in vitro, did not lead to graft failure, and the NK cells survived at least a few days [START_REF] Shi | Infusion of haploidentical killer immunoglobulin-like receptor ligand mismatched NK cells for relapsed myeloma in the setting of autologous stem cell transplantation[END_REF]. In a protocol not involving HSCT, purified short-term IL-2-activated haploidentical NK cells were injected into AML and other hematologic cancer patients after mild conditioning to avoid rejection of the injected NK cells. This study showed that the injected NK cells survived in the host for a few days and were well tolerated [START_REF] Miller | Successful adoptive transfer and in vivo expansion of human haploidentical NK cells in patients with cancer[END_REF]. Although the number of patients is still too limited to draw firm conclusions, encouraging clinical signs of activity were seen in the above protocols. As neither protocol reached any dose-limiting toxicity, these findings suggest that it may be possible to inject higher cell numbers if cell sources or ex vivo expansion procedures improve.
These initial results have prompted several groups to embark on the large-scale expansion of highly purified, GMP ("good manufacturing practice") grade NK cells after longer-term in vitro expansion. NK-cell purification by magnetic beads is followed by IL-2 expansion with or without feeder cells. A protocol for the generation of single-KIR-positive cells has also been designed but is not yet ready to be applied to large-scale clinical trials [START_REF] Siegler | Good manufacturing practice-compliant cell sorting and large-scale expansion of single KIR-positive alloreactive human natural killer cells for multiple infusions to leukemia patients[END_REF]. Functional studies using NK cells against leukemic cells, or NK infusion in xenogenic models, have demonstrated, however, that the cells generated are very active. Some of the protocols have reached small-scale phase I clinical trials and have demonstrated that high numbers of these infused NK cells are safe in humans [START_REF] Barkholt | Safety analysis of ex vivo-expanded NK and NKlike T cells administered to cancer patients: a phase I clinical study[END_REF][START_REF] Fujisaki | Expansion of highly cytotoxic human natural killer cells for cancer cell therapy[END_REF].
The current caveats of such protocols are the complexity of the procedures required, which would make it difficult to scale up to the multicenter clinical studies necessary for larger phase II trials. Indeed, successful transfer of cell therapy protocols (compliant with regulatory standards) to industry and large clinical trials requires a centralized cell culture factory and the use of frozen cells. NK-cell culture protocols do not yet meet this benchmark, but further refinements should solve these issues [START_REF] Berg | Clinical-grade ex vivo-expanded human natural killer cells up-regulate activating receptors and death receptor ligands and have enhanced cytolytic activity against tumor cells[END_REF]. Moreover, alloreactive NK cells should be the most potent cells in cell therapy protocols, and their sourcing can be a problem outside the context of allotransplantation. This alone may prevent the use of such cells in larger trials. The development of anti-KIR therapeutic mAbs (see below and Figure 2) that block NK inhibition may allow the use of autologous cells as an easier source of cell material, by inducing alloreactivity of NK cells that would otherwise be MHC-tolerant.
Another important problem to be solved is the fate of ex vivo-expanded NK cells after infusion. Indeed, if NK cells from allogeneic donors are used, they may be rejected by the host immune system despite the mild immunosuppression used in some protocols. Even in cases of autologous transplantation, or allotransplantation where donor NK cells are not rejected, they may be short lived, and protocols usually involve daily injection of IL-2 to sustain NK levels and activation status [START_REF] Shi | Infusion of haploidentical killer immunoglobulin-like receptor ligand mismatched NK cells for relapsed myeloma in the setting of autologous stem cell transplantation[END_REF][START_REF] Miller | Successful adoptive transfer and in vivo expansion of human haploidentical NK cells in patients with cancer[END_REF]. IL-2 injection may increase NK-cell lifespan and activity (although this has not been formally tested by comparison with untreated cells) but can also generate outgrowth of Treg cells that may hamper the overall response to the tumor, as shown in pilot clinical trials [START_REF] Barkholt | Safety analysis of ex vivo-expanded NK and NKlike T cells administered to cancer patients: a phase I clinical study[END_REF][START_REF] Geller | A phase II study of allogeneic natural killer cell therapy to treat patients with recurrent ovarian and breast cancer[END_REF]. The very recent availability of GMP-grade IL-15, now in phase I clinical trials at the National Institutes of Health (NIH) [START_REF] Geller | A phase II study of allogeneic natural killer cell therapy to treat patients with recurrent ovarian and breast cancer[END_REF], may circumvent the use of IL-2, providing a better activation signal for NK cells, both in vitro and in vivo, without promoting Treg expansion.
Pharmacological agents in development to modulate NK activity
Although cell therapy protocols can be very useful to characterize NK-cell activity and, if successful, can translate into commercially available products, they remain very difficult and costly to develop on a large scale. Moving new drugs forward should be easier, and therapeutic agents aiming to stimulate NK-cell activity are now being tested.
The most advanced compound specifically targeting NK cells is a blocking anti-KIR mAb. This mAb, 1-7F9, recognizes KIR2DL1, 2, and 3 and therefore blocks the inhibition imposed by virtually all HLA-C alleles, allowing it to be tested in all patients whatever their KIR and HLA genotypes. Building on the results of allotransplantation in AML and multiple myeloma patients [START_REF]gov -A phase I study of intravenous recombinant human IL-15 in adults with refractory metastatic malignant melanoma and metastatic renal cell cancer[END_REF], as well as preclinical data showing reconstitution of NK-cell lysis of MHC-positive multiple myeloma and AML blasts in vitro and in preclinical models [START_REF] Kröger | Clinical Trial Committee of the British Society of Blood and Marrow Transplantation and the German Cooperative Transplant Group: Comparison between antithymocyte globulin and alemtuzumab and the possible impact of KIR-ligand mismatch after dose-reduced conditioning and unrelated stem cell transplantation in patients with multiple myeloma[END_REF], clinical trials with the 1-7F9 mAb in both diseases have been initiated. Phase I results showed good tolerability in both scenarios (Vey et al., manuscript in preparation), paving the way for the phase II trials that are now ongoing. While the 1-7F9 monoclonal should be valuable for blocking the inhibition of NK cells, other products are now available that can enhance the activation of NK cells. One of the most promising of these is IL-15, which is a key cytokine for NK cells.
In the same vein, it has been shown by several groups that certain drugs, already available in the therapeutic arsenal, can increase the expression of NK-activating ligands on the tumor, and therefore increase NK tumor lysis in vivo. Initially, it was shown that some chemotherapies (5-FU, Ara-C, cisplatin) and radiation or ultraviolet therapy targeting the DNA damage pathway can increase the expression of ligands of the NK-activating receptor NKG2D on tumor cells and lead to enhanced NK lysis of tumors [START_REF] Romagné | Preclinical characterization of 1-7F9, a novel human anti-KIR receptor therapeutic antibody that augments natural killer-mediated killing of tumor cells[END_REF]. More recently, drugs targeting the proteasome, such as the proteasome inhibitor bortezomib, which is now registered for the treatment of multiple myeloma, have also been shown to induce NK-stimulatory ligands [START_REF] Gasser | The DNA damage pathway regulates innate immune system ligands of the NKG2D receptor[END_REF][START_REF] Ames | Sensitization of human breast cancer cells to natural killer cell-mediated cytotoxicity by proteasome inhibition[END_REF]. Finally, lenalidomide (Revlimid), a drug which has been shown to be active in multiple myeloma and to have promising preliminary results in other hematological malignancies, has been shown, in addition to having a direct antitumor effect, to upregulate NK-cell function through induction of cytokines [START_REF] Butler | Proteasome regulation of ULBP1 transcription[END_REF] and to induce NK-stimulatory ligands on tumor cells. Some of these drugs, such as bortezomib or chemotherapies [START_REF] Davies | Thalidomide and immunomodulatory derivatives augment natural killer cell cytotoxicity in multiple myeloma[END_REF][START_REF] Markasz | Effect of frequently used chemotherapeutic drugs on the cytotoxic activity of human natural killer cells[END_REF], can also have inhibitory effects on NK cells, so their use must be carefully evaluated, but their clinical availability opens the door to multiple combination possibilities, either sequential or concomitant, with cell therapy and anti-KIR antibodies. Such combinations are beginning to be tested in the clinic (phase I/II for anti-KIR in combination with lenalidomide, and cell therapies in combination with bortezomib [START_REF] Berg | Clinical-grade ex vivo-expanded human natural killer cells up-regulate activating receptors and death receptor ligands and have enhanced cytolytic activity against tumor cells[END_REF]).
Future directions
Since the initial demonstration that NK-cell therapies are effective in some contexts, there has been substantial progress in refining protocols, together with new approaches such as the injection of highly purified, functionally controlled NK cells. New drugs also allow the in vivo manipulation of NK cells by targeting their inhibitory receptors or activating receptors (through drugs driving the expression of ligands of activating receptors on tumor cells). All these tools have now been developed to the point where they can be tested in clinical trials, either alone or in combination. Because recent advances have increased our understanding of NK maturation and function, such clinical trials can now be monitored for NK-cell activity and represent attractive possibilities for translation into successful treatments in the clinic.
Figure 1. Natural killer (NK) cell recognition strategies
Figure 2. Natural killer (NK) cell-based therapies
Acknowledgements
The authors thank Corinne Beziers-Lafosse (CIML) for excellent graphic assistance and CIML's antibody and cytometry facilities. EV's lab is supported by grants from the European Research Council (ERC advanced grants), Agence Nationale de la Recherche (ANR), Ligue Nationale Contre le Cancer (Equipe labellisée 'La Ligue'), as well as by institutional grants from INSERM, CNRS, and Université de la Méditerranée to the CIML. EV is a scholar from the Institut Universitaire de France.
Abbreviations
AML, acute myeloid leukemia; GMP, good manufacturing practice; GVHD, graft-versus-host disease; HSCT, hematopoietic stem cell transplantation; IFNγ, interferon gamma; Ig, immunoglobulin; IL, interleukin; KIR, killer cell immunoglobulin-like receptor; LAK, lymphokine-activated killer; mAb, monoclonal antibody; MHC, major histocompatibility complex; MICA, MHC class I-related chain A; NK, natural killer; RAET1, retinoic acid early transcripts-1.
Competing interests
FR and EV are co-founders and shareholders of Innate Pharma. |
01765072 | en | [
"sde",
"sdv",
"sdv.bv",
"sdv.ee"
] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01765072/file/Roux_et_al_2017.pdf | David Roux
email: [email protected]
Osama Alnaser
Elnur Garayev
Béatrice Baghdikian
Riad Elias
Philippe Chiffolleau
Evelyne Ollivier
Sandrine Laurent
Mohamed El Maataoui
Huguette Sallanon
Ecophysiological and phytochemical characterization of wild populations of Inula montana L. (Asteraceae) in Southeastern France
Keywords:
Inula montana is a member of the family Asteraceae and is present in substantial numbers in Garrigue country (calcareous Mediterranean ecoregion). This species has traditionally been used for its anti-inflammatory properties, much like Arnica montana. In this study, three habitats within Luberon Park (southern France) were compared regarding their pedoclimatic parameters and the resulting morpho-physiological response of the plants. The data showed that I. montana grows in south-facing poor soils and tolerates large altitudinal and temperature gradients. The habitat conditions at high elevation appear to affect mostly the morphology of the plant (organ shortening). Although the leaf contents of total polyphenols and of the flavonoid subclass essentially followed a seasonal pattern, many sesquiterpene lactones were shown to accumulate preferentially at the low-elevation growing sites that suffered drought stress (draining topsoil, higher temperatures and presence of a drought period during the summer). This work highlights the biological variability of I. montana related to the variation of its natural habitats, which is promising for the future domestication of this plant. The manipulation of environmental factors during cultivation is of great interest due to its innovative perspective for modulating and exploiting the phytochemical production of I. montana.
Introduction
The sessile living strategy of terrestrial plants, anchored to the ground, forces them to face environmental variations. Plants have developed complex responses to modify their morpho-physiological characteristics to counteract both biotic and abiotic factors [START_REF] Suzuki | Abiotic and biotic stress combinations[END_REF][START_REF] Rouached | Plants coping abiotic and biotic stresses: a tale of diligent management[END_REF]. Altitude is described as an integrative environmental parameter that influences phytocoenoses in terms of species distribution, morphology and physiology [START_REF] Liu | Influence of environmental factors on the active substance production and antioxidant activity in Potentilla fruticosa L. and its quality assessment[END_REF]. It reflects, at minimum, a mixed combination of temperature, humidity, solar radiation and soil type [START_REF] Körner | Alpine Plant Life[END_REF]. In addition, the plant age, season, microorganism attacks, competition, soil texture and nutrient availability have been proven to strongly influence the morphology and the secondary metabolite profile of plants [START_REF] Seigler | Plant Secondary Metabolism[END_REF]. Altitudinal gradients are attractive for eco-physiological studies to decipher the mechanisms by which abiotic factors affect plant biological characteristics and how those factors influence species distribution [START_REF] Graves | A comparative study of Geum rivale L. and G. urbanum L. to determine those factors controlling their altitudinal distribution II. Photosynthesis and respiration[END_REF]. For instance, a summer increase of nearly 10% in solar irradiance per 1000 m in elevation has been demonstrated in the European Alps. This increase was also characterized by an 18% increase in UV radiation [START_REF] Blumthaler | Increase in solar UV radiation with altitude[END_REF]. Considering the reliefs of the Mediterranean basin, plants must confront both altitude and specific climate, namely high summer temperatures, infrequent but abundant precipitation, and wind [START_REF] Bolle | Mediterranean Climate: Variability and Trends[END_REF]. Moreover, plants that live at higher elevation must also survive winter conditions characterized by low temperatures and high irradiance. All together, these factors force the plants to develop dedicated short-and long-term phenological, morphological and physiological adaptations [START_REF] Kofidis | Combined effects of altitude and season on leaf characteristics of Clinopodium vulgare L. (Labiatae)[END_REF]. Many of these adjustments are protective mechanisms against photoinhibition of photosynthesis [START_REF] Guidi | Non-invasive tools to estimate stress-induced changes in photosynthetic performance in plants inhabiting Mediterranean areas[END_REF][START_REF] Sperlich | Seasonal variability of foliar photosynthetic and morphological traits and drought impacts in a Mediterranean mixed forest[END_REF], and most of them involve the synthesis of secondary metabolites [START_REF] Ramakrishna | Influence of abiotic stress signals on secondary metabolites in plants[END_REF][START_REF] Bartwal | Role of secondary metabolites and brassinosteroids in plant defense against environmental stresses[END_REF].
The genus Inula (Asteraceae) includes more than 100 species that are widely distributed in Africa and Asia and throughout the Mediterranean region. These plants have long been collected or cultivated around the world for their ethnomedicinal uses. They synthesize and accumulate significant amounts of specific terpenoids and flavonoids. Secondary metabolites (including sesquiterpene lactones) from Inula spp. have shown interesting biological activities such as antitumor, anti-inflammatory, antidiabetic, bactericidal, antimicrobial and antifungal activities, and these plants have also been used as tonics or diuretics [START_REF] Reynaud | Free flavonoid aglycones from Inula montana[END_REF][START_REF] Seca | The genus Inula and their metabolites: from ethnopharmacological to medicinal uses[END_REF].
The species Inula montana is a hairy rhizomatous perennial (hemicryptophyte) herb with a 10-40 cm circumference and solitary capitulum (5-8 cm diameter) of yellow florets (long ligules) positioned at the top of a ≈20 cm floral stem. It grows at altitudes of 50-1300 m from eastern Italy to southern Portugal and is frequent in Southeast France. This calcicolous and xerophilous plant can be locally abundant, particularly in the Garrigue-type lands [START_REF] Gonzalez Romero | Phytochemistry and pharmacological studies of Inula montana L[END_REF][START_REF] Girerd | Flore du Vaucluse: troisième inventaire, descriptif, écologique et chorologique[END_REF][START_REF] Botanica | Tela Botanica[END_REF]. In the south of France, I. montana was incorrectly called "Arnica" because it was used in old traditional medicine as an alternative drug to the well-known Arnica montana [START_REF] Reynaud | Free flavonoid aglycones from Inula montana[END_REF]. Due to herbivory pressure, loss of habitat and the fact that it is mainly harvested from the wild, A. montana is cited in the Red List of Threatened Species (IUCN). In Europe, more than 50 t of dried flowers are traded each year [START_REF] Sugier | Propagation and introduction of Arnica montana L. into cultivation: a step to reduce the pressure on endangered and highvalued medicinal plant species[END_REF]. Although many efforts are currently underway to domesticate A. montana and to correctly manage its habitats, the opportunity to find an alternative plant would therefore be of considerable interest.
In this context, we have developed a scientific program that aims to rehabilitate, domesticate and test I. montana as an efficient pharmaceutical substitute for A. montana. We have recently published a phytochemical investigation of the contents of the leaves and flowers of I. montana [START_REF] Garayev | New sesquiterpene acid and inositol derivatives from Inula montana L[END_REF]. Those data revealed new compounds with associated anti-inflammatory activity. Here, we present the results of an ecophysiological study of I. montana that aimed to analyze the putative correlations between its morphology, its phytochemical production (with a focus on sesquiterpene lactones) and the characteristics (edaphic and climatic) of its natural habitats. I. montana was expected to face various abiotic stresses along the large altitude gradient of its habitats. Assessing the response of the plant to its natural growing conditions will be helpful for its future domestication. In addition, successful identification of environmental levers that could modulate the phytochemical production of this medicinal plant would be of great interest.
Material and methods
Luberon park
The present study was focused on I. montana populations growing in the French "Parc Naturel Régional du Luberon" (Luberon Park) that is located in southeastern France. The park (185,000 ha) is characterized by medium-sized mountains (from 110 to 1125 m high; mean altitude ≈680 m) that stretch from west to east over the "Vaucluse" and the "Alpes-de-Haute-Provence" regions (Supplemental file). Although the overall plant coverage of Luberon Park belongs to the land type "Garrigue" (calcareous low scrubland ecoregion), there are two significant climatic influences: first, the north-facing shady side is characterized by a cold and humid climate that supports the development of deciduous species such as the dominant white oak (Quercus pubescens). Second, the sunny south-facing side receives eight to ten times more solar radiation. On this side, the vegetation is typically Mediterranean with a majority of green oak (Quercus ilex), Aleppo pine (Pinus halepensis), kermes oak (Quercus coccifera) and rosemary (Rosmarinus officinalis). The ridges of the Luberon Park suffer from extreme climatic variations: windy during all seasons, intense summer sun, cold during the winter, dry atmosphere and spontaneous and intense rains. These conditions limit the spectrum of plant species to those most resistant to these conditions, such as the common juniper (Juniperus communis) and boxwood (Buxus sp.) [START_REF] Gressot | Le parc naturel régional du Luberon[END_REF].
Sites of interest and sampling
Inula montana is present in highly variable amounts over Luberon Park. By exploring the south-facing sides we selected three sites of interest: Murs, Bonnieux and Apt (Supplemental file). At these locations, I. montana forms several small, sparse and heterogeneous groups of tens of plants per hectare. These sites were also selected for their similar presentation as grassy clearings (area from 4 to 9 ha) and for their uniform flatness and slight inclination (≈7%). The linear distance between the 3 sites is 21.4 ± 2 km. The Apt site is 500-600 m higher than both other sites. A preliminary phenological survey showed that the vegetative growth of I. montana extended from early April to late October, consistent with the hemicryptophytic strategy of the plant. Mid-June corresponded to the flowering period, which lasted ≈10 days. Accordingly, samples were synchronously collected from the three habitats at four consecutive periods during 2014: early April (early spring), mid-May (late spring), mid-June (summer) and late October (autumn).
Climatic and edaphic data
The measurements of climate characteristics (standard weather stations, 1.5 m height above soil surface) were accessed from the French weather data provider (meteofrance.fr, 2014, France) and supplemented with agronomic weather station data near each site (climatedata.org, 2014, Germany). The satellite-based solar radiation measurements (Copernicus Atmosphere Monitoring Service (CAMS)) were obtained from the solar radiation data service (soda-pro.com, 2014, MINES ParisTech, France). The measurements of the physical properties of the soils and of the chemical content of the aqueous extracts (cf. Table 1) were subcontracted to an ISO-certified laboratory (Teyssier, Bordeaux, France) according to standards. Briefly, 10 g of raw soil were milled, dried (12 h at 45 °C, except for nitrogen determination) and sifted (2 mm grid). Samples were then stirred into 50 ml of demineralized water for 30 min at 20 °C and filtered. Organic matter was measured after oxidation in potassium dichromate and sulfuric acid. NH4 and NO3 were extracted with 1 M KCl. Organic matter, NH4, NO3 and water-extractable PO4 were then determined by colorimetric methods. K, Mg, Ca, Fe, Cu, Mn, Zn and Bo were determined by atomic absorption spectroscopy.
Determination of growth parameters
Plant growth for each period evaluated was determined by using several parameters: fresh and dry weight, water content, leaf area, and height of floral stem at the flowering stage. For each period, ten plants were collected randomly from each of the three sites (Luberon Park). The fresh weight was measured immediately after harvest, and the leaves were scanned to measure their area with the ImageJ software (National Institutes of Health, USA). The collected plants were subsequently dried (80 °C, 24 h) to calculate the water content. Glandular trichome density was assessed on 10 leaves randomly collected at the flowering period from 10 different plants per site. This assessment was performed using a stereomicroscope (Nikon ZX 100, Canagawa, Japan) equipped with fluorescence (excitation 382 nm, emission 536 nm) and digital camera (Leica DFC 300 FX, Wetzlar, Germany). The captured images allowed the quantification of glandular trichomes using ImageJ.
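The quantities derived in this paragraph reduce to two simple formulas; the sketch below shows them in Python, with the scan resolution and all numeric values invented purely for illustration (the paper does not report them).

```python
def water_content_percent(fresh_g: float, dry_g: float) -> float:
    """Gravimetric water content as % of fresh weight: (FW - DW) / FW * 100."""
    return 100.0 * (fresh_g - dry_g) / fresh_g

def leaf_area_cm2(pixel_count: int, pixels_per_cm: float) -> float:
    """Convert an ImageJ pixel count to cm^2 using the scan calibration."""
    return pixel_count / pixels_per_cm ** 2

# Hypothetical example: 1.25 g fresh and 0.45 g dry leaf mass,
# leaf mask of 35,000 px scanned at ~300 dpi (about 118 px/cm).
print(water_content_percent(1.25, 0.45))  # -> 64.0 (%)
print(leaf_area_cm2(35_000, 118.0))       # -> ~2.5 (cm^2)
```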
Chlorophyll-a fluorescence measurements
Chlorophyll-a fluorescence was measured in vivo using a portable Handy-PEA meter (Hansatech, Kings Lynn, UK) on 20 plants arbitrarily selected three times per day: in the morning (10:00), at midday (12:00) and in the afternoon (14:00). This was done for each considered time period (season) and for each of the three I. montana habitats. The fluorescence parameters calculated were the maximum quantum yield of primary photosystem II photochemistry (Fv/Fm) and the performance index (PI) according to the OJIP test [START_REF] Strasser | The fluorescence transient as a tool to characterize and screen photosynthetic samples[END_REF]. Both parameters are plant stress indicators and provide indications of the overall plant fitness. The ratio (Fv/Fm) between variable chlorophyll fluorescence (Fv = Fm -F0) and maximum fluorescence (Fm) is the most used parameter to assess plant stress. Initial fluorescence (F0) is obtained from dark adapted samples and maximum fluorescence (Fm) is measured under a saturation pulse [START_REF] Maxwell | Chlorophyll fluorescence -a practical guide[END_REF][START_REF] Rohaçek | Chlorophyll fluorescence parameters: the definitions, photosynthetic meaning, and mutual relationships[END_REF]. PI is an integrative parameter that reflects the contribution to photosynthesis of the density of reaction centers and both the light and the dark reactions [START_REF] Poiroux-Gonord | Metabolism in orange fruits is driven by photooxidative stress in the leaves[END_REF]. All of the parameters were calculated from the measured fluorescence of leaves under saturating pulsed light (1 s at 3500 μmol m -2 s -1 ) after 20 min adaptation to the dark.
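As a worked example of the indicators used here, the Fv/Fm ratio follows directly from the dark-adapted F0 and Fm readings; the performance index needs the full O-J-I-P transient, so only its standard JIP-test form is recalled in a comment. The numbers are illustrative, not measured values.

```python
def fv_fm(f0: float, fm: float) -> float:
    """Maximum quantum yield of PSII photochemistry: Fv/Fm = (Fm - F0) / Fm."""
    return (fm - f0) / fm

# The JIP-test performance index combines three terms (Strasser's notation):
#   PI = (RC/ABS) * [phi_P0 / (1 - phi_P0)] * [psi_0 / (1 - psi_0)]
# with phi_P0 = Fv/Fm and psi_0 derived from the J step of the fluorescence rise.

print(fv_fm(f0=450.0, fm=2250.0))  # -> 0.80, typical of an unstressed leaf
```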
Total polyphenol and flavonoid contents
Harvested leaves were air-dried on absorbent paper at room temperature for 3 weeks. The samples were prepared by maceration at room temperature for 96 h in 20 ml of 50% ethanol (v/v) (Carlo Erba, Italy). This step was followed by ultrasonic extraction for 30 min. The samples were then filtered into a 20 ml volumetric flask and adjusted to volume with the same solvent. The total polyphenol content was determined according to paragraph 2.8.14 of the current European Pharmacopoeia (Ph. Eur., 2017): the absorbance was measured at 760 nm (Shimadzu 1650pc, Japan), and the results are expressed as pyrogallol (Riedel-de-Haën, Germany) equivalents in percent (g/100 g of dried plant sample). The total flavonoid content was determined according to the aluminum chloride colorimetric method of monograph number 2386 (safflower flower) from the current European Pharmacopoeia. The absorbance was measured at 396 nm (Shimadzu 1650pc), and the results are expressed as the percentage of total flavonoids relative to luteolin (C15H10O6; Mr 286.2).
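Both assays ultimately convert an absorbance reading into reference-compound equivalents. Below is a hedged sketch of that conversion, assuming a single-point external standard; the concentrations, volumes and absorbances are invented, and the Pharmacopoeia monographs define the exact factors to use in practice.

```python
def percent_equivalents(a_sample: float, a_standard: float,
                        c_standard_mg_ml: float, extract_volume_ml: float,
                        sample_mass_g: float, dilution: float = 1.0) -> float:
    """Content as reference-compound equivalents, g per 100 g of dried plant,
    from a single-point calibration: c_sample = c_std * (A_sample / A_std)."""
    c_mg_ml = c_standard_mg_ml * (a_sample / a_standard) * dilution
    total_g = c_mg_ml * extract_volume_ml / 1000.0
    return 100.0 * total_g / sample_mass_g

# Hypothetical run: 20 ml hydroethanolic extract from 0.5 g of dried leaves,
# read against a 0.05 mg/ml pyrogallol standard.
print(percent_equivalents(0.42, 0.35, 0.05, 20.0, 0.5))  # -> 0.24 g/100 g
```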
High-performance liquid chromatography (HPLC) analyses
The extraction was performed by mixing 10 g of dried leaves with 100 ml of CH2Cl2 (Carlo Erba, Italy) and introducing the mixture into a glass column in order to extract compounds by percolation with dichloromethane. After 18 h of maceration, 100 ml of dichloromethane extract was collected and evaporated to dryness. Next, 10 mg of dried extract were dissolved in 5 ml of methanol (Carlo Erba) and centrifuged. Then, 4 ml of the supernatant was brought to a final volume of 10 ml with distilled water. The solution was filtered through a 0.45-μm membrane. The analyses were performed using an Agilent 1200 series apparatus (G1379A degasser, G1313A autosampler, G1311A quaternary pump, G1316A column thermostat and G1315 B diode array detector (DAD)) (Agilent, Germany) with a Luna C18 adsorbent (3 μm, 150 mm × 4.6 mm) (Phenomenex, USA) and a SecurityGuard C18 column (3 mm ID × 4 mm cartridge) (Phenomenex). Instrument control, data acquisition and calculation were performed with ChemStation B.02.01. (Agilent). The mobile phase consisted of 52% MeOH (Carlo Erba) and 48% water (Millipore, Germany), and the pH of the mobile phase was 5.5. The flow rate was 1.0 ml/min. The detector was operated at 210 nm (sesquiterpene lactones absorption wavelength), and peaks were identified according to [START_REF] Garayev | New sesquiterpene acid and inositol derivatives from Inula montana L[END_REF]. The injection volume was 20 μl.
Statistical analysis
The principal component analysis (PCA) and non-parametric tests were performed using R (R Foundation, Austria). For multiple comparisons, the post hoc Kruskal-Wallis-Dunn test with the Bonferroni adjustment method was used. The R libraries used were FactoMineR, PMCMR, and multcompView. The data are displayed as means ± standard error of the mean and were considered significant at p < 0.05.
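The same analysis pipeline can be reproduced outside R. The sketch below uses scipy for the omnibus Kruskal-Wallis test and a hand-rolled Dunn statistic with Bonferroni adjustment (normal approximation, no tie correction; the PMCMR implementation used in the paper is more complete). The group values are dummy data.

```python
from itertools import combinations
import numpy as np
from scipy.stats import kruskal, norm

def dunn_bonferroni(groups: dict) -> dict:
    """Pairwise Dunn post hoc tests after Kruskal-Wallis.
    Returns Bonferroni-adjusted two-sided p-values; ties are ignored."""
    pooled = np.concatenate(list(groups.values()))
    ranks = pooled.argsort().argsort() + 1.0   # ranks of the pooled sample
    n = len(pooled)
    mean_rank, sizes, start = {}, {}, 0
    for name, values in groups.items():
        sizes[name] = len(values)
        mean_rank[name] = ranks[start:start + len(values)].mean()
        start += len(values)
    pairs = list(combinations(groups, 2))
    result = {}
    for a, b in pairs:
        se = np.sqrt(n * (n + 1) / 12.0 * (1.0 / sizes[a] + 1.0 / sizes[b]))
        z = abs(mean_rank[a] - mean_rank[b]) / se
        result[(a, b)] = min(1.0, 2.0 * (1.0 - norm.cdf(z)) * len(pairs))
    return result

groups = {"Murs": np.array([0.74, 0.75, 0.73, 0.76]),
          "Bonnieux": np.array([0.72, 0.71, 0.74, 0.73]),
          "Apt": np.array([0.80, 0.81, 0.79, 0.82])}
print(kruskal(*groups.values()))   # omnibus test across the three habitats
print(dunn_bonferroni(groups))     # adjusted pairwise comparisons
```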
Results
Pedoclimatic characterization of I. montana habitats
Among the three Luberon Park sites assessed, Murs and Bonnieux showed a similar climatic pattern in terms of temperature and precipitation (Fig. 1). In addition, the 20-year data suggested that both of these I. montana habitats experience a drought period centered on July (1-2 months long). The Apt habitat, which is 500-600 m higher than the two other sites (Supplemental file), showed a lower mean temperature and higher precipitation. Notably, Apt displayed no drought period according to the averaged data (Fig. 1), but it showed drier air throughout the year (Table 1).
The 3-year (2013-2015) satellite-based measurement of the global solar irradiation on the horizontal plane at ground level (GHI) (Fig. 2) showed a strong increase in irradiance from January to June, a stable amount of radiation from June to July and then a strong decrease until December. When investigating the irradiance in detail for the 3 I. montana populations, no difference was observed (GHI). However, when irradiance was estimated under clear sky, namely by virtually removing the clouds (Clear-Sky GHI), it appeared that the Apt site received ≈3% higher solar irradiation from May to July. Taken together, these results indicate that the cloudier Apt weather compensates on average for the higher solar irradiation at this altitude.
Considering the physical characteristics of the topsoil, Apt appeared clayey and loamy, whereas Murs and Bonnieux were much richer (6-12 times) in sand (Table 1). Concerning the chemical characteristics, the analysis of the topsoil aqueous extracts showed that all three growing sites appeared equally poor (Table 1). The Apt topsoil showed slightly lower levels of NH4, NO3, K, Ca, Mn and Zn than either of the other sites and also showed the lowest pH.
Impact of the geographic location and seasonal progress on I. montana morphology and physiology
Until autumn, I. montana plants from the Apt population showed a smaller leaf blade surface area than plants from the other sites (Fig. 3A). All three habitats displayed an intensive early-spring growth period, but by late spring the leaf surface area was 45% lower at the Apt site than at the other sites. The leaf area then remained quite stable at Apt for the remaining period, whereas it decreased progressively from late spring to autumn at the two lower-altitude sites (Murs and Bonnieux). There was no significant variation in the number of leaves over the season, with the exception of a slight difference during summer between Bonnieux and the two other habitats (Fig. 3B).
In addition, the geographic location of I. montana habitats seemed to influence both the length of the flowering stem and the number of glandular trichomes per leaf (Table 2). It appeared that I. montana plants from the Apt habitat showed a significantly shortened floral stem (≈-12%) and fewer leaf glandular trichomes (≈-23%) in comparison with the other sites (Murs and Bonnieux).
Dry and fresh weights increased from spring to summer and then decreased until autumn (Fig. 4A,B). When comparing the three I. montana habitats in terms of plant dry weight, they showed no difference during the overall season (Fig. 4A). Similarly, all plants showed essentially the same water content until late spring (Fig. 4C). Then, Apt plants displayed a water content of over 70% during the remaining seasons, while plants from the two low-elevation locations stayed below 65% (Bonnieux showed the lowest value in the summer: 55%).
Both indicators from chlorophyll a fluorescence measurements (the maximum quantum yield of primary photosystem II photochemistry and the performance index) showed slight decreases during the summer regardless of the geographic location of the plants (Fig. 5). The overall values then increased until autumn, but they did not return to their initial levels. In addition, I. montana plants from the Apt population showed higher Fv/Fm and PI values during the whole growing period than did plants in the two low-altitude habitats.
Phytochemical contents of I. montana according to geographic location
The amounts of total polyphenols and their flavonoid subclass during the overall period did not differ among the three habitats, with the exception of a lower level of polyphenols during late-spring for the plants in Bonnieux (Fig. 6A). For the three habitats, the total polyphenol level was 49% lower in autumn than in the early spring. The total flavonoids (Fig. 6B) showed an average increase of 56% from early spring to summer (higher level) but then decreased drastically thereafter (-68%).
We conducted high-performance liquid chromatography analysis on the leaves of the I. montana plants. The chromatograms (Fig. 7) showed 10 major peaks, in which we recently identified (by the external standard method) 5 sesquiterpene lactones [START_REF] Garayev | New sesquiterpene acid and inositol derivatives from Inula montana L[END_REF] respectively artemorin (p1), 9B-hydroxycostunolide (p2), reynosin (p3), santamarine (p5) and costunolide (p10). Other peaks were determined to be a mix of two flavonoids (Chrysosplenol C and 6-hydroxykaempferol 3,7-dimethyl ether; p4) and four inositol derivatives (myoinositol-1,5- diangelate-4,6-diacetate, myoinositol-1,6-diangelate-4,5-diacetate, myoinositol-1-angelate-4,5-diacetate-6-(2-methylbutyrate), myoinositol-1-angelate-4,5-diacetate-6-isovalerate; p6 to p9). The cross-location and cross-time relative quantification of the 5 sesquiterpene lactones (Table 3) suggested that I. montana plants from the low-altitude Murs and Bonnieux populations contained approximately three times more phytochemicals than plants from Apt. The data also showed that p1, p3 and p5 tended to accumulate throughout the seasons, unlike p2 and p10, which decreased. Lastly, p10 appeared to be the most abundant compound (roughly 50% more than the other lactones) regardless of the location or season.
Discussion
Inula montana morphology
I. montana plants exhibited shorter floral stems and a reduced leaf surface at high altitude (Apt; Table 2 and Fig. 3A). This is consistent with the tendency of many plants to shorten their organs during winter (Åström et al., 2015) or at high elevation due to low temperatures and strong wind speeds, as shown previously in three Asteraceae species [START_REF] Yuliani | The relationship between habitat altitude, enviromental factors and morphological characteristics of Pluchea indica, Ageratum conyzoides and Elephantopus scaber[END_REF]. This behavior limits dehydration and improves the photosynthetic conditions by keeping plant organs closer to the warmer soil surface [START_REF] Cabrera | Effects of temperature on photosynthesis of two morphologically contrasting plant species along an altitudinal gradient in the tropical high Andes[END_REF]. The seasonal modification of leaf morphology has also been shown to optimize photosynthetic capacity (Åström et al., 2015). The slightly lower nutrient availability at Apt (Table 1) may also contribute to the smaller organ sizes. In addition, the leaf surface of I. montana remained stable during the hot period at Apt, whereas it decreased at low altitude (Fig. 3B). This result is correlated with both the higher temperature and the drought period present at the low-elevation sites. Taken together, the data suggest that the plant morphological response is clearly adapted to both the climate and the location.
Inula montana displays two different trichome types on its leaves: hairy and glandular. Trichomes are well described as being plastic and efficient plant weapons against herbivory, notably through their high contents of protective secondary metabolites. Insect feeding can modify both the density and the content of trichomes [START_REF] Tian | Role of trichomes in defense against herbivores: comparison of herbivore response to woolly and hairless trichome mutants in tomato (Solanum lycopersicum)[END_REF]. Abiotic factors also strongly influence plant hairs; for example, dry conditions, high temperatures or high irradiation can increase the number of trichomes per unit leaf area [START_REF] Pérez-Estrada | Variation in leaf trichomes of Wigandia urens: environmental factors and physiological consequences[END_REF]. Conversely, trichome density decreases in the shade or in well-irrigated plants. In this context, water availability in the plant environment is an integrative factor [START_REF] Picotte | Temporal variation in moisture availability: consequences for water use efficiency and plant performance[END_REF]. Our data are consistent with this model, since plants from the Apt habitat (showing the highest altitude and precipitation but the lowest temperatures) displayed fewer glandular trichomes on their leaves than either of the other growing sites that suffered from drought periods (Table 2). These results also indicate that I. montana undergoes a stronger or at least a different type of stress at low altitude.
Inula montana physiology
It appeared that I. montana biomass increased from early spring to summer but then decreased, consistent with the hemicryptophytic strategy of this plant (dry weight, Fig. 4A). The location of the I. montana habitats had no effect on the dry weight but significantly influenced the plant water content, which markedly decreased during the summer at low elevation (Murs and Bonnieux; Fig. 4C). This is consistent with the expectation that low-elevation regions in the Mediterranean area would be hotter and drier than high-altitude regions, leading to more stressful conditions for plants [START_REF] Kofidis | Combined effects of altitude and season on leaf characteristics of Clinopodium vulgare L. (Labiatae)[END_REF][START_REF] Wolfe | Adaptation to spring heat and drought in northeastern Spanish Arabidopsis thaliana[END_REF]. The absence of an effect on the dry weight of I. montana here illustrates the strong variability of this xerophilous plant, namely its capacity to grow in various habitats and its ability to resist drought. Chlorophyll a fluorescence has been described as an accurate indicator of the plant response to environmental fluctuations and biotic stress [START_REF] Murchie | Chlorophyll fluorescence analysis: a guide to good practice and understanding some new applications[END_REF][START_REF] Guidi | Non-invasive tools to estimate stress-induced changes in photosynthetic performance in plants inhabiting Mediterranean areas[END_REF] and has gained interest in ecophysiological studies [START_REF] Åström | Morphological characteristics and photosynthetic capacity of Fragaria vesca L. winter and summer leaves[END_REF][START_REF] Perera-Castro | Light response in alpine species: different patterns of physiological plasticity[END_REF]. The maximum photochemical quantum yield of PSII (Fv/Fm) and the performance index (PI) reflect photooxidative stress and plant fitness [START_REF] Strasser | The fluorescence transient as a tool to characterize and screen photosynthetic samples[END_REF]. Fv/Fm values usually vary from 0.75 to 0.85 for non-stressed plants. Any decrease indicates a stressful situation, reducing the photosynthetic potential [START_REF] Maxwell | Chlorophyll fluorescence -a practical guide[END_REF]. In the Mediterranean climate, plant photoinhibition frequently occurs [START_REF] Guidi | Non-invasive tools to estimate stress-induced changes in photosynthetic performance in plants inhabiting Mediterranean areas[END_REF]. Below a certain limit of solar radiation, this protective mechanism allows the dissipation of excessive photosynthetic energy as heat [START_REF] Dos Santos | Seasonal variations of photosynthesis gas exchange, quantum efficiency of photosystem II and biochemical responses of Jatropha curcas L. grown in semi-humid and semi-arid areas subject to water stress[END_REF]. Here, both of the indicators (Fv/Fm and PI) displayed lower values at the low-elevation sites (Murs and Bonnieux; Fig. 5), confirming that I. montana was subjected to greater stress there. These results are in agreement with the observed drought periods at those sites and reflect the adaptive response of the plants to avoid photodamage under high temperature and drought stress in order to preserve their photosynthetic apparatus [START_REF] Poiroux-Gonord | Metabolism in orange fruits is driven by photooxidative stress in the leaves[END_REF]. It is not possible to easily correlate these results to the solar radiation because no difference was observed among the 3 habitats, as described above (Fig. 2). However, a similar study that focused on the combined effects of altitude and season on Clinopodium vulgare highlighted a decrease in Fv/Fm values in lowland populations at the beginning of a drought period [START_REF] Kofidis | Combined effects of altitude and season on leaf characteristics of Clinopodium vulgare L. (Labiatae)[END_REF].
Secondary metabolites
Plant secondary metabolites are well known for accumulating in response to environmental conditions that induce oxidative stress. Many studies have proposed that polyphenols might play a protective anti-oxidative role in plants [START_REF] Bartwal | Role of secondary metabolites and brassinosteroids in plant defense against environmental stresses[END_REF][START_REF] Bautista | Environmentally induced changes in antioxidant phenolic compounds levels in wild plants[END_REF]. Consequently, phenolics and other secondary metabolites usually accumulate under drought stress, salt stress [START_REF] Adnan | Desmostachya bipinnata manages photosynthesis and oxidative stress at moderate salinity[END_REF], high or low temperatures and at high altitude; this is exacerbated in Mediterranean plants [START_REF] Kofidis | Combined effects of altitude and season on leaf characteristics of Clinopodium vulgare L. (Labiatae)[END_REF][START_REF] Scognamiglio | Chemical composition and seasonality of aromatic Mediterranean plant species by NMR-Based Metabolomics[END_REF]. Phenology and plant development also strongly influence the concentrations of phenolic compounds [START_REF] Radušienė | Effect of external and internal factors on secondary metabolites accumulation in St. John's Worth[END_REF]. Our data showed that the I. montana leaf total polyphenol and flavonoid contents both varied over the season and reached their maximum values during late spring and summer (Fig. 6). This physiological behavior follows the seasonal solar radiation profile (Fig. 2) and is consistent with the well-described photoprotective role of polyphenols [START_REF] Agati | Multiple functional roles of flavonoids in photoprotection[END_REF]. It has long been known that the quantity of solar radiation increases with altitude [START_REF] Spitaler | Altitudinal variation of secondary metabolite profiles in flowering heads of Arnica montana cv[END_REF]. Accordingly, we expected a higher content of phenolics at high elevation due to the higher irradiance, including UV. However, the cloudier weather at the Apt site appeared to compensate for the theoretical 3% difference in sunshine between that site and the 2 other I. montana habitats (Fig. 2). As such, our results cannot explain the low polyphenol content observed in late spring at Bonnieux. Either way, it appears that the stress perceived by I. montana plants at low altitude is not due to the simple variation in solar radiation but rather to a significant susceptibility to drought stress and/or high temperatures.
Sesquiterpenes are an important group of organic compounds released by plants and are characteristic of the family Asteraceae. Most of them are volatile molecules used as hormones and for functions such as communication and defense against herbivory [START_REF] Rodriguez-Saona | The role of volatiles in plant-plant interactions[END_REF][START_REF] Chadwick | Sesquiterpenoids lactones: benefits to plants and people[END_REF]. In this work we have identified 5 sesquiterpene lactones that tend to accumulate in higher amounts in low-elevation habitats (Table 3). These compounds also showed quantities that were positively or negatively correlated with the seasonal progression. Sesquiterpene lactones are well described to follow a seasonal pattern and to accumulate in response to biotic and abiotic stresses [START_REF] Chadwick | Sesquiterpenoids lactones: benefits to plants and people[END_REF][START_REF] Sampaio | Effect of the environment on the secondary metabolic profile of Tithonia diversifolia: a model for environmental metabolomics of plants[END_REF]. Since these compounds play essential roles in the plant defense response, their accumulation under abiotic stress is consistent with the carbon balance theory, which states that the investment in plant defense increases in response to a growth limitation [START_REF] Mooney | Response of Plants to Multiple Stresses[END_REF]. However, in Arnica montana, no positive correlation between the production of these molecules and altitude was found [START_REF] Spitaler | Altitudinal variation of secondary metabolite profiles in flowering heads of Arnica montana cv[END_REF]. In addition, plant terpenoid release has been reported to be modulated by temperature, drought and UV radiation [START_REF] Loreto | Abiotic stresses and induced BVOCs[END_REF].
The topsoil at the low altitude sites (Murs and Bonnieux) was significantly richer in sand than that at Apt (Table 1). This confers a high draining capacity to the low-elevation sites that would inevitably increase the water deficiency and contribute to the drought stress perceived by the plants. Last, our data from aqueous extracts highlighted a slightly lower topsoil nutrient content at the Apt site (Table 1). Although the literature on this topic is scarce, and no information is available concerning N and K, some soil nutrients (namely P, Cu and Ca) can influence the plant sesquiterpene lactone content [START_REF] Foster | Influence of cultivation site on sesquiterpene lactone composition of forage chicory (Cichorium intybus L.)[END_REF][START_REF] Sampaio | Effect of the environment on the secondary metabolic profile of Tithonia diversifolia: a model for environmental metabolomics of plants[END_REF]. More broadly, deficiencies in nitrogen have been described to induce the accumulation of plant phenylpropanoids [START_REF] Ramakrishna | Influence of abiotic stress signals on secondary metabolites in plants[END_REF]. Apt also showed the lowest Ca content and pH. Although the values were only slightly higher at the two other sites, they may globally contribute to decreasing the availability of topsoil cations. We cannot exclude the possibility that this would also contribute to the stress on the plants at these locations, but it is not easy to make a connection with the plant phytochemical production.
Conclusion
The morpho-physiological characteristics of I. montana showed that the plant undergoes higher stress at its lower-altitude growing sites (Murs and Bonnieux). Four plant and environmental variables (chlorophyll fluorescence, plant water content, climate and topsoil draining capacity) converged to highlight site water availability as the primary source of stress. In addition, the sesquiterpene lactone production of I. montana was higher at these low-elevation, stress-inducing habitats.
The overall data are summarized in the principal component analysis (Fig. 8). The I. montana growing location (dimension 1) and the seasons (dimension 2) encompass more than 76% of the total variability, and the location itself exceeds 50%. The map confirms that plant stress (expressed as water content or Fv/Fm) and the subsequent release of sesquiterpene lactones (including 4 of the 5 compounds) are correlated to the integrative altitude parameter. The individual factor map (B) clearly discriminates the I. montana growing locations from the seasons and highlights the interaction of these two factors.
Dissecting the manner in which molecules of interest fluctuate in plants (in response to biotic and abiotic stress) is of great interest scientifically and economically [START_REF] Pavarini | Exogenous influences on plant secondary metabolite levels[END_REF]. The present study shows that growing habitats that induce plant stress, particularly drought stress, can significantly enhance the production of sesquiterpene lactones by I. montana. Similar approaches have been conducted with A. montana [START_REF] Spitaler | Altitudinal variation of secondary metabolite profiles in flowering heads of Arnica montana cv[END_REF][START_REF] Perry | Sesquiterpene lactones in Arnica montana: helenalin and dihydrohelenalin chemotypes in Spain[END_REF][START_REF] Clauser | Differences in the chemical composition of Arnica montana flowers from wild populations of north Italy[END_REF] and have provided valuable information and cultivation guidelines that helped with its domestication [START_REF] Jurkiewicz | Optimization of culture conditions of Arnica montana L.: effects of mycorrhizal fungi and competing plants[END_REF][START_REF] Sugier | Propagation and introduction of Arnica montana L. into cultivation: a step to reduce the pressure on endangered and highvalued medicinal plant species[END_REF]. Appropriate cultivation techniques driven by the ecophysiological study of A. montana have succeeded in influencing its sesquiterpene lactone content for medicinal use [START_REF] Todorova | Developmental and environmental effects on sesquiterpene lactones in cultivated Arnica montana L[END_REF]. The manipulation of environmental stress has also been described to significantly promote the phytochemical (phenolic) content of lettuce [START_REF] Oh | Environmental stresses induce healthpromoting phytochemicals in lettuce[END_REF] and halophytes [START_REF] Slama | Water deficit stress applied only or combined with salinity affects physiological parameters and antioxidant capacity in Sesuvium portulacastrum[END_REF][START_REF] Adnan | Desmostachya bipinnata manages photosynthesis and oxidative stress at moderate salinity[END_REF]. Literature regarding I. montana is very sparse. The present results bode well for our ongoing field-work that aims to simulate and test environmental levers to augment the secondary metabolism and to develop innovative culture methods for I. montana.
Our data also illustrate the high morpho-physiological variability of this calcicolous plant. High-altitude habitat appears to primarily impact the morphology of the plant, while low-elevation sites mostly induce physiological responses to stress (chlorophyll fluorescence, phytochemical synthesis). I. montana appears to grow well on south-facing sites possessing poor topsoil and low nutrient availability. It is also able to face high temperature and altitude gradients and to grow well on draining soil under a climate that induces drought stress.
Table 3. Cross-location and cross-time relative quantification of the 5 sesquiterpene lactones found in Inula montana leaves. "-" indicates the absence of the molecule, and "+", "++", and "+++" indicate its relative increasing abundance.
Fig. 1. Annual climographs (20 years of averaged data) of the three Inula montana study sites. The black line represents the mean temperature, and the hatched area represents the mean precipitation per month (rainfall or snow). Drought periods are symbolized by an asterisk.
Fig. 2. Monthly means of satellite-based global solar irradiation as measured on the horizontal plane at ground level (GHI). The solid lines (GHI) represent the actual terrestrial solar radiation; the dashed lines (Clear-Sky GHI) estimate the irradiation under a cloudless sky. The data represent the means of 3 years of records (2013-2015) for the 3 Inula montana populations (spatial resolution was 3-8 km). The asterisks indicate significant differences between sites at p < 0.05.
Fig. 3. Mean leaf blade surface area (A) and number of leaves (B) of Inula montana plants according to the geographic location and seasonal progress. The data represent the mean values of 10 plants ± standard error. The lowercase letters represent significant differences at p < 0.05.
Fig. 4. Dry weight (A), fresh weight (B) and water content (C) of Inula montana plants according to the geographic location and seasonal progress.
Fig. 6. Effect of the geographic location on Inula montana phytochemical contents. The data represent the contents of total polyphenols (A) and total flavonoids (B) as mean values of 10 plants ± standard error. The lowercase letters represent significant differences at p < 0.05.
Fig. 5. Effect of the geographic location on Inula montana photosystem II fluorescence. The data represent the mean Fv/Fm (A) and mean PI (B) values of 10 plants ± standard error. The lowercase letters represent significant differences at p < 0.05.
Fig. 7. HPLC chromatograms of Inula montana leaves harvested during the summer, according to the plant geographic location. S.l.: sesquiterpene lactone; Fl.: flavonoid; In.: inositol. Peaks were identified according to Garayev et al. (2017).
Fig. 8. Principal component analysis of the overall data. A: Variable factor map; the bold lines and squares show sesquiterpene lactones. B: Individual factor map with confidence ellipses (95%) around the descriptive variables.
Table 1. Pedoclimatic characterization of Inula montana habitats. Exp. Δ: expected theoretical range of element concentrations for a standard agricultural parcel.

                                           Murs          Bonnieux            Apt
Climate
  Mean temperature (°C)                    11.0          12.1                7.7
  Air moisture (%)                         66.5          69.7                54.4
  Annual precipitation (mm)                774           702                 928
Topsoil
  Composition (%) & texture                Sandy loam    Sandy clayey loam   Clayey loam
  Organic matter (from aqueous extract)    5.01          4.98                5.00
  Clay                                     6.4           16.1                18.0
  Sand                                     37.3          20.4                3.1
  Silt                                     56.3          63.5                78.8
Macro- and microelements (mg/kg from aqueous extract)                               Exp. Δ
  pH                                       8.3           7.8                 7.7
  NH4                                      3.96          2.56                2.09    4.0-8.0
  NO3                                      3.79          3.14                1.29    4.0-8.0
  K                                        5.4           5.4                 2.7     40-80
  PO4                                      0.2           0.2                 0.2     15-25
  Mg                                       2.5           2.5                 3.3     20-40
  Ca                                       96.2          71.8                68.7    100-200
  Fe                                       0.33          0.07                0.14    8.0-12.0
  Cu                                       0.01          0.01                0.01    0.30-0.50
  Mn                                       0.13          0.14                0.08    0.30-0.50
  Zn                                       0.09          0.08                0.07    0.30-0.50
  Bo                                       0.51          0.49                0.53    1.0-2.0
Acknowledgments
This work was supported by the French region Provence-Alpes-Côte d'Azur (project n°2013_13403), the Luberon Regional Natural Park and the TERSYS Research Federation of the University of Avignon. We thank Prof. Vincent Valles (Avignon University) for his advice on the statistics. We thank Didier Morisot, collections manager of the plant garden of the Faculty of Medicine of the University of Montpellier, for the I. montana identification. |
01765113 | en | [
"info.info-ro"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01765113/file/ROADEF2018_paper_060.pdf | Olivier Briant
email: [email protected]
Hadrien Cambazard
email: [email protected]
Diego Cattaruzza
email: [email protected]
Nicolas Catusse
email: [email protected]
Anne-Laure Ladier
email: [email protected]
Maxime Ogier
email: [email protected]
A column generation based approach for the joint order batching and picker routing problem
Keywords: order batching, picker routing, column generation
Introduction
Picking is the process of retrieving products from the inventory and is often considered a very expensive operation in warehouse management. A set of pickers perform routes in the warehouse, pushing a trolley and collecting items to prepare customer orders. However, customer orders usually do not fit the capacity of a trolley. They are typically grouped into batches, or on the contrary divided into subsets, with the aim of collecting all the orders while minimizing the walked distance. This problem is known as the joint order batching and picker routing problem [START_REF] Cristiano | Optimally solving the joint order batching and picker routing problem[END_REF].
This work presents an exponential linear programming formulation whose variables, or columns, correspond to single picking routes in the warehouse. More precisely, a column refers to a route involving a set of picking operations and satisfying the side constraints required at the trolley level, such as the mixing of orders or the capacity. Computing such a picking route is an intractable routing problem in general and, depending on the warehouse layout, can closely relate to the traveling salesman problem (TSP). The rationale of our approach is however to consider that the picking problem alone, in real-life warehouses, is easy enough in practice to be solved exactly. We apply this approach on two different industrial benchmarks, based on different warehouse layouts.
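To make the structure of this formulation concrete, here is one plausible algebraic sketch; the exact covering sense and the handling of split quantities are our assumptions, not statements from the authors. With R the set of feasible picking routes, d_r the length of route r, and a_lr the quantity of order line l collected by route r, the master LP and the reduced cost driving the pricing step can be written as:

```latex
\min \sum_{r \in \mathcal{R}} d_r \lambda_r
\quad \text{s.t.} \quad
\sum_{r \in \mathcal{R}} a_{lr} \lambda_r \ge Q_l \quad \forall l \in \mathcal{L},
\qquad \lambda_r \ge 0 \quad \forall r \in \mathcal{R};
\qquad
\bar{c}_r = d_r - \sum_{l \in \mathcal{L}} \pi_l a_{lr},
```

where the pi_l are the duals of the covering constraints. The pricing problem then searches for a feasible route with negative reduced cost, which is the prize-collecting TSP with capacity constraints discussed below.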
Problem specification and industrial applications
The warehouse layout is modeled as a directed graph G = (V, A) with two types of vertices, locations and intersections. Locations contain one or more product references to be picked.
Two typical examples of warehouse layouts are used as benchmarks in the present work.
-A regular rectangular layout made of vertical aisles and horizontal cross-aisles. Such a layout has been used by numerous authors in the past [START_REF] De Koster | Design and control of warehouse order picking : A literature review[END_REF] to define the order picking problem. It is the setup of the Walmart benchmark.
-An acyclic layout where pickers are not allowed to backtrack. It is another typical industrial setup where the flow is constrained in a single direction and an aisle must be entered and exited on the same side. It is the setup of the HappyChic benchmark.
Each product reference p ∈ P is characterized by its location in the warehouse and its size V_p^w in each dimension w ∈ W. A product reference may have several dimensions, such as weight and volume, and we refer to the set of dimensions as W.
An order from a customer is defined as a set of order lines. An order line l ∈ L is related to an order o and defined as a pair (p_l, Q_l), where p_l ∈ P is a product reference and Q_l is the number of items to pick. An order o ∈ O is a set of order lines L_o ⊆ L. Moreover, an order can be split in at most M_o boxes.
Order lines are collected by trolleys, each carrying a set of B boxes. A box has a capacity V^w in dimension w ∈ W, and an order line can be assigned to several boxes (the quantity Q_l of an order line l can be split among several boxes). A box is therefore filled with partial order lines. A partial order line l is a pair (p_l, Q̃_l) with Q̃_l ≤ Q_l. A box can only contain partial order lines from a single order.
A solution is a collection of routes R in the warehouse layout G. Each route r is travelled by a trolley which collects partial order lines into its boxes. The capacities of the boxes must be satisfied in each dimension w ∈ W. An order o ∈ O cannot be assigned to more than M_o boxes. Finally, all order lines must be picked with the required number of items. The objective is to minimize the total distance to perform all the routes in R.
The two industrial cases addressed in the present work, from Walmart and HappyChic, differ slightly. In particular, in the Walmart case, only one dimension is considered for a box, representing the maximum number of items it can hold. Additionally, an order must be picked entirely by a single trolley.
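A minimal data model for the specification above may help fix the notation. This is a sketch only: the class names and the box-feasibility rule are our reading of the text, and the per-order limit M_o is left to a solution-level check.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Product:
    ref: str
    location: int          # vertex of the warehouse graph G holding this reference
    size: dict             # dimension w -> item size V_p^w

@dataclass(frozen=True)
class OrderLine:
    order_id: str
    product: Product
    quantity: int          # Q_l items to pick

@dataclass
class Box:
    capacity: dict                                  # dimension w -> V^w
    content: list = field(default_factory=list)     # (OrderLine, picked quantity)

    def can_add(self, line: OrderLine, qty: int) -> bool:
        # A box may only hold (partial) lines of a single order ...
        if self.content and self.content[0][0].order_id != line.order_id:
            return False
        # ... and must respect its capacity in every dimension.
        for w, cap in self.capacity.items():
            used = sum(l.product.size[w] * q for l, q in self.content)
            if used + line.product.size[w] * qty > cap:
                return False
        return True
```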
A column generation based approach
In the industrial case of HappyChic, the picking takes place on an acyclic graph and thus boils down to an easy path problem. In Walmart's case, warehouses have the regular rectangular structure made of aisles and cross-aisles. In that case, dynamic programming algorithms can take advantage of that structure to efficiently solve the corresponding TSP when the warehouse contains up to eight cross-aisles, which is beyond the size of most real-life warehouses [START_REF] Cambazard | Fixed-Parameter Algorithms for Rectilinear Steiner tree and Rectilinear Traveling Salesman Problem in the plane[END_REF]. We therefore assume that in both cases, an efficient oracle is available to provide optimal picking routes in the warehouse.
We show that such an oracle allows for a very effective exponential LP formulation of the joint order batching and picking problem. The pricing problem can be seen as a prize-collecting TSP with a capacity constraint, and the pricing algorithm heavily relies on the picking oracle to generate cutting planes. A number of improvements are proposed to speed up the pricing. In particular, a procedure to strengthen the cutting planes is given when the distance function for the considered set of orders is submodular. For the industrial case of HappyChic, the graph is acyclic, so it is possible to propose a polynomial-size set of constraints that computes the distance exactly, instead of generating cutting planes.
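A compact sketch of the resulting loop is given below, with PuLP chosen purely for illustration (the abstract does not name a solver) and `best_route` standing in for the pricing oracle built on the exact picking solver; it assumes the initial routes already cover the demand and that line ids are plain strings or integers.

```python
import pulp

def column_generation(demand, initial_routes, best_route, max_iter=200):
    """Restricted master over picking routes; demand maps order-line id -> quantity.
    A route is a pair (distance, {line id: quantity picked})."""
    routes = list(initial_routes)
    rmp = None
    for _ in range(max_iter):
        rmp = pulp.LpProblem("master", pulp.LpMinimize)
        lam = [pulp.LpVariable(f"route_{i}", lowBound=0) for i in range(len(routes))]
        rmp += pulp.lpSum(dist * x for (dist, _), x in zip(routes, lam))
        for l, q in demand.items():
            rmp += (pulp.lpSum(picks.get(l, 0) * x
                               for (_, picks), x in zip(routes, lam)) >= q,
                    f"cover_{l}")
        rmp.solve(pulp.PULP_CBC_CMD(msg=False))
        duals = {l: rmp.constraints[f"cover_{l}"].pi for l in demand}
        reduced_cost, route = best_route(duals)   # prize-collecting TSP pricing
        if reduced_cost >= -1e-6:
            break                                 # no improving column: LP optimum
        routes.append(route)
    return rmp, routes
```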
The proposed formulation is compared experimentally on Walmart's benchmark and proves to be very effective, improving many of the best known solutions and providing very strong lower bounds. Finally, this approach is also applied to the HappyChic case, demonstrating its generality and interest for this application domain.
01546357 | en | [chim.mate] | 2004 | https://hal.science/hal-01546357v2/file/Jacques%20-%2028th%20ICACC%20-%20CB-S4-55%20for%20HALnew.pdf

S. Jacques, B. Bonnetot, M.-P. Berthet, H. Vincent

BN interphase processed by LP-CVD from tris(dimethylamino)borane and characterized using SiC/SiC minicomposites
SiC/BN/SiC 1D minicomposites were produced by infiltration of a Hi-Nicalon (from Nippon Carbon, Japan) fiber tow in a Low Pressure Chemical Vapor Deposition reactor.
Tris(dimethylamino)borane was used as a halogenide-free precursor for processing the BN interphase. This precursor protects the fiber and the CVD apparatus from chemical damage. FT-IR and XPS analyses have confirmed the boron nitride nature of the films. Minicomposite tensile tests with unload-reload cycles have shown that the minicomposite mechanical properties are good, with a high interfacial shear stress. Transmission electron microscopy observation of the interphase reveals that it is made of an anisotropic turbostratic material.
Furthermore, the fiber/matrix debonding, which occurs during mechanical loading, is located within the BN interphase itself.
INTRODUCTION
In SiC/SiC type ceramic matrix composites, a good toughness can be achieved by adding between the fiber and the brittle matrix a thin film of a compliant material called "interphase" [START_REF] Evans | The physics and mechanics of fibre-reinforced brittle matrix composites[END_REF]. Anisotropic pyrolytic boron nitride obtained from BF3/NH3/H2 mixture can play such a role. However, its processing by LP-CVD (Low Pressure Chemical Vapor Deposition) from BF3 requires protecting the fiber from gaseous chemical attack [START_REF] Rebillat | Oxidation resistance of SiC/SiC minicomposites with a highly crystallised BN interphase[END_REF] [START_REF] Jacques | SiC/SiC minicomposites with structure-graded BN interphases[END_REF]. Furthermore, the CVD apparatus is quickly deteriorated by the aggressive halogenated gases and expensive maintenance is needed. On the other hand, some authors have reported the use of a halogenide-free precursor: B[N(CH3)2]3 (tris(dimethylamino)borane, TDMAB) for CVD semiconductor h-BN film processing [START_REF] Dumont | Deposition and characterization of BN/Si(0 0 1) using tris(dimethylamino)borane[END_REF].
The aim of the present contribution was to prepare within one-dimensional minicomposites a BN interphase from TDMAB and to characterize this interphase and the properties of these SiC/BN/SiC minicomposites.
EXPERIMENTAL
SiC/BN/SiC minicomposites were produced by infiltration of the BN interphase within a Hi-Nicalon (from Nippon Carbon, Japan) fiber tow by LP-CVD in a horizontal hot-wall reactor (inner diameter: 24 mm) at a temperature close to 1100°C during 90 seconds. TDMAB vapor was carried by hydrogen through a bubbler at 30°C (TDMAB is liquid at this temperature and the vapor pressure is 780 Pa). The H2 gas flow rate was 15 sccm. NH3 was added to the gaseous source with a flow rate of 100 sccm in order to enhance nitrogen source and favor amine group stripping from the precursor and carbon suppression in the coating. A BN film was also deposited with the same conditions on a Si wafer for Fourier transform infrared (FT-IR) spectroscopy (Nicolet spectrometer, Model MAGNA 550, USA) and X-ray photoelectron spectroscopy (XPS) analyses (SSI model 301 spectrometer). The SiC matrix was classically infiltrated in the fiber tow from CH3SiCl3/H2 precursor gases at 950°C in a second LP-CVD reactor. In both cases, the total gas pressure in the reactors was as low as 2 kPa in order to favor infiltration homogeneity within the fiber tows.
The interphase thickness was about 150 nm and the fiber volume fraction was about 40 % (measured by weighing). The minicomposites were tensile tested at room temperature with unload-reload cycles using a machine (MTS Systems, Synergie 400, USA) equipped with a 2 kN load cell. The minicomposite ends were glued with an epoxy resin (Lam Plan, ref 607, France) in metallic tubes separated by 40 mm that were then gripped into the testing machine jaws. The crosshead speed was 0.05 mm/min. The strain was measured with an extensometer (MTS, model 634.11F54, USA) directly gripped on the minicomposite itself.
The extensometer gauge length was 25 mm. The total number of matrix cracks was counted by optical microscopy on polished longitudinal sections of the failed minicomposites after chemical etching (Murakami reactant) in order to reveal the matrix microcracks which had closed during unloading. The interfacial shear stress was then estimated from the last hysteresis loop recorded before failure by following the method described in reference [START_REF] Lamon | Microcomposite test procedure for evaluating the interface properties of ceramic matrix composites[END_REF].
Thin longitudinal sections of minicomposites were studied by transmission electron microscopy (TEM: Topcon 002B, Japan) after tensile test using bright-field (BF), high resolution (HR) and selected area electron diffraction (SAED) techniques. The samples were embedded in a ceramic cement (CERAMABOND 503, Aremco Products Inc., USA) and mechanically thinned. The thin sheets (~60 µm in thickness) were then ion-milled (GATAN PIPS, USA) to electron transparency.
RESULTS AND DISCUSSION
Only two absorption bands are seen on the transmittance FT-IR spectra, at 810 cm-1 and 1380 cm-1, typical of h-BN; OH bonds are not detected (Fig. 1).
At the film surface, the B/N atomic concentration ratio determined by XPS is close to one.
After ionic etching, the carbon content due to surface pollution decreases drastically; the nitrogen deficit is due to a preferential etching (Fig. 2). Both analyses confirm the BN nature of the films.
Figure 3 displays a typical force-strain curve for SiC/BN/SiC minicomposites. 588 matrix cracks were detected after failure along the 25 mm gauge length. The composites exhibit a non-brittle behavior: a non-linear domain evidencing matrix microcracking and fiber/matrix debonding follows the initial linear elastic region up to a high force at failure (170 N).
Therefore, (i) the BN interphase acts as a mechanical fuse and (ii) the Hi-Nicalon fibers were not damaged during the BN interphase processing from TDMAB. Furthermore, the calculated interfacial shear stress is 230 MPa. This value corresponds to a good load transfer between the matrix and the fibers and is as high as the best values obtained with BN interphases processed from classical halogenated gases [3] [6].
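As a back-of-the-envelope consistency check (our own arithmetic, not stated in the original), the 588 cracks counted over the 25 mm gauge length correspond to a mean crack spacing at saturation of

l̄ ≈ 25 mm / 588 ≈ 42.5 µm,

a short spacing consistent with the efficient fiber/matrix load transfer indicated by the high interfacial shear stress.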
TEM observation of minicomposite pieces after failure (Fig. 4) shows that the matrix crack deflections are preferentially localized within the BN interphase. Figure 4.a exhibits a thin matrix microcrack with a small opening displacement that is stopped within the interphase before reaching the fiber. In figure 4.b, a larger matrix crack, which has been widened by the ion-milling, is observed. In that case, some BN material remains bonded on both the fiber and the matrix. Thus, neither the interface with the fiber as in reference [START_REF] Naslain | Boron nitride interphase in ceramic-matrix composites[END_REF] nor the interface with the matrix is a weak link. The role of mechanical fuse is played by the boron nitride interphase itself. This feature agrees with the good interfacial shear stress measured for these minicomposites and corresponds to a strong fiber bonding characterized by a high strength and a high toughness [START_REF] Droillard | Strong interface in CMCs, a condition for efficient multilayered interphases[END_REF].
In figure 4.c, a crack is observed within the interphase. A higher magnification in HR mode (Fig. 4.e) reveals that the orientation of the 002 BN planes seems to influence the crack path: the crack and the lattice fringes have the same curvature. Furthermore, the existence of two distinct BN 002 diffraction arcs in the SAED pattern (Fig. 4.d) is due to a preferential orientation of the 002 planes parallel to the fiber axis. This structural anisotropy promotes the mode II crack propagation observed in the interphase.
CONCLUSION
A BN interphase was processed by LP-CVD within SiC/SiC minicomposites from tris(dimethylamino)borane, a halogenide-free precursor. The structure of the BN material is anisotropic and allows the matrix cracks to be deflected during mechanical damage. This interphase is strongly bonded to the fiber and plays the role of a mechanical fuse. The good mechanical properties of the composites allow considering TDMAB as a new alternative precursor to classical aggressive halogenated gases for LP-CVD boron nitride interphase processing.
Figure 1: Transmittance FT-IR spectra of the BN film.
Figure 2: XPS depth atomic concentration profiles for the BN film (sputter rate: 1-4 nm/min).
Figure 3: Tensile force-strain curve with unload-reload cycles for the minicomposites (for clarity only a few hysteresis loops are represented).
Figure 4: TEM observation of the SiC/BN/SiC minicomposite according to the BF mode (a), (b) and (c), the SAED technique (negative pattern of the Hi-Nicalon fiber and the interphase) (d) and the HR mode (e).
ACKNOWLEDGEMENT
The authors are grateful to G. Guimon from LPCM (University of Pau, France) for XPS analysis.
00176521 | en | [math.math-ap] | 2002 | https://hal.science/hal-00176521/file/DIE-sept02.pdf

Cyril Imbert

SOME REGULARITY RESULTS FOR ANISOTROPIC MOTION OF FRONTS

Keywords: AMS Subject Classifications: 35A21, 35B65, 35D99, 35J60, 35K55, 35R35.
We study the regularity of propagating fronts whose motion is anisotropic. We prove that there is at most one normal direction at each point of the front; as an application, we prove that convex fronts are C^{1,1}. These results are by-products of some necessary conditions for viscosity solutions of quasilinear elliptic equations. These conditions are of independent interest; for instance, they imply some regularity for viscosity solutions of nondegenerate quasilinear elliptic equations.
Introduction
Following [START_REF] Bellettini | Anisotropic motion by mean curvature in the context of Finsler geometry[END_REF][START_REF] Nochetto | Numerical analysis of geometric motion of fronts[END_REF], we study propagating fronts whose velocity field v Φ is given by the following geometric law:
v Φ = (κ Φ + g)n Φ ,
where n Φ and κ Φ are respectively the inward normal direction and the mean curvature associated with a Finsler metric Φ; g denotes a possible (bounded) driving force.
The main result of this paper states that under appropriate assumptions, there is at most one (outward or inward) "normal direction" at each point of the front.
In order to define the front past singularities, we use the level-set approach initiated by Barles [START_REF] Barles | Remark on a flame propagation model[END_REF] and developed by Osher and Sethian [START_REF] Osher | Fronts moving with curvature dependent speed: Algorithms based on hamilton-jacobi equations[END_REF]. This approach consists in describing the front Γ_t at time t as the zero level-set of a (continuous or discontinuous) function u: Γ_t = {x : u(x, t) = 0}. Choosing first a continuous function u_0 such that the initial front Γ_0 coincides with {x : u_0(x) = 0} (consider for instance the signed distance function to Γ_0), u turns out to be a solution of the following Hamilton-Jacobi equation:

∂u/∂t − Φ°(Du, x) tr[D_ζζΦ°(Du, x) D²u] + ⟨D_ζΦ°(Du, x), Du/|Du|⟩ + tr[D_ζxΦ°(Du, x)] + g(Du, x, t) = 0,   (1.1)

where Du and D²u denote the first and second derivatives in x of the function u and Φ° denotes the dual metric associated with Φ. This equation is known as the anisotropic mean curvature equation. It is solved by using viscosity solutions [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF]. The function u depends on the choice of u_0, but the front Γ_t does not, and neither do the two families of sets O_t = {x : u(x, t) > 0} and I_t = {x : u(x, t) < 0} [START_REF] Evans | Motion of level sets by mean curvature, I[END_REF][START_REF] Gang | Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations[END_REF][START_REF] Ishii | Generalized motion of noncompact hypersurfaces with velocity having arbitrary growth on the curvature tensor[END_REF]. The definition of the front is therefore consistent and the notions of "outside" and "inside" become precise.
The study of the normal directions reduces to the study of the semi-jets of discontinuous semisolutions of (1.1). This latter study is pursued by using necessary conditions derived for viscosity solutions of degenerate elliptic and parabolic quasilinear equations. Besides, these conditions are of independent interest. For instance, we derive from them regularity of viscosity solutions of nondegenerate quasilinear elliptic and parabolic equations.
The paper is organized as follows. In Section 2, we first give assumptions and recall definitions that are used in the paper. In particular, the Finsler metric and its dual are introduced and the definitions of normal directions and semi-jets are recalled. In Section 3, we state and prove our main results (Theorem 1 and Corollary 1). Finally, in Section 4, we present the necessary conditions used in the proof of Theorem 1.
Assumptions and definitions
In this section, we give assumptions and definitions that are used throughout the paper.
2.1. Anisotropic motion. In order to take into account the anisotropy and the inhomogeneity of the environment in which the front propagates, the metric induced by the Euclidean norm is replaced with a so-called Finsler metric. In our context, a Finsler metric Φ is the support function of a given compact set denoted by B_Φ°(x):
Φ(ζ, x) = max{ ⟨ζ, ζ*⟩ : ζ* ∈ B_Φ°(x) }.
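For instance (an illustration we add, not part of the original text), if the set B_Φ°(x) is the Euclidean unit ball, the metric reduces to the Euclidean norm:

B_Φ°(x) = {ζ* : |ζ*| ≤ 1}  ⟹  Φ(ζ, x) = max_{|ζ*| ≤ 1} ⟨ζ, ζ*⟩ = |ζ|.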
The set B Φ • (x) is referred to as the Wulff shape. Here are the assumptions we make concerning Φ and B Φ • (x).
A0. (i) The Wulff shape B Φ • (x) is a compact set that contains the origin in its interior and is symmetric with respect to it;
(ii) Φ ∈ C^2((R^n \ {0}) × R^n); (iii) for all x ∈ R^n, ζ ↦ [Φ(ζ, x)]^2 is strictly convex. For a given x ∈ R^n, the dual metric Φ° is defined as the support function of the set B_Φ(x) = {ζ ∈ R^n : Φ(ζ, x) ≤ 1}. Note that ζ ↦ Φ°(ζ, x) is a support function; indeed, a support function is convex and linear along half-lines issued from the origin. Consequently:

D_ζζΦ°(ζ, x) ⪰ 0 and ζ ∈ Ker D_ζζΦ°(ζ, x),
where ⪰ denotes the usual order associated with S^n, the space of n × n symmetric matrices. A second example of motion is the following: consider a (Riemannian) metric Φ(ζ, x) = Φ(ζ) = ⟨Gζ, ζ⟩^{1/2}, where G ∈ S^n is positive definite. The associated dual metric turns out to be Φ°(ζ*) = ⟨G^{-1}ζ*, ζ*⟩^{1/2}. Finally, let us give a third example in which the inhomogeneity of the environment is taken into account: Φ(ζ, x) = a(x)⟨Gζ, ζ⟩^{1/2}, where a ∈ C^2(R^n) and a(x) > 0 for all x ∈ R^n. The reader can check that in these three examples the kernel of D_ζζΦ°(ζ, x) coincides with Span{ζ}. We next assume that the Finsler metric verifies such a property.
A1. ∀x ∈ R n , ∀ζ ∈ R n \{0}, KerD ζζ Φ • (ζ, x) = Span{ζ}.
We also need the following additional assumption.
A2. There exists L > 0 such that for all x, y ∈ R^n and all ζ* ∈ R^n, |Φ°(ζ*, y) − Φ°(ζ*, x)| ≤ L |ζ*| |y − x|.
2.2. Semi-jets, P-subgradients and P-normals. We solve (4.1) and (4.2) by using viscosity solutions [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF]. In order to ensure the existence of a solution (using for instance results from [START_REF] Giga | Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains[END_REF][START_REF] Nochetto | Numerical analysis of geometric motion of fronts[END_REF]), we assume throughout the paper that the initial front is bounded. Unboundedness of the domain can be handled with results from [START_REF] Barles | Front propagation and phase field theory[END_REF]. The definition of viscosity solutions is based on the notion of semi-jets. Let Ω be a subset of R n and u be a numerical function defined on Ω and x be a point in Ω. A couple (X, p) ∈ S n × R n is a so-called subjet (resp. a superjet) of the function u at x (with respect to Ω) if for all y ∈ Ω :
(1/2)⟨X(y − x), y − x⟩ + ⟨p, y − x⟩ ≤ u(y) − u(x) + o(|y − x|²)   (2.2)

(resp. (1/2)⟨X(y − x), y − x⟩ + ⟨p, y − x⟩ ≥ u(y) − u(x) + o(|y − x|²)),   (2.3)
where o(.) is a function such that o(h)/h → 0 as h → 0 + . The set of all the subjets (resp. superjets) of u at x is denoted by J 2,- Ω u(x) (resp. by J 2,+ Ω u(x)). In order to define viscosity solutions for parabolic equations, one must use so-called parabolic semi-jets P 2,- Ω×[0,T ] u(x, t); see [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF] for their definition.
A vector p such that there exists X ∈ S^n with (X, p) ∈ J^{2,−}_Ω u(x) is a so-called P-subgradient [START_REF] Clarke | Nonsmooth Analysis and Control Theory[END_REF] of the function u:

∀y ∈ Ω, ⟨p, y − x⟩ ≤ u(y) − u(x) + O(|y − x|²).
The set of all such vectors is referred to as the proximal subdifferential of the function u and it is denoted by ∂_P u(x). Analogously, a proximal superdifferential (hence P-supergradients) can be defined by ∂^P u(x) = −∂_P(−u)(x). It coincides with the set of vectors p such that ∃X ∈ S^n : (X, p) ∈ J^{2,+}_Ω u(x). The geometry of a set Ω can be investigated by studying subjets of the function denoted by Zero_Ω, defined on Ω and identically equal to 0. The proximal subdifferential of this function coincides with the proximal normal cone of Ω at x [START_REF] Clarke | Nonsmooth Analysis and Control Theory[END_REF]:
N_P(Ω, x) = {p ∈ R^n : ∀y ∈ Ω, ⟨p, y − x⟩ ≤ O(|y − x|²)}.
An element of N P (Ω, x) is referred to as a P-normal. If p is a P-normal of Ω at x and λ is a nonnegative number, then λp is still a P-normal. From the geometrical viewpoint, one can say that N P (Ω, x) is a cone, that is to say it is made of half-lines issued from the origin. Crandall, Ishii and Lions [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF] proved that for a set with a C 2 boundary:
J^{2,−}_Ω Zero(x) = {(S(x) − Y, λn(x)) : λ ≥ 0, Y ⪰ 0},

where n(x) denotes the normal vector and S(x) denotes the second fundamental form extended to R^n by setting S = 0 along Span{n(x)}. The proximal normal cone is therefore reduced to R_+ n(x) = {λn(x) : λ ≥ 0}. If Ω is a hyperplane H and n ≠ 0 denotes a normal vector from H^⊥, then N_P(Ω, x) is the whole line Span{n}.
Main results
In this section, we state and prove our main results, namely Theorem 1 and Corollary 1. The proof of Theorem 1 relies on necessary conditions verified by solutions of possibly degenerate elliptic and parabolic quasilinear equations; these conditions are presented in Section 4.
Theorem 1. Consider a Finsler metric Φ satisfying A0, A1 and A2. Then the associated propagating front Γ t , t > 0, has at most one "outward normal direction" (resp. "inward normal direction"), that is to say the proximal normal cone at any point of I t ∪ Γ t or at any point of I t (resp. O t ∪ Γ t or O t ) is at most a line.
Remarks. 1. Assumptions A0 and A2 ensure the existence and uniqueness of the solution u of (1.1). Assumption A1 can be seen as a regularity assumption on the Frank diagram.
2. Theorem 1 remains valid if the front "fattens" (see [START_REF] Souganidis | Front propagation: Theory and applications[END_REF] for details about the fattening phenomenon).
Theorem 1 implies the regularity of convex fronts. See also Theorem 5.5 in [START_REF] Evans | Motion of level sets by mean curvature, III[END_REF]. Corollary 1. Let the metric Φ be independent of the position and such that A0, A1 are satisfied. Assume that the initial front Γ 0 is convex. Then the associated propagating front Γ t is also convex and is C 1,1 ; more precisely, I t ∪ Γ t and I t are convex and their boundary is C 1,1 .
Let us now prove these two results.
Proof of Theorem 1. Assumptions A0 and A2 ensure that the assumptions of Theorem 4.9 in [START_REF] Giga | Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains[END_REF] are satisfied. Hence, there exists a unique solution of (1.1). In order to prove Theorem 1, we must prove that, at a given point of the boundary of I_t, two P-normals p_1 and p_2 are collinear. Let us choose λ such that λp_1 + (1 − λ)p_2 ≠ 0. We know [START_REF] Barles | Front propagation and phase field theory[END_REF] that the function Zero_{I_t} is a supersolution of (1.1). By applying Proposition 1 (see Section 4), we obtain:

p_1 − p_2 ∈ Ker D_ζζΦ°(λp_1 + (1 − λ)p_2).

Using Assumption A1, we conclude that p_1 − p_2 is collinear with λp_1 + (1 − λ)p_2. We conclude that p_1 and p_2 are collinear.
We proceed analogously with the sets I t , O t ∪ Γ t and O t .
Proof of Corollary 1. The fact that the front is convex for any time t follows from Theorem 3.1 in [START_REF] Giga | Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains[END_REF]. Choosing for u_0 the opposite of the signed distance function to Γ_0, we ensure that the initial datum is Lipschitz and concave. Therefore, Theorem 2.1 in [START_REF] Nochetto | Numerical analysis of geometric motion of fronts[END_REF] implies that u is Lipschitz; this ensures that u has a sublinear growth. By applying Theorem 3.1 in [START_REF] Giga | Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains[END_REF], we know that x → u(x, t) is concave, hence I_t ∪ Γ_t and I_t are convex sets. The Hahn-Banach theorem ensures the existence of a normal in the sense of convex analysis. Such a normal is also a P-normal [START_REF] Clarke | Nonsmooth Analysis and Control Theory[END_REF]. Using the fact that Zero_{I_t} and Zero_{I_t ∪ Γ_t} are supersolutions of (1.1) (see for instance [START_REF] Barles | Front propagation and phase field theory[END_REF]), Theorem 1 implies that there is at most one P-normal direction. Hence there is exactly one normal in the sense of convex analysis and C^{1,1} regularity follows.
Necessary conditions for elliptic and parabolic quasilinear equations
In the present section, we state necessary conditions that are verified by viscosity sub-and supersolutions (hence by solutions) of quasilinear elliptic equations on a domain Ω ⊂ R n :
− Σ_{i,j=1}^n a_{i,j}(Du, u, x) ∂²u/∂x_i∂x_j + f(Du, u, x) = 0, ∀x ∈ Ω.   (4.1)
These equations may be degenerate and/or singular at Du = 0. We also study the associated parabolic equations on Ω × [0, T ] :
∂u/∂t − Σ_{i,j=1}^n a_{i,j}(Du, u, x, t) ∂²u/∂x_i∂x_j + f(Du, u, x, t) = 0, ∀(x, t) ∈ Ω × [0, T].   (4.2)

In the following, the n × n symmetric matrix with entries (a_{i,j}) is denoted by A. We assume that (4.1) and (4.2) are degenerate elliptic.
(E) For all p, u, x(, t), A(p, u, x(, t)) ⪰ 0.
In Propositions 1 and 2, we prove that the difference of two P -subgradients (resp. P -supergradients) of a supersolution (resp. of a subsolution) of (4.1) or (4.2) is a degenerate direction, that is to say it lies in the kernel of A.
Proposition 1 (The elliptic case). Consider a supersolution (resp. a subsolution) u of (4.1), a point x ∈ Ω and two subjets (X_i, p_i) ∈ J^{2,−}_Ω u(x), i = 1, 2 (resp. two superjets (X_i, p_i) ∈ J^{2,+}_Ω u(x), i = 1, 2). Then for any λ ∈ [0, 1] such that λp_1 + (1 − λ)p_2 ≠ 0, the following holds true:

p_1 − p_2 ∈ Ker A(λp_1 + (1 − λ)p_2, u(x), x).
A straightforward consequence of Proposition 1 is the following result dealing with nondegenerate equations.
Corollary 2. Suppose that the equation (4.1) is nondegenerate, i.e., ⟨A(p, u, x)q, q⟩ > 0 if q ≠ 0.
Then a solution u : Ω → R of (4.1) has "no corners", that is to say the function u has at most one P-subgradient and at most one P-supergradient at any point x ∈ Ω. This corollary applies for instance to the equation associated with the search of minimal surfaces:
div( Du / √(1 + |Du|²) ) = 0  ⟺  −Δu + ⟨D²u Du, Du⟩ / (1 + |Du|²) = 0.   (4.3)
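The equivalence in (4.3) can be checked directly (a short computation added here for completeness). Writing W = √(1 + |Du|²),

div(Du/W) = Δu/W − ⟨D²u Du, Du⟩/W³ = (1/W)( Δu − ⟨D²u Du, Du⟩/(1 + |Du|²) ),

so the divergence vanishes exactly when the right-hand side of (4.3) does. Moreover, the associated matrix A(p) = I − p ⊗ p/(1 + |p|²) has eigenvalues 1 and 1/(1 + |p|²), hence is positive definite and Corollary 2 indeed applies.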
Before proving Proposition 1, we state its parabolic version.
Proposition 2 (The parabolic case). Consider a supersolution (resp. a subsolution) u of (4.2), a point (x, t) ∈ Ω × [0, T] and two parabolic subjets (X_i, p_i, α_i) ∈ P^{2,−}_{Ω×[0,T]} u(x, t), i = 1, 2 (resp. two parabolic superjets (X_i, p_i, α_i) ∈ P^{2,+}_{Ω×[0,T]} u(x, t), i = 1, 2). Then for any λ ∈ [0, 1] such that λp_1 + (1 − λ)p_2 ≠ 0, the following holds true:

p_1 − p_2 ∈ Ker A(λp_1 + (1 − λ)p_2, u(x, t), x, t).
Corollary 3. Suppose that the equation (4.2) is nondegenerate, i.e., ⟨A(p, u, x, t)q, q⟩ > 0 if q ≠ 0.
Then a solution u : Ω × [0, T] → R of (4.2) has "no corners", that is to say the function u has at most one P-subgradient and at most one P-supergradient at any point (x, t) ∈ Ω × [0, T].
The Hamilton-Jacobi equation associated with the motion by mean curvature of graphs is an example of nondegenerate quasilinear parabolic equation:
∂u/∂t − Δu + ⟨D²u Du, Du⟩ / (1 + |Du|²) = 0.   (4.4)
A class of parabolic equations, including (4.4), is studied by a geometrical approach in [START_REF] Barles | Quasilinear parabolic equations, unbounded solutions and geometrical equations I[END_REF]. The proof of Proposition 1 relies on the following technical lemma.
Lemma 1. Consider an arbitrary set Ω and a function u : Ω → R. Let x be a point in Ω and (X_i, p_i), i = 1, 2, be two subjets of u at x. Then for any matrix X ∈ S^n such that X ⪯ X_i, i = 1, 2, any λ ∈ [0, 1] and any M > 0, the following holds true:

(X + M(p_1 − p_2) ⊗ (p_1 − p_2), λp_1 + (1 − λ)p_2) ∈ J^{2,−}_Ω u(x).
Let us show how Lemma 1 implies Proposition 1.
Proof of Proposition 1. Let X ∈ S^n be such that X ⪯ X_i for i = 1, 2 and consider any λ ∈ [0, 1] and any M > 0. By applying Lemma 1 to the supersolution u of (4.1) and by denoting p the vector λp_1 + (1 − λ)p_2 and q the vector p_1 − p_2, we conclude that: (X + M q ⊗ q, p) ∈ J^{2,−}_Ω u(x). As u is a supersolution of (4.1) and p ≠ 0, the following holds true:

−tr[A(p, u(x), x)(X + M q ⊗ q)] + f(p, u(x), x) ≥ 0.

Dividing by M and letting M → +∞ yields:

0 ≤ ⟨A(p, u(x), x)q, q⟩ = tr[A(p, u(x), x) q ⊗ q] ≤ 0.

The first inequality follows from the ellipticity of (4.1). We conclude that q ∈ Ker A(p, u(x), x).
If the function u is a subsolution, apply the lemma to the function -u and use it analogously.
One can easily give a parabolic version of this lemma and use it to prove Proposition 2. We omit these details and we turn to the proof of Lemma 1.
Proof of Lemma 1. By considering v(y) = u(x + y) − u(x), we may assume that x = 0 and u(x) = 0. Let us denote p = λp_1 + (1 − λ)p_2 and q = p_1 − p_2. A straightforward calculus shows that for any real number r such that |r| ≤ min(2λ/M, 2(1 − λ)/M):

(1/2) M r² ≤ max{(1 − λ)r, −λr}.

(Indeed, for r ≥ 0 the right-hand side equals (1 − λ)r and the inequality reduces to r ≤ 2(1 − λ)/M; for r < 0 it equals λ|r| and the inequality reduces to |r| ≤ 2λ/M.) Therefore, for any y such that |⟨q, y⟩| ≤ min(2λ/M, 2(1 − λ)/M), we get:

(1/2) M ⟨q, y⟩² ≤ max{(1 − λ)⟨q, y⟩, −λ⟨q, y⟩}.
Finally, for any y in a neighbourhood of the origin such that x + y ∈ Ω, we get:

(1/2)⟨(X + M q ⊗ q)y, y⟩ + ⟨p, y⟩ = (1/2)⟨Xy, y⟩ + (1/2) M ⟨q, y⟩² + ⟨p, y⟩
≤ max{ (1/2)⟨X_1 y, y⟩ + (1 − λ)⟨q, y⟩ + ⟨p, y⟩, (1/2)⟨X_2 y, y⟩ − λ⟨q, y⟩ + ⟨p, y⟩ }
= max{ (1/2)⟨X_1 y, y⟩ + ⟨p_1, y⟩, (1/2)⟨X_2 y, y⟩ + ⟨p_2, y⟩ }
≤ v(y) + o(|y|²),

since p + (1 − λ)q = p_1, p − λq = p_2 and (X_i, p_i) are subjets of v at 0. We have therefore proved that (X + M q ⊗ q, p) ∈ J^{2,−}_{Ω−x} v(0) = J^{2,−}_Ω u(x).

Remark. Using Lemma 1, necessary conditions can be derived for any general nonlinear elliptic equation F(D²u, Du, u, x) = 0 if (E) is satisfied and if X ↦ F(X, p, u, x) is positively homogeneous.
01765230 | en | [phys.meca.msmeca] | 2018 | https://hal.science/hal-01765230/file/1804.02388.pdf

Grigor Nika, Andrei Constantinescu

DESIGN OF MULTI-LAYER MATERIALS USING INVERSE HOMOGENIZATION AND A LEVEL SET METHOD
Keywords: Topology optimization, Level set method, Inverse homogenization, Multi-layer material
This work is concerned with the micro-architecture of multi-layer materials that globally exhibit desired mechanical properties, for instance a negative apparent Poisson ratio. We use inverse homogenization, the level set method, and the shape derivative in the sense of Hadamard to identify material regions and track boundary changes within the context of the smoothed interface. The level set method and the shape derivative obtained in the smoothed interface context allow us to capture, within the unit cell, the optimal micro-geometry. We test the algorithm by computing several multi-layer auxetic micro-structures. The multi-layer approach has the added benefit that contact between adjacent "branches" of the micro-structure during movement can be avoided, in order to increase its capacity to withstand larger stresses.
Introduction
A better understanding of the behavior of novel materials with unusual mechanical properties is important in many applications. As is well known, the optimization of the topology and geometry of a structure will greatly impact its performance. Topology optimization, in particular, has found many uses in the aerospace industry, automotive industry, and acoustic devices, to name a few. As one of the most demanding undertakings in structural design, topology optimization has undergone a tremendous growth over the last thirty years. Generally speaking, topology optimization of continuum structures has branched out in two directions. One is structural optimization of macroscopic designs, where methods like the Solid Isotropic Material with Penalization (SIMP) [START_REF] Bendsoe | Topology optimization: theory, methods and applications[END_REF] and the homogenization method [START_REF] Allaire | Shape Optimization by the Homogenization Methods[END_REF], [START_REF] Allaire | Shape optimization by the homogenization method[END_REF] were first introduced. The other branch deals with optimization of micro-structures in order to elicit a certain macroscopic response or behavior of the resulting composite structure [START_REF] Bendsoe | Generating optimal topologies in structural design using a homogenization method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF], [START_REF]Sigmund Materials with prescribed constitutive parameters: An inverse homogenization problem[END_REF], [START_REF] Wang | Level-set method for design of multi-phase elastic and thermoelastic materials[END_REF]. The latter will be the focal point of the current work.
In the context of linear elastic material and small deformation kinematics there is quite a body of work in the design of mechanical meta-materials using inverse homogenization. One of the first works in the aforementioned subject was carried out by [START_REF]Sigmund Materials with prescribed constitutive parameters: An inverse homogenization problem[END_REF]. The author used a modified optimality criteria method that was proposed in [START_REF] Rozvany | Layout and Generalized Shape Optimization by Iterative COC Methods[END_REF] to optimize a periodic micro-structure so that the homogenized coefficients attained certain target values.
On the same wavelength, the authors in [START_REF] Wang | Level-set method for design of multi-phase elastic and thermoelastic materials[END_REF] used inverse homogenization and a level set method coupled with the Hadamard boundary variation technique [START_REF] Allaire | Conception optimale de structures[END_REF], [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF] to construct elastic and thermo-elastic periodic micro-structures that exhibited certain prescribed macroscopic behavior for a single material and void. More recent work was also done by [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF], where again inverse homogenization and a level set method coupled with the Hadamard shape derivative were used to extend the class of optimized micro-structures in the context of the smoothed interface approach [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF]. Namely, for mathematical or physical reasons, a smooth, thin transitional layer of size 2ε, where ε is small, replaces the sharp interface between material and void or between two different materials. The theory that [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF] develop in obtaining the shape derivative is based on the differentiability properties of the signed distance function [START_REF] Delfour | Shapes and Geometries. Metrics, Analysis, Differential Calculus, and Optimization, Advances in Design and Control[END_REF] and it is mathematically rigorous.
Topology optimization under finite deformation has not undergone the same rapid development as in the case of small strains elasticity, for obvious reasons. One of the first works of topology optimization in non-linear elasticity appeared as part of the work of [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF] where they considered a non-linear hyper-elastic material of St. Venant-Kirchhoff type in designing a cantilever using a level set method. More recent work was carried out by the authors of [START_REF] Wang | Design of materials with prescribed nonlinear properties[END_REF], where they utilized the SIMP method to design non-linear periodic micro-structures using a modified St. Venant-Kirchhoff model.
The rapid advances of 3D printers have made it possible to print many of these micro-structures, which are characterized by complicated geometries; this in turn has given way to testing and evaluation of the mechanical properties of such structures. For instance, the authors of [START_REF] Clausen | Topology optimized architectures with programmable Poisson's ratio over large deformations[END_REF] 3D printed and tested a variety of the non-linear micro-structures from the work of [START_REF] Wang | Design of materials with prescribed nonlinear properties[END_REF] and showed that the structures, similar in form to the one in figure 1, exhibited an apparent Poisson ratio between -0.8 and 0 for strains up to 20%. Preliminary experiments by P. Rousseau [START_REF] Rousseau | Design of auxetic metamaterials[END_REF] on the printed structure of figure 1 showed that opposite branches of the structure came into contact with one another at a strain of roughly 25%, which matched the values reported in [START_REF] Clausen | Topology optimized architectures with programmable Poisson's ratio over large deformations[END_REF]. To go beyond the 25% strain mark, the author of [START_REF] Rousseau | Design of auxetic metamaterials[END_REF] designed a material where the branches were distributed over different parallel planes (see figure 2). The distribution of the branches on different planes eliminated contact of opposite branches up to a strain of 50%. A question remains whether or not the shape of the unit cell in figure 2 is optimal. We suspect that it is not; however, the novelty of the actual problem lies in its multi-layer character within the optimization framework of a unit cell with respect to two desired apparent elastic tensors. Our goal in this work is to design a multi-layer periodic composite with desired elastic properties. In other words, we need to specify the micro-structure of the material in terms of both its distribution and its topology. In section 2 we specify the problem setting, define the objective function that needs to be optimized and describe the notion of a Hadamard shape derivative. In section 3 we introduce the level set that is going to implicitly characterize our domain and give a brief description of the smoothed interface approach. Moreover, we compute the shape derivatives and describe the steps of the numerical algorithm. Furthermore, in Section 4 we compute several examples of multi-layer auxetic material that exhibit a negative apparent Poisson ratio in 2D. For full 3D systems the steps are exactly the same, albeit with a bigger computational cost. Notation. Throughout the paper we will be employing the Einstein summation notation for repeated indices. As is the case in linear elasticity, ε(u) will indicate the strain, defined by ε(u) = (1/2)(∇u + ∇uᵀ); the inner product between matrices is denoted by A : B = tr(AB) = A_ij B_ji. Lastly, the mean value of a quantity is defined as M_Y(γ) = (1/|Y|) ∫_Y γ(y) dy.
Problem setting
We begin with a brief outline of some key results from the theory of homogenization [START_REF] Allaire | Shape Optimization by the Homogenization Methods[END_REF], [START_REF] Bakhvalov | Homogenisation: averaging processes in periodic media: mathematical problems in the mechanics of composite materials[END_REF], [START_REF] Cioranescu | Introduction to Homogenization[END_REF], [START_REF] Mei | Homogenisation methods for multiscale mechanics[END_REF], [START_REF] Sanchez-Palencia | Non-homogeneous media and vibration theory[END_REF], that will be needed to set up the optimization problem. Consider a linear, elastic, periodic body occupying a bounded domain Ω of R^N, N = 2, 3, with period ε that is assumed to be small in comparison to the size of the domain. Moreover, denote by Y = (−1/2, 1/2)^N the rescaled periodic unit cell. The material properties in Ω are represented by a periodic fourth-order tensor A(y), with y = x/ε ∈ Y and x ∈ Ω, carrying the usual symmetries and positive definite:
A_ijkl = A_jikl = A_ijlk = A_klij for i, j, k, l ∈ {1, . . . , N}.

Figure 3. Schematic of the elastic composite material that is governed by eq. (2.1).
Denoting by f the body force and enforcing a homogeneous Dirichlet boundary condition, the description of the problem is:

−div σ^ε = f in Ω,
σ^ε = A(x/ε) ε(u^ε) in Ω,   (2.1)
u^ε = 0 on ∂Ω.
We perform an asymptotic analysis of (2.1) as the period ε approaches 0 by searching for a displacement u^ε of the form

u^ε(x) = Σ_{i=0}^{+∞} ε^i u^i(x, x/ε).
One can show that u^0 depends only on x and, at order ε^{−1}, we can obtain a family of auxiliary periodic boundary value problems posed on the reference cell Y. To begin with, for any m, ℓ ∈ {1, . . . , N} we define E^{mℓ} = (1/2)(e_m ⊗ e_ℓ + e_ℓ ⊗ e_m), where (e_k)_{1≤k≤N} is the canonical basis of R^N. For each E^{mℓ} we have
−div_y( A(y)(E^{mℓ} + ε_y(χ^{mℓ})) ) = 0 in Y,
y ↦ χ^{mℓ}(y) Y-periodic,
M_Y(χ^{mℓ}) = 0,

where χ^{mℓ} is the displacement created by the mean deformation equal to E^{mℓ}. In its weak form the above equation reads as follows:
Find χ^{mℓ} ∈ V such that

∫_Y A(y)(E^{mℓ} + ε(χ^{mℓ})) : ε(w) dy = 0 for all w ∈ V,   (2.2)

where V = {w ∈ W^{1,2}_per(Y; R^N) | M_Y(w) = 0}. Furthermore, matching asymptotic terms at order ε^0, we obtain the homogenized equations for u^0:

−div_x σ^0 = f in Ω,
σ^0 = A^H ε(u^0) in Ω,   (2.3)
u^0 = 0 on ∂Ω,

where A^H are the homogenized coefficients, which in their symmetric form read:
A^H_{ijmℓ} = ∫_Y A(y)(E^{ij} + ε_y(χ^{ij})) : (E^{mℓ} + ε_y(χ^{mℓ})) dy.

The goal is to find a micro-structure whose homogenized tensor matches prescribed target values; to this end we introduce the objective functional

J(S) = (1/2) ‖A^H − A^t‖²_η with S = (S_1, . . . , S_d),   (2.5)

where ‖·‖_η is the weighted Euclidean norm, A^t, written here component-wise, are the specified elastic tensor values, A^H are the homogenized counterparts, and η are the weight coefficients carrying the same type of symmetry as the homogenized elastic tensor. We define the set of admissible shapes contained in the working domain Y that have a fixed volume by

U_ad = { S_i ⊂ Y open, bounded, and smooth, such that |S_i| = V^t_i, i = 1, . . . , d }.

Thus, we can formulate the optimization problem as follows:

inf_{S ⊂ U_ad} { J(S) : χ^{mℓ} satisfies (2.2) }.   (2.6)
2.2. Shape propagation analysis. In order to apply a gradient descent method to (2.6) we recall the notion of shape derivative. As has become standard in the shape and topology optimization literature we follow Hadamard's variation method for computing the deformation of a shape. The classical shape sensitivity framework of Hadamard provides us with a descent direction. The approach here is due to [START_REF] Murat | Etudes de problmes doptimal design[END_REF] (see also [START_REF] Allaire | Conception optimale de structures[END_REF]). Assume that Ω 0 is a smooth, open, subset of a design domain D. In the classical theory one defines the perturbation of the domain Ω 0 in the direction θ θ θ as
(Id + θ)(Ω_0) := {x + θ(x) | x ∈ Ω_0}, where θ ∈ W^{1,∞}(R^N; R^N)
and it is tangential on the boundary of D. For small enough θ, (Id + θ) is a diffeomorphism in R^N. Otherwise said, every admissible shape is represented by the vector field θ. This framework allows us to define the derivative of a functional of a shape as a Fréchet derivative.
Definition 2.2.1. The shape derivative of J(Ω_0) at Ω_0 is defined as the Fréchet derivative in W^{1,∞}(R^N; R^N) at 0 of the mapping θ ↦ J((Id + θ)(Ω_0)):

J((Id + θ)(Ω_0)) = J(Ω_0) + J′(Ω_0)(θ) + o(θ), with lim_{θ→0} |o(θ)| / ‖θ‖_{W^{1,∞}} = 0,

and J′(Ω_0)(θ) a continuous linear form on W^{1,∞}(R^N; R^N).
Remark 1. The above definition is not a constructive computation for J′(Ω_0)(θ). There is more than one way to compute the shape derivative of J(Ω_0) (see [START_REF] Allaire | Conception optimale de structures[END_REF] for a detailed presentation). In the following section we compute the shape derivative associated to (2.6) using the formal Lagrangian method of J. Cea [START_REF] Céa | Conception optimale ou identification de formes: calcul rapide de la drive directionnelle de la fonction cout[END_REF].
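A standard illustration (our addition, not in the original): for the volume functional, the shape derivative concentrates on the boundary,

J(Ω_0) = ∫_{Ω_0} dx  ⟹  J′(Ω_0)(θ) = ∫_{∂Ω_0} θ · n ds,

where n is the outer unit normal; this is the prototype of the boundary integrals that appear in Theorem 3.1.1 below.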
Level set representation of the shape in the unit cell
Following the ideas of [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Wang | Level-set method for design of multi-phase elastic and thermoelastic materials[END_REF], the d sub-domains in the cell Y, labeled S_i, i ∈ {1, . . . , d}, can treat up to 2^d distinct phases by considering a partition of the working domain Y, denoted by F_j, j ∈ {1, . . . , 2^d}, and defined the following way:

F_1 = S_1 ∩ S_2 ∩ . . . ∩ S_d,
F_2 = S_1^c ∩ S_2 ∩ . . . ∩ S_d,
. . .
F_{2^d} = S_1^c ∩ S_2^c ∩ . . . ∩ S_d^c.

Define for i ∈ {1, . . . , d} the level sets φ_i:

φ_i(y) = 0 if y ∈ ∂S_i, φ_i(y) > 0 if y ∈ S_i^c, φ_i(y) < 0 if y ∈ S_i.
Moreover, denote by Γ_{km} = Γ_{mk} = F_m ∩ F_k, where k ≠ m, the interface boundary between the m-th and the k-th partition, and let Γ = ∪_{i,j=1, i≠j}^{2^d} Γ_{ij} denote the collective interface to be displaced. The properties of the material that occupies each phase F_j are characterized by an isotropic fourth-order tensor

A^j = 2μ_j I_4 + (κ_j − 2μ_j/N) I_2 ⊗ I_2, j ∈ {1, . . . , 2^d},
where κ j and µ j are the bulk and shear moduli of phase F j , I 2 is a second order identity matrix, and I 4 is the identity fourth order tensor acting on symmetric matrices.
Remark 2. The expression of the layer F_k, 1 ≤ k ≤ 2^d, in terms of the sub-domains S_i, 1 ≤ i ≤ d, is simply given by the representation of the number k in basis 2. For a number k, its representation in basis 2 is a sequence of d digits, 0 or 1. Replacing in position i the digit 0 with S_i and the digit 1 with S_i^c maps the representation in basis 2 to the expression of the layer F_k. In a similar way, one can express the subsequent formulas compactly. However, for the sake of simplicity we shall restrict the expressions of the development in the paper to d = 2 and 1 ≤ j ≤ 4.
Remark 3. At the interface boundary between the F_j's there exists a jump in the coefficients that characterize each phase. In the sub-section that follows we relax this sharp interface assumption and allow for a smooth passage from one material to the other, as in [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF].
3.1. The smoothed interface approach. We model the interface as a smooth, thin transition layer of width 2ε > 0 (see [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF]) rather than a sharp interface. This regularization is carried out in two steps: first each level set φ_i is re-initialized to become a signed distance function d_{S_i} to the interface boundary, and then an interpolation with a Heaviside-type function h_ε(t) is used to pass from one material to the next:

φ_i → d_{S_i} → h_ε(d_{S_i}).
The Heaviside function h_ε(t) is defined as:

h_ε(t) =
  0, if t < −ε,
  (1/2)(1 + t/ε + (1/π) sin(πt/ε)), if |t| ≤ ε,
  1, if t > ε.   (3.1)
Remark 4. The choice of the regularizing function above is not unique, it is possible to use other type of regularizing functions (see [START_REF] Wang | Color" level sets: A multiple multi-phase method for structural topology optimization with multiple materials[END_REF]).
The signed distance function to the domain S i , i = 1, 2, denoted by d S i is obtained as the stationary solution of the following problem [START_REF] Osher | Fronts propagating with curvature dependent speed: algorithms based on hamiltonjacobi formulations[END_REF],
∂d_{S_i}/∂t + sign(φ_i)(|∇d_{S_i}| − 1) = 0 in R_+ × Y,
d_{S_i}(0, y) = φ_i(y) in Y,   (3.2)

where φ_i is the initial level set for the subset S_i. Hence, the properties of the material occupying the unit cell Y are then defined as a smooth interpolation between the tensors A^j, j ∈ {1, . . . , 2^d}:
A^ε(d_S) = (1 − h_ε(d_{S_1}))(1 − h_ε(d_{S_2})) A^1 + h_ε(d_{S_1})(1 − h_ε(d_{S_2})) A^2 + (1 − h_ε(d_{S_1})) h_ε(d_{S_2}) A^3 + h_ε(d_{S_1}) h_ε(d_{S_2}) A^4,   (3.3)
where d_S = (d_{S_1}, d_{S_2}). Lastly, we remark that the volume of each phase is written as

∫_Y ι_k dy = V_k,
where ι k is defined as follows,
ι_1 = (1 − h_ε(d_{S_1}))(1 − h_ε(d_{S_2})),
ι_2 = h_ε(d_{S_1})(1 − h_ε(d_{S_2})),
ι_3 = (1 − h_ε(d_{S_1})) h_ε(d_{S_2}),
ι_4 = h_ε(d_{S_1}) h_ε(d_{S_2}).   (3.4)
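For concreteness, a minimal numerical sketch of (3.1), (3.3) and (3.4) follows (our illustration in Python/NumPy; array shapes and function names are assumptions on our part, the original computations were done in FreeFem++):

```python
import numpy as np

def h_eps(t, eps):
    """Regularized Heaviside h_eps of (3.1), applied elementwise to arrays."""
    t = np.asarray(t, dtype=float)
    out = np.where(t > eps, 1.0, 0.0)
    mid = np.abs(t) <= eps
    out[mid] = 0.5 * (1.0 + t[mid] / eps + np.sin(np.pi * t[mid] / eps) / np.pi)
    return out

def phase_weights(d1, d2, eps):
    """Interpolation weights iota_1..iota_4 of (3.4) from two signed distances."""
    h1, h2 = h_eps(d1, eps), h_eps(d2, eps)
    return ((1 - h1) * (1 - h2), h1 * (1 - h2), (1 - h1) * h2, h1 * h2)

def interpolated_tensor(d1, d2, eps, A1, A2, A3, A4):
    """Smooth stiffness interpolation A_eps of (3.3); the A_j can be, e.g.,
    3x3 Voigt matrices, and d1, d2 are signed-distance fields on the grid."""
    w = phase_weights(d1, d2, eps)
    return sum(wk[..., None, None] * Ak for wk, Ak in zip(w, (A1, A2, A3, A4)))
```

Note how the four weights sum to one at every point, so the interpolated tensor is a convex combination of the four phase tensors everywhere in Y.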
Remark 5. Once we have re-initialized the level sets into signed distance functions we can obtain the shape derivatives of the objective functional with respect to each sub-domain S i . In order to do this we require certain differentiability properties of the signed distance function.
Detailed results pertaining to the aforementioned properties can be found in [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF]. We encourage the reader to consult their work for the details. For our purposes, we will make heavy use of Propositions 2.5 and 2.9 in [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF] as well as certain results therein.
Theorem 3.1.1. Assume that S_1, S_2 are smooth, bounded, open subsets of the working domain Y and θ_1, θ_2 ∈ W^{1,∞}(R^N; R^N). The shape derivatives of (2.6) in the directions θ_1, θ_2 respectively are:

∂J/∂S_1(θ_1) = −∫_Γ θ_1·n_1 [ η_{ijkℓ}(A^H_{ijkℓ} − A^t_{ijkℓ}) A*_{mqrs}(d_{S_2})(E^{kℓ}_{mq} + ε_{mq}(χ^{kℓ}))(E^{ij}_{rs} + ε_{rs}(χ^{ij})) − h*(d_{S_2}) ] dy,

∂J/∂S_2(θ_2) = −∫_Γ θ_2·n_2 [ η_{ijkℓ}(A^H_{ijkℓ} − A^t_{ijkℓ}) A*_{mqrs}(d_{S_1})(E^{kℓ}_{mq} + ε_{mq}(χ^{kℓ}))(E^{ij}_{rs} + ε_{rs}(χ^{ij})) − h*(d_{S_1}) ] dy,

where, for i = 1, 2, A*(d_{S_i}), written component-wise above, denotes

A*(d_{S_i}) = A^2 − A^1 + h_ε(d_{S_i})(A^1 − A^2 − A^3 + A^4),   (3.5)
h*(d_{S_i}) = ℓ_2 − ℓ_1 + h_ε(d_{S_i})(ℓ_1 − ℓ_2 − ℓ_3 + ℓ_4),   (3.6)

and ℓ_j, j ∈ {1, . . . , 4}, are the Lagrange multipliers for the weight of each phase.
Proof. For each kℓ, we introduce the following Lagrangian for (u^{kℓ}, v, μ) ∈ V × V × R^{2^d} associated to problem (2.6):

L(S, u^{kℓ}, v, μ) = J(S) + ∫_Y A^ε(d_S)(E^{kℓ} + ε(u^{kℓ})) : ε(v) dy + μ · ( ∫_Y ι dy − V^t ),   (3.7)

where μ = (μ_1, . . . , μ_4) is a vector of Lagrange multipliers for the volume constraint, ι = (ι_1, . . . , ι_4), and V^t = (V^t_1, . . . , V^t_4).

Remark 6. Each variable of the Lagrangian is independent of one another and independent of the sub-domains S_1 and S_2.
Direct problem. Differentiating L with respect to v in the direction of some test function w ∈ V we obtain

∂L/∂v |_w = ∫_Y A_{ijrs}(d_S)(E^{kℓ}_{ij} + ε_{ij}(u^{kℓ})) ε_{rs}(w) dy;

upon setting this equal to zero we obtain the variational formulation (2.2).
Adjoint problem. Differentiating L with respect to u^{kℓ} in the direction w ∈ V we obtain

∂L/∂u^{kℓ} |_w = η_{ijkℓ}(A^H_{ijkℓ} − A^t_{ijkℓ}) ∫_Y A_{mqrs}(d_S)(E^{kℓ}_{mq} + ε_{mq}(u^{kℓ})) ε_{rs}(w) dy + ∫_Y A_{mqrs}(d_S) ε_{mq}(w) ε_{rs}(v) dy.

We immediately observe that the integral over Y on the first line is equal to 0, since it is the variational formulation (2.2). Moreover, if we choose w = v, then by the positive definiteness assumption on the tensor A as well as the periodicity of v, we obtain that the adjoint solution is identically zero, v ≡ 0.
Shape derivative. Lastly, we need to compute the shape derivative in the directions θ_1 and θ_2 for each sub-domain S_1, S_2 respectively. Here we carry out the computations for the shape derivative with respect to the sub-domain S_1; the calculations for the sub-domain S_2 are carried out in a similar fashion. We know (see [START_REF] Allaire | Conception optimale de structures[END_REF]) that

∂J/∂S_i(S) |_{θ_i} = ∂L/∂S_i(S, χ^{kℓ}, 0, λ) |_{θ_i} for i = 1, 2.   (3.8)

Hence,

∂L/∂S_1(θ_1) = η_{ijkℓ}(A^H_{ijkℓ} − A^t_{ijkℓ}) ∫_Y d′_{S_1}(θ_1) (∂A_{mqrs}/∂d_{S_1})(d_S)(E^{kℓ}_{mq} + ε_{mq}(u^{kℓ}))(E^{ij}_{rs} + ε_{rs}(u^{ij})) dy
+ ∫_Y d′_{S_1}(θ_1) (∂A_{ijrs}/∂d_{S_1})(d_S)(E^{kℓ}_{ij} + ε_{ij}(u^{kℓ})) ε_{rs}(v) dy
+ ℓ_1 ∫_Y (−d′_{S_1}(θ_1)) (∂h_ε(d_{S_1})/∂d_{S_1})(1 − h_ε(d_{S_2})) dy + ℓ_2 ∫_Y d′_{S_1}(θ_1) (∂h_ε(d_{S_1})/∂d_{S_1})(1 − h_ε(d_{S_2})) dy
+ ℓ_3 ∫_Y (−d′_{S_1}(θ_1)) (∂h_ε(d_{S_1})/∂d_{S_1}) h_ε(d_{S_2}) dy + ℓ_4 ∫_Y d′_{S_1}(θ_1) (∂h_ε(d_{S_1})/∂d_{S_1}) h_ε(d_{S_2}) dy.

The term on the second line is zero due to the fact that the adjoint solution is identically zero. Moreover, applying Proposition 2.5 and then Proposition 2.9 from [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF], as well as using the fact that we are dealing with thin interfaces, we obtain:

∂L/∂S_1(θ_1) = −η_{ijkℓ}(A^H_{ijkℓ} − A^t_{ijkℓ}) ∫_Γ θ_1·n_1 A*_{mqrs}(d_{S_2})(E^{kℓ}_{mq} + ε_{mq}(u^{kℓ}))(E^{ij}_{rs} + ε_{rs}(u^{ij})) dy
+ ℓ_1 ∫_Γ θ_1·n_1 (1 − h_ε(d_{S_2})) dy − ℓ_2 ∫_Γ θ_1·n_1 (1 − h_ε(d_{S_2})) dy
+ ℓ_3 ∫_Γ θ_1·n_1 h_ε(d_{S_2}) dy − ℓ_4 ∫_Γ θ_1·n_1 h_ε(d_{S_2}) dy,

where n_1 denotes the outer unit normal to S_1. Thus, if we let u^{kℓ} = χ^{kℓ}, the solution of the unit cell problem (2.2), and collect terms, the result follows.
Remark 7. The tensor A* in (3.5) as well as h* in (3.6) in the shape derivatives of Theorem 3.1.1 depend on the signed distance function in an alternating way, which provides an insight into the coupled nature of the problem. We further remark that, in the smooth interface context, the collective boundary Γ to be displaced in Theorem 3.1.1 is not an actual boundary but rather a tubular neighborhood.
3.2. The numerical algorithm. The result of Theorem 3.1.1 provides us with the shape derivatives in the directions θ_1, θ_2 respectively. If we denote by

v_1 = ∂J/∂S_1(S), v_2 = ∂J/∂S_2(S),
a descent direction is then found by selecting the vector fields θ_1 = v_1 n_1, θ_2 = v_2 n_2. Moving the shapes S_1, S_2 in the directions v_1, v_2 is done by transporting each level set φ_i, i = 1, 2, independently, by solving the Hamilton-Jacobi type equation

∂φ_i/∂t + v_i |∇φ_i| = 0, i = 1, 2.   (3.9)
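A standard way to discretize (3.9) is the first-order Osher-Sethian upwind scheme; the following sketch (our illustration, assuming a 2-D periodic grid of spacing dx, not code from the original) performs one explicit time step:

```python
import numpy as np

def advect(phi, v, dx, dt):
    """One explicit upwind step of d(phi)/dt + v |grad phi| = 0 (eq. (3.9)).
    phi, v: 2-D arrays on the periodic unit cell; stability requires the
    CFL condition dt <= dx / max|v|."""
    dxm = (phi - np.roll(phi,  1, axis=0)) / dx   # backward difference in x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx   # forward  difference in x
    dym = (phi - np.roll(phi,  1, axis=1)) / dx   # backward difference in y
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx   # forward  difference in y
    grad_p = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                     np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    grad_m = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                     np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
    return phi - dt * (np.maximum(v, 0) * grad_p + np.minimum(v, 0) * grad_m)
```

The Godunov-type switching between one-sided differences selects the characteristics consistently with the sign of the normal velocity v.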
Moreover, we extend and regularize the scalar velocity v_i, i = 1, 2, to the entire domain Y as in [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF], [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF]. The extension is done by solving the following problem for i = 1, 2:

−α² Δθ_i + θ_i = 0 in Y,
∇θ_i n_i = v_i n_i on Γ,
θ_i Y-periodic,

where α > 0 is a small regularization parameter. Hence, using the same algorithm as in [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF], for i = 1, 2 we have:
3.2.1. Algorithm. We initialize S_i^0 ⊂ U_ad through the level sets φ_i^0, defined as the signed distance functions of the chosen initial topology; then:
1. Iterate until convergence for k ≥ 0:
a. Calculate the local solutions χ^{mℓ}_k for m, ℓ = 1, 2 by solving the linear elasticity problem (2.2) on O^k := S_1^k ∪ S_2^k.
b. Deform the domain O^k by solving the Hamilton-Jacobi equations (3.9) for i = 1, 2. The new shape O^{k+1} is characterized by the level sets φ_i^{k+1}, solutions of (3.9) after a time step Δt_k, starting from the initial condition φ_i^k with velocity v_i^k computed in terms of the local problems χ^{mℓ}_k, for i = 1, 2. The time step Δt_k is chosen so that J(S^{k+1}) ≤ J(S^k).
2. From time to time, for stability reasons, we re-initialize the level set functions φ_i^k by solving (3.2) for i = 1, 2.
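In pseudocode form the loop might read as follows (our sketch; the helper callables stand in for the finite element solves, shape-gradient assembly, advection and re-initialization steps, which in the original were implemented in FreeFem++):

```python
def optimize(phi, helpers, n_iter=200, reinit_every=5, dt0=1.0):
    """Descent loop of 3.2.1; `phi` is the pair of level sets (phi_1, phi_2)
    and `helpers` bundles assumed callables for the individual sub-steps."""
    dt = dt0
    for k in range(n_iter):
        chi = helpers.solve_cell_problems(phi)        # step 1a: problems (2.2)
        v, J = helpers.velocities_and_cost(chi, phi)  # Theorem 3.1.1 + extension
        while True:                                   # step 1b with step control
            phi_new = [helpers.advect(p, vi, dt) for p, vi in zip(phi, v)]
            if helpers.cost(phi_new) <= J or dt < 1e-8:
                break
            dt *= 0.5                                 # enforce J(S^{k+1}) <= J(S^k)
        phi, dt = phi_new, min(2.0 * dt, dt0)
        if (k + 1) % reinit_every == 0:               # step 2: re-initialize, (3.2)
            phi = [helpers.reinitialize(p) for p in phi]
    return phi
```

The backtracking halving of the time step is one simple way to realize the monotone decrease of J required in step 1b.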
Numerical examples
For all the examples that follow we have used a symmetric 100 × 100 mesh of P1 elements. We imposed volume equality constraints for each phase. In the smooth interpolation of the material properties in formula (3.3), we set ε equal to 2Δx, where Δx is the grid size. The parameter ε is held fixed throughout (see [START_REF] Allaire | Multi-phase structural optimization via a level set method[END_REF] and [START_REF] Michailidis | Manufacturing Constraints and Multi-Phase Shape and Topology Optimization via a Level-Set Method[END_REF]). The Lagrange multipliers were updated at each iteration in the following way:

ℓ_j^{n+1} = ℓ_j^n − β ( ∫_Y ι_j^n dy − V_j^t ),

where β is a small parameter. Due to the fact that this type of problem suffers from many local minima that may not result in a shape, instead of putting a stopping criterion in the algorithm we fix, a priori, the number of iterations. Furthermore, since we have no knowledge of what volume constraints make sense for a particular shape, we chose not to strictly enforce the volume constraints for the first two examples. However, for examples 3 and 4 we use an augmented Lagrangian to actually enforce the volume constraints:

L(S, μ, β) = J(S) − Σ_{i=1}^4 μ_i C_i(S) + Σ_{i=1}^4 (1/2) β_i C_i²(S),
where C_i(S) are the volume constraints and β_i is a penalty term. The Lagrange multipliers are updated as before; however, this time we also update the penalty term β every 5 iterations. All the calculations were carried out using the software FreeFem++ [START_REF] Hecht | New development in FreeFem++[END_REF].
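The update rules just quoted can be sketched as follows (our illustration; `volume` is an assumed helper returning ∫_Y ι_j dy for the current level sets, and the growth factor is an assumption, not a value from the original):

```python
import numpy as np

def update_multipliers(mu, beta, phi, targets, it, grow=1.5, every=5):
    """One augmented-Lagrangian update: first-order multiplier step plus a
    periodic increase of the penalty, as described above (sketch only)."""
    C = np.array([volume(phi, j) for j in range(4)]) - targets  # residuals C_j(S)
    mu = mu - beta * C                 # multiplier update with small beta
    if (it + 1) % every == 0:
        beta = grow * beta             # tighten the penalty every 5 iterations
    return mu, beta
```
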
Remark 8. We remark that for the augmented Lagrangian we need to compute the new shape derivative that results. The calculations are similar to those of Theorem 3.1.1 and, therefore, we do not detail them here for the sake of brevity.
Example 1.
The first structure to be optimized is a multi-layer material that attains an apparent Poisson ratio of -1. The Young moduli of the four phases are set to E_1 = 0.91, E_2 = 0.0001, E_3 = 1.82, E_4 = 0.0001. Here phase 2 and phase 4 represent void, while phase 3 represents a material that is twice as stiff as the material in phase 1. The Poisson ratio of each phase is set to ν = 0.3 and the volume constraints were set to V^t_1 = 30% and V^t_3 = 4%.
ijkl       | 1111  | 1122  | 2222
η_ijkl     | 1     | 30    | 1
A^H_ijkl   | 0.12  | -0.09 | 0.12
A^t_ijkl   | 0.1   | -0.1  | 0.1

Table 1. Values of weights, final homogenized coefficients and target coefficients.

From figure 8 we observe that the volume constraint for the stiffer material did not adhere to the target volume; in this case the algorithm used roughly 16% of the material with Young modulus 1.82, while the volume constraint for the weaker material was more or less adhered to. From figure 11 we observe that the volume constraint for the stiffer material again did not adhere to the target volume; in this case the algorithm used roughly 15% of the material with Young modulus 1.82, while the volume constraint for the weaker material was more or less adhered to.

Table 3. Values of weights, final homogenized coefficients and target coefficients.

Again, just as in the previous two examples, we observe that the volume constraint for the stiffer material did not adhere to the target volume, even though for this example an augmented Lagrangian was used. In this case the algorithm used roughly 20% of the material with Young modulus 1.82, while the volume constraint for the weaker material was more or less adhered to. The fourth structure to be optimized is a multi-layer material that attains an apparent Poisson ratio of -0.5. An augmented Lagrangian was used to enforce the volume constraints for this example as well; the Lagrange multiplier was updated the same way as before, as was the penalty parameter β. The Young moduli of the four phases are set to E_1 = 0.91, E_2 = 0.0001, E_3 = 1.82, E_4 = 0.0001. The Poisson ratio of each material is set to ν = 0.3; this time, however, we require that the volume constraints be set differently.
Conclusions and Discussion
The problem of an optimal multi-layer micro-structure is considered. We use inverse homogenization, the Hadamard shape derivative and a level set method to track boundary changes, within the context of the smooth interface, in the periodic unit cell. We produce several examples of auxetic micro-structures with different volume constraints as well as different ways of enforcing the aforementioned constraints. The multi-layer interpretation suggests a particular way to approach the 3D printing of the micro-structures: the magenta material is essentially the cyan material layered twice, producing a small extrusion, with the process repeated several times. This multi-layer approach has the added benefit that some of the contact among the material parts is eliminated, thus allowing the structure to be compressed further than if the material were in the same plane.
The algorithm used does not allow "nucleations" (see [START_REF] Allaire | Structural optimization using sensitivity analysis and a level set method[END_REF], [START_REF] Wang | Level-set method for design of multi-phase elastic and thermoelastic materials[END_REF]). Moreover, due to the non-uniqueness of the design, the numerical results depend on the initial guess. Furthermore, the volume constraints also play a role in the final form of the design.
The results in this work are in the process of being physically realized and tested both for polymer and metal structures. The additive manufacturing itself introduces further constraints into the design process which need to be accounted for in the algorithm if one wishes to produce composite structures.
Figure 1. A 3D printed material with all four branches on the same plane achieving an apparent Poisson ratio of -0.8 with over 20% strain. Sub-figure (a) is the uncompressed image and sub-figure (b) is the image under compression. Used with permission from [18].
Figure 2. A 3D printed material with two of the branches on a different plane achieving an apparent Poisson ratio of approximately -1.0 with over 40% strain. Sub-figure (a) is the uncompressed image and sub-figure (b) is the image under compression. Used with permission from [18].
Figure 4. Perturbation of a domain in the direction θ.
Figure 5. Representation of the different materials in the unit cell for d = 2.
Figure 6. The design process of the material at different iteration steps: Young modulus of 1.82, Young modulus of 0.91, void.
Figure 7. On the left we have the unit cell and on the right the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio -1.
Figure 9. The design process of the material at different iteration steps: Young modulus of 1.82, Young modulus of 0.91, void.
Figure 10. On the left we have the unit cell and on the right the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio -1.
Figure 12. The design process of the material at different iteration steps: Young modulus of 1.82, Young modulus of 0.91, void.
Figure 13. On the left we have the unit cell and on the right the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio -0.5.
Figure 15. The design process of the material at different iteration steps: Young modulus of 1.82, Young modulus of 0.91, void.
Figure 16. On the left we have the unit cell and on the right the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio -0.5.
Table 2. Values of weights, final homogenized coefficients and target coefficients.

ijkl       1111    1122    2222
η_ijkl     1       30      1
A^H_ijkl   0.11    -0.09   0.12
A^t_ijkl   0.1     -0.1    0.1

Table 4. Values of weights, final homogenized coefficients and target coefficients.
Acknowledgments
This research was initiated during the sabbatical stay of A.C. in the group of Prof. Chiara Daraio at ETH, under the mobility grant DGA-ERE (2015 60 0009). Funding for this research was provided by the grant "MechNanoTruss", Agence National pour la Recherche, France (ANR-15-CE29-0024-01). The authors would like to thank the group of Prof. Chiara Daraio for the fruitful discussions. The authors are indebted to Grégoire Allaire and Georgios Michailidis for their help and fruitful discussions as well as to Pierre Rousseau who printed and tested the material in figure 1 & figure 2. |
01765261 | en | [ "info", "shs.info" ] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01765261/file/459826_1_En_15_Chapter.pdf
Jolita Ralyté
email: [email protected]
Michel Léonard
email: [email protected]
Evolution Models for Information Systems Evolution Steering
Keywords: Information Systems Evolution, IS evolution steering, IS evolution structure, IS evolution lifecycle, IS evolution impact
Sustainability of enterprise Information Systems (ISs) largely depends on the quality of their evolution process and the ability of the IS evolution steering officers to deal with complex IS evolution situations. Inspired by Olivé [1], who promotes conceptual schema-centric IS development, we argue that conceptual models should also be the centre of IS evolution steering. For this purpose we have developed a conceptual framework for IS evolution steering that contains several interrelated models. In this paper we present a part of this framework dedicated to the operationalization of IS evolution: the evolution metamodel. This metamodel is composed of two interrelated views, namely structural and lifecycle, which allow us to define respectively the structure of a particular IS evolution and its behaviour at different levels of granularity.
Introduction
No matter the type and the size of the organization (public or private, big or small), the sustainability of its Information Systems (ISs) is of prime importance to ensure its activity and prosperity. Sustainability of ISs largely depends on the quality of their evolution process and the ability of the officers handling it to deal with complex and uncertain IS evolution situations. Several factors make these situations complex, such as: the proliferation of ISs in the organization and their overlap, the independent evolution of each IS, the non-existence of tools supporting IS evolution steering, the various IS dimensions to be taken into account, etc. Indeed, during an IS evolution, not only the information dimension (the structure, availability and integrity of data) is at stake. IS evolution officers also have to pay attention to the activity dimension (the changes in enterprise business activity supported by the IS), the regulatory dimension (the guarantee of IS compliance with enterprise regulation policies), and the technology dimension (the implementation and integration aspects).
In this context, we claim that there is a need for an informational engineering approach supporting IS evolution steering, one that allows officers to obtain all the necessary information for an IS evolution at hand, to define and plan the evolution, and to assess its impact on the organization and its ISs. We found the development of such an approach on conceptual modelling, by designing a conceptual framework for IS evolution steering. Some parts of this framework were presented in [START_REF] Opprecht | Towards a framework for enterprise information system evolution steering[END_REF] and [START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF]. In this paper we pursue our work and present one of its components: the operationalization of the IS evolution through the metamodel of IS Evolution.
The rest of the paper is organized as follows: in section 2 we overview the context of our work -the conceptual framework that we are developing to support the IS evolution steering. Then, in section 3, we discuss the role and principles of conceptual modelling in handling IS evolution. In sections 4 and 5 we present our metamodel formalizing the IS evolution, its structural and lifecycle views, and illustrate their usage in section 6. Section 7 concludes the paper.
Context: A Framework for IS Evolution Steering
With our conceptual framework for IS evolution steering we aim to face the following challenges: 1) steering the IS evolution requires a thorough understanding of the underpinning IS domain, 2) the impact of IS evolution is difficult to predict, and simulation could help in taking evolution decisions, 3) the complexity of IS evolution is due to the multiple dimensions (i.e. activity, regulation, information, technology) to be taken into account, and 4) the guidance for IS evolution steering is almost non-existent, and therefore needs to be developed. As shown in Fig. 1, the framework contains several components, each of them taking into account a particular aspect of IS evolution steering and addressing the evolution challenges listed above. Let us briefly introduce these components.
The IS Steering Metamodel (IS-SM) is the main component of the framework; its role is to represent the IS domain of an enterprise. Concretely, it formalizes the way the enterprise ISs are implemented (their structure in terms of classes, operations, integrity rules, etc.), the way they support enterprise business and management activities (the definition of enterprise units, activities, positions, business rules, etc.), and how they comply with the regulations governing these activities (the definition of regulatory concepts, rules and roles). Although IS-SM is not the main subject of this paper (it was presented in [START_REF] Opprecht | Towards a framework for enterprise information system evolution steering[END_REF][START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF]), it remains the bedrock of the framework and needs to be presented for a better understanding of the other models and illustrations. IS-SM is also the kernel model for implementing an Informational Steering Information System (ISIS). ISIS is a meta-IS for steering enterprise ISs (an IS upon ISs according to [START_REF] Dinh | Towards a New Infrastructure Supporting Interoperability of Information Systems in Development: the Information System upon Information Systems[END_REF]). While enterprise ISs operate at the business level, ISIS performs at the IS steering level. Therefore, we depict IS-SM in Fig. 2 mainly to make this paper self-explanatory, and we invite the reader to look at [5] for further details.
The Impact Space component provides mechanisms to measure the impact of IS changes on the enterprise IS, on the business activities supported by these ISs, and on the compliance with regulations governing enterprise activities. The impact model of a particular IS evolution is defined as a part of the IS-SM including the IS-SM elements that are directly or indirectly concerned by this evolution. An IS-SM element is directly concerned by the evolution if its instances undergo modifications, i.e. one or more instances of this element are created, enabled, disabled, modified, or deleted. An IS-SM element is indirectly concerned by the evolution if there is no modification on its instances but they have to be known to make appropriate decisions when executing the evolution.
The Responsibility Space (Ispace/Rspace) component [START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF] helps to deal with responsibility issues related to a particular IS evolution. Indeed, each IS change usually concerns one or several IS actors (i.e. IS users) by transforming their information and/or regulation spaces (Ispace/Rspace). An IS actor can see her information/regulation space be reduced (e.g. some information is not accessible anymore) or in the contrary increased (e.g. new information is available, new actions has to be performed, new regulations has to be observed). In both cases the responsibility of the IS actor over these spaces is at stake. The Ispace/Rspace model is defined as a part of IS-SM. It allows for each IS evolution to create subsets of information, extracted from ISIS, that inform the IS steering officer how this evolution affects the responsibility of IS users.
Finally, the Evolution Steering Method provides guidelines to use all these aforementioned models when executing an IS evolution.
3 Modelling IS Evolution: Background and Principles
Background
Most of the approaches dealing with IS and software evolution are based on models and metamodels (e.g. [START_REF] Pons | Model evolution and system evolution[END_REF][START_REF] Burger | A change metamodel for the evolution of mof-based metamodels[END_REF][START_REF] Aboulsamh | Towards a model-driven approach to information system evolution[END_REF][START_REF] Kchaou | A mof-based change meta-model[END_REF][START_REF] Ruiz Carmona | TraceME: Traceability-based Method for Conceptual Model Evolution[END_REF]). They mainly address the structural aspects of IS evolution (for example, changing a hierarchy of classes, adding a new class) [START_REF] Pons | Model evolution and system evolution[END_REF], model evolution and transformations [START_REF] Burger | A change metamodel for the evolution of mof-based metamodels[END_REF], and the traceability of changes [START_REF] Kchaou | A mof-based change meta-model[END_REF][START_REF] Ruiz Carmona | TraceME: Traceability-based Method for Conceptual Model Evolution[END_REF]. They aim to support model-driven IS development, the automation of data migration, the evaluation of the impact of metamodel changes on models, the development of forward-, reverse-, and re-engineering techniques, the recording of model history, etc. The importance and impact of model evolution are also studied in [START_REF] Lehman | Software Evolution[END_REF], where the authors stress that understanding and handling IS evolution requires models, model evolution techniques, metrics to measure model changes and guidelines for taking decisions.
In our work, we also claim that the purpose of conceptual modelling in IS evolution steering is manifold: it includes understanding, building, deciding and realising the intended IS changes. As per [START_REF] Lehman | Evolution as a noun and evolution as a verb[END_REF], the notion of IS evolution has to be considered as a noun and as a verb. As a noun it refers to the question "what": the understanding of the IS evolution phenomenon and its properties. As a verb, it refers to the question "how": the theories, languages, activities and tools which are required to evolve software. Our metamodel for IS evolution steering (see Fig. 1) includes two complementary views, namely the structural and lifecycle views, and so serves to cope with complex IS artefacts, which usually have multiple views.
Models are also known to be a good support for decision making. In the case of IS evolution, there are usually several possible ways to realise it, each of them having a different impact on the enterprise ISs and even on its activities. Taking a decision without any appropriate support can be a difficult and very stressful task. Finally, with a set of models, the realisation of IS evolution is assisted in each evolution step and each IS dimension.
Principles of IS Evolution
The focus of the IS evolution is to transform a current IS schema (ASIS-IS) into a new one (TOBE-IS), and to transfer ASIS-IS objects into TOBE-IS objects. We use ISIS (see the definition in section 2), whose conceptual schema is represented by IS-SM (Fig. 2), as a support to handle IS evolution. Indeed, ISIS provides thorough, substantial information on the IS structure and usage, which, combined with other information outside of ISIS, is crucial for deciding which IS evolution to pursue. Furthermore, ISIS is the centre of the management and the execution of the IS evolution processes, both at the organizational and informatics levels. So, one main principle of IS evolution is always to consider these two interrelated levels: the ISIS and IS levels with their horizontal effects concerning only one level, and their vertical effects concerning both levels. In the following, to make a clear distinction between the IS and ISIS levels, we use the concepts of "class" and "object" at the IS schema level, and "element" and "instance" at the ISIS schema level.
IS evolution is a composition of transformation operations, of which the most simple ones are called atomic evolution primitives. Obtaining an initial list of atomic evolution primitives for an IS and its ISIS is simple: we have to consider all the elements of the ISIS schema and, for each of them, all the primitives usually defined over an element: Search, Create, Read, Update, Delete (SCRUD). In the case of IS-SM as ISIS schema, there are 53 elements and so 265 atomic evolution primitives. Since the aim of the paper is to present the principles of our framework for IS evolution steering, we simplify this situation by considering only the most difficult primitives, Create and Delete. Nevertheless, there are still 106 primitives to be considered. The proposed conceptual framework for IS evolution steering is going to help in facing this complexity.
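These counts can be checked with a short computation (a trivial illustration of the arithmetic, in Python):

ELEMENTS = 53                                # elements of the IS-SM schema
SCRUD = ["Search", "Create", "Read", "Update", "Delete"]
print(ELEMENTS * len(SCRUD))                 # 265 atomic evolution primitives
print(ELEMENTS * len(["Create", "Delete"]))  # 106 primitives when only Create and Delete are kept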
Structural View of IS Evolution
An IS evolution transforms a part ISP of the ASIS-IS into a part ISP' of the TOBE-IS, in such a way that the TOBE-IS is compliant with:
- the horizontal perspective: the instances of the new ISIS and the objects of the TOBE-IS validate all the integrity rules defined respectively over ISIS and TOBE-IS;
- the vertical perspective: the TOBE-IS objects are compliant with the instances of the new ISIS.
In a generic way, we consider that an overall IS evolution needs to be decomposed into several IS evolutions; the role of the structural view of the IS evolution model (shown in Fig. 3) thus consists in establishing the schema of each IS evolution as a composition of evolution primitives defined over IS-SM to pursue the undertaken IS evolution.
An evolution primitive represents a kind of elementary particle of an evolution: we cannot split it into several parts without losing qualities in terms of manageable changes and effects, robustness, smartness and performance, introduced in the following paragraphs. The most basic evolution primitives are the atomic evolution primitives: some of them, like Create, Delete and Update, are classic, while the others, Enable, Disable, Block and Unblock, are crucial for the evolution process.
Atomic Evolution Primitives
Since the ISIS schema (i.e. IS-SM) is built only by means of existential dependencies¹, the starting point of the IS evolution decomposition is very simple: it consists of a list of atomic primitives: create, delete, update, enable, disable, block and unblock an instance of any IS-SM element. We apply the same principle at the IS level, so the IS schema steered by ISIS is also built using only existential dependencies. Moreover, an instance/object is existentially dependent on its element/class.
These atomic primitives determine a set of possible states that any ISIS instance (as well as IS object) could have, namely created, enabled, blocked, disabled, and deleted. Fig. 4 provides the generic life cycle of an instance/object.
Once an instance is created, it must be prepared to be enabled, and so to be used at the IS level. For example, a created class can be enabled, and so have objects at the IS level, only if its methods validate all the integrity rules whose contexts contain it. A created instance can be deleted if it belongs to a stopped evolution. Enabled instances are disabled by an evolution when they do not play any role in the targeted TOBE-IS. They are not deleted immediately for two reasons. The first concerns the fact that data, operations, or rules related to them, which were valid before the evolution, stay consistent for situations where continuity is mandatory, for instance due to contracts. The second concerns the evolution itself: if it fails, it is necessary to come back to the ASIS-IS, and so to enable the disabled instances again.
Enabled instances are blocked during a phase of an evolution process when it is necessary to avoid their possible uses at the IS level through objects and/or execution of operations. At the end of this phase they are unblocked (re-enabled). For instance, an activity (an instance of the element Activity) can be blocked temporarily because of the introduction of a new business rule. Finally, when an instance is deleted, it disappears definitively.
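The permitted transitions can be summarised as a small state machine. The following Python sketch is our reading of Fig. 4 and of the rules above, not code from the framework:

from enum import Enum

class State(Enum):
    CREATED = 1
    ENABLED = 2
    BLOCKED = 3
    DISABLED = 4
    DELETED = 5

TRANSITIONS = {
    State.CREATED:  {State.ENABLED, State.DELETED},   # enable once consistent; delete if the evolution stops
    State.ENABLED:  {State.BLOCKED, State.DISABLED},  # block during an evolution phase, or disable
    State.BLOCKED:  {State.ENABLED},                  # unblock (re-enable) at the end of the phase
    State.DISABLED: {State.ENABLED, State.DELETED},   # re-enable if the evolution fails, else delete
    State.DELETED:  set(),                            # a deleted instance disappears definitively
}

def move(state, target):
    if target not in TRANSITIONS[state]:
        raise ValueError("illegal transition %s -> %s" % (state.name, target.name))
    return target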
Robust Generic Atomic Evolution Rules
The generic atomic evolution rules must be validated to ensure the consistency of the evolution process. Indeed, each atomic evolution primitive has effects on elements other than its targeted elements. For example, deleting an integrity rule has effects on the methods of several classes belonging to the context of this integrity rule. Below, we present two kinds of generic atomic evolution rules: the first concerns the horizontal and vertical evolution effects, while the second deals with the dynamic effects.
Evolution Effects Horizontally and Vertically. An evolution primitive is firstly an atomic operation on the ISIS. So, it must verify the integrity rules defined over the IS-SM model to manage the horizontal effects. For example, if an instance cl of the element Class is deleted, then all the instances clo i of the element Class Operation related with cl must be deleted due to the existential dependency between these two elements (see Fig. 2).
An evolution primitive is also an operation on the IS and has to manage the vertical effects of the conformity rules between ISIS instances and IS objects. For example, deleting cl induces also deleting all its objects in the IS.
Then, since the evolution operations on IS are executed from ISIS, they validate the integrity rules defined over IS, which are instances of ISIS.
Generic Dynamic Evolution Rules. The generic evolution rules concern the states of the ISIS elements produced by the use of atomic evolution primitives (Fig. 4): created, enabled, blocked, disabled, deleted, and especially the interactions between instances of different elements in different states. They must be observed only at the ISIS level.
Some generic rules concerning the states "created" and "deleted" are derived directly from the existentially dependencies. Considering the element Einf depending existentially on the element Esup, any instance of Einf may be in the state "created" only if its associated instance esup of Esup is in the state "created", and it must be in the state "deleted" if esup is in the state "deleted".
The generic rules concerning the states "blocked" and "disabled" require considering another relation between the IS-SM elements, called "determined by", defined at the conceptual level and not at the instance level. An element Esecond is strictly/weakly determined by the element Efirst if any instance esecond, to be exploitable in the IS, must/can be associated to one or several instances efirst.
Then there is the following generic dynamic rule: any instance esecond must be in the state disabled/blocked/deleted if at least one of its efirst is in the state respectively disabled/blocked/deleted.
For instance, the element Operation is strictly determined by the element Class, because any operation to be executed at the IS level must be associated to at least one class (see Fig. 2). Then, if an operation is associated to one class in the state disabled/blocked, it also must be in the state disabled/blocked, even if it is also associated to other enabled classes.
The element Integrity Rule is weakly determined by the element Business Rule because integrity rules are not necessarily associated with a business rule. In the same way, all elements, like Class, associated with the Regulatory Element are weakly determined by it, because their instances are not necessarily associated to an instance of Regulatory Element.
Considering the following elements of the IS-SM models (see Fig. 2): Person, Position, Business Process (BP), Activity, Business Rule (BR), Role, Operation, Class, Integrity Rule (IR), Regulatory Element (RE), here is the list of relations strictly determined by (=>): BP => Activity, BR => Activity, Operation => Class, Operation => IR, IR => Class. The list of the relations weakly determined by (->) (in addition to the aforementioned ones with the Regulatory Element) is: IR -> BR, IR -> RE, Class -> RE, Operation -> RE, Role -> RE, BR -> RE, Activity -> RE, Position -> RE, Event -> RE, BP -> RE.
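As an illustration only (a hypothetical helper reusing the State enumeration sketched earlier, not part of IS-SM), the propagation along the "determined by" relation can be checked mechanically; we assume here that deleted takes precedence over blocked, and blocked over disabled, a point the rule leaves open:

def forced_state(esecond_state, efirst_states):
    # efirst_states: states of the instances that strictly determine the given esecond
    for dominant in (State.DELETED, State.BLOCKED, State.DISABLED):
        if dominant in efirst_states:
            return dominant      # esecond must follow the strongest constraint among its efirst
    return esecond_state         # otherwise esecond keeps its own state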
Robustness. Every aforementioned evolution primitive is robust if it manages all its horizontal and vertical effects and respects all the generic dynamic evolution rules. The use of only existential dependencies at both levels, IS and ISIS, facilitates reaching this quality in our approach. Nevertheless, at the IS level, such an approach requires that the whole IS schema (including the static, dynamic and integrity rule perspectives) be easily evolvable, and the IT system supporting the IS (e.g. a database management system) must provide an efficient set of evolution primitives [START_REF] Andany | Management of schema evolution in databases[END_REF].
Composite Evolution Primitives
The composite primitives are built by composition of the atomic ones (Fig. 3). They are necessary to consider IS evolution at the management level [START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF], but also for informational and implementation purposes. For instance, replacing an integrity rule by a new one can be considered logically equivalent to deleting it and then creating the new one. But this logic is not pertinent if we consider the managerial, IS exploitation and implementation perspectives. It is much more efficient to build a composite evolution primitive "replace" from the atomic primitives "create" and "delete".
A composite evolution primitive is robust, if it manages all its horizontal and vertical effects and respects all the generic dynamic evolution rules.
Managerial Effects
The managerial effects consider the effects of the IS evolution at the human level, and so concern the IS-SM elements Role, Activity, Position and Person. The evolution steering officers have to be able to assess whether the proposed evolution has a harmful effect on organization's activities or not, and to decide to continue or not this evolution. The evolution primitives are smart if they alert these levels by establishing a report of changes to all the concerned roles, activities, positions, and persons. To do that, they will use the responsibility space (Fig. 1) with its two sub-spaces: its informational space (Ispace) and its regulatory space (Rspace). This part was presented in [START_REF] Ralyté | Defining the responsibility space for the information systems evolution steering[END_REF]. Below in the paper, all primitives are smart.
Lifecycle View of IS Evolution
Evolution of an information system is generally a delicate process for an enterprise, for several reasons. First, it cannot be realized by stopping the whole IS, because all the activities supported by the IS would be stopped, which is unthinkable in most cases. Second, it has impacts, especially on actors and on the organization of activities; it can even induce the need to reorganize the enterprise. Third, it takes time and often requires setting up a process of adaptation to the changes for all concerned actors, to enable them to perform their activities. Moreover, it concerns a large informational space of IS-SM and needs to be decomposed into partial evolutions called sub-evolutions; it therefore requires a coordination model to synchronize the processes of all these sub-evolutions as well as the process of the main evolution. Furthermore, it is a long process, with a large number of actors who work inside the evolution process or whose activities are changed by the evolution. Finally, most evolutions of ASIS-IS into TOBE-IS are nearly irreversible, because it is practically impossible to transform the TOBE-IS back into the ASIS-IS, for at least two main reasons: (1) some evolution primitives used by the evolution can be irreversible themselves (e.g. the case of an existing integrity rule relaxed by the evolution), and (2) actors, and even a part of the enterprise, can be completely disoriented by going back to the ASIS-IS after all the efforts they have made to adapt to the TOBE-IS. A decision to perform an evolution must therefore be very well prepared to decrease the risks of failure. For this purpose, we explore a generic lifecycle of an evolution, first at the atomic primitive level, then at the composite primitive level and finally at the evolution level.
An atomic primitive can be performed stand-alone in two steps: (1) preparation and (2) execution or abort. They are defined as follows:
- Preparation: prepares the disabling list of ISIS instances and IS objects to be disabled in case of success, the creating list of ISIS instances and IS objects to be created in case of success, the list of reports of changes, and the blocking list of ISIS instances;
- Execution: sends the reports of changes, blocks the concerned IS parts, effectively disables/creates the contents of the disabling/creating lists, then unblocks the blocked IS parts.
The work done at the preparation step serves to decide whether the primitive should be executed or aborted. Finally, the execution of the primitive can succeed or fail. For example, it fails if it cannot block an IS part. As an example let us consider the deletion of a role: its creating list is empty and its disabling list contains all the assignments of operations to this role. Blocking these assignments signifies these operations cannot be executed by means of this role during the deletion of the role. It can fail if an IS actor is working through this role.
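Schematically, the two-step discipline can be rendered as follows (the interface is hypothetical; a concrete primitive such as "Delete Role" would supply the four lists):

def run_atomic_primitive(prim, isis):
    # step 1: preparation -- nothing on the IS or the ISIS is modified yet
    disabling = prim.disabling_list(isis)  # ISIS instances and IS objects to disable on success
    creating = prim.creating_list(isis)    # ISIS instances and IS objects to create on success
    reports = prim.change_reports(isis)    # managerial alerts (roles, activities, positions, persons)
    blocking = prim.blocking_list(isis)    # IS parts to block during execution
    # decision point: the steering officer may abort on the basis of the prepared lists
    if not prim.approved(disabling, creating, reports):
        return "aborted"
    # step 2: execution
    prim.send(reports)
    if not isis.block(blocking):
        return "failed"                    # e.g. an actor is currently working through a blocked role
    for x in disabling:
        isis.disable(x)
    for x in creating:
        isis.create(x)
    isis.unblock(blocking)
    return "succeeded"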
In the case of the atomic evolution primitive "Create an instance Cl of Class", the preparation step defines:
- how to fill the new class with objects;
- how to position it in the IS schema by linking Cl to other IS classes by means of existential dependencies;
- how to alert the managers of Role, Operation and Integrity Rule about the creation of Cl.
Besides, it is important to create together with Cl its methods and attributes, and even the association classes between Cl and other IS classes. For that, we need a more powerful concept, the composite evolution primitive, as presented below.
A composite primitive is composed of other composite or atomic primitives, which builds a hierarchy of primitives. The top composite primitive is at the root of this hierarchy; the atomic primitives are at its leaves. Every composite primitive has a supervision step, which controls the execution of all its subprimitives. Only the top composite primitive has in addition a coordination step, which takes the same decision to enable or abort for all its sub primitives in the hierarchy. The main steps of a composite primitive life cycle are:
- Preparation: creates all direct sub-primitives of the composite primitive;
- Supervision: determines the impacts and the managerial effects from the enabling lists and the creating lists established by the sub-primitives;
- Coordination: takes the decision to enable or abort primitive processing and transmits it to the sub-primitives;
- Training: a special step for the top primitive; it concerns the training of all actors concerned by the whole evolution. This step is performed thanks to the actors' responsibility spaces.
The top composite primitive is successful if all its sub-primitives are successful; it fails if at least one among its sub-primitives fails. The life cycle of the atomic primitives must be adapted by adding the abort decision and by taking into account that enable/abort decisions are made by a higher level primitive. Fig. 5 illustrates the co-ordination between the composite primitive lifecycle and its sub-primitive lifecycle.
Thus, from the atomic evolution primitive "Create Class" we build the composite evolution primitive "C-Create Class" with the following sub-primitives:
- Create Class, which is used to create the intended class Cl and also all the new classes associating Cl with other classes, as mentioned previously;
- Create Class Concept, Create Class Attribute;
- if necessary, Create Attribute and Create Domain;
- C-Create Method, with its sub-primitives Create Method and Create Attribute Method.
Let us now look at the lifecycle of an entire evolution, which is a composition of primitives. During the processing of an evolution, the preparation step consists in selecting the list of composite primitives whose processing will realize this evolution. Then, from the impacts and the managerial effects determined by the supervision steps of these composite primitives, the supervision step of the evolution determines a plan for processing these primitives. It decides which primitives can/must be executed in parallel and which in sequence. Next, the coordination step launches the processing of primitives following the plan. After analyzing their results (success or failure), it decides to launch other primitives and/or to abort some of them. Finally, the evolution is finished and it is time to assess it. Indeed, the evolution processing transforms the enterprise and its ways of working, even if the processing of some composite primitives fails. Due to the important complexity, it seems important to place the training step at the evolution level and not at the level of the composite primitive. Of course, the training step of a composite primitive must be realized before its execution. But, in this way, it is possible to combine the training steps of several composite primitives into one, and to obtain a more efficient training in the context of the evolution. Fig. 6 shows the coordination between the lifecycles of the IS evolution and its top composite primitives.

Fig. 6. Coordination of the IS evolution lifecycle with the lifecycles of its top composite primitives
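The plan-driven coordination described above can be sketched as follows (again with a hypothetical interface; a stage groups the top composite primitives that the plan allows to run in parallel):

def run_evolution(plan):
    # plan: an ordered list of stages; each stage is a list of top composite primitives
    results = {}
    for stage in plan:
        for p in stage:
            p.prepare()                    # create the direct sub-primitives
            p.supervise()                  # determine impacts and managerial effects
        if any(not p.feasible() for p in stage):
            for p in stage:
                p.abort()                  # the same decision is transmitted down the whole hierarchy
            continue
        for p in stage:
            results[p] = p.execute()       # success or failure of each top primitive
    return results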
Illustrating Example
To illustrate our approach, we use the example of a hospital. Fig. 7 depicts a small part of the kernel of its IS schema. In this example we will consider:
- one organizational unit: the general medicine department,
- two positions: the doctor and the nurse,
- two activities of a doctor: a_1 concerning the care of patients (visit, diagnosis, prescription) and a_2 concerning the management of the nurses working in her team.

To illustrate an evolution case, let us suppose that our hospital now has to apply new rules for improving patients' safety. To this end, each doctor will be in charge of guaranteeing that the nurses of her team have sufficient competences for administering the drugs she can prescribe. So, the IS of the hospital must evolve, especially by introducing new classes: Nurse Drug, which associates a nurse with a drug for which she is competent according to her doctor, and Doctor Drug, which associates a doctor with a drug that she can prescribe. The TOBE-IS schema is shown in Fig. 8. The IS evolution is then composed of 2 top composite primitives, one around Doctor Drug (DD), the other around Nurse Drug (ND). The first one is built from the composite primitive C-Create Class to create the instance DD of the ISIS element Class. Its preparation step specifies:
- the DD objects will be obtained from the ASIS-IS class Prescription;
- DD will be existentially dependent on the IS classes Doctor and Drug, and Prescription will become existentially dependent on DD and no longer directly dependent on Drug;
- the alerts for Roles, Activities, Positions and Persons about the changes, especially in the creation of an object of Prescription, which in TOBE-IS must be related to a DD object;
- the creation of DD objects, and the creation or modification of roles for reaching them;
- the blocking list for its execution, which includes Doctor, Drug and Prescription.
The second composite primitive is built from the composite evolution primitive C-Create Class to create the instance ND of the ISIS element Class. Its preparation step specifies:
- the ND objects will be obtained from the ASIS-IS class Prescription;
- ND will be existentially dependent on the IS classes Nurse and Drug, and Drug Delivery will become existentially dependent on ND;
- the alerts for Roles, Activities, Positions and Persons about the changes, especially in the creation of an object of Drug Delivery, which must be related to an ND object;
- the creation of ND objects, and the creation or modification of roles for reaching them;
- the blocking list for its execution, which includes Nurse, Drug and Drug Delivery.
In the case of this example, the execution process of the IS evolution after the training of involved actors is simple: to execute the top evolution composite primitives related to Doctor Drug and then to Nurse Drug.
Conclusion
Handling information systems evolution is a complex task that has to be properly defined, planned and assessed before its actual execution. The result of each IS evolution has impact on the sustainability on organization's ISs and also on the efficiency of the organization's activity. So this task is not only complex but also critical.
In this paper, we continue to present our work on a conceptual framework for IS evolution steering that aims to establish the foundation for the development of an Informational Steering Information System (ISIS). In particular, we dedicate this paper to the engineering aspects of the concept of IS evolution, and present its metamodel, which is one of the components in our framework (Fig. 1).
The role of the IS Evolution Metamodel consists in supporting the operationalization of the IS evolution. Therefore, it includes two views: structural and lifecycle. The structural view allows a complex IS evolution to be progressively decomposed into a set of atomic primitives, going through several granularity levels of composite primitives. The obtained primitives are robust because they follow generic evolution rules and take into account horizontal and vertical effects on ISIS and IS. They are also smart because they pay attention to the managerial effects of IS evolution at the human level. The lifecycle view helps to operate IS evolution at its different levels of granularity by providing a set of models and rules for progressing from one step to another.
To complete our framework for IS evolution steering we still need to define the Impact Space component that will provide mechanisms to measure the impact of IS evolution and to take decisions accordingly. With the IS Evolution Metamodel we have prepared the basis for developing the detailed guidance for IS evolution steering, which will complete our work on this conceptual framework.
Fig. 1. Conceptual Framework for IS Evolution Steering
Fig. 2. Simplified version of IS-SM. The right part (in white) shows the information model generic to any IS implementation, the left part (in red) represents the enterprise business activity model, and the top part (in grey) represents the regulatory model governing enterprise business and IS implementations. The multi-coloured elements represent pivot elements allowing the information, activity and regulation models to be interconnected, and so to capture how ISs support enterprise activities and comply with regulations.
Fig. 3. Structural view of the IS evolution
Fig. 4. Lifecycle of an instance of any element from IS-SM
Fig. 5. Coordination of the lifecycle of a composite primitive (left) with the lifecycles of its sub-primitives (right); * indicates multiple transitions, dashed lines indicate that the step is under the responsibility of the lower or upper level model.
Fig. 7. A small part of the ASIS-IS schema of the hospital
Fig. 8. A part of the TOBE-IS schema of the hospital
A class C2 is existentially dependent on the class C1, if every object o2 of C2 is permanently associated to exactly one object o1 of C1; o2 is said to be existentially dependent on o1. The existential dependency is a transitive relation. One of its particular cases is the specialization. |
01765318 | en | [ "phys.meca.mefl" ] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01765318/file/bbviv7c.pdf
Eduardo Durán Venegas
Stéphane Le Dizès
Christophe Eloy
A coupled model for flexible rotors
Rotors are present in various applications ranging from wind turbines to helicopters and propellers. The rotors are often made of flexible materials which implies that their geometry varies when the operational conditions change. The intrinsic difficulty of rotor modeling lies in the strong coupling between the flow generated by the rotor and the rotor itself that can deform under the action of the flow. In this talk, we propose a model where the strong coupling between the flexible rotor and its wake is taken into account. We are particularly interested in configurations where the general momentum theory [START_REF] Sørensen | General momentum theory for horizontal axis wind turbines[END_REF] cannot be used (for example, for helicopters in descent flight).
The wake is described by a generalized Joukowski model. We assume that, for each blade, it is formed of a bound vortex on the blade and two free vortices of opposite circulation and same core size a, emitted at the radial locations R_i and R_e (see figure 1). These parameters are computed from the circulation profile Γ(r) obtained on the blade by applying locally, at each radial location r, the 2D Kutta-Joukowski formula
Γ(r) = (1/2) C_L(α(r)) U(r) c(r),    (1)
where c(r) is the local chord, C_L(α(r)) the lift coefficient of the chosen blade profile, α(r) the angle of attack of the flow, and U(r) the norm of the velocity. The vortex circulation Γ_m is the maximum value of Γ(r), and the emission locations R_i and R_e are the radial distances of the centroids of ∂_rΓ on both sides of the maximum (see figure 1). The wake is computed using a free-vortex method [START_REF] Leishman | Principles of Helicopter Aerodynamics[END_REF]. Each vortex is discretized in small vortex segments for which the induced velocity can be explicitly obtained from the Biot-Savart law [START_REF] Saffman | Vortex Dynamics[END_REF]. We consider helical wake structures that are stationary in the rotor frame. This frame rotates at the rotor angular velocity Ω_R and translates at a velocity V_∞ corresponding to an external axial wind. For a prescribed rotor of N blades, the wake structure is characterized by five non-dimensional parameters
λ = Ω_R R_b / V_∞,   η = Γ_m / (Ω_R R_b²),   R_e* = R_e / R_b,   R_i* = R_i / R_b,   ε = a / R_b,    (2)
where R_b is the blade length. The aerodynamic forces exerted on the blade are calculated using the blade element theory [START_REF] Leishman | Principles of Helicopter Aerodynamics[END_REF]. From the wake solution, the angle of attack and the velocity amplitude at each radial location on the blade in the rotor plane are deduced. The loads are then deduced from the lift and drag coefficients C_L and C_D of the considered blade profile. The blade deformation is obtained using a ribbon model for the blade [START_REF] Dias | meet Kirchhoff: A general and unified description of elastic ribbons and thin rods[END_REF]. This 1D beam model allows us to describe the nonlinear coupling between bending and torsion. In the simplest cases, we assume uniform elastic properties of the blades, which are characterized by a Poisson ratio ν and a non-dimensional Young modulus
E* = E / (ρ_b Ω_R² R_b²),

where ρ_b is the density of the blade.
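As an illustration, the construction of the wake parameters from a discretized circulation profile can be sketched in a few lines of Python (our own rendering; the centroid expressions are a literal reading of the definition given above):

import numpy as np

def wake_parameters(r, CL, U, c, R_b, Omega_R, V_inf, a):
    Gamma = 0.5 * CL * U * c                  # local 2D Kutta-Joukowski formula (1)
    Gamma_m = Gamma.max()
    k = Gamma.argmax()
    dG = np.gradient(Gamma, r)                # radial derivative of the circulation profile
    # emission radii: centroids of dGamma/dr on each side of the maximum
    R_i = np.trapz(r[:k] * dG[:k], r[:k]) / np.trapz(dG[:k], r[:k])
    R_e = np.trapz(r[k:] * dG[k:], r[k:]) / np.trapz(dG[k:], r[k:])
    # the five non-dimensional parameters of equation (2)
    return (Omega_R * R_b / V_inf,            # lambda
            Gamma_m / (Omega_R * R_b**2),     # eta
            R_e / R_b,                        # R_e*
            R_i / R_b,                        # R_i*
            a / R_b)                          # epsilon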
A typical example with a simple blade geometry is shown in figure 2. Both the case of a rigid rotor and that of a flexible rotor are shown for the same operational conditions (same V_∞ and same Ω_R). We clearly see the effect of blade flexibility: the blades bend and twist in the presence of the flow. Moreover, this bending and twisting also affect the wake. When the blade bends, the vortices move streamwise and inward, which impacts the expansion of the wake. The vortex circulation is also slightly modified, as η changes from 0.0218 to 0.0216 when the blades bend.
Other examples will be presented and compared to available data. The question of the stability will also be addressed. Both flow instabilities and instabilities associated with the blade flexibility will be discussed.
Figure 1: Generalized Joukowski model. The parameters (Γ_m, R_i and R_e) of the model are computed from the circulation profile Γ(r) on the blade as explained in the text.
Figure 2: Illustration of the effect of blade flexibility on the wake structure and blade geometry. Dashed lines: wake and blades for the rigid case. Solid lines: wake and blades for the flexible case. The undeformed blade is as illustrated in figure 1: it is a flat plate with a constant twist angle θ = -10° and a linearly decreasing chord from c(r = 0.2R_b) = 0.1R_b to c(r = R_b) = 0.07R_b. The wake parameters of the rigid rotor are λ = 6.67, η = 0.0218, R_e* = 0.99, R_i* = 0.24, ε = 0.01. The flexible blades have the characteristics E* = 10^6, ν = 0.5. (a) 3D geometry of the rotor and of the wake. Only the deformation and the vortices emitted from a single blade are shown. (b) Locations of the vortices in the plane including a blade and the rotor axis. (c) Twist angle of the blade. (d) Bending of the blade.
01765340 | en | [ "shs.sport.ps" ] | 2024/03/05 22:32:13 | 2003 | https://insep.hal.science//hal-01765340/file/160-%20Drafting%20during%20swimming%20improves.pdf
Anne Delextrat
Véronique Tricot
Thierry Bernard
Fabrice Vercruyssen
Christophe Hausswirth
Jeanick Brisswalter
email: [email protected].
Pr Jeanick Brisswalter
Drafting during Swimming Improves Efficiency during Subsequent Cycling
Keywords: TRIATHLETES, HYDRODYNAMIC DRAG, OXYGEN KINETICS, HEMODYNAMICS, CADENCE
Studies of triathlon determinants have highlighted that the metabolic demand induced by swimming could have detrimental effects on subsequent cycling or running adaptations (e.g., 3).
Experimental studies on the effect of prior swimming on subsequent cycling performance have led to contradictory results. Kreider et al. [START_REF] Kreider | Cardiovascular and thermal responses of triathlon performance[END_REF] found that an 800-m swimming bout resulted in a significant decrease in power output (17%) during a subsequent 75-min cycling exercise. More recently, Delextrat et al. [START_REF] Delextrat | Effect of wet suit use on energy expenditure during a swim-to-cycle transition[END_REF] observed a significant decrease in cycling efficiency (17.5%) after a 750-m swim conducted at a sprint triathlon competition pace when compared with an isolated cycling bout. In contrast, Laursen et al. [START_REF] Laursen | The effects of 3000-m swimming on subsequent 3-h cycling performance: implications for ultraendurance triathletes[END_REF] indicated no significant effect of a 3000-m swim performed at a long-distance triathlon competition pace on physiological parameters measured during a subsequent cycling bout. It is therefore suggested that the swimming section could negatively affect the subsequent cycling, especially during sprint triathlons, where the intensity of the swim is higher than during long-distance events.
Within this framework, we showed in a recent study [START_REF] Delextrat | Effect of wet suit use on energy expenditure during a swim-to-cycle transition[END_REF] that decreasing the metabolic load during a 750-m swim by using a wet suit resulted in an 11% decrease in swimming heart rate (HR) values and led to a 12% improvement in efficiency during a subsequent 10-min cycling exercise, when compared with swimming without a wet suit. The lower relative intensity when swimming with a wet suit is classically explained by a decrease in hydrodynamic drag. This decrease in hydrodynamic drag results from an increased buoyancy that allows the subjects to adopt a more horizontal position, thus reducing their frontal area [START_REF] Chatard | Effects of wetsuit use in swimming events[END_REF].
During swimming, hydrodynamic drag could also be reduced when swimming in a drafting position (i.e., swimming directly behind another competitor). The effects of drafting during short swimming bouts have been widely studied in the recent literature [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Chollet | The effects of drafting on stroking variations during swimming in elite male triathletes[END_REF][START_REF] Millet | Effects of drafting behind a two-or a six-beat kick swimmer in elite female triathletes[END_REF]. The main factor of decreased body drag with drafting seems to be the depression made in the water by the lead swimmer [START_REF] Bentley | Specific aspects of contemporary triathlon[END_REF]. This low pressure behind the lead swimmer decreases the pressure gradient from the front to the back of the following swimmer, hence facilitating his displacement through the water [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF]. Within this framework, significant decreases in passive drag (i.e., drag forces exerted on subjects passively towed through the water in prone position [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF]) from 10% to 26% have been reported in a drafting position compared with isolated conditions (for review, (3)). Moreover, swimming in drafting position is associated with significant reductions in oxygen uptake (10%), HR (6.2%), and blood lactate concentration (11-31%) [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF].
During a multidisciplinary event such as triathlon, the effect of drafting on subsequent performance has been studied only during the cycling leg. Hausswirth et al. [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] showed that drafting during the cycle portion of a sprint triathlon led to a significant decrease in cycling energy expenditure (14%) compared with an individual effort, leading to a 4.1% improvement in performance during the subsequent 5-km run. To the best of our knowledge, no similar study has been conducted during a swim-bike trial to evaluate the effects of drafting during swimming on subsequent cycling performance.
The objective of the present study was therefore to investigate the effects of drafting during swimming on energy expenditure in the context of a swim-bike trial. We hypothesized that swimming in drafting position would be associated with a lower metabolic load during swimming and would reduce energy expenditure during subsequent cycling.
MATERIALS AND METHODS

Subjects
Eight male triathletes competing at interregional or national level (age: 26 ± 6 yr, height: 183 ± 7 cm, weight: 74 ± 7 kg, body fat: 13 ± 3%) participated in this study. They were all familiarized with laboratory testing. Average training distances per week were 6.6 km in swimming, 59 km in cycling, and 34 km in running, which represented 150 min, 135 min, and 169 min for these three disciplines, respectively. This training program included only one cross-training session (cycle-to-run) per week. The low distances covered by the triathletes during training, especially in cycling, could be partly explained by the fact that the experiment was undertaken in winter, when triathletes usually decrease their training load in the three disciplines. Written consent was given by all the subjects before all testing, and the ethics committee for the protection of individuals gave their approval of the project before its initiation (Saint-Germain-en-Laye, France).
Protocol
Maximal oxygen uptake (VO2max) and maximal aerobic power (MAP) determination. The first test was a laboratory incremental test on a cycle ergometer to determine VO2max and MAP. After a 6-min warm-up at 150 W, the power output was increased by 25 W every 2 min until volitional exhaustion. The criteria used for the determination of VO2max were: a plateau in VO2 despite the increase in power output, a HR over 90% of the predicted maximal HR, and a respiratory exchange ratio (RER) over 1.15 [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF]. Because VO2max was used as a simple descriptive characteristic of the population for the present study and was not a primary dependent variable, the attainment of two out of three criteria was considered sufficient [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF]. The ventilatory threshold (VT) was determined using the criterion of an increase in VE/VO2 with no concomitant increase in VE/VCO2 [START_REF] Wasserman | Anaerobic threshold and respiratory gas exchange during exercise[END_REF].
Submaximal sessions. After this first test, each triathlete underwent three submaximal sessions separated by at least 48 h. The experimental protocol is described in Figure 1. All swim tests took place in the outdoor Olympic swimming pool of Hyères (Var, France) and were performed with a neoprene wet suit (integral wet suit Aquaman®, Pulsar 2000; thickness: shoulders 1.5 mm, trunk 4.5 mm, legs 1.5 mm, arms 1.5 mm). The cycling tests were conducted adjacent to the swimming pool in order to standardize the duration of the swim-to-cycle transition (3 min). The first test was always a 750-m swim performed alone at a sprint triathlon competition pace (SA trial). It was used to determine the swimming intensity for each subject. The two other tests, presented in a counterbalanced order, comprised one swim-to-cycle transition performed alone (SAC trial) and one swim-to-cycle transition with the swimming bout performed in drafting position (SDC trial).
The SAC trial consisted of a 750-m swim at the pace adopted during SA, followed by a 15-min ride on the bicycle ergometer at 75% of MAP and at a freely chosen cadence (FCC). This intensity was chosen to be comparable with the cycling competition pace during a sprint triathlon reported in subjects of the same level by previous studies (e.g., [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF]). Moreover, it was similar to the intensities used in recent works studying the cycle-to-run transition in trained triathletes (e.g., 27). During the SDC trial, the subjects swam 750 m in drafting position (i.e., swimming directly behind a competitive swimmer in the same lane) at the pace adopted during SA. They then performed the 15-min ride at the same intensity as during SAC. The lead swimmer, who was the same for all the triathletes, was a highly trained swimmer competing at international level. To reproduce the swimming pace adopted during SA, the lead swimmer was informed of his performance every 50 m via visual feedback.
Measured Parameters
Swimming trials. During each swimming trial, the time to cover each 50 m and overall time were recorded. Subjects were instructed to keep the velocity as constant as possible. Stroke frequency (SF), expressed as the number of complete arm cycles per minute, was measured for each 50 m on a 20-m zone situated in the middle of the pool. The stroke length (SL) was calculated by dividing the mean velocity of each 20-m swim by the mean SF of each 20-m swim.
Immediately after each trial, the triathletes were asked to report their perceived exertion (RPE) using the 15-graded Borg scale (from 6 to 20 [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF]).
Blood sampling. Capillary blood samples were collected from the subjects' earlobes at the following times: 1 and 3 min after swimming (L1, L2), and at the third and 15th minutes of cycling (L3, L4). Blood lactate concentration (LA, mmol·L-1) was then measured by the Lactate Pro™ LT-1710 portable lactate analyzer (Arkray, KDK, Japan). The highest of the two postswim (L1, L2) concentrations was considered as the postswim lactate value, because the time delay for lactate to diffuse from the muscles to the blood during the recovery from a swimming exercise has not been precisely established [START_REF] Lepers | Swimming-cycling transition modelisation of a triathlon in laboratory. Influence on lactate kinetics[END_REF].
Measurement of respiratory gas exchange. During the cycling trials, oxygen uptake (VO 2 ), HR, and respiratory parameters (expiratory flow: VE; respiratory frequency: RF) were monitored breath-bybreath and recorded by the Cosmed K4b 2 telemetric system (Rome, Italy).
HR was continuously monitored during swimming and cycling using a cardiofrequency meter (Polar Vantage, Kempele, Finland). Physiological solicitation of cycling was assessed using oxygen kinetics analysis (e.g., 29); energy expenditure was analyzed by gross efficiency calculation [START_REF] Chavarren | Cycling efficiency and pedalling frequency in road cyclists[END_REF].
Curve fitting. Oxygen kinetics were modeled according to the method used by Barstow et al. [START_REF] Barstow | Influence of muscle fiber type and pedal frequency on oxygen uptake kinetics of heavy exercise[END_REF]. Breath-by-breath VO 2 data were smoothed in order to eliminate the outlying breaths (defined as those that were lying outside two standard deviations of the local mean). For each trial (SDC and SAC), the time course of the VO 2 response after the onset of the cycling exercise was described by two different exponential models that were fit to the data with the use of nonlinear regression techniques in which minimizing the sum of squared error was the criterion for convergence.
The first mathematical model was a mono-component exponential model:

VO2(t) = VO2(b) + A [1 - e^(-(t - TD)/τ)], for t ≥ TD.

The second mathematical model was a two-component exponential model:

VO2(t) = VO2(b) + A1 [1 - e^(-(t - TD1)/τ1)] + A2 [1 - e^(-(t - TD2)/τ2)], with each term contributing for t ≥ TD1 and t ≥ TD2, respectively.

The use of one of these models depends on the relative exercise intensity [START_REF] Xu | Oxygen uptake kinetics during exercise[END_REF]. The mono-component exponential model characterizes the VO2 response during an exercise of moderate intensity (i.e., below the lactate threshold). After a time delay corresponding to the transit time of blood flow from the exercising muscle to the lung (TD), VO2 increases exponentially toward a steady-state level. The VO2 response is characterized by an asymptotic amplitude (A) and a time constant (τ) defined as the time to reach 63% of the difference between the final plateau value and the baseline (VO2(b), corresponding to the value recorded on the bicycle before the onset of cycling). At higher intensities, the VO2 response is modeled by a two-component exponential function. The first exponential term describes the rapid rise in VO2 previously observed (the parameters TD1, A1, and τ1 are identical to TD, A, and τ of the mono-component exponential model), whereas the second exponential term characterizes the slower rise in VO2, termed the "VO2 slow component," that is superimposed on the rapid phase of oxygen uptake kinetics.
The parameters TD 2 , A2, and τ 2 represent, respectively, the time delay, asymptotic amplitude, and time constant for this exponential term. The computation of best-fit parameters was chosen by a computer program (SigmaPlot 7.0) so as to minimize the sum of the squared differences between the fitted function and the observed response.
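The fits themselves were computed in SigmaPlot 7.0; as an illustration only, the same smoothing-and-fitting procedure can be sketched with SciPy's nonlinear least squares. All names, the synthetic data, and the noise level below are ours, not the authors':

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, vo2_b, A, TD, tau):
    """Mono-component model: baseline before TD, exponential rise after."""
    return np.where(t < TD, vo2_b, vo2_b + A * (1.0 - np.exp(-(t - TD) / tau)))

def smooth_breaths(t, vo2, window=5, n_sd=2.0):
    """Drop breaths lying outside n_sd SD of the local (rolling) mean."""
    keep = np.ones(vo2.size, dtype=bool)
    for i in range(vo2.size):
        lo, hi = max(0, i - window), min(vo2.size, i + window + 1)
        local = vo2[lo:hi]
        keep[i] = abs(vo2[i] - local.mean()) <= n_sd * local.std()
    return t[keep], vo2[keep]

# Synthetic breath-by-breath VO2 trace (L/min): tau = 20 s, TD = 12 s
rng = np.random.default_rng(0)
t = np.arange(0.0, 300.0, 2.5)
vo2 = mono_exp(t, 1.0, 2.3, 12.0, 20.0) + rng.normal(0.0, 0.08, t.size)

t_s, vo2_s = smooth_breaths(t, vo2)
popt, _ = curve_fit(mono_exp, t_s, vo2_s, p0=[1.0, 2.0, 10.0, 25.0])
print("VO2(b)=%.2f  A=%.2f  TD=%.1f s  tau=%.1f s" % tuple(popt))
```

The smoothing step mirrors the outlier rule above (breaths beyond two standard deviations of the local mean are discarded) before the model is fit by minimizing the sum of squared errors.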
Determination of cycling gross efficiency. Cycling gross efficiency (GE, %) was calculated as the ratio of work accomplished per minute (kJ·min⁻¹) to metabolic energy expended per minute (kJ·min⁻¹). Because the relative intensity of the cycling bouts could be superior to VT, the aerobic contribution to metabolic energy was calculated from the energy equivalents for oxygen (according to the respiratory exchange ratio value), and a possible anaerobic contribution was estimated using the blood lactate increase with time (Δ lactate: 63 J·kg⁻¹·mM⁻¹; 13). For this calculation, the VO2 and lactate increases were estimated from the difference between the 15th and the third minutes.
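A hedged numerical sketch of this calculation is given below. The linear interpolation of the oxygen energy equivalent between RER 0.71 and 1.00 is our assumption (the exact equivalence table used is not reported here); the 63 J·kg⁻¹·mM⁻¹ lactate equivalent follows reference 13, and the example values are illustrative.

```python
def gross_efficiency(power_w, vo2_l_min, rer, d_lactate_mM, dt_min, mass_kg):
    """Gross efficiency (%) = mechanical work rate / metabolic rate."""
    # Energy equivalent of O2 (kJ/L): linear between RER 0.71 (~19.6 kJ/L,
    # pure fat) and RER 1.00 (~21.1 kJ/L, pure carbohydrate) -- an assumption.
    rer_c = min(max(rer, 0.71), 1.00)
    kj_per_l = 19.6 + (21.1 - 19.6) * (rer_c - 0.71) / (1.00 - 0.71)
    aerobic_kj_min = vo2_l_min * kj_per_l
    # Anaerobic contribution from lactate accumulation (63 J/kg/mM, ref. 13)
    anaerobic_kj_min = 63e-3 * d_lactate_mM * mass_kg / dt_min
    work_kj_min = power_w * 60.0 / 1000.0
    return 100.0 * work_kj_min / (aerobic_kj_min + anaerobic_kj_min)

# 262 W (75% MAP), VO2 = 3.9 L/min, RER = 0.98, no lactate rise, 70 kg rider
print(round(gross_efficiency(262, 3.9, 0.98, 0.0, 12.0, 70.0), 1))  # ~19.2 %
```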
Pedal rate. All the cycling tests were performed on an electromagnetically braked cycle ergometer (SRM Jülich, Welldorf, Germany). The cycle ergometer was equipped with the triathletes' own pedals, and the handlebars and racing seat were fully adjustable both vertically and horizontally to reproduce conditions known from their own bicycles. The SRM system can maintain a constant power output independent of the pedal rate spontaneously adopted by the subjects.
Statistical Analysis
All the results were expressed as mean and standard deviation (mean ± SD). Differences between the two conditions (swimming alone or in drafting position) in physiological and biomechanical parameters were analyzed using a Student t-test for paired samples. The level of confidence was set at P < 0.05.
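For illustration, this comparison reduces to a paired-samples t-test; the sketch below uses hypothetical per-subject lactate values, since the individual data are not reported in the text:

```python
import numpy as np
from scipy import stats

# Hypothetical postswim lactate values (mmol/L) for eight triathletes
sac = np.array([7.9, 6.1, 9.2, 5.8, 10.3, 7.0, 6.6, 7.1])
sdc = np.array([5.5, 4.0, 6.8, 3.9, 8.1, 5.0, 4.4, 4.7])

t_stat, p_val = stats.ttest_rel(sac, sdc)  # paired-samples Student t-test
print(f"SAC: {sac.mean():.1f} ± {sac.std(ddof=1):.1f}  "
      f"SDC: {sdc.mean():.1f} ± {sdc.std(ddof=1):.1f}")
print(f"t = {t_stat:.2f}, P = {p_val:.4f}, significant: {p_val < 0.05}")
```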
RESULTS
Maximal Test
The subjects' physiological characteristics recorded during the incremental cycling test are presented in Table 1. VO2max values were close to those previously obtained for triathletes of the same level [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: effect of exercise duration[END_REF][START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF]. From VT values, it could be observed that the cycling bouts were performed at an intensity close to VT + 2%.
Swimming Trials
Performance. No significant difference in performance was observed between the two swimming trials (respectively for SAC and SDC: 638 ± 38 s and 637 ± 39 s, P > 0.05). The two 750-m swims were therefore performed at a mean velocity of 1.18 m·s⁻¹. In addition, the stroke characteristics (SF and SL) were not significantly different between SAC and SDC trials (mean SF: 33.2 ± 4.5 cycles·min⁻¹ vs 33.1 ± 5.1 cycles·min⁻¹, respectively, for SAC and SDC, P > 0.05; mean SL: 2.13 ± 0.29 m·cycle⁻¹ vs 2.15 ± 0.30 m·cycle⁻¹, respectively, for SAC and SDC, P > 0.05). During the SDC trial, the mean distance between the subjects (draftees) and the lead swimmer did not exceed 1 m.
Physiological parameters and RPE.
The HR values recorded during the last 5 min of swimming are presented in Figure 2. The main result shows that swimming in drafting position resulted in a significant mean decrease of 7% in HR values during the last 4 min of swimming in comparison with the isolated swimming bout (160 ± 15 beats·min⁻¹ vs 172 ± 18 beats·min⁻¹, respectively, for SDC and SAC trials, Fig. 2, P < 0.05). Furthermore, postswim lactate values were significantly lower (29.3%) after the SDC session when compared with the SAC session (5.3 ± 2.1 mmol·L⁻¹ vs 7.5 ± 2.4 mmol·L⁻¹, respectively, for SDC and SAC trials, P < 0.05).
Finally, RPE values recorded immediately after swimming indicated that the subjects' perception of effort was significantly lower in the SDC trial than in the SAC trial (13 ± 2 vs 15 ± 1, corresponding to "rather laborious" versus "laborious" respectively for SDC and SAC trials, P < 0.05).
Cycling trials
VO2 kinetics. All VO2 responses were best fitted by a mono-component exponential model, except the VO2 responses of one subject during the SAC trial, which were best described by a two-component exponential function. The occurrence of a slow component in this latter case is representative of a heavy-intensity exercise, whereas the other subjects exercised in a moderate-intensity domain [START_REF] Xu | Oxygen uptake kinetics during exercise[END_REF]. Therefore, the parameters of the model for this subject are different (two-component exponential model vs mono-component exponential model) and could not be included in the same analysis. Figure 3 shows the breath-by-breath VO2 responses during SAC and SDC trials for a representative subject (responses best fitted by a mono-component exponential model, Fig. 3A) as well as the breath-by-breath VO2 responses for the subject excluded (responses best fitted by a two-component exponential model, Fig. 3B). Statistical analysis shows that baseline VO2 values were not significantly different between SAC and SDC trials (P > 0.05). However, we observed that during the SAC trial, higher VO2 values at the steady-state level were attained more quickly than during the SDC trial (time constant values for SAC and SDC trials were, respectively, 17.1 ± 7.8 s vs 23.6 ± 10.1 s for VO2 kinetics, P < 0.05).
Mean physiological parameters and RPE.
The influence of drafting, during prior swimming, on the mean physiological values measured during subsequent cycling is presented in Table 2. The statistical analysis shows that cycling efficiency was significantly higher in the SDC trial (4.8%) in comparison with the SAC trial (P < 0.05). The VO2, HR, and lactate values measured during cycling were significantly higher when previous swimming was performed alone compared with the drafting condition (Table 2, P < 0.05). However, no significant increase in blood lactate concentration with time was observed, indicating the main contribution of aerobic metabolism [START_REF] Di Prampero | Energetics of muscular exercise[END_REF]. Therefore, the decrease in gross efficiency during the SAC trial is related to higher VO2 values. Furthermore, the subjects' RPE was significantly lower in the SDC trial compared to the SAC trial (15 ± 2 vs 17 ± 2, corresponding to "laborious" vs "very laborious," P < 0.05).
Pedal rate. The statistical analysis indicated a significant difference in pedal rate measured during cycling between the two conditions. A significantly lower pedal rate (5.6%) was observed in the SDC trial in comparison with the SAC trial (Table 2, P < 0.05).
DISCUSSION
The main result of the present study indicated a significant effect of swimming metabolic load on oxygen kinetics and efficiency during subsequent cycling at competition pace. Within this framework, a prior 750-m swim performed alone resulted in faster oxygen kinetics and a significantly higher global energy expenditure during subsequent cycling, in comparison with an identical swimming bout performed in a drafting position (P < 0.05).
Drafting during swimming and swimming metabolic load. The effects of drafting on energy expenditure during short-or long-distance events have been investigated over a variety of physical activities. Drafting has been shown to significantly reduce the metabolic load during swimming (2), cycling [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF], cross-country skiing [START_REF] Spring | Drag area of a cross-country skier[END_REF], and speed skating [START_REF] Van Ingen Schenau | The influence of air friction in speed skating[END_REF]. The lower energy cost observed in a drafting position is classically attributed to a decrease in aerodynamic or hydrodynamic drag [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF]. In this context, Bassett et al. [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF] have suggested that the decrease in drag associated with drafting was lower in swimming in comparison with terrestrial activities. This is because of the characteristics of swimming such as the relatively low velocity, the prone position, and the turbulence owing to the kicks of the lead swimmer. Decreases in passive hydrodynamic drag in drafting position from 10% to 26% have been reported in the literature [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Millet | Effects of drafting behind a two-or a six-beat kick swimmer in elite female triathletes[END_REF]. It should be noted that the active drag experienced by a subject while swimming is approximately 1.5-2 times greater than passive drag [START_REF] Di Prampero | Energetics of swimming in man[END_REF].
In this study, the HR values recorded during the two swimming bouts (Fig. 2) show mean HR values corresponding, respectively for SDC and SAC trials, to 84.2% and 90.5% of HRmax measured during cycling. Consequently, drafting involved a significant 7% decrease in HR during a 750-m swim (P < 0.05). Furthermore, the SDC trial was characterized by significant reductions in postswim lactate values (29.3%) and RPE values (20%), in comparison with the SAC trial.
The main factor classically evoked in the literature to explain the lower swimming energy cost in drafting position is the reduction of hydrodynamic drag owing to the body displacement of the leading swimmer. The extent to which hydrodynamic drag could be reduced in a drafting position depends on several factors, such as swimming velocity and the distance separating the draftee and the lead swimmer. Concerning the distance between the swimmers, there seems to be a compromise between the positive effect of the hydrodynamic wake created by the lead swimmer and the negative effect of the turbulence generated by his kicks [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Millet | Effects of drafting behind a two-or a six-beat kick swimmer in elite female triathletes[END_REF]. However, during triathlon, the draftee could follow the lead swimmer quite closely because triathletes usually adopt a two-beat kick that does not generate excessive turbulence.
The effects of drafting during short-distance swimming bouts have been well documented in the literature [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF][START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Chollet | The effects of drafting on stroking variations during swimming in elite male triathletes[END_REF]. However, during these experiments, the race conducted in drafting position was performed either at the same relative velocity as the isolated condition (2), or the subjects were asked to swim as fast as possible during the second half of the race [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Chollet | The effects of drafting on stroking variations during swimming in elite male triathletes[END_REF]. Using a protocol comparable to the present study, Bassett et al. [START_REF] Bassett | Metabolic responses to drafting during front crawl swimming[END_REF] observed, during a 549-m swim (600 yards) performed at 1.20 m·s⁻¹ (1.18 m·s⁻¹ in the present study), significantly lower HR (6.2%), lactate (31%), and RPE values (21%) when the swimming bout was performed in a drafting position (P < 0.05), compared with an isolated effort. Our results are in agreement with these previous findings. One interesting result of this study is that the significant effect of drafting previously reported in the literature was observed even though our subjects were wearing a wet suit. It has been reported that the use of a wet suit induces significant decreases in energy cost (from 7% to 22%) and active drag (from 12% to 16%) across different speeds (for review, [START_REF] Chatard | Effects of wetsuit use in swimming events[END_REF]). It could be concluded that during triathlon events, where subjects are wearing wet suits, drafting could further increase the reduction in metabolic load during swimming.
Drafting during swimming and cycling exercise. In the present study, the decrease in metabolic load associated with swimming in a drafting position involved two main modifications in physiological parameters during subsequent cycling. First, VO2 kinetics at the onset of cycling were significantly slowed when the prior swimming bout was performed in a drafting position (longer time constant, τVO2) compared with swimming alone (P < 0.05). Second, a significantly higher cycling efficiency, measured at the steady-state level, was observed in the SDC trial versus the SAC trial (+4.8%, P < 0.05).
The modification in VO 2 kinetics observed in the present study is in accordance with previous results reported in the literature. During the last decade, several investigations have analyzed the influence of previous exercise metabolic load on the rate of VO 2 increase at the onset of subsequent exercise. Gerbino et al. [START_REF] Gerbino | Effects of prior exercise on pulmonary gasexchange kinetics during high-intensity exercise in humans[END_REF] have found that VO 2 kinetics during a high-intensity cycling exercise (i.e., greater than the lactate threshold) was significantly increased by a prior high-intensity cycling bout, whereas no effect was reported after a prior low-intensity exercise (i.e., lower than the lactate threshold). In addition, Bohnert et al. ( 4) have observed an acceleration of VO 2 kinetics when a cycling trial was preceded by a high-intensity arm-cranking exercise.
Many studies have been conducted in order to identify the mechanisms underlying the rate of VO 2 increase at the onset of exercise (e.g., [START_REF] Xu | Oxygen uptake kinetics during exercise[END_REF]. Although these mechanisms are not clearly established, two major hypotheses are reported in the literature. Some authors suggest that VO 2 kinetics are limited by the rate of oxygen supply to the active muscle mass, whereas others report that the capacity of muscle utilization is the most important determinant of VO 2 responses at the onset of exercise [START_REF] Xu | Oxygen uptake kinetics during exercise[END_REF]. Concerning the hypothesis of oxygen transport limitation, Hughson et al. [START_REF] Hughson | Kinetics of ventilation and gas exchange during supin and upright cycle exercise[END_REF] investigated the influence of an improved perfusion of active muscle mass during cycling on the rate of VO 2 increases at the onset of exercise. These authors found that VO 2 kinetics at the onset of exercise were significantly faster when the perfusion of active muscle mass was augmented. In our study, several factors could be evoked to increase perfusion in the muscles of the lower limbs during cycling, such as previous metabolic load and pedal rate.
Gerbino et al. [START_REF] Gerbino | Effects of prior exercise on pulmonary gasexchange kinetics during high-intensity exercise in humans[END_REF] suggested that the faster VO 2 kinetics observed during the second bout of two repeated high-intensity cycling exercises could be accounted for by the residual metabolic acidemia from previous high-intensity exercise, involving a vasodilatation and thus an enhancing blood flow to the active muscle mass at the start of subsequent cycling bout. In favor of this hypothesis, a higher metabolic acidemia was observed in the present study immediately after the swimming stage of the SAC trial in comparison with the SDC trial (postswim lactate values: 7.5 ± 2.4 mmol.L -1 vs 5.3 ± 2.1 mmol•L -1 for SAC and SDC trials, respectively, P < 0.05). Therefore, we suggest that the higher contribution of anaerobic metabolism to energy expenditure when swimming alone has involved a better perfusion of active muscular mass at the start of subsequent cycling exercise.
However, in this study, subjects adopted a higher pedal rate after the swimming bout performed alone. There is little information on the effects of pedal rate manipulation on cardiovascular adjustments during cycling. However, Gotshall et al. [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF] have indicated an enhanced muscle blood flow with increasing cadences from 70 to 110 rpm. Indeed, the frequency of contraction and relaxation of the muscles of the lower limbs increases at high cadences, improving venous return and therefore heart filling. As a consequence, the skeletal muscle pump is progressively more effective, resulting in an overperfusion of the active muscle mass [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF]. According to this hypothesis, the significantly higher pedal rate reported in the present study in the SAC trial in comparison with the SDC trial (Table 2, P < 0.05) could have involved an increased blood flow to the muscles of the lower limbs. Therefore, both the higher contribution of anaerobic metabolism to energy expenditure during prior swimming and the higher pedal rates adopted during subsequent cycling in the SAC trial could account for the faster VO2 kinetics observed at the onset of cycling in this trial in comparison with the SDC trial.
The second principal result of the present study indicated a significantly higher cycling efficiency during the SDC trial in comparison with the SAC trial (Table 2, P < 0.05). To the best of our knowledge, the effects of drafting during swimming on subsequent cycling adaptation have never been investigated. However, these results were similar to another study from our laboratory showing that wearing a neoprene wet suit reduced the metabolic load at the end of swimming and led to a 12% increase in subsequent cycling efficiency [START_REF] Delextrat | Effect of wet suit use on energy expenditure during a swim-to-cycle transition[END_REF]. In our study, subjects were wearing a wet suit, and our results indicated that drafting could lead to a further improvement of cycling efficiency.
In the context of multidisciplinary events, the effect of drafting on subsequent performance has been mainly studied during the cycle-to-run portion of a simulated sprint triathlon [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF]. For example, Hausswirth et al. [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] reported that the significant reductions in VO 2 VE, HR, and blood lactate concentration during the cycle stage of a simulated sprint triathlon (0.75-km swim, 20-km cycle, 5-km run), observed when cycling was performed in drafting position in comparison with an isolated cycling stage, were related to significant increases in subsequent running velocity (4.1%). More recently, Hausswirth et al. [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] observed that drafting continuously behind a leader during the 20-km stage of a sprint triathlon resulted in a significantly lower cycling metabolic cost, in comparison with alternating drafting and leading every 500 m at the same pace. This lower metabolic cost led to a 4.2% improvement in velocity during a subsequent 5-km run [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF]. These authors suggested that during the drafting conditions (drafting position vs isolated cycling, or continuous vs alternate drafting), the decrease in energy cost of cycling is the main factor of running performance improvement. In the present study, the cycling bouts were conducted at constant speed. Therefore, no improvement in performance (i.e., velocity) could be observed. However, we recorded a 4.8% increase in cycling efficiency after a swimming bout performed in drafting position compared with an isolated swimming bout. This improvement in cycling efficiency could be mainly accounted for by the lower swimming relative intensity involving a lower state of fatigue in the muscles of the lower limbs at the beginning of subsequent cycling. Consequently, in long-distance events such as triathlon, where performance depends on the capacity to spend the lowest amount of metabolic energy during the whole race [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF], we suggest that the increase in cycling efficiency could lead to an improvement in performance. However, further studies are needed to investigate the effects of this improved cycling efficiency on running and total triathlon performance.
However, it should be noted that the possibility for athletes and coaches to put the results of the present study into practice could be limited by the lack of cycling training of our subjects and by the difference between the intensity and duration of the cycling trials in this study and the metabolic load encountered during a sprint triathlon [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF]. Because cycling experience could lead to a lower variability in the energy cost of locomotion, more training in cycling would be associated with a lower benefit of drafting. Furthermore, even if a measure of actual cycling performance improvements after drafting (such as time or power output) would have been more applicable to competition, the constant power output set in this study allowed the quantification of the modifications in energy expenditure during cycling, which is a main determinant of triathlon performance [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF]. Further studies are necessary to validate the effects observed in this study during a real triathlon event.
In conclusion, the results of the present study show that the metabolic load during swimming could have a significant effect on subsequent cycling performance during a sprint triathlon. In particular, a decrease in swimming relative intensity could lead to a significantly higher efficiency during subsequent cycling. These findings highlight that swimming behind another athlete is beneficial during triathlon events. Within this framework, further studies could include a running session to investigate more precisely the effects of drafting during the swimming bout of a sprint triathlon on total triathlon performance.
FIGURES and TABLES
FIGURE 1. Experimental protocol. L: blood sampling; K4b2: installation of the Cosmed K4b2 analyzer.

FIGURE 2. Changes in HR values during the last 5 min of the two swimming trials (SAC and SDC). *Significant difference between SDC and SAC trials, P < 0.05.

FIGURE 3. Breath-by-breath VO2 responses during the SAC and SDC trials: (A) representative subject (mono-component exponential fit); (B) excluded subject (two-component exponential fit).
TABLE 1. Subjects' physiological characteristics during the cycling incremental test.

VO2max (mL·min⁻¹·kg⁻¹): 66.2 ± 6.8
MAP (W): 343 ± 39
75% of MAP (W): 262 ± 29
HRmax (beats·min⁻¹): 190 ± 9
RERmax: 1.06 ± 0.05
Power output at VT (W): 258 ± 42

VO2max, maximal oxygen uptake; MAP, maximal aerobic power; HRmax, maximal heart rate; RERmax, maximal respiratory exchange ratio; power output at VT, power output corresponding to the ventilatory threshold.
TABLE 2. Effect of drafting during prior swimming on mean values of physiological parameters and pedal rate recorded during subsequent cycling exercise (columns: SDC, SAC).

VO2, oxygen uptake; LA, blood lactate concentration; GE, gross efficiency; HR, heart rate; VE, expiratory flow; RF, respiratory frequency. *Significant difference between SAC and SDC trials; P < 0.05.
The authors acknowledge all the triathletes who took part in the experiment for their high cooperation and motivation. We are also grateful to Rob Suriano for his assistance with the language.
01430561 | en | ["phys.meca.mefl"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01430561/file/articleCurvature12.pdf
Francisco J. Blanco-Rodríguez, Stéphane Le Dizès
Curvature instability of a curved Batchelor vortex
In this paper, we analyse the curvature instability of a curved Batchelor vortex. We consider this short-wavelength instability when the radius of curvature of the vortex centerline is large compared to the vortex core size. In this limit, the curvature instability can be interpreted as a resonant phenomenon. It results from the resonant coupling of two Kelvin modes of the underlying Batchelor vortex with the dipolar correction induced by curvature. The condition of resonance of the two modes is analysed in detail as a function of the axial jet strength of the Batchelor vortex. In contrast to the Rankine vortex, only a few configurations involving m = 0 and m = 1 modes are found to become the most unstable. The growth rate of the resonant configurations is systematically computed and used to determine the characteristics of the most unstable mode as a function of the curvature ratio, the Reynolds number, and the axial flow parameter. The competition of the curvature instability with another short-wavelength instability, which was considered in a companion paper [Blanco-Rodríguez & Le Dizès, Elliptic instability of a curved Batchelor vortex, J. Fluid Mech. 804, 224-247 (2016)], is analysed for a vortex ring. A numerical error found in that paper, which affects the relative strength of the elliptic instability, is also corrected. We show that the curvature instability becomes the dominant instability in large rings as soon as axial flow is present (vortex ring with swirl).
Introduction
Vortices are ubiquitous in nature. They are subject to various instabilities induced by the interaction with their surroundings. In this work, we analyse the so-called curvature instability which is a short-wavelength instability induced by the local curvature of the vortex. We provide theoretical predictions for a curved vortex when the underlying vortex structure is a Batchelor vortex (Gaussian axial velocity and axial vorticity). This work is the follow-up of Blanco-Rodríguez & Le [START_REF] Blanco-Rodríguez | Elliptic instability of a curved Batchelor vortex[END_REF], hereafter BRLD16, where another short-wavelength instability, the elliptic instability, was analysed using the same theoretical framework.
These two instabilities are different from the long-wavelength instabilities which occur in vortex pairs [START_REF] Crow | Stability theory for a pair of trailing vortices[END_REF]) and helical vortices [START_REF] Widnall | The stability of a helical vortex filament[END_REF][START_REF] Quaranta | Long-wave instability of a helical vortex[END_REF]. Their characteristics strongly depend on the internal vortex structure and their wavelength is of the order of the vortex core size. When the vortex is weakly deformed, both instabilities can be understood as a phenomenon of resonance between two (Kelvin) modes of the underlying vortex and a vortex correction. For the elliptic instability, the resonance occurs with a quadripolar correction generated by the background strain field [START_REF] Moore | The instability of a straight vortex filament in a strain field[END_REF], while for the curvature instability, it is associated with a dipolar correction created by the vortex curvature [START_REF] Fukumoto | Curvature instability of a vortex ring[END_REF]. Numerous works have concerned the elliptic instability in the context of straight vortices [START_REF] Tsai | The stability of short waves on a straight vortex filament in a weak externally imposed strain field[END_REF][START_REF] Eloy | Three-dimensional instability of Burgers and Lamb-Oseen vortices in a strain field[END_REF]Fabre & Jacquin 2004a;[START_REF] Lacaze | Elliptic instability in a strained Batchelor vortex[END_REF]. The specific case of the curved Batchelor vortex has been analysed in BRLD16. Contrarily to the elliptic instability, the curvature instability has only been considered for vortices with uniform vorticity [START_REF] Fukumoto | Curvature instability of a vortex ring[END_REF][START_REF] Hattori | Modal stability analysis of a helical vortex tube with axial flow[END_REF].
Both elliptic and curvature instabilities have also been analysed using the local Lagrangian method popularized by [START_REF] Lifschitz | Local stability conditions in fluid dynamics[END_REF] [see [START_REF] Bayly | Three-dimensional instability of elliptical flow[END_REF]; [START_REF] Waleffe | On the three-dimensional instability of strained vortices[END_REF] for the elliptic instability, [START_REF] Hattori | Short-wavelength stability analysis of thin vortex rings[END_REF][START_REF] Hattori | Short-wavelength stability analysis of a helical vortex tube[END_REF][START_REF] Hattori | Effects of axial flow on the stability of a helical vortex tube[END_REF] for the curvature instability]. This method can be used to treat strongly deformed vortices but it provides a local information on a given streamline only. When the vortex is uniform, as the Rankine vortex, the local instability growth rate is also uniform. In that case, a connection can be made between the local results and the global results obtained by analyzing the mode resonances [START_REF] Waleffe | On the three-dimensional instability of strained vortices[END_REF][START_REF] Eloy | Stability of the Rankine vortex in a multipolar strain field[END_REF][START_REF] Fukumoto | The three-dimensional instability of a strained vortex tube revisited[END_REF][START_REF] Hattori | Short-wave stability of a helical vortex tube: the effect of torsion on the curvature instability[END_REF][START_REF] Hattori | Modal stability analysis of a helical vortex tube with axial flow[END_REF]. Le [START_REF] Dizès | Theoretical predictions for the elliptic instability in a twovortex flow[END_REF] used the local prediction at the vortex centre to estimate the global growth rate of the elliptic instability in a non-uniform vortex. Although a good agreement was demonstrated for the Lamb-Oseen vortex, no such link is expected in general.
The goal of the present work is to obtain global estimates for the curvature instability using the framework of [START_REF] Moore | The instability of a straight vortex filament in a strain field[END_REF] for the Batchelor vortex. Such an analysis was performed by [START_REF] Hattori | Modal stability analysis of a helical vortex tube with axial flow[END_REF] for a Rankine vortex. The passage from the Rankine vortex to the Batchelor vortex will turn out not to be trivial. The main reason comes from the different properties of the Kelvin modes in both vortices. In smooth vortices, Kelvin modes are affected by the presence of critical layers (Le Dizès 2004) which introduce singularities and damping [START_REF] Sipp | Widnall instabilities in vortex pairs[END_REF][START_REF] Fabre | The Kelvin waves and the singular modes of the Lamb-Oseen vortex[END_REF]. These singularities have to be monitored and avoided in the complex plane to be able to obtain the properties of the Kelvin modes from the inviscid equations as shown in [START_REF] Lacaze | Elliptic instability in a strained Batchelor vortex[END_REF]. In the present work, we shall also use the asymptotic theory of Le [START_REF] Dizès | An asymptotic description of vortex Kelvin modes[END_REF] to obtain an approximation of the Kelvin mode dispersion relation and analyse the condition of resonance.
The dipolar correction responsible for the curvature instability is also obtained by an asymptotic theory in the limit of small vortex core size [START_REF] Callegari | Motion of a curved vortex filament with decaying vortical core and axial velocity[END_REF]. This correction appears as a first-order correction to the Batchelor vortex. The detail of the derivation can be found in [START_REF] Blanco-Rodríguez | Internal structure of vortex rings and helical vortices[END_REF]. As for the elliptic instability, the coupling terms, as well as weak detuning and viscous effects, are computed using an orthogonality condition. The final result is an expression for the growth rate of a given resonant configuration close to the condition of resonance. Each resonant configuration provides a growth rate expression. We shall consider up to 50 resonant configurations to extract the most unstable one. This will allow us to obtain the curvature instability diagram as a function of the curvature ratio and the Reynolds number.
The paper is organized as follows. In §2, the base flow and perturbation equations are provided. In §3, the analysis leading to the growth rate expression of a resonant configuration is presented. The results for the Batchelor vortex are obtained in §4. We first provide the characteristics of the resonant modes, then the stability diagrams of the Batchelor vortex for a few values of the axial flow parameter. §5 provides an application of the results to a vortex ring with and without swirl (axial flow). In that section, we analyse the competition of the curvature instability with the elliptic instability using the results of BRLD16. A numerical error affecting the strength of the elliptic instability was found in that paper; it is corrected in a corrigendum presented in appendix D. The last section, §6, gives a brief summary of the main results of the paper.
Problem formulation
Base flow
The first description of the base flow was provided by [START_REF] Callegari | Motion of a curved vortex filament with decaying vortical core and axial velocity[END_REF]. Here, as in BRLD16, we mainly follow the presentation given in [START_REF] Blanco-Rodríguez | Internal structure of vortex rings and helical vortices[END_REF]. The vortex is considered in the local Frenet frame (t, n, b) attached to the vortex centerline and moving with the structure. We assume that the vortex is concentrated (i.e. thin), which means that its core size a is small compared to the local curvature radius R c of the vortex centerline and the shortest separation distance δ to other vortex structures. For simplicity, we consider a single small parameter ε = a/R c , and assume that δ = O(R c ).
The internal vortex dynamics is described using the "cylindrical" coordinate system (r, ϕ, s) constructed from the Frenet frame (see Fig. 1).
The velocity-pressure field of the base flow is expanded in powers of ε as U = U_0 + εU_1 + ⋯. The leading-order contribution is the prescribed Batchelor vortex of velocity field U_0 = (0, V^(0)(r), W^(0)(r), P^(0)(r)) with

V^(0)(r) = (1 - e^(-r²))/r,   W^(0)(r) = W_0 e^(-r²).   (2.1)
As in BRLD16, spatial and time scales have been non-dimensionalized using the core size a and the maximum angular velocity of the vortex Ω^(0)_max = Γ/(2πa²), Γ being the vortex circulation. The axial flow parameter W_0 is defined as the ratio

W_0 = W^(0)_max / (Ω^(0)_max a).   (2.2)
We assume that W_0 ≤ 0.5, such that the vortex remains unaffected by the inviscid swirling jet instability [START_REF] Mayer | Viscous and inviscid instabilities of a trailing vortex[END_REF]. We also implicitly assume that the weak viscous instabilities occurring for small values of W_0 (Fabre & Jacquin 2004b; [START_REF] Dizès | Large-Reynolds-number asymptotic analysis of viscous centre modes in vortices[END_REF]) remain negligible in the parameter regime that is considered. In the following, we shall also use the expressions of the angular velocity Ω^(0)(r) and the axial vorticity ζ^(0)(r):

Ω^(0)(r) = (1 - e^(-r²))/r²,   ζ^(0)(r) = 2 e^(-r²).   (2.3)
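For reference, the profiles (2.1) and (2.3) are straightforward to evaluate numerically. The sketch below is ours (not from the paper) and simply guards the removable singularity on the axis:

```python
import numpy as np

def batchelor_profiles(r, W0):
    """Nondimensional Batchelor vortex profiles, Eqs. (2.1) and (2.3)."""
    r = np.asarray(r, dtype=float)
    e = np.exp(-r**2)
    rs = np.maximum(r, 1e-12)                             # avoid 0/0 on the axis
    V = (1.0 - e) / rs                                    # azimuthal velocity
    W = W0 * e                                            # axial velocity
    Omega = np.where(r > 1e-12, (1.0 - e) / rs**2, 1.0)   # angular velocity
    zeta = 2.0 * e                                        # axial vorticity
    return V, W, Omega, zeta

V, W, Omega, zeta = batchelor_profiles(np.linspace(0.0, 4.0, 5), W0=0.4)
print(Omega[0], zeta[0])   # -> 1.0 2.0 on the axis, as expected
```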
As explained by [START_REF] Blanco-Rodríguez | Internal structure of vortex rings and helical vortices[END_REF], the first order correction is a dipolar field which can be written as
U_1 ∼ ε Re[U^(1) e^(iϕ)] = (ε/2) (iU^(1)(r), V^(1)(r), W^(1)(r), P^(1)(r))ᵀ e^(iϕ) + c.c.,   (2.4)

where expressions for U^(1), V^(1), W^(1) and P^(1) are provided in appendix A. It is worth emphasizing that these expressions only depend on the local characteristics of the vortex at leading order. In particular, they do not depend on the local torsion. For helices, torsion as well as the Coriolis effects associated with the change of frame appear at second order [START_REF] Hattori | Short-wavelength stability analysis of a helical vortex tube[END_REF]. The above expression then describes the internal structure of both helices and rings up to the order ε. This contrasts with the quadripolar correction responsible for the elliptic instability, which appears at second order. This quadripolar correction varies according to the global vortex geometry and is different for rings and helices even if they have the same local curvature.
Perturbation equations
The perturbation equations are obtained by linearizing the governing equations around the base flow U = U_0 + εU_1 + ⋯. As shown in BRLD16, if the perturbation velocity-pressure field is written as u = (-iu, v, w, p), we obtain, up to o(ε) terms, a system of the form:

(i∂_t I + i∂_s P + M) u = ε (e^(iϕ) N_+^(1) + e^(-iϕ) N_-^(1)) u + (i/Re) V u,   (2.5)

where the operators I, P, M = M(-i∂_ϕ), N_±^(1) = N_±^(1)(-i∂_ϕ, -i∂_s) and V = V(-i∂_ϕ, -i∂_s) are defined in Appendix B.
The left-hand side corresponds to the inviscid perturbation equations of the undeformed Batchelor vortex. The first term on the right-hand side is responsible of the curvature instability, while the second term accounts for the viscous effects on the perturbations. By introducing viscous effects in this equation, we implicitly assume that the Reynolds number
Re = Ω^(0)_max a²/ν = Γ/(2πν),

with ν the kinematic viscosity, is of order 1/ε.
Instability description
Curvature instability mechanism
The mechanism of the curvature instability is similar to that of the elliptic instability.
The instability results from a resonant coupling of two Kelvin modes of the undeformed axisymmetric vortex with non-axisymmetric corrections. Two Kelvin modes of characteristics (ω A , k A , m A ) and (ω B , k B , m B ) are resonantly coupled via the dipolar correction if they satisfy the condition of resonance (assuming m A < m B )
ω_A = ω_B,   k_A = k_B,   m_A = m_B - 1.   (3.1)
Fukumoto (2003) further demonstrated that the coupling is destabilizing only if the energies of the modes are of opposite sign or if the frequency vanishes. It leads to a growth of the Kelvin mode combination with a maximum growth rate scaling as ε.
Formal derivation of the growth rate formula
For each resonant configuration, a growth rate expression can be obtained from an orthogonality condition, as we did for the elliptic instability (see BRLD16). We consider a combination of two Kelvin modes of azimuthal wavenumbers m_A and m_B = m_A + 1 close to their condition of resonance (3.1):
u = [A ũ_A(r) e^(i m_A ϕ) + B ũ_B(r) e^(i m_B ϕ)] e^(iks - iωt),   (3.2)

where k is close to k_A = k_B = k_c, and ω is close to ω_A = ω_c and ω_B = ω_c + i Im(ω_B).
We assume that the resonance is not perfect. The mode B will exhibit a weak critical layer damping given by Im(ω B ) (imaginary part of ω B ). The functions ũA (r) and ũB (r) are the eigenfunctions of the Kelvin modes which satisfy
(ω_A I - k_A P + M(m_A)) ũ_A = 0,   (3.3)

(ω_B I - k_B P + M(m_B)) ũ_B = 0,   (3.4)
with a prescribed normalisation:
p̃_A ∼ r^|m_A| and p̃_B ∼ r^|m_B| as r → 0.   (3.5)
If we plug (3.2) in (2.5), we obtain for the components proportional to e im A ϕ and e im B ϕ :
A [ωI - kP + M(m_A) - (i/Re) V(m_A, k)] ũ_A = B ε N_-^(1)(m_B, k) ũ_B,   (3.6)

B [ωI - kP + M(m_B) - (i/Re) V(m_B, k)] ũ_B = A ε N_+^(1)(m_A, k) ũ_A.   (3.7)
Relations between the amplitudes A and B are obtained by projecting these equations on the subspace of the adjoint Kelvin modes. We define the adjoint eigenfunctions ũ † A and ũ † B of the Kelvin modes as the solutions to the adjoint equations of (3.3)-(3.4) with respect to the scalar product
⟨u_1, u_2⟩ = ∫_0^∞ u_1 · u_2 r dr = ∫_0^∞ (u_1 u_2 + v_1 v_2 + w_1 w_2 + p_1 p_2) r dr.   (3.8)
We then obtain
(ω - ω_c - Q_A (k - k_c) - i V_A/Re) A = ε C_AB B,   (3.9)

(ω - ω_c - i Im(ω_B) - Q_B (k - k_c) - i V_B/Re) B = ε C_BA A,   (3.10)

where the coefficients of these equations are given by

Q_A = ⟨ũ†_A, P ũ_A⟩ / ⟨ũ†_A, I ũ_A⟩,   Q_B = ⟨ũ†_B, P ũ_B⟩ / ⟨ũ†_B, I ũ_B⟩,   (3.11)

V_A = ⟨ũ†_A, V ũ_A⟩ / ⟨ũ†_A, I ũ_A⟩,   V_B = ⟨ũ†_B, V ũ_B⟩ / ⟨ũ†_B, I ũ_B⟩,   (3.12)

C_AB = ⟨ũ†_A, N_-^(1)(m_B, k_B) ũ_B⟩ / ⟨ũ†_A, I ũ_A⟩,   C_BA = ⟨ũ†_B, N_+^(1)(m_A, k_A) ũ_A⟩ / ⟨ũ†_B, I ũ_B⟩.   (3.13)
The formula for the complex frequency ω is then finally given by
[ω - ω_c - i Im(ω_B) - Q_B (k - k_c) - i V_B/Re] [ω - ω_c - Q_A (k - k_c) - i V_A/Re] = -ε² N²,   (3.14)

with

N = √(-C_AB C_BA).   (3.15)

The right-hand side of (3.14) represents the coupling terms responsible for the curvature instability. The left-hand side of (3.14) gives the dispersion relation of each Kelvin mode close to the resonant point. It is important to mention that none of the coefficients Q_A, Q_B, V_A, V_B and N depends on the normalization chosen for the Kelvin modes.
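Since (3.14) is a quadratic in ω, the growth rate of a resonant configuration can be evaluated in closed form once the coefficients are known. The sketch below is ours; the coefficient values are illustrative placeholders and are not taken from table 1:

```python
import numpy as np

def growth_rate(k, eps, Re, omega_c, k_c, Q_A, Q_B, V_A, V_B, N, Im_wB=0.0):
    """Largest Im(omega) of the two roots of the resonance equation (3.14)."""
    a_A = omega_c + Q_A * (k - k_c) + 1j * V_A / Re               # mode A branch
    a_B = omega_c + 1j * Im_wB + Q_B * (k - k_c) + 1j * V_B / Re  # mode B branch
    disc = np.sqrt(((a_A - a_B) / 2.0) ** 2 - (eps * N) ** 2 + 0j)
    roots = np.stack([(a_A + a_B) / 2.0 + disc, (a_A + a_B) / 2.0 - disc])
    return roots.imag.max(axis=0)

# Illustrative coefficients only (not the values of table 1)
k = np.linspace(-1.7, -1.3, 400)
sigma = growth_rate(k, eps=0.2, Re=1e4, omega_c=0.1, k_c=-1.5,
                    Q_A=0.3, Q_B=-0.2, V_A=-40.0, V_B=-60.0, N=0.05)
print("max Im(omega) = %.4f at k = %.3f" % (sigma.max(), k[sigma.argmax()]))
```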
Instability results for the Batchelor vortex profile
4.1. Resonant Kelvin modes

The main difficulty of the analysis is to determine the Kelvin modes that satisfy the condition of resonance (3.1). A similar problem was already addressed in [START_REF] Lacaze | Elliptic instability in a strained Batchelor vortex[END_REF]. The Kelvin modes are here defined from the inviscid equations. Two kinds of Kelvin modes are found to exist: the regular and neutral Kelvin modes, which can easily be obtained by integrating the inviscid perturbation equations in the physical domain, and the singular and damped Kelvin modes, which require a particular monitoring of the singularities of the perturbation equations in the complex plane. We shall see below that the condition of resonance always involves a singular mode.
The singularities of the inviscid perturbation equations are the critical points r_c where ω - kW^(0)(r_c) - mΩ^(0)(r_c) = 0. When Im(ω) > 0, these singularities are in the complex plane and do not affect the solution in the physical domain (real r). However, one such critical point may cross the real axis when Im(ω) becomes negative. As explained in Le [START_REF] Dizès | Viscous critical-layer analysis of vortex normal modes[END_REF], the inviscid equations must in that case be integrated on a contour in the complex-r plane that avoids the critical point from below (resp. above) if the critical point has moved in the lower (resp. upper) part of the complex plane. On such a contour, the solution remains regular and fully prescribed by the inviscid equations. On the real axis, the inviscid solution is however not regular anymore. As illustrated in [START_REF] Fabre | The Kelvin waves and the singular modes of the Lamb-Oseen vortex[END_REF], it no longer represents the vanishing-viscosity limit of a viscous solution in a large interval of the physical domain. The Kelvin mode formed by the contour deformation technique is damped and singular. The inviscid frequency of the mode then possesses a negative imaginary part, which corresponds to what we call the critical layer damping rate. By definition, the critical layer damping rate is independent of viscosity.
A mode cannot be involved in a resonance if it is too strongly damped. In the asymptotic framework, the growth rate associated with the resonance is expected to be O(ε), so the damping rate of the modes should a priori be asymptotically small, of order ε. However, in practice, we shall consider values of ε up to 0.2, and the maximum growth rate will turn out to be around 0.05 ε. We shall then discard all the modes with a damping rate whose absolute value exceeds 0.01.
Predictions from the WKBJ analysis
Le [START_REF] Dizès | An asymptotic description of vortex Kelvin modes[END_REF] showed that information on the spectrum of the Kelvin modes can be obtained using a large-k asymptotic analysis. They applied their theory to the Batchelor vortex and were able to categorize the neutral Kelvin modes in four different types: regular core modes, singular core modes, regular ring modes, singular ring modes. For each m, they provided the region of existence of each type of mode in a (kW_0, ω) plane.

Figure 2. Prediction from the WKBJ analysis of the domains of parameters in the (kW_0, ω) plane where resonance between two Kelvin modes (m_A, m_A + 1) is possible. Only positive frequencies are considered. A symmetrical plot is obtained for negative frequencies.

The energy of the waves can also be deduced from the asymptotic expression of the dispersion relation, as shown in Le [START_REF] Dizès | Inviscid waves on a Lamb-Oseen vortex in a rotating stratified fluid: consequences on the elliptic instability[END_REF]. It is immediately found that regular core modes and regular ring modes are always of negative energy, while singular modes have positive energy. The condition of resonance can then easily be analysed. One just needs to superimpose the domains of existence of each pair of modes of azimuthal wavenumbers m and m + 1 to find the regions of possible resonance. The final result is summarized in Fig. 2. For positive frequencies, only three different regions are obtained, corresponding to (m_A, m_B) = (0, 1), (1, 2) and (2, 3) (negative frequencies are obtained by symmetry, changing m → -m and k → -k). No intersection of the domains of existence of the modes m and m + 1 is obtained for m larger than 2. In each region of Fig. 2, we always find that the mode A is a regular core mode of negative energy, while the mode B is a singular core mode of positive energy. Each branch crossing is therefore expected to provide an instability.
As shown in Le [START_REF] Dizès | An asymptotic description of vortex Kelvin modes[END_REF], both types of core modes have an asymptotic dispersion relation of the form
k ∫_0^(r_t) √Δ(r) / Φ(r) dr = (|m| + 2l) π/2,   l = 0, 1, 2, …   (4.1)

where

Δ(r) = 2Ω^(0)(r) ζ^(0)(r) - Φ²(r),   (4.2)

Φ(r) = ω - mΩ^(0)(r) - kW^(0)(r),   (4.3)

and r_t is a turning point defined by Δ(r_t) = 0. The integer l is a branch label which measures the number of oscillations of the mode in the vortex core: the larger l, the more oscillating the mode. Singular modes differ from regular modes by the presence of a critical point r_c > r_t, where Φ(r_c) = 0, in their radial structure. In the WKBJ description, this critical point does not create any damping at leading order. However, it makes the eigenfunction singular. It justifies the use of a complex integration path in the numerical resolution of the mode.
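As an illustration, the quantization condition (4.1) can be solved numerically: locate the turning point r_t, evaluate the integral by quadrature, and bracket the frequency. The sketch below is ours; the radial wavenumber is taken as |k|√Δ/|Φ|, and the frequency bracket and mode indices are guesses that may need adjustment:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

W0 = 0.2   # axial flow parameter

def Omega(r):  return (1.0 - np.exp(-r**2)) / r**2
def zeta(r):   return 2.0 * np.exp(-r**2)
def Wax(r):    return W0 * np.exp(-r**2)

def Phi(r, omega, m, k):
    return omega - m * Omega(r) - k * Wax(r)

def Delta(r, omega, m, k):
    return 2.0 * Omega(r) * zeta(r) - Phi(r, omega, m, k)**2

def residual(omega, m, k, l):
    """|k| int_0^{rt} sqrt(Delta)/|Phi| dr - (|m| + 2l) pi/2, cf. Eq. (4.1)."""
    rt = brentq(Delta, 1e-6, 10.0, args=(omega, m, k))   # turning point
    f = lambda r: np.sqrt(max(Delta(r, omega, m, k), 0.0)) / abs(Phi(r, omega, m, k))
    I, _ = quad(f, 1e-9, rt, limit=200)
    return abs(k) * I - (abs(m) + 2 * l) * np.pi / 2.0

# Frequency of an m = 0 core mode with branch label l = 4 at k = -1.5
omega = brentq(residual, 0.05, 0.8, args=(0, -1.5, 4))
print(f"WKBJ estimate: omega = {omega:.3f}")
```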
Numerical determination of the Kelvin modes
The characteristics of the resonant modes are obtained by integrating numerically Eqs. (3.3)-(3.4). The numerical code is based on a Chebyshev spectral collocation method, essentially identical to that used in Fabre & Jacquin (2004b). The eigenvalue problem is solved in a Chebyshev domain (-1,1) on 2(N + 1) nodes which is mapped on a line in the complex-r plane using the mapping
r(x; A_c, θ_c) = A_c tanh(x) e^(iθ_c),   (4.4)

where A_c is a parameter close to 1 that controls the spreading of the collocation points, and θ_c is the small inclination angle of the path in the complex-r plane. We typically take θ_c ≈ π/10, such that the critical point of the singular mode is avoided. As in Fabre & Jacquin (2004b), we take advantage of the parity properties of the eigenfunctions by expressing, for odd m (resp. even m), w̃ and p̃ on odd polynomials (resp. even) and ũ and ṽ on even polynomials (resp. odd). This leads to a discretized eigenvalue problem of order 4N, which is solved using a global eigenvalue method. We also use an Arnoldi algorithm in order to follow specific eigenvalues and easily find the condition of resonance. In most computations, the value N = 200 was found to be adequate. This collocation method was also used to determine the adjoint modes and to compute the integrals that define the coefficients in the growth rate equation (3.14).
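A minimal sketch of the grid construction implied by (4.4) is given below (our code; a complete solver also needs the associated differentiation matrices and the parity splitting just described):

```python
import numpy as np

def collocation_grid(N, A_c=1.0, theta_c=np.pi / 10):
    """Chebyshev (Gauss) nodes on (-1, 1) mapped into the complex-r plane
    along a line tilted by theta_c, so the critical point is bypassed."""
    M = 2 * (N + 1)
    x = np.cos(np.pi * (np.arange(M) + 0.5) / M)      # interior Chebyshev nodes
    return A_c * np.tanh(x) * np.exp(1j * theta_c)    # mapping (4.4)

r = collocation_grid(200)
print(r.size, bool(r.imag.max() > 0))   # 402 nodes, lifted off the real axis
```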
Typical results for the eigenvalues are shown in Fig. 3. In this figure, we compare the numerical results with the theoretical formula (4.1). The good agreement demonstrates the usefulness of the asymptotic approach to obtain valuable estimates for the condition of resonance.
Stability diagram
Lamb-Oseen vortex
In this section, we assume that there is no axial flow. The underlying vortex is then a Lamb-Oseen vortex. For this vortex, Kelvin mode properties have been documented in Le [START_REF] Dizès | An asymptotic description of vortex Kelvin modes[END_REF] and [START_REF] Fabre | The Kelvin waves and the singular modes of the Lamb-Oseen vortex[END_REF] for m = 0, 1, 2, 3. It was shown that the singular core modes become strongly damped as soon as the critical layer singularity moves into the vortex core. This gives a constraint on the frequency of the mode B, which has to be small. As a consequence, we immediately see that the only modes (m_A, m_B) that can possibly resonate are the modes (m_A, m_B) = (0, 1). Moreover, the constraint on the frequency implies that only large branch labels of the mode m_A = 0 will be able to resonate with a weakly damped mode m_B = 1. In figure 4, we show the crossing of the first m_A = 0 and m_B = 1 branches in the (k, ω) plane. Only the modes with a damping rate smaller (in absolute value) than 0.01 are in solid lines. We observe that the branch label of the m_A = 0 modes must be 6 or larger to cross the first m_B = 1 branch in the part where it is only weakly damped. The characteristics of these first resonance points are given in table 1. We also give in this table the values of the coefficients of Eq. (3.14) at each resonant point. For each resonant configuration, we can then plot the growth rate Im(ω) of the curvature instability as a function of the wavenumber k for any Re and ε. An example of such a plot is provided in Fig. 5. In this figure, we have plotted only the first four resonant configurations. Other configurations have been computed, corresponding to labels [2, 6], [2, 7], etc., but their growth rates were found to be much weaker for Re ≤ 10⁵. The spatial structure of the most unstable resonant configurations is also given in Fig. 5. We have plotted the vorticity contours for a particular phase which maximizes the relative amplitude of the Kelvin mode m_A = 0. This mode is then clearly visible in each case. If we had chosen a phase such that e^(iks - iωt) = i, we would have seen the mode m_B = 1 only.
We have systematically computed the maximum growth rate and obtained the most unstable mode characteristics for all ε ≤ 0.22 and Re ≤ 10⁵. The result is displayed in Fig. 6, where the maximum growth rate is shown in the (ε, Re) plane. The labels of the most unstable configurations are also shown in this plot. We can note that only 3 resonant configurations can become the most unstable, corresponding to the crossing of the first branch of the Kelvin mode m_B = 1 with the 7th to 9th branches of the Kelvin mode m_A = 0. In particular, the resonant configuration [6, 1] observed in Fig. 5 never becomes the most unstable configuration, although this configuration possesses the largest coupling coefficient N (see table 1). This is directly related to the property mentioned above: the critical layer damping rate Im(ω_B) of the mode m_B = 1 is too large.
Figure 6 provides the stability diagram of the Lamb-Oseen vortex with respect to the curvature instability. It is important to emphasize the large value of the Reynolds number needed for instability. Even for a value as large as ε = 0.2, the critical Reynolds number for instability is Re c ≈ 6000. We shall see in the next section that axial flow will strongly decrease this value.
[Figure 5: growth rate Im(ω) versus wavenumber k of the resonant configurations [6,1], [7,1], [8,1] and [9,1] of the Lamb-Oseen vortex.]
Effects of the axial flow
The characteristics of the Kelvin modes strongly vary with the parameter W_0. Additional branch crossings involving smaller branch labels are obtained as W_0 increases. As explained in the previous section, resonances between m_A = 1 and m_B = 2, as well as between m_A = 2 and m_B = 3, become a priori possible (see Fig. 2). However, they involve very high branch labels, which implies that they never become the most unstable modes for moderate Reynolds numbers (Re ≤ 10⁵).
[Figure 7: growth rate Im(ω) versus k for W_0 = 0.2 (left) and W_0 = 0.4 (right), with the resonant configurations labelled, together with vorticity contours of the main resonant modes.]
For the parameters W_0 = 0.1, 0.2, 0.3, 0.4 and 0.5, we have considered the crossing points of the seven first branches of the Kelvin modes m_A = 0 and m_B = 1. Each crossing point corresponds to a mode resonance. At each crossing point, we have computed the coefficients of the growth rate expression. In Fig. 7, we have plotted the growth rate curves obtained from Eq. (3.14) for W_0 = 0.2 and 0.4, and for ε = 0.2 and Re = 5000. In contrast to the Lamb-Oseen vortex, more resonant configurations can now become unstable. Moreover, they involve smaller branch labels. The spatial structure of the main resonant configurations has also been provided in Fig. 7 for this set of parameters. As in Fig. 5, we have plotted the vorticity contours for a particular phase which maximizes the relative amplitude of the Kelvin mode m_A = 0. Note that the spatial structure of the resonant mode [3, 1] is different for W_0 = 0.2 and W_0 = 0.4: this difference is not only associated with the different values of the coefficient B/A obtained from (3.6), but also with an effect of W_0 on the Kelvin modes.
If we take the maximum value of the growth rate over all possible k for each ε and Re, we obtain the plots shown in Fig. 8. The same colormap and contour levels have been used as in Fig. 6 for comparison. We clearly see that the growth rates are larger in the presence of axial flow. The region of instability is also much larger. In these plots, we have indicated the labels of the most unstable modes. As for the Lamb-Oseen vortex, the most unstable configuration changes as ε or Re varies. However, the branch labels of the Kelvin modes are now smaller in the presence of axial flow. This property explains in part the larger growth rates of the configurations with jet. Indeed, the viscous damping of the modes with the smallest labels is the weakest. The impact of viscosity is therefore weaker on these modes. Yet, the resonant configuration with the smallest labels are not necessarily the most unstable because they may also exhibit a larger critical layer damping, or a smaller coupling coefficient N [see equation (3.14)].
In tables 2 and 3 of appendix C, we have provided the characteristics of the main resonant configurations for W 0 = 0.2 and W 0 = 0.4. The data for the other resonant configurations and for other values of W 0 are available as supplementary material.
Competition with the elliptic instability in a vortex ring
The results obtained in §4 can readily be applied to the vortex ring by using ε = a/R where R is the radius of the ring and a the core radius.
As first shown by [START_REF] Widnall | The instability of short waves on a vortex ring[END_REF], the vortex ring is also subject to the elliptic instability. This instability appears at the order ε 2 , so it is a priori smaller. Yet, the short wavelength instability observed experimentally in a vortex ring without swirl has always been attributed to the elliptic instability (see the review by [START_REF] Shariff | Vortex rings[END_REF]. It is therefore natural to provide a more precise comparison of the growth rates of both instabilities.
In BRLD16, we have obtained theoretical predictions for the elliptic instability in a vortex ring with a Batchelor profile. As for the curvature instability, growth rate contour plots can be obtained for the elliptic instability in a (ε, Re) plane using the data of this paper. It should be noted that an error of a factor 2 was found in some of the coefficients of the elliptic instability growth rate formula. This error, which is corrected in appendix D, does not affect the main conclusion of this paper but modifies the relative importance of the elliptic instability with respect to the curvature instability.
The comparison of the elliptic instability with the curvature instability is shown in Fig. 9 for three values of the axial flow parameters (W 0 = 0, 0.2, 0.4). In this figure, we have plotted the largest value of both instability growth rates in the (ε, Re) plane. We have also indicated where each instability appears and becomes dominant over the other one. Interestingly, we observe that depending on the value of W 0 the region of dominance of the curvature instability changes. For the case without axial flow [Fig. 9(a)], the elliptic instability domain is larger than the curvature instability domain and the elliptic instability is always the dominant instability. For the other two cases W 0 = 0.2 and W 0 = 0.4, the situation is different: there is a balance between both instabilities. For both cases, curvature instability is dominant over the elliptic instability for small ε while it is the opposite for large ε. Yet, there are some differences between both cases. For W 0 = 0.2, we observe that the curvature instability is the first instability to appear as Re is increased for all ε < 0.2. For W 0 = 0.4, the elliptic instability domain is larger and extends to smaller values of the Reynolds numbers than for the other two cases. It is also the dominant instability for all Reynolds numbers as soon as ε is larger than 0.1.
These plots have interesting implications. First, it explains why the curvature instability has never been observed in vortex ring without swirl. For such a vortex ring, the elliptic instability is always stronger than the curvature instability. Second, it implies that the curvature instability should be visible in a vortex ring with swirl if ε is smaller than 0.1 and the Reynolds number larger than 10000.
It should also be noted that due to the different inviscid scalings, which are in ε for the curvature instability growth rate and in ε 2 for the elliptic instability growth rate, the curvature instability should always become dominant over the elliptic instability whatever W 0 if ε is sufficiently small and the Reynolds number sufficiently large. This tendency is clearly seen in figures 9(b,c) for W 0 = 0.2 and W 0 = 0.4. For W 0 = 0 (fig. 9(a)), the change of dominance of both instabilities occurs for a much larger Reynolds number.
Conclusion
In this work, we have provided the characteristics of the curvature instability for a Batchelor vortex for several axial flow parameters. We have shown that although a same resonant coupling is active as in a Rankine vortex, the characteristics of the resonant configurations are very different owing to the critical layer damping of many Kelvin modes. We have shown that this effect precludes the resonance of Kelvin modes with azimuthal wavenumbers larger than m = 3. Moreover, when it occurs, the resonance of modes (m A , m B ) = (1, 2) or (2, 3), involves a Kelvin mode with a very high complexity (large branch label) which is strongly sensitive to viscous effects. For moderate Reynolds numbers (Re 10 5 ), we have then found that the most unstable configuration always involves Kelvin modes of azimuthal wavenumbers m A = 0 and m B = 1. We have analysed the condition of resonance of the 7 first branches (9 for the Lamb-Oseen vortex) for several axial flow parameters to identify the most unstable configuration.
For the case without axial flow (Lamb-Oseen vortex), we have shown that the most unstable configuration involves the first branch of the Kelvin mode of azimuthal wavenumber m B = 1 and the seventh to nineth branch of the Kelvin mode of azimuthal wavenumber m A = 0, depending on the Reynolds number and ε (for Re 10 5 ). The high value of the branch label implies a larger viscous damping and therefore a weaker growth rate of the curvature instability for this case. In the presence of axial flow, resonant configura-tions with smaller branch labels were shown to become possible. The instability growth rate was then found to be larger than without axial flow. We have presented the characteristics of the most unstable configurations for two axial flow parameters W 0 = 0.2, W 0 = 0.4. The data provided as supplementary material can be used to obtain the instability characteristics for other values of W 0 (W 0 = 0, 0.1, 0.2, 0.3, 0.4, 0.5).
We have applied our results to the vortex ring and analysed the competition of the curvature instability with the elliptic instability. We have shown that the elliptic instability is always dominant without axial flow. However, the situation changes in the presence of axial flow which provides hope in observing this instability in vortex rings with swirl.
The present results can also be applied to helical vortices as they only depend on the local vortex curvature. By contrast, the elliptic instability characteristics in helices depend on the helix pitch and on the number of helices (Blanco-Rodríguez & Le Dizès 2016). Whether the curvature instability dominates the elliptic instability must then be analysed on a case by case basis. All the elements to perform such an analysis are now available.
Our analysis has been limited to a particular model of vortices. In the very large Reynolds number context of aeronautics, other models have been introduced to describe the vortices generated by wing tips [START_REF] Moore | Axial flow in laminar trailing vortices[END_REF][START_REF] Spalart | Airplane trailing vortices[END_REF]. It would be interesting to analyse the occurrence of the curvature instability in these models as well as the competition with the elliptic instability (Fabre & Jacquin 2004a;[START_REF] Feys | Elliptical instability of the Moore-Saffman model for a trailing wingtip vortex[END_REF]. 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0
, P = W (0) 0 0 0 0 W (0) 0 0 0 0 W (0) 1 0 0 -1 0 , ( B 1)
M(-i∂ϕ) = Ω (0) i∂ ϕ -2 Ω (0) 0 ∂ r -ζ (0) Ω (0) i∂ ϕ 0 i r ∂ ϕ -W (0) r 0 Ω (0) i∂ ϕ 0 1 r + ∂ r -i r ∂ ϕ 0 0 , (B 2) V(-i∂ϕ, -i∂ s ) = ∆ - 1 r 2 2i r 2 ∂ ϕ 0 0 2i r 2 ∂ ϕ ∆ - 1 r 2 0 0 0 0 ∆ 0 0 0 0 0 , ( B
N
(1)
± (-i∂ϕ, -i∂ s ) = 1 2 D (1) ± ± U (1) r U (1) r + 2 V (1) r -2W (0) 0 V (1) r + V (1)
r D
(1)
± ± V (1) r ± U (1) r ±2W (0) 0 W (1) r -W (0) ± W (1) r ∓ W (0) D (1) ± ∓ V (0) -ri∂ s 1 ±1 ri∂ s 0 , ( B 4)
where
D
(1)
± = ±U (1) ∂ r - V (1)
r i∂ ϕ -T w i∂ s , T w = W (1) + rW (0) , (B 5) 6)
T v = V (1) + rV (0) , ∆ = ∂ 2 r + 1 r ∂ r + 1 r 2 ∂ 2 ϕ + ∂ 2 s . ( B
Figure 1 .
1 Figure 1. Sketch of the vortex structure and definition of the local Frenet frame (adapted from BRLD16).
Figure 3 .
3 Figure 3. Analysis of the branch crossing for the Batchelor vortex at W0 = 0.2. Plot of Re(ω) versus kW0 of the first branches of the Kelvin modes of azimuthal wavenumber mA (in blue) and mB = mA + 1 (in green). The branch labels are also indicated. Solid lines: numerical results. Dashed lines: WKBJ predictions. The domains shown in figure 2 where branch crossings are expected have also been indicated. (a): (mA, mB) = (0, 1); (b): (mA, mB) = (1, 2).
Figure 4 .
4 Figure 4. Frequency versus wavenumber of the Kelvin modes of the Lamb-Oseen vortex for mA = 0 (blue) and mB = 1 (red) in the frequency-wavenumber domain where resonance exists. The real part of the frequency is plotted in solid lines when |Im(ω)| < 0.01 (neutral or weakly damped modes) and in dotted lines when |Im(ω)| > 0.01 (strongly damped modes).
Figure 5 .Figure 6 .
56 Figure 5. Top: Temporal growth rate of the curvature instability as a function of the axial wavenumber for the Lamb-Oseen vortex (W0 = 0) at ε = 0.1, Re = ∞ (dashed line) and Re = 10 5 . The label [lA, lB] corresponds to the branch indices of the resonant configuration. It means that the resonant configuration is formed of the lAth branch of the Kelvin mode mA = 0 and the lBth branch of the Kelvin mode mB = 1. Bottom: Vorticity contours in a (x, y) cross section of modes [7, 1], [8, 1], and [9, 1] for the parameters indicated by a star on the top graph (that is at k = kc). The vorticity is defined by (3.2) at a time t and location s such that e iks-iωt = 1 with A = 1. The black circle indicates the vortex core radius.
2 4Figure 7 .
27 Figure 7. Top: Temporal growth rate of the curvature instability as a function of the axial wavenumber for the Batchelor vortex at ε = 0.2 and Re = 5000 for W0 = 0.2 (left) and W0 = 0.4 (right). Bottom: Vorticity contours in a (x, y) cross section of modes [3, 1], [4, 1], and [4, 2] for W0 = 0.2 (left) and modes [3,1] and [2.2] for W0 = 0.4 (right). See caption of Fig. 5 for more information.
Figure 8 .Figure 9 .
89 Figure 8. Maximum growth rate contours of the curvature instability in the (ε,Re) plane for the Batchelor vortex. (a): W0 = 0.2; (b): W0 = 0.4. See caption of Fig. 6.
Table 1 .
1 Characteristics of the first resonant configurations (mA, mB) = (0, 1) of label[lA, lB] for the Lamb-Oseen vortex (W0 = 0).
3)
Table 2 .
2 Same as table 1 for the Batchelor vortex with W0 = 0.2.
† Email address for correspondence: [email protected]
Acknowledgments
This work received support from the French Agence Nationale de la Recherche under the A*MIDEX grant ANR-11-IDEX-0001-02, the LABEX MEC project ANR-11-LABX-0092 and the ANR HELIX project ANR-12-BS09-0023-01.
Appendix A. Dipolar correction
The first order correction is given by
where
with
We have used the index r to denote derivative with respect to r (for example
Appendix B. Operators
The operators appearing in equation (2.5) are given by
Appendix C. Tables
In this section, we provide the coefficients of the growth rate formula (3.14) for the dominant instability modes for the three cases W 0 = 0, W 0 = 0.2 and W 0 = 0.4.
Blanco-Rodríguez and Le Dizès
Appendix D. Elliptic instability of a curved Batchelor vortex -Corrigendum
Due to a normalisation mistake, a systematic error has been made in the values of the coefficients R AB and R BA in the dispersion relation (4.7) of Blanco-Rodríguez & Le [START_REF] Blanco-Rodríguez | Elliptic instability of a curved Batchelor vortex[END_REF]. The correct values are twice those indicated in this paper for all the modes. This modifies the values given in table 2 and formulas (C2n-q), (C3n-q), (C4n-q). For instance, in table 2 the correct value of R AB for the mode (-2, 0, 1) at
This error affects the y-scale of the plots (c) and (d) of figure 5 which has to be multiplied by two, and those of figure 6, which has to be divided by two. It also changes all the figures obtained in section 8. The correct figures (available on request) are nevertheless qualitatively similar if we multiply the y-scale of all the plots by a factor 2.
The comparison with [START_REF] Widnall | The instability of the thin vortex ring of constant vorticity[END_REF] done in section 8.1 for a vortex ring is also slightly modified. With the correct normalisation, the inviscid result of [START_REF] Widnall | The instability of the thin vortex ring of constant vorticity[END_REF] for the Rankine vortex is σ max /ε 2 = [(0.428 log(8/ε) -0.455) 2 -0.113] 1/2 while we obtain for the Lamb-Oseen vortex σ max /ε 2 = 0.5171 log(8/ε) -0.9285. The Lamb-Oseen vortex ring is thus less unstable than the Rankine vortex ring as soon as ε > 0.039 for the same reason as previously indicated. |
01430563 | en | [
"phys.meca.mefl"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01430563/file/article13.pdf | S Le Dizès
E Villermaux
Capillary jet breakup by noise amplification
A liquid jet falling by gravity ultimately destabilizes by capillary forces. Accelerating as it falls, the jet thins and stretches, causing the capillary instability to develop on a spatially varying substrate. We discuss quantitatively the interplay between instability growth, jet thinning and longitudinal stretching for two kinds of perturbations, either solely introduced at the jet nozzle exit, or affecting the jet all along its length. The analysis is conducted for any values of the liquid properties for sufficiently large flow rate. In all cases, we determine the net gain of the most dangerous perturbation for all downstream distances, thus predicting the jet length, the wavelength at breakup and the resulting droplet size.
Introduction
Seemingly simple questions are not always the simplest to answer quantitatively. A canonical illustration of this affirmation is the apparently simple problem of a liquid thread, falling from a nozzle by it own weight under the action of gravity, as shown in figure 1. As it falls, the thread eventually fragments into drops, a fact that we understand because it has locally a columnar shape, and thus suffers a capillary instability. But how far from the nozzle exit does breakup happen ? Even a distracted look at the possible scenarii lets one glimpse the potential difficulties of a precise analysis: a distance z is the product of a velocity u by a time τ z = u τ.
(1.1) Capillary breakup occurs within a time τ which depends on the thread radius h, on the liquid density ρ, viscosity η and surface tension γ, and we know that most of this time is spent at developing an instability about the quasi-columnar shape of the thread, the subsequent phenomena occurring around the pinching instant at the drops separation being comparatively much faster [START_REF] Eggers | Physics of fluid jets[END_REF]. The time τ is either the capillary time ρh 3 /γ when inertia and surface tension are solely at play, or the viscous capillary time ηh/γ if viscous effects dominantly slow down the unstable dynamics. When the jet issues from the nozzle ballistically, keeping its velocity and radius constant, the problem is indeed simple, and amounts to estimate correctly the relevant timescale τ to compute the so-called 'Liquid intact length' of the jet (see the corresponding section in [START_REF] Eggers | Physics of fluid jets[END_REF] for a complete discussion and experimental references, including the case when the jet suffers a shear instability with the surrounding environment). Subtleties arise when the axial velocity of the jet depends on axial distance z.
A jet falling in the direction of gravity accelerates. If fed at a constant flow rate at the nozzle, stationarity implies that the thread radius thins with increasing distances from the exit. Therefore, if both u and h depend on downstream distance, which estimates will correctly represent the breakup distance z in equation (1.1) ? Those at the nozzle exit, those at the breakup distance, or a mixture of the two ? As the radius thins, the instability h(z, t) h 0 u 0 z λ max d max Figure 1: Four successive panels showing a liquid jet (density 950 kg/m 3 , viscosity η = 50×10 -3 Pa s) issuing from a round tube with radius h 0 = 2 mm at velocity u 0 = 1 cm/s, stretching in the gravity field (aligned with the z direction), and thinning as it destabilizes through the growth of bulges separated by λ max at breakup, producing stable drops of diameter d max .
may switch from an inertia to a viscous dominated régime. Then, which timescale τ should be considered to compute z ?
The detailed problem is even more subtle : The capillary instability amplifies preferentially a varicose perturbation, adjacent bulges along the thread feeding on the thinner ligament linking them (figure 1). The most amplified wavelength is proportional to h, the other wavelengths having a weaker growth rate. Since the jet accelerates, mass conservation of the incompressible liquid also implies that the distance between two adjacent instability crests increases with larger distances from the nozzle exit. The capillary instability has thus to compete with another phenomenon, namely jet stretching, characterized by another timescale (∂ z u) -1 . There are thus three timescales which may potentially contribute to τ , and which all depend intrinsically on the distance to the nozzle. Deciding a-priori which one will dominate and how is a hazardous exercise.
Deciphering the relative importance of the coupled effects mentioned above requires an instability analysis accounting for both the substrate deformation (jet stretching), and for the modification of the local instability dispersion relation as the jet thins (to describe the growing relative influence of viscosity). That question has been envisaged in the very viscous limit by [START_REF] Tomotika | Breaking up of a drop of viscous liquid immersed in another viscous fluid which is extending at a uniform rate[END_REF], for the particular case where u increases linearly with z by [START_REF] Frankel | Stability of a capillary jet with linearly increasing axial velocity (with application to shaped charges)[END_REF][START_REF] Schlichting | Boundary Layer Theory[END_REF], and more recently by [START_REF] Senchenko | Shape and stability of a viscous thread[END_REF], [START_REF] Sauter | Stability of initially slow viscous jets driven by gravity[END_REF] and [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF] for a gravitationally accelerated jet.
These last authors quantified the maximum gain that perturbations can reach at a given location using a local plane wave decomposition (WKBJ approximation). By choosing adequately the gain needed for breakup, they were able to collapse measurements of the breakup distance on a theoretical curve. They also obtained an asymptotic expression in the viscous regime consistent with the anticipated scaling law which compares the viscous capillary timescale based on the current jet radius to the stretching time of the jet.
In the present work, we use a similar approach as [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF] by searching maximum perturbation gains using WKBJ approximations. In addition to providing much more details, we extend their analysis in several ways. We first consider all the regimes ranging from very viscous to inviscid. We then compare the maximum gain and the most dangerous frequency of the perturbations for two types of excitation: (1) nozzle excitation (the perturbation is introduced at the nozzle only); (2) background noise (the perturbation is present everywhere). We finally provide predictions for the breakup wavelength and the resulting droplet size.
The paper is organized as follows: In §2, we present the mathematical formulation by providing the model for the base flow and the perturbations. An expression of the perturbation gain is derived using the WKBJ framework. In §3, the result of the optimization procedure maximizing the gain is provided for each type of excitation. The break up distance, the most dangerous frequency, the wavelength and the droplet size are analysed as functions of the gain and fluid viscosity (Ohnesorge number Oh). Asymptotic formulas for weak and strong viscosity (small and large Oh) are provided in this section, though their derivation is moved in an appendix at the end of the paper. For nozzle excitation, a peculiar behavior of the optimal perturbation observed for intermediate Ohnesorge numbers (0.1 < Oh < 1) is further discussed in §4. We show that the peak of the breakup wavelength obtained for Oh ≈ 0.3 is related to a property of the local dispersion relation outside the instability band. The results are compared to local predictions in §5 and applied to realistic configurations in §6.
Mathematical formulation
We consider an axisymmetric liquid jet falling vertically by the action of gravity g. The jet has a radius h 0 and a characteristic velocity u 0 at the nozzle (figure 1). The fluid has a density ρ, a viscosity ν = η/ρ, and a surface tension γ. The surrounding environment is considered as evanescent, and is neglected.
Base Flow
Spatial and time variables are non-dimensionalized using the radius h 0 , and the capillary time τ c = ρh 3 0 /γ respectively. The base flow is governed by three parameters
Q = u 0 ρh 0 γ , The flow rate, (2.1a) Oh = ν ρ γh 0 , The Ohnesorge number, (2.1b) Bo = ρgh 2 0 γ
The Bond number.
(2.1c)
One could alternatively use the Weber number We = Q 2 instead of the dimensionless flow rate. The Ohnesorge number is the ratio of the viscous capillary timescale to the capillary timescale. We describe the liquid jet by the one-dimensional model [START_REF] Trouton | On the coefficient of viscous traction and its relation to that of viscosity[END_REF][START_REF] Weber | Zum zerfall eines flüssigkeitsstrahles[END_REF][START_REF] Eggers | Physics of fluid jets[END_REF])
∂A ∂t + ∂ (Au) ∂z = 0, (2.2a) ∂u ∂t + u ∂u ∂z = 3 Oh 1 A ∂ ∂z A ∂u ∂z + ∂K ∂z + Bo, (2.2b) with K = 4AA zz -2A 2 z [4A + A 2 z ] 3/2 - 2 [4A + A 2 z ] 1/2 , (2.3)
where u(z, t) is the local axial velocity, A = h 2 is the square of the local radius h(z, t), z is the axial coordinate oriented downward, t is the time variable, A z and A zz are respectively, the first and second derivative of A with respect to z. The boundary conditions at the nozzle are
A(z = 0, t) = 1, u(z = 0, t) = Q. (2.4)
The stationary base flow satisfies
∂ (A 0 U 0 ) ∂z = 0, (2.5a) U 0 ∂U 0 ∂z = 3 Oh 1 A 0 ∂ ∂z A 0 ∂U 0 ∂z + ∂K 0 ∂z + Bo . (2.5b)
The first equation gives
A 0 U 0 = Q. (2.6)
We will consider the régime where the jet base flow is inertial and given at leading order by
U 0 ∂U 0 ∂z = Bo . (2.7)
This hypothesis amounts to neglect viscous and curvature effects in the jet evolution.
Because it accelerates as it falls, the jet gets thinner and slender. Curvature effects along z thus soon vanish (unless the jet is initially very small, see [START_REF] Rubio-Rubio | On the thinnest steady threads obtained by gravitational stretching of capillary jets[END_REF], and viscous stresses applying on the jet cross section are also soon overcomed by the gravity force (beyond a physical distance from the nozzle of order νu 0 /g, see [START_REF] Clarke | The asymptotic effects of surface tension and viscosity on an axiallysymmetric free jet of liquid under gravity[END_REF]. Equations (2.6) and (2.7) thus give
U 0 (z) = 2 Bo z + Q 2 , (2.8a) A 0 (z) = Q 2 Bo z + Q 2 . (2.8b)
Plugging these expressions in the viscous and curvature terms of equation (2.2b), one observe that they are both decreasing with z. Viscous and curvature terms are therefore negligible along the entire jet, if they are already negligible in the vicinity of the nozzle exit. This is satisfied if the flow rate is sufficiently large, and more precisely if the following conditions are met
Q 1, (2.9a) Q 2 Bo, (2.9b) Q 3 Bo Oh . (2.9c)
Note that if the parameters Q, Bo and Oh are defined from the local values of U 0 and A 0 , conditions (2.9a-c) are always satisfied sufficiently far away from the nozzle (e.g. [START_REF] Sauter | Stability of initially slow viscous jets driven by gravity[END_REF]. Since the phenomena we will describe result from a dynamics which integrates over distances much larger than the jet initial radius, we use here (2.8) as a good approximation of the base flow everywhere.
For simplicity, we assume in the sequel that Q is the only large parameter, Bo and Oh being of order 1 or smaller. Both U 0 and A 0 then vary with respect to the slow variable
Z = z z o + 1, (2.10) as U 0 (Z) = Q √ Z, (2.11a) A 0 (Z) = 1/ √ Z, (2.11b)
where
z o = Q 2 2 Bo (2.12)
is the (large) nondimensionalized variation scale of the base flow.
Perturbations
We now consider linear perturbations (u p , A p ) in velocity and cross-section to the above base flow. These perturbations satisfy the linear system
∂A p ∂t = - ∂(A p U 0 + A 0 u p ) ∂z , (2.13a
)
∂u p ∂t + ∂u p U 0 ∂z = 3 Oh A 0 ∂ ∂z A 0 ∂u p ∂z + A p ∂U 0 ∂z - A p A 0 ∂ ∂z A 0 ∂U 0 ∂z + ∂L(A p ) ∂z , (2.13b)
where L(A p ) is the linear operator obtained by linearizing K -K 0 around A 0 . We want to analyze these perturbations in the 'jetting' regime when the jet is globally stable. More precisely, we do not consider the global transition that leads to dripping and which has been studied elsewhere [START_REF] Dizès | Global modes in falling capillary jets[END_REF][START_REF] Sauter | Stability of initially slow viscous jets driven by gravity[END_REF][START_REF] Rubio-Rubio | On the thinnest steady threads obtained by gravitational stretching of capillary jets[END_REF]. We are interested in the growth of the perturbations that give rise to the formation of droplets far away from the nozzle. In this regime, the jet is convectively unstable: the perturbations are advected downstream as they grow. We expect droplets to form when the perturbation has reached a sufficiently large amplitude. Of particular interest is the maximum amplitude that perturbations can reach at a given location z f from a fixed level of noise. This amounts to calculate the maximum spatial gain that perturbations can exhibit at a given downstream location. For this purpose, we will consider two situations:
(a) Fluctuations are mainly present at the nozzle as in laboratory experiments where the jet nozzle is vibrated for instance [START_REF] Sauter | Stability of initially slow viscous jets driven by gravity[END_REF]. In that case, we are interested in the spatial gain at z f of perturbations generated at the nozzle z = 0.
(b) The jet is subject to a background noise which acts at every z location. In that case, we are interested in the maximum gain at z f of perturbations which originates from anywhere along the jet. In other words, we are interested in the spatial gain between z i and z f , where z i is chosen such that the gain is maximum. Obviously, the gain in that case is larger than in (a), since z = 0 is one particular excitation location among the many possible in that case.
The base flow is stationary; a temporal excitation at a given location with a fixed frequency leads to a temporal response in the whole jet with the same frequency. As the jet can be forced on A or on u, we expect two independent spatial structures associated with each frequency. If we write (u p , A p ) = (ũ, Ã)e -iωt + c.c.,
(2.14) the normalized solution forced in u at the nozzle will satisfy Ã(z = 0) = 0, ũ(z = 0) = 1, while the one forced in A at the nozzle will satisfy Ã(z = 0) = 1, ũ(z = 0) = 0. A linear combination of these two solutions can be used to obtain the normalized solution forced in u or forced in A at any location z i .
We then define a spatial gain in A from z i to z f from the solution forced in
A at z i by G A (z i , z f ) = | Ã(z f )|. Similarly, we define a spatial gain in u from z i to z f from the solution forced in u at z i by G u (z i , z f ) = |ũ(z f )|.
Both U 0 and A 0 depend on the slow spatial variable Z. Anticipating that the typical wavelength will be of order 1, a local plane wave approximation (WKBJ approximation) can be used [START_REF] Bender | Advanced mathematical methods for scientists and engineers[END_REF]. In other words, each time-harmonic perturbation amplitude can be written as a sum of expressions of the form (WKBJ approximation)
(ũ, Ã) = (v(Z), a(Z))e izo Z k(s)ds , (2.15)
where k(Z), v(Z) and a(Z) depend on the slow variation scale of the base flow. With the WKBJ ansatz, the perturbations equations become at leading order in 1/z o
(-iω + ikU 0 )a + ikA 0 v = 0, (2.16a) (-iω + ikU 0 )v = -3 Oh k 2 v + ik 2A 3/2 0 (1 -k 2 A 0 )a. (2.16b)
These two equations can be simultaneously satisfied (by non-vanishing fields) if and only if
(-iω + ikU 0 ) 2 + 3 Oh k 2 (-iω + ikU 0 ) - k 2 2 √ A 0 (1 -k 2 A 0 ) = 0.
(2.17)
This equation provides k as a function of Z. Expressions for v(Z) and a(Z) can be obtained by considering the problem to the next order (see appendix B).
Among the four possible solutions to (2.17), only the two wavenumbers corresponding to waves propagating downstream are allowed. As explained in [START_REF] Bers | Space-time evolution of plasma instabilities-absolute and convective[END_REF] (see also [START_REF] Huerre | Local and global instabilities in spatially developing flows[END_REF], these wavenumbers are the analytic continuations for real ω of functions satisfying m(k) > 0 for large m(ω). They are well-defined in the convective regime that we consider here.
If ω = ωQ with ω = O(1), the wavenumbers associated with the downstream propagating waves can be expanded as
k ∼ k o + k 1 Q (2.18)
where k o is found to be identical for both waves:
k 0 = ω U 0 = ωA 0 . (2.19)
At the order 1/Q, we get
k 1 = -i k 0 A 3/4 0 √ 2 1 -A 0 k 2 0 + 9 Oh 2 √ A 0 k 2 0 2 - 3 Oh k 2 0 A 0 2 .
(2.20)
The two wavenumbers are obtained by considering the two possible values of the square root. Although both waves are needed to satisfy the boundary conditions at the nozzle, the solution is rapidly dominated downstream by a single wave corresponding to the wavenumber with the smallest imaginary part.
Both the solution forced in A and the solution forced in u are thus expected to have a similar WKBJ approximation (2.15). The main contribution to the two gains G A (z i , z f ) and G u (z i , z f ) is therefore expected to be the same and given by the exponential factor
G(z i , z f ) = e S(Zi,Z f ) , (2.21)
where
S(Z i , Z f ) = -z o Z f Zi m(k)(Z) dZ = - z o Q Z f Zi m(k 1 )(Z) dZ. (2.22)
This implicitly assumes that 1), the WKBJ approach remains valid but the gain (2.21) is of same order of magnitude as the variation of v and a. In that case, one should a priori take into account the amplitude v and a provided in Appendix B and apply explicitly the boundary conditions at the forcing location. It leads to different gains for a forcing in velocity and a forcing in radius.
z o /Q = Q/2 Bo is large. When z o /Q = O(
The gain G is associated with the temporal growth of the local perturbation. Indeed, S can be written as
S = z o Z f Zi σ(k l (Z), Oh l (Z)) τ c l (Z)U 0 (Z) dZ, (2.23)
where σ(k, Oh) is the growth rate of the capillary instability for the 1D model:
σ(k, Oh) = k √ 2 1 -k 2 + 9 Oh 2 k 2 2 - 3 Oh k 2 2 .
(2.24)
The local wavenumber k l (Z), local Ohnesorge Oh l (Z) and local capillary time scale τ c l (Z) vary as
k l (Z) = ωZ -3/8 , (2.25a) Oh l (Z) = Oh Z 1/8 , (2.25b) τ c l (Z) = Z -3/8 . (2.25c)
In the following, we write S as
S = z o √ 2Q S(Z f , Z i , Oh, ω), (2.26) with S(Z i , Z f , ω, Oh) = ω Z f Zi z -7/8 1 -ω 2 z -3/2 + 9 Oh 2 ω 2 2 z -5/4 - 3 Oh ω √ 2 z -5/8 dz.
(2.27) Our objective is to find the frequency ω that gives the largest value of S at a given Z f . For the type of perturbations in case (a) (nozzle excitation), Z i = 1, and we are looking for
S (a) max (Z f , Oh) = max ω S(1, Z f , ω, Oh).
(2.28)
For the type of perturbations in case (b) (background noise), the gain is maximized over all Z i between 1 and Z f , so
S (b) max (Z f , Oh) = max ω max 1≤Zi<Z f S(Z i , Z f , ω, Oh).
(2.29) For z > 1, the integrand in the expression of S is always positive when ω < 1. This means that as long as ω
(a)
max ≤ 1, the gain cannot be increased by changing Z i , and we have S
(b) max = S (a) max . When ω (a)
max > 1, the perturbation starts to decrease before increasing further downstream. In that case, the gain can be increased by considering larger Z i . More precisely, Z i has to be chosen such that the integrand starts to be positive which gives Z i = ω 4/3 . In this regime,
S (b) max (Z f , Oh) = max ω S(ω 4/3 , Z f , ω, Oh).
(2.30)
Both S
Quantitative results
The results of the optimization procedure are shown in figure 2 for both nozzle excitation and background noise. Both the maximum gain and the most dangerous frequency are plotted versus the rescaled distance z f /z o to the nozzle for Oh ranging from 10 -4 to 10 3 . The same results are shown as level curves in the (z f /z o , Oh) plane in figure 3. As expected, S max grows as z f /z o increases or Oh decreases (see figure 2(a)). The most dangerous frequency follows the same trend (see figure 2(b)). As already mentioned above, nozzle excitation [case (a)] and background noise [case (b)] provide the same results when ω max ≤ 1. The contour ω max = 1 has been reported in figure 3(a) as a dotted line. On the left of this dotted line, the contours of maximum gain are then the same for both cases. When ω max is larger than 1, background noise gain becomes larger than nozzle excitation gain. The most dangerous frequency for background noise also becomes larger than for nozzle excitation. Note however that significant differences are only observed in an intermediate regime of Oh (typically 10 -2 < Oh < 1 ) for large values of S (S > 5) (see figure 3).
Figure 3 can be used to obtain the distance of the expected transition to jet breakup and droplet formation. Assume that a gain of order G t ≈ e 7 , that is S t = 7 is enough for the transition, a value commonly admitted in boundary layers instabilities [START_REF] Schlichting | Boundary Layer Theory[END_REF]. From (2.26), we can deduce the value of S needed for transition If the fluid collapses in a single drop between two pinch-off, the distance between two droplets is given by the wavelength at breakup λ max = 2π/A 0 (z f )/ω max , deduced from (2.19), and the droplet diameter is
S t = S t √ 2Q/z o = S t 2 √ 2 Bo /Q (3.1) ≈ 20 Bo /Q (3.2) = 20 τ c g/u 0 , (3.3
d max ∼ [6λ max A 0 (z f )] 1/3 ∼ 12π ω max 1/3 . (3.4)
These two quantities are plotted in figure 4 for a few values of S t as a function of Oh.
What is particularly remarkable is that the drop diameter remains mostly constant in the full interval 10 -3 < Oh < 10 2 whatever the noise level for both cases [figure 4(b)]. Yet, in this interval of Oh, the breakup distance z f varies by a factor 1000 [figure 3(a)], while the wavelength varies by a factor 20 or more [figure 4(a)]. In the case of background noise, z f and λ max increase with Oh. We observe the same evolution in the case of noise excitation for small S t . However, the curves of both cases depart from each other for large values of S t (for instance S t = 10) with a surprising local peak for case (a) close to Oh ≈ 0.3. As we shall see in section §4, this peak is associated with a larger damping of the perturbation outside the instability range for moderate Oh.
In figures 3 and 4, we have also plotted the asymptotic behaviors of the different quantities obtained for large Oh and small Oh. The details of the derivation are provided in appendix A. We provide below the final result only. This scaling law, which was also derived by [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF], expresses that breakup occurs when the local capillary instability growth rate overcomes the stretching rate of the jet. Indeed and coming back to dimensional quantities, the velocity and local radius vary far from the nozzle as U 0 ∼ √ 2gz and h ∼ √ Q * /(2gz) 1/4 , respectively where
Q * = U 0 h 2 is the dimensional flow rate.
The local stretching rate is then given by ∂ z U 0 ∼ g/(2z) while the viscous capillary growth rate based on the current radius is of order γ/(ηh) = γ(2gz) 1/4 /(η √ Q * ). The latter overcomes the former at a distance z f of order (η/γ) 4/3 g 1/3 (Q * ) 2/3 . In terms of dimensionless parameters, this gives
z f /h 0 ∝ Oh 4/3 Bo 1/3 Q 2/3 , (3.6)
which is essentially the scaling deduced from (3.5) if one remembers that S t ∝ Bo /Q and z 0 ∝ Q 2 / Bo. In that viscous regime, the most dangerous frequencies are not the same in cases (a) and (b). This implies that the wavelengths λ max at the point of transition, and the droplet diameter d max are also different. For case (a), we obtain from (A 9) and (3.5) ω (a) max ∼ α a S 2/3 t Oh 1/6 , with α a = 3 3/4 2 7/4 ≈ 0.678, (3.7) which gives λ (a) max ∼ β a Oh 1/2 , with β a = 4π 3 1/4 ≈ 16.54, (3.8a)
d (a) max ∼ γ a S -2/9 t
Oh -1/18 , with γ a = π 1/3 3 1/12 2 15/12 ≈ 3.82.
(3.8b)
For case (b), we obtain from (A 11) and (3.5)
ω (b) max ∼ α b S 8/9 t
Oh 2/9 , with α b = 3 2 7/3 ≈ 0.595, (3.9) which gives
λ (b) max ∼ β b S -2/9 t
Oh 4/9 , with β b = 2 31/12 π ≈ 18.83, (3.10a)
d (b) max ∼ γ b S -8/27 t
Oh -2/27 , with γ b = π 1/3 2 13/9 ≈ 3.99.
(3.10b)
A naive local argument like the one leading to equation (3.6) would predict for λ max the most unstable local wavelength at z f . As it will be shown in section §5, this fails at making the correct predictions, precisely because it ignores the stretching history of the fluid particles, and of the corresponding unstable modes. Equation (3.6) is thus consistent with a local argument, but the local argument does not incorporate the whole truth.
Low viscosity (small Oh)
In the weakly viscous regime (Oh 1), both noise and nozzle excitations are expected to give the same breakup distance z f . This distance is well approximated by
z f /z 0 ≈ η 0 S 8/7 t with η 0 ≈ 3.45, (3.11) when z f /z o > 3.74, that is S t > 1.32.
Again, as in the previous viscous limit, this scaling law expresses that breakup occurs when the local capillary instability growth rate overcomes the stretching rate. The local jet stretching rate is still ∂ z U 0 ∼ g/(2z) while the inviscid capillary growth rate based on the current radius is now of order γ/ρh 3 = γ/ρ(2gz) 3/8 /(Q * ) 3/4 . The latter overcomes the former at a distance of order (Q * ) 6/7 g 1/7 (ρ/γ) 4/7 . In terms of dimensionless parameters, it gives
z f /h 0 ∝ Bo 1/7 Q 6/7 , (3.12)
which is essentially the scaling in equation (3.11) with S t ∝ Bo /Q and z 0 ∝ Q 2 / Bo. In this regime, the most dangerous frequency is also the same in both cases and given by ω max = α 0 S 6/7 t , with α 0 ≈ 0.79, (3.13) which gives
λ max ∼ β 0 S -2/7 t
, with β 0 ≈ 14.82, (3.14a)
d max ∼ γ 0 S -2/7 t
, with γ 0 ≈ 3.63.
(3.14b)
Again and for the same reason, naive local scaling fails at representing these scaling laws adequately.
Comparison with 3D predictions
In this section, we focus on the regime of intermediate values of Oh for which the asymptotic expressions do not apply. We address the peculiar behavior of the optimal perturbation in the case of nozzle excitation in this regime. In figure 4(a,b)), we have seen that for S t = 10 both λ max and d max exhibit a surprising kink around Oh ≈ 0.3. The same non-monotonic behavior has also been observed on the break-up distance z f /z 0 as a function of Oh (see figure 3(a)). These surprising behaviors are associated with the particular properties of the perturbations outside the instability domain. Indeed, for large S t , the optimal perturbation is obtained for ω max > 1. The local wavenumber of the perturbation which is ω at the nozzle is then larger than 1 close to the nozzle, that is in the stable regime [see figure 5(a)]. The optimal perturbation excited from the nozzle is thus first spatially damped before becoming spatially amplified. This damping regime explains the smaller gain obtained by nozzle excitation compared to background noise. It turns out that the strength of this damping is not monotonic with respect to Oh and exhibits a peak for an intermediate value of Oh. Such a peak is illustrated in figure 5 where we have plotted the (local) temporal growth rate of the perturbation versus Oh for a few values of the (local) wavenumber. We do observe that for the values of k satisfying k ≥ 1, that is outside the instability band, the local growth rate exhibits a negative minimum for Oh between 0.1 and 1. The presence of this damping regime naturally questions the validity of our 1D model. The 1D model is indeed known to correctly describe the instability characteristics of 3D axisymmetric modes [START_REF] Eggers | Physics of fluid jets[END_REF]. But, no such results exist in stable regimes. In fact, the 1D dispersion relation departs from the 3D dispersion relation of axisymmetric modes when k > 1. This departure is visible in figure 5 where we have also plotted the local growth rate obtained from the 3D dispersion relation given in [START_REF] Chandrasekhar | Hydrodynamic and hydromagnetic stability[END_REF], p. 541. Significant differences are observed but the 3D growth rates exhibit a similar qualitative behavior as a function of Oh. In particular, there is still a damping rate extremum in the interval 0.1 < Oh < 1. We can therefore expect a similar qualitative behavior of the perturbation outside the instability range with the 3D model.
In figure 6, we compare the optimization results for the nozzle excitation obtained with the 1D model with those obtained using the 3D dispersion relation of Chandrasekhar. This is done by replacing the function S in (2.27) by
S (3D) (Z i , Z f , ω, Oh) = √ 2 Oh Z f Zi y 2 (x, J) -x 2 dz, (4.1)
where
x = x(z, ω) = ω z 3/4 , J = J(z, Oh) = 1 Oh 2 z 1/4 , (4.2)
and y = y(x, J) is given by
2x 2 (x 2 + y 2 ) I 1 (x) I 0 (x) 1 - 2xy x 2 + y 2 I 1 (x)I 1 (y) I 1 (y)I 1 (x) -(x 4 -y 4 ) = J xI 1 (x) I 0 (x) (1 -x 2 ). (4.3)
As expected, differences can be observed between 1D and 3D results for the largest value of S t (S t = 10). However, the trends remain the same. Close to Oh ≈ 0.3, the breakup distance exhibits a plateau, the frequency a minimum, the wavelength and the drop diameter a peak. These peaks have a smaller amplitude for the 3D dispersion relation and are slightly shifted to higher values of Oh. For S t = 1, no difference between both models are observed. This can be understood by the fact that the perturbation does not exhibit a period of damping for such a small value of S t . The 1D model therefore perfectly describes the gain of 3D perturbations, which turns out to be the same as for background noise for Oh < 2 (see figure 3(a)).
Comparison with local predictions
In this section, our goal is to compare the results of the optimization procedure with predictions obtained from the local dispersion relation. We have seen in section 2 that the gain can be related to the local temporal growth rate of the perturbation along the jet [see expression (2.23)]. Both the local capillary time scale τ c l and the Ohnesorge number Oh l vary with Z [see expressions (2.25b,c)]. At a location Z, the maximum temporal growth rate (normalized by the capillarity time at the nozzle) is given by
σ max l (Z) = Z 3/8 2 √ 2 + 6 Oh Z 1/8 , (5.1)
and is reached for the wavelength (normalized by h 0 ) (see [START_REF] Eggers | Physics of fluid jets[END_REF])
λ l (Z) = 2π Z 1/4 2 + 3 √ 2 Oh Z 1/8 . (5.2)
If we form a drop from this perturbation wavelength at location, we would then obtain a drop diameter (normalized by h 0 )
d l (Z) = (12π) 1/3 Z 1/4 2 + 3 √ 2 Oh Z 1/8 1/6 . (5.3)
As the local growth rate increases downstream, a simple upperbound of the gain is then obtained by taking the exponential of the product of the maximum growth rate by the time T i needed to reach the chosen location. The time T i is the free fall time given by
T i = Q Bo ( √ Z -1).
(5.4)
In figure 7, we have plotted the product σ max l T i at the location predicted for the transition assuming that a gain e 7 is needed for such a transition. In figure 7(a), this quantity is plotted as a function of the transition location z f /z o . As expected, we obtain the chosen value for the transition (i. e. 7) for small z f /z o . For large z f /z o , the product σ max l T i also goes to a constant for background noise whatever the Ohnesorge number. However, it has a contrasted behavior for nozzle excitation, with an important increase with z f /z o for the value Oh = 0.3.
In figure 7(b), σ max l T i is plotted as a function of Oh, for different values of S t , that is for different values of the ratio Bo /Q in view of (3.1). For large and small Oh we recover the estimates deduced using (3.5) and (3.11):
σ max l T i ∼ 10.5 as Oh → ∞,
(5.5a)
σ max l T i ∼ 20.68 -11.14 S -4/7 t
as Oh → 0.
(5.5b)
For background noise, σ max l T i varies smoothly between these two extreme values. A completely different evolution is observed for nozzle excitation: a local peak forms between 0.1 < Oh < 1 with an amplitude increasing with S t . This phenomenon is related to the damping of the optimal perturbation discussed in the previous section. We have indeed seen that for nozzle excitation, large gain (that is large t ) are obtained for perturbations exhibiting a damping period prior to their growth. Thus, the growth has to compensate a loss of amplitude. The damping being the strongest for intermediate Oh, the transition is pushed the farthest for these values, explaining the largest growth of the Oh = 0.3 curve in figure 7(a) and the peaks of figure 7(b).
We have seen that the optimal procedure provides a wavelength and a droplet size as a function of Z and Oh only. These quantities are compared to the local estimates (5.2) and (5.3) in figure 8. Both nozzle excitation (solid lines) and background noise (dashed lines) are considered for Oh = 0.01, 0.3 and 10. We observe that the local predictions (dotted lines) always underestimate the wavelength and the drop diameter. For the wavelength, the ratio with the local estimate typically increases with z f /z 0 and Oh. The gap is the strongest for the nozzle excitation case, especially for intermediate Oh (see curve for Oh = 0.3) for which the local estimate is found to underestimate the wavelength by a factor as high as 25 for z f /z 0 = 10 3 .
Contrarily to the wavelength, the drop diameter follows the same trend as the local prediction as a function of z f /z 0 . For both noise excitation and background noise, the diameter decreases with the break-up location.
For large or small Oh, the behaviors of the wavelength and drop diameter obtained by the optimization procedure and local consideration can be directly compared using the results obtained in Appendix A. For large Oh, the local prediction reads
λ l /h 0 ∼ β l Oh 1/2 Z -3/16 f , with β l = 2π2 1/4 √ 3 ≈ 12.94, (5.6a)
d l /h 0 ∼ γ l Oh 1/6 Z -11/49 f , with γ l = (2π) 1/3 (3 √ 2) 1/6 ≈ 2.35, (5.6b)
while the optimization procedure gives
λ (a) max /h 0 ∼ β (a)
Oh 1/2 , with β (a) ≈ 16.54, (5.7a)
λ (b) max /h 0 ∼ β (b) Oh 2/3 Z -1/6 f , with β (b) ≈ 22.83, (5.7b) d (a) max /h 0 ∼ γ (a) Oh 1/6 Z -1/6 f
, with γ (a) ≈ 4.63, (5.7c)
d (b) max /h 0 ∼ γ (b) Oh 2/9 Z -2/9 f
, with γ (b) ≈ 5.16.
(5.7d)
For small Oh, the local estimates are
λ nv l /h 0 ∼ β nv l Z -1/4 f , with β nv l ≈ 2π √ 2 ≈ 8.88, (5.8a)
d nv l /h 0 ∼ γ nv l Z -1/4 f , with γ nv l = (12π) 1/3 (2) 1/6 ≈ 3.76, (5.8b)
while the optimization procedure gives for Z f > 4.74 (see appendix A)
λ max /h 0 ∼ β nv Z -1/4 f
, with β nv ≈ 20.20, (5.9a)
d max /h 0 ∼ γ nv Z -1/4 f
, with γ nv ≈ 4.94.
(5.9b)
Applications
We now apply the results to a realistic configuration obtained from an nozzle of radius h 0 = 1 mm in a gravity field with g = 9.81 m/s 2 . We consider three fluids: water (at 20 • ) for which γ ≈ 72 10 -3 N/m, ν ≈ 10 -6 m 2 /s; and two silicon oils of surface tension γ ≈ 21 10 -3 N/m and of viscosity ν ≈ 5 10 -5 m 2 /s and ν ≈ 3 10 -4 m 2 /s respectively. For these three fluids, we take ρ ≈ 10 3 kg/m 3 as a fair order of magnitude.
For water, we obtain Oh = 3.7 10 -3 , Bo = 0.13 and a parameter Q = 3.72 u 0 with the velocity u 0 at the nozzle expressed in m/s. For the silicon oils, we get Bo = 0.46 and Q = 6.9 u 0 and two values of Oh: Oh = 0.46 and Oh = 2. The conditions of validity (2.9a-c) of the inertial solution then require u 0 to be (much) larger than u c = 0.26 m/s for the water, and u c = 0.15 m/s for the silicon oils.
In figure 9, we have plotted the theoretical predictions for the breakup location, the frequency, the wavelength and the drop diameter as the fluid velocity at the nozzle is varied from u c to 10 u c , that is for Q varying from 1 to 10. We have chosen S t = 7 for the background noise transition, and S t = 4 for the transition by the nozzle excitation. A smaller value of S t has been chosen for the nozzle excitation to describe controlled conditions of forcing. Figure 9(a) shows that for the three fluids the transition by the nozzle excitation can be reached before the background noise transition. The values obtained for the breaking length are comparable to the experimental values reported in [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF]. They measured a normalized breaking length of order 100-150 for the silicon oil of ν ≈ 5 10 -5 m 2 /s from a nozzle of same diameter for flow rates ranging from Q = 0.5 to Q = 1.3. Figure 9(b) provides the most dangerous frequency of the excitation. For the three cases, the frequency for the nozzle excitation is relatively closed to the neutral frequency Q of the jet at the nozzle. For both silicon oils, this frequency is however much smaller than the frequency obtained by the background noise transition, especially for small Q.
The break-up wavelength shown in figure 9(c) exhibits a different behavior with respect to the flow rate Q for the nozzle excitation and the background noise. It decreases monotonically with Q for the noise excitation while it increases for the background noise up to an extremum before starting decreasing. For the three fluids, noise excitation provides a larger wavelength than background noise for small Q, but the opposite is observed above a critical value of Q which increases with Oh. Note that for small Q, the wavelengths obtained for noise excitation are comparable for both silicon oils. Both curves would even cross if a larger value of S t was considered. This property is related to the non-monotonic behavior of the breakup wavelength already discussed above [see figure 6(c)].
Contrarily to the wavelength, the droplet size [figure 9(d))] is not changing much with Q and is comparable for the three fluids. Nozzle excitation provides larger droplets but this effect is significant for the smallest values of Q only.
Finally note that the differences between the 1D and 3D predictions for the nozzle excitation are barely visible. A very small departure of the wavelength curves can be seen for the silicon oils only. This confirms both the usefulness and the validity of the 1D model.
Conclusion and final remarks
At the end of this detailed study, we are now in position to answer the questions raised in the Introduction: The breakup distance from the orifice of a jet falling by its own weight can indeed be understood by comparing two timescales. The relevant timescales are the capillary destabilization time (viscous, or not) based on the local jet radius, and the inverse of the local jet stretching rate. Breakup occurs, in both viscous and inviscid régimes as discussed in section 3, when the latter overcomes the former, a fact already known [START_REF] Villermaux | The formation of filamentary structures from molten silicates: Pele's hair, angel hair, and blown clinker[END_REF][START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF]. However, we have also learned that this aspect is only a tiny piece of the problem as a whole. This simple local rule, if naively extended to estimate the wavelength of the perturbation breaking the jet would predict that the wavelength is proportional to the local jet radius in the inviscid case for instance. This prediction was found to always underestimate the wavelength at breakup. The most dangerous wavelength and the drop diameter account for the stretching history of the fluid particles as they travel along the jet; this is the reason why their values are different depending on whether the perturbations are introduced at the jet nozzle only, or through a background noise affecting the jet all along its extension. An optimal theory computing the gain of every mode as the jet deforms and accelerates, was thus necessary to answer the -seemingly simple-question of its breakup. It has, in addition, revealed the existence of an unexpected non-monotonic dependency of the most dangerous wavelength λ max with respect to Oh.
We have also provided quantitative results assuming that a spatial gain of e 7 of the linear perturbations was sufficient for breakup. This value of the critical gain is an ad hoc criterion that assumes a particular level of noise and which neglects the possible influence of the nonlinear effects. It would be interested to test this criterion with experimental data.
Our analysis has focused on capillary jet whose base state is in an inertial regime. Close to the nozzle, especially if the flow rate is small, a viscous dominated regime is expected [START_REF] Senchenko | Shape and stability of a viscous thread[END_REF]. We have not considered such a regime here. But a similar WKBJ analysis could a priori be performed with a base flow obtained by resolving the more general equations (2.2) if the jet variation scale remains large compared to the perturbation wavelength. However, far from the nozzle, the jet always becomes inertial. The growth of the perturbation is therefore expected to be the same as described above. For this reason, the optimal perturbation obtained from background noise could be the same. Indeed, we have seen that in order to reach a large gain (S t > 5 or so), the optimal perturbation should be introduced far from the nozzle. If the jet is in the inertial regime at this location, the same gain is then obtained. This point was already noticed in [START_REF] Javadi | Delayed capillary breakup of falling viscous jets[END_REF].
For nozzle excitation, the entire evolution of the jet contributes the optimal perturbation. We have seen that large gains (S t > 5) are obtained by perturbations which exhibit a spatial damping before starting to grow. We have also seen that this damping regime is only qualitatively described by the 1D model. We do not expect a better description if the jet is dominated by viscous effects. Moreover, it is known that in this regime nonparallel effects are also important close to the nozzle (Rubio-Rubio et al. 2013) which invalidates the WKBJ approach. For this regime, it would be interesting to perform an optimal stability analysis using more advanced tools [START_REF] Schmid | Nonmodal stability theory[END_REF] to take into account non-parallel effects and non-modal growth.
Note finally that we have computed the perturbation gain by considering the exponential terms of the WKBJ approximation only. A better estimate could readily be obtained by considering the complete expression of the WKBJ approximation. This expression which has been provided in appendix B involves an amplitude factor which contains all the other contributions affecting the growth of the perturbation. Different expressions are obtained for A and u which in particular implies that different gains are obtained for the velocity and the jet radius. It is important to mention that the other contributions are not limited to a simple correcting factor associated with the local stretching [START_REF] Tomotika | Breaking up of a drop of viscous liquid immersed in another viscous fluid which is extending at a uniform rate[END_REF][START_REF] Eggers | Physics of fluid jets[END_REF]. Other contributions associated with the z-dependence of the local wavenumber and local jet profile are equally important, leading to expressions which are not simple even in the large or small Oh limit. with
with
$$S_1(Z_i, Z_f, X_\omega) = X_\omega^{-3/4} \int_{Z_i X_\omega}^{Z_f X_\omega} X^{-7/8}\left(\sqrt{1 + \frac{9}{2X^{5/4}}} - \frac{3}{\sqrt{2}\,X^{5/8}}\right)\mathrm{d}X, \tag{A 3a}$$
$$S_2(Z_i, Z_f, X_\omega) = X_\omega^{-1/2} \int_{Z_i X_\omega}^{Z_f X_\omega} \frac{X^{-19/8}}{2\sqrt{1 + \frac{9}{2X^{5/4}}}}\,\mathrm{d}X, \tag{A 3b}$$
and
$$X_\omega = (\mathrm{Oh}\,\omega)^{-8/5}. \tag{A 4}$$
When $Z_f$ is not too large, we are in a configuration where: (1) $Z_i X_\omega \ll 1$ and $Z_f X_\omega \ll 1$. In that case, we can write
$$S_1 \sim \frac{2\sqrt{2}}{9}\left(Z_f^{3/4} - Z_i^{3/4}\right) - \frac{X_\omega^{5/4}}{108\sqrt{2}}\left(Z_f^{2} - Z_i^{2}\right) + O\!\left(X_\omega^{5/2} Z_f^{13/4}\right), \tag{A 5a}$$
$$S_2 \sim \frac{2\sqrt{2}}{9\,X_\omega^{5/4}}\left(\frac{1}{Z_i^{3/4}} - \frac{1}{Z_f^{3/4}}\right) + O\!\left(Z_f^{1/2}\right), \tag{A 5b}$$
which gives
$$S \sim \frac{2\sqrt{2}}{9\,\mathrm{Oh}}\left(Z_f^{3/4} - 1 - \omega^2 + \omega^2 Z_f^{-3/4}\right) - \frac{Z_f^{2} - 1}{108\sqrt{2}\,\mathrm{Oh}^{3}\,\omega^{2}} + O\!\left(\frac{Z_f^{1/2}}{\mathrm{Oh}^{3}}, \frac{Z_f^{13/4}}{\mathrm{Oh}^{5}\omega^{4}}\right) \tag{A 6}$$
in case (a) (nozzle excitation) and
$$S \sim \frac{2\sqrt{2}}{9\,\mathrm{Oh}}\left(Z_f^{3/4} - 2\omega + \omega^2 Z_f^{-3/4}\right) - \frac{Z_f^{2} - \omega^{8/3}}{108\sqrt{2}\,\mathrm{Oh}^{3}\,\omega^{2}} + O\!\left(\frac{Z_f^{1/2}}{\mathrm{Oh}^{3}}, \frac{Z_f^{13/4}}{\mathrm{Oh}^{5}\omega^{4}}\right) \tag{A 7}$$
in case (b) (background noise) with $Z_i = \omega^{4/3}$. In case (a), the maximum gain is obtained for
$$\omega^{(a)}_{\max} \sim \left(\frac{Z_f^{2} - 1}{48\,\mathrm{Oh}^{2}\,(1 - Z_f^{-3/4})}\right)^{1/4}, \tag{A 8}$$
that is
$$\omega^{(a)}_{\max} \sim \frac{Z_f^{1/2}}{2\cdot 3^{1/4}\,\mathrm{Oh}^{1/2}} \tag{A 9}$$
for large $Z_f$, and equals
$$S^{(a)}_{\max} \sim \frac{2\sqrt{2}\,Z_f^{3/4}}{9\,\mathrm{Oh}}\left(1 - Z_f^{-3/4} - \frac{Z_f^{1/4}}{2\sqrt{3}\,\mathrm{Oh}}\right) + O\!\left(\frac{Z_f^{1/4}}{\mathrm{Oh}^{2}}, \frac{Z_f^{5/4}}{\mathrm{Oh}^{3}}\right). \tag{A 10}$$
In case (b), the maximum gain is obtained for
$$\omega^{(b)}_{\max} \sim \frac{Z_f^{2/3}}{48^{1/3}\,\mathrm{Oh}^{2/3}} \tag{A 11}$$
and equals
$$S^{(b)}_{\max} \sim \frac{2\sqrt{2}\,Z_f^{3/4}}{9\,\mathrm{Oh}}\left(1 - \frac{3^{2/3}}{2^{4/3}\,\mathrm{Oh}^{2/3}\,Z_f^{1/12}}\right). \tag{A 12}$$
The condition that $Z_f X_{\omega_{\max}} \ll 1$ does not give any restriction in case (b). However, it requires in case (a)
$$Z_f \ll \mathrm{Oh}^{4}. \tag{A 13}$$
When $Z_f \gg \mathrm{Oh}^{4}$, another limit has to be considered for case (a): (2) $Z_i X_\omega \ll 1$ and $Z_f X_\omega \gg 1$. In this limit, we have
$$S_1 \sim \frac{8\,Z_f^{1/8}}{X_\omega^{5/8}} - \frac{I_o}{X_\omega^{3/4}}, \tag{A 14a}$$
$$S_2 \sim \frac{2\sqrt{2}}{9\,X_\omega^{5/4}}\,Z_i^{-3/4}, \tag{A 14b}$$
which leads to the estimates (A 16) and (A 17) for the most dangerous frequency and the maximum gain. This estimate applies only when $Z_f \gg \mathrm{Oh}^{4}$. The asymptotic formulae are compared to numerical results in figure 10 for case (a) and in figure 11 for case (b). In both cases, we have plotted the maximum gain $S_{\max}$ and the most dangerous frequency (the frequency that provides the maximum gain) versus $Z_f$ for Oh = 1, 10, 100, 1000. It is interesting to see that in case (a) the maximum gain and the most dangerous frequency both collapse on a single curve when plotted as a function of the variable $Z_f/\mathrm{Oh}^{4}$ with an adequate normalization (see figure 12).

A.2. Maximum gain in the inviscid regime (Oh → 0)

When Oh is small, viscous effects come into play if we go sufficiently far away from the nozzle, because the local Ohnesorge number increases algebraically with the distance to the nozzle. Here, we shall assume that we remain inviscid in the whole domain of integration, that is
$$\sqrt{1 - \omega^{2} z^{-3/2} + \frac{9\,\mathrm{Oh}^{2}\omega^{2}}{2}\,z^{-5/4}} - \frac{3}{\sqrt{2}}\,\omega\,\mathrm{Oh}\,z^{-5/8} \sim \sqrt{1 - \omega^{2} z^{-3/2}}, \tag{A 18}$$
with $Y_\omega = \omega^{-4/3}$. Because in the inviscid limit perturbations are neutral when they do not grow, case (a) and case (b) provide the same gain. For $1 < Z_f < Z_f^{c} \approx 4.74$, the maximum gain is reached for $\omega < 1$, i.e. $Y_\omega > 1$. The location $Z_f^{c}$ is given by the vanishing of $\partial_{Y_\omega} S$ for $Y_\omega = 1$ and $Z_i = 1$:
$$-\frac{7}{8}\int_{1}^{Z_f^{c}} s^{-7/8}\sqrt{1 - s^{-3/2}}\,\mathrm{d}s + (Z_f^{c})^{1/8}\sqrt{1 - (Z_f^{c})^{-3/2}} = 0. \tag{A 20}$$
For $Z_f^{c} < Z_f \ll \mathrm{Oh}^{-8}$, the maximum gain is reached for
$$\omega_{\max} \sim \left(\frac{Z_f}{Z_f^{c}}\right)^{3/4} \approx 0.311\,Z_f^{3/4}. \tag{A 21}$$
This estimate is compared to numerical values in figure 13. We do observe a convergence of the maximum gain and most dangerous frequency curves toward the inviscid limit as Oh decreases. Note however that the convergence is slower for nozzle excitation (case (a)).
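The consistency of these expansions can be checked numerically. The short Python sketch below is an illustration only (it is not the original numerics, which used standard Matlab subroutines): it maximizes the viscous-regime gain (A 6) over ω and compares the result with the asymptotic predictions (A 8) and (A 9); the parameter values are arbitrary test values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def S_case_a(omega, Zf, Oh):
    # Leading-order gain (A 6) for nozzle excitation, with Z_i = 1.
    t1 = 2*np.sqrt(2)/(9*Oh) * (Zf**0.75 - 1 - omega**2 + omega**2 * Zf**-0.75)
    t2 = (Zf**2 - 1) / (108*np.sqrt(2) * Oh**3 * omega**2)
    return t1 - t2

Oh, Zf = 100.0, 50.0                      # arbitrary values with Zf << Oh**4
res = minimize_scalar(lambda w: -S_case_a(w, Zf, Oh),
                      bounds=(1e-3, 1e3), method='bounded')
w_num = res.x                                                  # numerical maximizer
w_A8 = ((Zf**2 - 1) / (48 * Oh**2 * (1 - Zf**-0.75)))**0.25    # formula (A 8)
w_A9 = np.sqrt(Zf) / (2 * 3**0.25 * np.sqrt(Oh))               # formula (A 9)
print(w_num, w_A8, w_A9)   # w_num matches (A 8); (A 9) is its large-Zf limit
```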
A combination of the two downward propagating waves can be formed such that at the orifice a = 1 and v = 0, or a = 0 and v = 1.
In the inviscid regime (Oh ≪ 1), equation (B 6) can be integrated explicitly for any $A_0$ as
$$a^{(i)}(z) = C\,\frac{A_0^{5/2}(z)}{k_1(z)}, \tag{B 8}$$
where C is a constant. It is interesting to compare this expression to the expression $a \sim A_0$ that would have been obtained by the argument of [START_REF] Tomotika | Breaking up of a drop of viscous liquid immersed in another viscous fluid which is extending at a uniform rate[END_REF], that is by considering the solution as a uniformly stretched fluid cylinder (see [START_REF] Eggers | Physics of fluid jets[END_REF]).
Figure 2: Maximum gain $S_{\max}$ (a) and most dangerous frequency $\omega_{\max}$ (b) of the perturbations excited from background noise (dashed lines) and at the nozzle (solid lines) as a function of the distance $z_f/z_o = Z_f - 1$ to the nozzle. From bottom to top, Oh takes the values 1000, 100, 10, 1, 0.1, 0.01, $10^{-4}$.
The numerical results are obtained using standard Matlab subroutines.
Figure 3: Level curves of the maximum gain $S_{\max}$ (a) and of the most dangerous frequency $\omega_{\max}$ (b) of the perturbations excited from background noise (dashed lines) and at the nozzle (solid lines) in the $(z_f/z_o, \mathrm{Oh})$ plane. The dashed lines correspond to the asymptotic limits (3.11) and (3.5) for small and large Oh respectively. On the left of the $\omega_{\max} = 1$ curve (indicated as a grey line in (a)), solid and dashed lines are superimposed.
and from figure 3(a) the position $z_f/z_o$ where such a value of S is reached in case (a) or (b).
Figure 4: Wavelength at break-up (a) and resulting droplet diameter (b) versus Oh for background noise (dashed lines) and nozzle excitation (solid lines). The different curves correspond to the transition levels $S_t = 0.1, 1, 10$. The thin dashed lines correspond to the asymptotic expressions for small and large Oh.
Figure 5: Comparison of 1D and 3D local dispersion relations. Solid line: 1D dispersion relation. Dashed line: 3D dispersion relation for axisymmetric modes. (a) Temporal growth rate versus the wavenumber k for various Oh. (b) Temporal growth rate versus Oh for fixed wavelengths.
Figure 6: Characteristics of the response to nozzle excitation versus Oh for various values of $S_t$ and two different stability models (solid line: 1D; dashed line: 3D axisymmetric). (a) Break-up distance. (b) Most dangerous frequency. (c) Wavelength at break-up. (d) Drop diameter.
Figure 7: Maximum local temporal growth rate $\sigma_l^{\max}$ normalized by the free fall time $T_i$ at the breakup location, assuming breakup for a gain $e^7$. Solid line: nozzle excitation; dashed line: background noise. (a) Variation with respect to the breakup location $z_f/z_o$ for different Oh. (b) Variation with respect to Oh for different values of $S_t$. The dotted lines in (b) are the asymptotic predictions (5.5a,b).
Figure 8: Wavelength at break-up (a) and drop diameter (b) versus the break-up location $z_f/z_0$ for various values of Oh. Solid line: nozzle excitation; dashed line: background noise.
Figure 9: Characteristics at break-up by nozzle excitation or background noise for a jet of radius $h_0 = 1$ mm, assuming that break-up occurs when the perturbation gain has reached $e^{S_t}$. Solid lines: nozzle excitation with $S_t = 4$. Dot-dash lines: nozzle excitation with $S_t = 4$ using the 3D dispersion relation. Dashed lines: background noise with $S_t = 7$. Black lines: water; red lines: silicon oil of $\nu = 5\times10^{-5}$ m²/s (SO50); green lines: silicon oil of $\nu = 3\times10^{-4}$ m²/s (SO300). (a): Break-up location; (b): most dangerous frequency; (c): wavelength at break-up; (d): drop diameter.
Figure 10: Maximum gain $S^{(a)}_{\max}$ (a) and most dangerous frequency $\omega^{(a)}_{\max}$ (b) of the perturbations excited at the nozzle as a function of the distance $Z_f$ to the nozzle. Solid lines: numerical results. Dashed and dotted lines: asymptotic results obtained for large Oh for $Z_f \ll \mathrm{Oh}^4$ [formulae (A 10) and (A 9)] and $Z_f \gg \mathrm{Oh}^4$ [formulae (A 17) and (A 16)] respectively. Oh takes the values 1, 10, 100, 1000.
Figure 11: Maximum gain $S^{(b)}_{\max}$ (a) and most dangerous frequency $\omega^{(b)}_{\max}$ (b) of the perturbations excited from background noise as a function of the distance $Z_f$ to the nozzle. Solid lines: numerical results. Dashed lines: asymptotic results [formulae (A 12) and (A 11)]. From top to bottom, Oh takes the values 1, 10, 100, 1000.
Figure 12: Same plots as fig. 10 but with rescaled variables versus $Z_f/\mathrm{Oh}^4$. In (a), the dotted line is (A 17) while the dashed line is the first term of (A 10). In (b), the dotted and dashed lines are (A 16) and (A 9) respectively.
Figure 13: Maximum gain $S_{\max}$ (a) and most dangerous frequency $\omega_{\max}$ (b) of the perturbations excited from background noise (dashed lines) and at the nozzle (solid lines) as a function of the distance $z_f/z_o = Z_f - 1$ to the nozzle. From bottom to top, Oh takes the values 0.1, 0.01, 0.001. The formulae (A 22) and (A 21) are indicated as a solid gray line in (a) and (b) respectively.
Acknowledgments
We acknowledge support from the French Agence Nationale de la Recherche under the ANR FISICS project ANR-15-CE30-0015-03.
Appendix A. Asymptotic regimes
In this appendix, we provide asymptotic expressions for $S_{\max}$ and $\omega_{\max}$ in the viscous and inviscid regimes, that is for Oh → ∞ and Oh → 0 respectively.

A.1. Maximum gain in the viscous regime (Oh → ∞)

When Oh → ∞, the expression of the integrand in (2.27) can be simplified, and in the whole domain of integration, we can use the approximation
(A 1), such that (2.27) can be written as (A 2).
Appendix B. WKBJ analysis
In this section, we provide the full expression of the WKBJ approximation of each downward propagative wave. Each wave is sought in the form
where k(Z), v(Z) and a(Z) depend, like the base flow, on the slow spatial variable
These equations give at leading order (2.16a,b), from which we can deduce the dispersion relation (2.17) that defines k(Z). If we now replace v in the right-hand side of (B 2a) by its leading-order expression in terms of a, we obtain an expression for v valid up to
Plugging this expression into (B 2b) with $U_0 = Q/A_0$, we obtain the following equation for a(z)
with
This equation is valid for both downward and upward propagating waves.
For large Q, it can be simplified for the downward propagating wavenumbers using $\omega = \bar\omega Q$, which leads to (B 6), where $k_1(Z)$ is given by (2.20). The amplitude v(Z) is then deduced from a(Z) using (B 3) at leading order in Q:
The two downward propagating waves possess different expressions for $k_1$, and thus different amplitudes a and v. This guarantees that a combination of the two downward propagating waves can be formed such that at the orifice a = 1 and v = 0, or a = 0 and v = 1.
Joint reconstruction strategy
Simon Labouesse, Marc Allain, Jérôme Idier (Member, IEEE), Awoke Negash, Thomas Mangeat, Penghuan Liu, Anne Sentenac, Sébastien Bourguignon
email: [email protected]
Keywords: Super-resolution, fluorescence microscopy, speckle imaging, near-black object model, proximal splitting
I. INTRODUCTION
In Structured Illumination Microscopy (SIM), the sample, characterized by its fluorescence density ρ, is illuminated successively by M distinct inhomogeneous illuminations $I_m$. Fluorescence light emitted by the sample is collected by a microscope objective and recorded on a camera to form an image $y_m$. In the linear regime, and with a high photon counting rate¹, the dataset $\{y_m\}_{m=1}^M$ is related to the sample ρ via [START_REF] Goodman | Introduction to Fourier Optics[END_REF]
$$y_m = H \otimes (\rho \times I_m) + \varepsilon_m, \qquad m = 1,\ldots,M, \tag{1}$$
where ⊗ is the convolution operator, H is the microscope point spread function (PSF) and εm is a perturbation term accounting for (electronic) noise in the detection and modeling errors. Since the spatial spectrum of the PSF [i.e., the optical transfer function (OTF)] is strictly bounded by its cut-off frequency, say, νpsf, if the illumination pattern Im is homogeneous, then the spatial spectrum of ρ that can be retrieved from the image ym is restricted to frequencies below νpsf. When the illuminations are inhomogeneous, frequencies beyond νpsf can be recovered from the low resolution images because the illuminations, acting as carrier waves, downshift part of the spectrum inside the OTF support [START_REF] Heintzmann | Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating[END_REF], [START_REF] Gustafsson | Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy[END_REF]. Standard SIM resorts to harmonic illumination patterns for which the reconstruction of the super-resolved image can be easily done by solving a linear system in the Fourier domain. In this case, the gain in resolution depends on the OTF support, the illumination cut-off frequency and the available signal-to-noise ratio (SNR). The main drawback of SIM is that it requires the knowledge of the illumination patterns and thus a stringent control of the experimental setup. If these patterns are not known with sufficient accuracy [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF], [START_REF] Ayuk | Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm[END_REF], severe artifacts appear in the reconstruction. Specific estimation techniques have been developed for retrieving the parameters of the periodic patterns from the images [START_REF] Orieux | Bayesian estimation for optimized structured illumination microscopy[END_REF]- [START_REF] Wicker | Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space[END_REF], but they can fail if the SNR is too low or if the excitation patterns are distorted, e.g., by inhomogeneities in the sample refraction index. The Blind-SIM strategy [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF], [START_REF] Ayuk | Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm[END_REF], [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF] has been proposed to tackle this key issue, the principle being to retrieve the sample fluorescence density without the knowledge of the illumination patterns. In addition, speckle illumination patterns are promoted instead of harmonic ones, the latter being much more difficult to generate and control. From the methodological viewpoint, this strategy relies on the simultaneous (joint) reconstruction of the fluorescence density and of the illumination patterns. More precisely, joint reconstruction is achieved through the iterative resolution of a constrained least-squares problem. However, the computational time of such a scheme clearly restricts the applicability of the method.
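For illustration purposes only, the forward model (1) can be mimicked with a few lines of Python. The sketch below assumes periodic boundary conditions (so that the convolution by H reduces to a product in the Fourier domain) and additive Gaussian noise; all array names and shapes are placeholders, not part of the original work.

```python
import numpy as np

def forward_model(rho, I, psf, sigma=0.0, seed=0):
    """Simulate y_m = H (*) (rho x I_m) + eps_m for a stack of M illuminations.

    rho : (N, N) fluorescence density; I : (M, N, N) illumination stack;
    psf : (N, N) point spread function, assumed periodic (circular convolution).
    """
    rng = np.random.default_rng(seed)
    otf = np.fft.fft2(np.fft.ifftshift(psf))          # optical transfer function
    q = rho[None, :, :] * I                           # products q_m = rho x I_m
    y = np.real(np.fft.ifft2(np.fft.fft2(q) * otf))   # convolution by the PSF
    return y + sigma * rng.standard_normal(y.shape)   # additive Gaussian noise
```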
This paper provides a global re-foundation of the joint Blind-SIM strategy. More specifically, our work develops two specific, yet complementary, contributions:
• The joint Blind-SIM reconstruction problem is first revisited, resulting in an improved numerical implementation with execution times decreased by several orders of magnitude. Such an acceleration relies on two technical contributions. Firstly, we show that the problem proposed in [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF] is equivalent to a fully separable constrained minimization problem, hence bringing the original (large-scale) problem to M sub-problems with smaller scales. Then, we introduce a new preconditioned proximal iteration (denoted PPDS) to efficiently solve each sub-problem. The PPDS strategy is an important contribution of this article: it is provably convergent [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF], easy to implement and, for our specific problem, we empirically observe a superlinear asymptotic convergence rate. With these elements, the joint Blind-SIM reconstruction proposed in this paper is fast and can be highly parallelized, opening the way to real-time reconstructions.
• Besides these algorithmic issues, the mechanism driving super-resolution (SR) in this blind context is investigated, and a connection is established with the well-known "Near-black object" effect introduced in Donoho's seminal contribution [START_REF] Donoho | Maximum entropy and the nearly black object[END_REF]. We show that the SR relies on sparsity and positivity constraints enforced by the unknown illumination patterns. This finding helps to understand in which situations super-resolved reconstructions can be expected. A significant part of this work is then dedicated to numerical simulations aiming at illustrating how the SR effect can be enhanced. In this perspective, our simulations show that two-photon speckle illuminations potentially increase the SR power of the proposed method.
The pivotal role played by sparse illuminations in this SR mechanism also draws a connexion between joint Blind-SIM and other random activation strategies like PALM [START_REF] Betzig | Imaging intracellular fluorescent proteins at nanometer resolution[END_REF] or STORM [START_REF] Rust | Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)[END_REF]; see also [START_REF] Mukamel | Statistical deconvolution for superresolution fluorescence microscopy[END_REF], [START_REF] Min | FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data[END_REF] for explicit sparsity methods applied to STORM. With PALM/STORM, unparalleled resolutions result from an activation process that is massively sparse and mostly localized on the marked structures. With the joint Blind-SIM strategy, the illumination pattern playing the role of the activation process is not that "efficient" and lower resolutions are obviously expected. Joint Blind-SIM however provides SR as long as the illumination patterns enforce many zero (or almost zero) values in the product ρ × Im: the sparser the illuminations, the higher the expected resolution gain with joint Blind-SIM. Such super resolution can be induced by either deterministic or random patterns. Let us mention that random illuminations are easy and cheap to generate, and that a few recent contributions advocate the use of speckle illuminations for super-resolved imaging, either in fluorescence [START_REF] Min | Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery[END_REF], [START_REF] Oh | Sub-Rayleigh imaging via speckle illumination[END_REF] or in photo-acoustic [START_REF] Chaigne | Super-resolution photoacoustic fluctuation imaging with multiple speckle illumination[END_REF] microscopy. In these contributions, however, the reconstruction strategies are derived from the statistical modeling of the speckle, hence, relying on the random character of the illumination patterns. In comparison, our approach only requires that the illuminations cancel-out the fluorescent object and that their sum is known with sufficient accuracy. Finally, we also note that [START_REF] Labouesse | Fluorescence blind structured illumination microscopy: a new reconstruction strategy[END_REF] corresponds to an early version of this work. Compared to [START_REF] Labouesse | Fluorescence blind structured illumination microscopy: a new reconstruction strategy[END_REF], several important contributions are presented here, mainly: the super-resolving power of Blind-SIM is now studied in details, and a comprehensive presentation of the proposed PPDS algorithm includes a tuning strategy for the algorithm parameter that allows a substantial reduction of the computation time.
The remainder of the paper is organized as follows. In Section II, the original Blind-SIM formulation is introduced and further simplified; this reformulation is then used to get some insight on the mechanism that drives the SR in the method. Taking advantage of this analysis, a penalized Blind-SIM strategy is proposed and characterized with synthetic data in Section III. Finally, the PPDS algorithm developed to cope with the minimization problem is presented and tested in Section IV, and conclusions are drawn in Section V.
II. SUPER-RESOLUTION WITH JOINT BLIND-SIM ESTIMATION
In the sequel, we focus on a discretized formulation of the observation model (1). Solving the two-dimensional (2D) Blind-SIM reconstruction problem is equivalent to finding a joint solution $(\hat\rho, \{\hat I_m\}_{m=1}^M)$ to the following constrained minimization problem [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF]:
$$\min_{\rho,\{I_m\}_{m=1}^M}\ \sum_m \left\| y_m - H\,\mathrm{diag}(\rho)\,I_m \right\|^2 \tag{2a}$$
$$\text{subject to}\quad \sum_m I_m = M \times I_0 \tag{2b}$$
$$\text{and}\quad \rho_n \ge 0,\ I_{m;n} \ge 0,\quad \forall m, n \tag{2c}$$
with $H \in \mathbb{R}^{P\times N}$ the 2D convolution matrix built from the discretized PSF. We also denote $\rho = \mathrm{vect}(\rho_n) \in \mathbb{R}^N$ the discretized fluorescence density, $y_m = \mathrm{vect}(y_{m;n}) \in \mathbb{R}^P$ the m-th recorded image, and $I_m = \mathrm{vect}(I_{m;n}) \in \mathbb{R}^N_+$ the m-th illumination with expected spatial intensity $I_0 = \mathrm{vect}(I_{0;n}) \in \mathbb{R}^N_+$ (this latter quantity may be spatially inhomogeneous but it is supposed to be known). Let us remark that (2) is a biquadratic problem. Block coordinate descent alternating between the object and the illuminations could be a possible minimization strategy, relying on cyclically solving M + 1 quadratic programming problems [START_REF] Jost | Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction[END_REF]. In [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF], a more efficient but more complex scheme is proposed. However, the minimization problem (2) has a very specific structure, yielding a fast and simple strategy, as shown below.
A. Reformulation of the optimization problem
According to [START_REF] Labouesse | Fluorescence blind structured illumination microscopy: a new reconstruction strategy[END_REF], let us first consider problem (2) without the equality constraint (2b). It is equivalent to M independent quadratic minimization problems:
$$\min_{q_m}\ \|y_m - H q_m\|^2 \tag{3a}$$
$$\text{subject to}\quad q_m \ge 0, \tag{3b}$$
where we set $q_m := \mathrm{vect}(\rho_n \times I_{m;n})$. Each minimization problem (3) can be solved in a simple and efficient way (see Sec. IV), hence providing a set of global minimizers $\{\hat q_m\}_{m=1}^M$. Although the latter set corresponds to an infinite number of solutions $(\hat\rho, \{\hat I_m\}_{m=1}^M)$, the equality constraint (2b) defines a unique solution such that $\hat q_m = \mathrm{vect}(\hat\rho_n \times \hat I_{m;n})$ for all m:
$$\hat\rho = \mathrm{Diag}(I_0)^{-1}\,\hat{\bar q} \tag{4a}$$
$$\forall m \quad \hat I_m = \mathrm{Diag}(\hat\rho)^{-1}\,\hat q_m \tag{4b}$$
with $\hat{\bar q} := \frac{1}{M}\sum_m \hat q_m$. The solution (4) exists as long as $I_{0;n} \neq 0$ and $\hat\rho_n \neq 0$, ∀n. The first condition is met if the sample is illuminated everywhere (in average), which is an obvious minimal requirement. For any sample pixel such that $\hat\rho_n = 0$, the corresponding illumination $\hat I_{m;n}$ is not defined; this is not a problem as long as the fluorescence density ρ is the only quantity of interest. Let us also note that the following implication holds:
$$I_{0;n} \ge 0,\ \hat q_{m;n} \ge 0 \implies \hat I_{m;n} \ge 0 \ \text{and}\ \hat\rho_n \ge 0.$$
Because we are dealing with intensity patterns, the condition $I_{0;n} \ge 0$ is always met, hence the positivity of both the density and the illumination estimates, i.e., the positivity constraint (2c), is granted by (4). Indeed, it should be clear that combining (3) and (4) solves the original minimization problem (2): on the one hand, the equality constraint (2b) is met since²
$$\sum_m \hat I_m = M\,\mathrm{Diag}(\hat\rho)^{-1}\,\hat{\bar q} = M I_0 \tag{5}$$
and on the other hand, the solution (4) minimizes the criterion given in (2a) since it is built from $\{\hat q_m\}_{m=1}^M$, which minimizes (3a). Finally, it is worth noting that the constrained minimization problem (2) may have multiple solutions. In our reformulation, this ambiguity issue arises in the "minimization step" (3): while each problem (3) is convex quadratic, and thus admits only global solutions (which in turn provide a global solution to problem (2) when recombined according to (4a)-(4b)), it may not admit unique solutions since each criterion (3a) is not strictly convex³ in $q_m$. Furthermore, the positivity constraint (3b) prevents any direct analysis of these ambiguities. The next subsection underlines however the central role of this constraint in the joint Blind-SIM strategy originally proposed in [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF].
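From an implementation standpoint, the recombination step (4) amounts to a few array operations. The Python fragment below is a minimal sketch; the small constant `eps` guarding the divisions and the array shapes are assumptions, not prescriptions from the original method.

```python
import numpy as np

def recombine(q_hat, I0, eps=1e-12):
    """Recombination (4): q_hat is the (M, N, N) stack of solutions of (3)."""
    q_bar = q_hat.mean(axis=0)                     # (1/M) sum_m q_m
    rho = q_bar / np.maximum(I0, eps)              # density estimate (4a)
    I = q_hat / np.maximum(rho, eps)[None, :, :]   # illuminations (4b); undefined
    return rho, I                                  # where the density vanishes
```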
B. Super-resolution unveiled
Whereas the mechanism that conveys SR with known structured illuminations is well understood (see [START_REF] Gustafsson | Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy[END_REF] for instance), the SR capacity of joint Blind-SIM has not been characterized yet. It can be made clear, however, that the positivity constraint (2c) plays a central role in this regard. Let $H^+$ be the pseudo-inverse of H [START_REF] Golub | Matrix computation[END_REF], Sec. 5.5.4. Then, any solution to the problem (2a)-(2b), i.e., without positivity constraints, reads
$$\hat\rho = \mathrm{Diag}(I_0)^{-1}\left(H^+\bar y + \bar q^\perp\right) \tag{6a}$$
$$\hat I_m = \mathrm{Diag}(\hat\rho)^{-1}\left(H^+ y_m + q_m^\perp\right), \tag{6b}$$
with $\bar y = \frac{1}{M}\sum_m y_m$, and $\bar q^\perp = \frac{1}{M}\sum_m q_m^\perp$ where $q_m^\perp$ is an arbitrary element of the kernel of H, i.e. with arbitrary frequency components above the OTF cutoff frequency. Hence, the formulation (2a)-(2b) has no capacity to discriminate the correct high-frequency components, which means that it has no SR capacity. Under the positivity constraint (2c), we thus expect the SR mechanism to rest on the fact that each illumination pattern $I_m$ activates the positivity constraint on $q_m$ in a frequent manner. A numerical experiment is now considered to support this assertion. A set of M collected images is simulated following (1) with the PSF H given by the usual Airy pattern that reads in polar coordinates
$$H(r, \theta) = \frac{k_0^2}{\pi}\left(\frac{J_1(r\, k_0\, \mathrm{NA})}{k_0\, r}\right)^2, \qquad r \ge 0,\ \theta \in \mathbb{R}, \tag{7}$$
where $J_1$ is the first order Bessel function of the first kind, NA is the objective numerical aperture set to 1.49, and $k_0 = 2\pi/\lambda$ is the free-space wavenumber with λ the emission/excitation wavelength. The ground truth is the 2D 'star-like' fluorescence pattern depicted in Fig. 1(left). The image sampling step for all the simulations involving the star pattern is set⁴ to λ/20. For this numerical simulation, the illumination set $\{I_m\}_{m=1}^M$ consists of M = 200 modified speckle patterns, see Fig. 2(A). More precisely, a first set of illuminations is obtained by adding a positive constant (equal to 3) to each speckle pattern, resulting in illuminations that never activate the positivity constraint in (3). On the contrary, the second set of illuminations is built by subtracting a small positive constant (equal to 0.2) from each speckle pattern, the negative values being set to zero. The resulting illuminations are thus expected to activate the positivity constraint in (3). For both illumination sets, low-resolution microscope images are simulated and corrupted with Gaussian noise; in this case, the standard deviation was chosen so that the SNR of the total dataset is 40 dB. The corresponding reconstruction of the first product image $\hat q_1$, obtained via the resolution of (3), is shown in Fig. 2(B), while the retrieved sample (4a) is shown in Fig. 2(C); for each reconstruction, the spatial mean $I_0$ in (4a) is set to the statistical expectation of the corresponding illumination set. As expected, the reconstruction with the first illumination set is almost identical to the deconvolution of the wide-field image shown in Fig. 1(upper-right), i.e., there is no SR in this case. On the contrary, the second set of illuminations produces a super-resolved reconstruction, hence establishing the central role of the positivity constraint in the original joint reconstruction problem (2).
⁴ For an optical system modeled by (7), the sampling rate of the (diffraction-limited) acquisition is usually the Nyquist rate driven by the OTF cutoff frequency $\nu_{\mathrm{psf}} = 2 k_0\,\mathrm{NA}$. A higher sampling rate is obviously needed for the super-resolved reconstruction, the up-sampling factor between the "acquisition" and the "processing" rates being at least equal to the expected SR factor. Here, we adopt a common sampling rate for any simulation involving the star-like pattern (even with diffraction-limited images), as it allows a direct comparison of the reconstruction results.
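The two modified illumination sets are straightforward to construct numerically. The lines below are a sketch of the construction just described; the surrogate speckle stack is drawn here from the exponential point-wise law of fully-developed speckle (an assumption that ignores the spatial correlation, which is addressed in Sec. III-B).

```python
import numpy as np

rng = np.random.default_rng(0)
speckle = rng.exponential(1.0, size=(200, 128, 128))  # surrogate unit-mean speckle
I_shifted = speckle + 3.0                    # first set: never activates positivity
I_clipped = np.maximum(speckle - 0.2, 0.0)   # second set: many exact zeros
```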
III. A PENALIZED APPROACH FOR JOINT BLIND-SIM
As underlined in the beginning of Subsection II-B, there is an ambiguity issue concerning the original joint Blind-SIM reconstruction problem. A simple way to enforce unicity is to slightly modify (3) by adding a strictly convex penalization term. We are thus led to solving
$$\min_{q_m \ge 0}\ \|y_m - H q_m\|^2 + \varphi(q_m). \tag{8}$$
Another advantage of such an approach is that ϕ can be chosen so that robustness to the noise is granted and/or some expected features in the solution are enforced. In particular, the analysis conveyed above suggests that favoring sparsity in each $q_m$ is suited, since speckle or periodic illumination patterns tend to frequently cancel or nearly cancel the product images $q_m$. For such illuminations, the Near-Black Object introduced in Donoho's seminal paper [START_REF] Donoho | Maximum entropy and the nearly black object[END_REF] is an appropriate modeling and, following this line, we found that the separable "ℓ1 + ℓ2" penalty⁵ provides super-resolved reconstructions:
$$\varphi(q_m) := \alpha \sum_n |q_{m;n}| + \beta\,\|q_m\|^2, \qquad \alpha \ge 0,\ \beta > 0. \tag{9}$$
With properly tuned (α, β), our joint Blind-SIM strategy is expected to bring SR if "sparse" illumination patterns $I_m$ are used, i.e., if they enforce $q_{m;n} = 0$ for most (or at least many) n. More specifically, it is shown in [12, Sec. 4] that SR occurs if the number of non-zero $I_{m;n}$ (i.e., the number of non-zero components to retrieve in $q_m$) divided by N is lower than $\frac{1}{2}\,R/N$, with R/N the incompleteness ratio and R the rank of H. In addition, the resolving power is driven by the spacing between the components to retrieve, which, ideally, should be greater than the Rayleigh distance $\lambda/(2\,\mathrm{NA})$, see [12, pp. 56-57]. These conditions are rather stringent and hardly met by illumination patterns that can reasonably be considered in practice. These illumination patterns are usually either deterministic harmonic or quasi-harmonic⁶ patterns, or random speckle patterns, these latter illuminations being much easier to generate [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF]. Nevertheless, in both cases, a SR effect is observed in joint Blind-SIM. Moreover, one can try to maximize this effect via the tuning of some experimental parameters that are left to the designer of the setup. Such parameters are mainly: the period of the light grid and the number of grid shifts for harmonic patterns, and the spatial correlation length and the point-wise statistics of the speckle patterns. Investigating the SR properties with respect to these parameters on theoretical grounds seems out of reach. However, a numerical analysis is possible and some illustrative results are now provided that address this question. Reconstructions shown in the sequel are built from (4a) via the numerical resolution of (8)-(9). For the sake of clarity, all the algorithmic details concerning this minimization problem are reported in Sec. IV. These simulations were performed with low-resolution microscope images corrupted by additive Gaussian noise such that the signal-to-noise ratio (SNR) of the dataset $\{y_m\}_{m=1}^M$ is 40 dB. In addition, we note that this penalized joint Blind-SIM strategy requires an explicit tuning of some hyper-parameters, namely α and β in the regularization function (9). Further details concerning these parameters are reported in Sec. III-D.
A. Regular and distorted harmonic patterns
We first consider unknown harmonic patterns defining a "standard" SIM experiment with M = 18 patterns. More precisely, the illuminations are harmonic patterns of the form $I(r) = 1 + \cos(2\pi\nu^t r + \phi)$ where φ is the phase shift, and with $r = (x, y)^t$ and $\nu = (\nu_x, \nu_y)^t$ the spatial coordinates and the spatial frequencies of the harmonic function, respectively. Distorted versions of these patterns (deformed by optical aberrations such as astigmatism and coma) were also considered. Three distinct orientations $\theta := \tan^{-1}(\nu_y/\nu_x) \in \{0, 2\pi/3, 4\pi/3\}$ were considered, with six phase shifts of one sixth of the period for each orientation. The frequency of the harmonic patterns $\|\nu\| := (\nu_x^2 + \nu_y^2)^{1/2}$ is set to 80% of the OTF cutoff frequency, i.e., it lies inside the OTF support. One regular and one distorted pattern are depicted in Fig. 3(A) and the penalized joint Blind-SIM reconstructions are shown in Fig. 3(B). For both illumination sets, a clear SR effect occurs, which is similar to the one obtained with the original approach presented in [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF]. As expected, however, the reconstruction quality achieved in this blind context is lower than what can be obtained with standard harmonic SIM; for the sake of comparison, see Fig. 1(B). In addition, we note that some artifacts may appear if the number of phase shifts for each orientation is decreased, see Fig. 3(C-left). If we keep in mind that the retrieved
⁵ The super-resolved solution in [12] is obtained with a positivity constraint and a separable ℓ1 penalty. However, ambiguous solutions may exist in this case since the criterion to minimize is not strictly convex. The ℓ2 penalty in (9) is then mostly introduced for the technical reason that a unique solution exists for problem (8).
⁶ Dealing with distorted patterns is of particular practical importance since it allows to cope with the distortions and misalignments induced by the instrumental uncertainties or even by the sample itself [START_REF] Ayuk | Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm[END_REF], [START_REF] Jost | Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction[END_REF].
B. Speckle illumination patterns
We now consider second-order stationary speckle illuminations $I_m$ with known first-order statistics $I_{0;n} = I_0$, ∀n. Each of these patterns is a fully-developed speckle drawn from the pointwise intensity of a correlated circular Gaussian random field. The correlation is adjusted so that the pattern $I_m$ exhibits a spatial correlation of the form (7), but with a "numerical aperture" parameter $\mathrm{NA}_{\mathrm{ill}}$ that sets the correlation length to $\lambda/(2\,\mathrm{NA}_{\mathrm{ill}})$ within the random field. As an illustration, the speckle pattern shown in Fig. 4(A-left) was generated in the standard case⁷ $\mathrm{NA}_{\mathrm{ill}} = \mathrm{NA}$. From this set of regular (fully-developed) speckle patterns, we also consider another set of random illumination patterns built by squaring each speckle pattern, see Fig. 4(A). These "squared" patterns are considered hereafter because they give a deeper insight into the SR mechanism at work in joint Blind-SIM. Moreover, we discuss later in this subsection that these patterns can be generated with other microscopy techniques, hence extending the concept of random illumination microscopy to other optical configurations. From a statistical viewpoint, the probability distribution functions (pdf) of the "standard" and "squared" speckle patterns differ. For instance, the pdf of the squared speckle intensity is more concentrated around zero⁸ than the exponential pdf of the standard speckle intensity. In addition, the spatial correlation is also changed since the power spectral density of the "squared" random field spans twice the initial support of its speckle counterpart [START_REF] Denk | Two-photon laser scanning fluorescence microscopy[END_REF]. As a result, the "squared" speckle grains are sharper, and they enjoy a larger spatial separation. According to previous SR theoretical results [12, p. 57] (see also the beginning of Sec. III), these features may bring more SR in joint Blind-SIM than standard speckle patterns. This assumption was indeed corroborated by our simulations. For instance, the reconstructions in Fig. 4(B) were obtained from a single set of M = 1000 speckle patterns such that $\mathrm{NA}_{\mathrm{ill}} = \mathrm{NA}$: in this case, the "squared" illuminations (obtained by squaring the speckle patterns) provide a higher level of SR than the standard speckle illuminations.
⁷ It is usually considered that $\mathrm{NA}_{\mathrm{ill}} = \mathrm{NA}$ if the illumination and the collection of the fluorescent light are performed through the same optical device.
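Such correlated speckle patterns can be synthesized numerically. The sketch below is one possible recipe (an assumption, not necessarily the exact generator used for the figures): a complex Gaussian field is low-pass filtered by a disk-shaped pupil whose radius is set by NA_ill, its squared modulus gives the one-photon intensity, and squaring again produces the "squared" set.

```python
import numpy as np

def speckle_stacks(M, N, na_ill, wavelength, pixel, seed=0):
    """Correlated fully-developed speckle intensities and their squares."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(N, d=pixel)
    pupil = (f[None, :]**2 + f[:, None]**2) <= (na_ill / wavelength)**2
    field = rng.standard_normal((M, N, N)) + 1j * rng.standard_normal((M, N, N))
    field = np.fft.ifft2(np.fft.fft2(field) * pupil)    # impose the correlation
    I = np.abs(field)**2                                # one-photon speckle
    I /= I.mean(axis=(1, 2), keepdims=True)             # unit mean intensity
    return I, I**2                                      # 'standard' and 'squared'
```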
Figure 5 shows how the reconstruction quality varies with the number of illumination patterns. With very few illuminations, the sample is retrieved in the few places that are activated by the "hot spots" of the speckle patterns. This actually illustrates that the joint Blind-SIM approach is also an "activation" strategy in the spirit of PALM [START_REF] Betzig | Imaging intracellular fluorescent proteins at nanometer resolution[END_REF] and STORM [START_REF] Rust | Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)[END_REF]. With our strategy, the activation process is nevertheless enforced by the structured illumination patterns and not by the fluorescent markers staining the sample. This effect is more visible with the squared illumination patterns and, with these somehow sparser illuminations, the number of patterns needs to be increased so that the fluctuations in $\sum_m I_m$ are moderate, hence making the equality (2b) a legitimate constraint. We also stress that these simulations corroborate the empirical statement that M ≈ 9 harmonic illuminations and M ≈ 200 speckle illuminations produce comparable super-resolved reconstructions, see Fig. 3(C-left) and Fig. 5(B-left). Obviously, imaging with random speckle patterns remains an attractive strategy since it is achieved with a very simple experimental setup, see [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF] for details. For both random patterns, we also note that increasing the correlation length above the Rayleigh distance $\lambda/(2\,\mathrm{NA})$ (i.e., setting $\mathrm{NA}_{\mathrm{ill}} < \mathrm{NA}$) deteriorates the SR whereas, conversely, taking $\mathrm{NA}_{\mathrm{ill}} = 2\,\mathrm{NA}$ enhances it, see Fig. 6(A,B). However, the resolving power of the joint Blind-SIM estimate deteriorates if the correlation length is further decreased; for instance, uncorrelated speckle patterns are finally found to hardly produce any SR, see Fig. 6(C). Indeed, with arbitrarily small correlation lengths, many "hot spots" tend to be generated within a single Rayleigh distance, leading to this loss in the resolving power. Obviously, the "squared" speckle patterns are less sensitive to this problem because they are inherently sparser.
Finally, the experimental relevance of the simulations involving "squared" speckle illuminations needs to be addressed. Since a two-photon (2P) fluorescence interaction is sensitive to the square of the intensity [START_REF] Gu | Comparison of three-dimensional imaging properties between two-photon and single-photon fluorescence microscopy[END_REF], most of these simulations can actually be considered as wide-field 2P structured illumination experiments. Unlike one-photon (i.e., fully-developed) speckle illuminations⁹, though, a 2P interaction requires an excitation wavelength $\lambda_{\mathrm{ill}} \sim 1000$ nm that is roughly twice the one of the collected fluorescence $\lambda_{\mathrm{det}} \sim 500$ nm. The lateral 2P correlation length being $\lambda_{\mathrm{ill}}/(4\,\mathrm{NA}_{\mathrm{ill}})$, epi-illumination setups with one-photon (1P) and 2P illuminations provide similar lateral correlation lengths. This 2P instrumental configuration is simulated in Fig. 6(A-right), which does not show any significant SR improvement with respect to the 1P epi-illumination interaction shown in Fig. 5(C-left). The increased SR effect driven by "squared" illumination patterns can nevertheless be obtained with 2P interactions if the excitation and the collection are performed with separate objectives. For instance, the behaviors shown in Fig. 5(C-right) and in Fig. 6(B-right) can be obtained if the excitation NA is, respectively, twice and four times the collection NA. With these configurations, the 2P excitation exhibits a correlation length which is significantly smaller than the one driven by the objective PSF, and a strong SR improvement is observed in simulation by joint Blind-SIM. The less spectacular simulation shown in Fig. 6(C-right) can also be considered as a 2P excitation, in the "limit" case of a very low collection NA. The 1P simulation shown in Fig. 6(C-left) rather mimics a photo-acoustic imaging experiment [START_REF] Chaigne | Super-resolution photoacoustic fluctuation imaging with multiple speckle illumination[END_REF], an imaging technique for which the illumination lateral correlation length is negligible with respect to the PSF width.
As a final remark, we stress that 2P interactions are not the only way to generate sparse illumination patterns for the joint Blind-SIM. In particular, super-Rayleigh speckle patterns [START_REF] Bromberg | Generating non-Rayleigh speckles with tailored intensity statistics[END_REF] are promising candidates for that purpose.
C. Some reconstructions from real and mock data
The star test-pattern used so far is a simple and legitimate means to evaluate the resolving power of our strategy [START_REF] Horstmeyer | Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)[END_REF], but it hardly provides a convincing illustration of what can be expected with real data. Therefore, we now consider the processing of more realistic datasets with joint Blind-SIM. In this section, the microscope acquisitions are all designed so that the spatial sampling rate is equal to or slightly above the Nyquist rate $\lambda/(4\,\mathrm{NA})$. As a consequence, a preliminary up-sampling step of the camera acquisitions is performed so that their sampling rate reaches that of the super-resolved reconstruction.
As a first illustration, we consider a real dataset resulting from a test sample composed of fluorescent beads with diameters of 100 nm. A set of 100 one-photon speckle patterns is generated by a spatial light modulator and a laser source operating at $\lambda_{\mathrm{ill}} = 488$ nm. The fluorescent light at $\lambda_{\mathrm{coll}} = 520$ nm is collected through an objective with NA = 1.49 and recorded by a camera. The excitation and the collection of the emitted light are performed through the same objective, i.e., the setup is in epi-illumination mode. The total number of photons per camera pixel is about 65 000. In the perspective of further processing, this set of camera acquisitions is first up-sampled by a factor of two. Figure 7(A-left) shows the sum of these (up-sampled) acquisitions, which is similar to a wide-field image. Wiener deconvolution of this image can be performed so that all spatial frequencies transmitted by the OTF contribute equivalently to a diffraction-limited image of the beads, see Figure 7(A-middle). The processing of the dataset by the joint Blind-SIM strategy shown in Figure 7(A-right) reveals several beads that are otherwise unresolved on the diffraction-limited images, hence demonstrating a clear SR effect. In this case, the distance between the closest pair of resolved beads provides an upper bound for the final resolution, that is $\lambda_{\mathrm{coll}}/5$.
The experimental demonstration above does not involve any biological sample, and we now consider a simulation designed to be close to a real-world biological experiment. More specifically, the STORM reconstruction of a marked neuron¹⁰ is used as a ground truth to simulate a series of microscope acquisitions generated from one-photon speckle illuminations. Our simulation considers 300 illuminations and acquisitions, both performed through the same objective, at λ = 488 nm and with NA = 1. Each low-resolution acquisition is finally plagued with Poisson noise, the total photon budget being equal to 50 000 so that it fits that of a standard fluorescence wide-field image. The sample (ground truth) shown in Figure 7(B-left) interestingly exhibits a lattice structure with a 190 nm periodicity (on average) that is not resolved by the diffraction-limited image shown in Figure 7(B-middle). The joint Blind-SIM reconstruction in Figure 7(B-right) shows a significant improvement of the resolution, which reveals some parts of the underlying structure.
⁹ With one-photon interactions, the Stokes shift [START_REF] Lakowicz | Principles of Fluorescence Spectroscopy[END_REF] implies that the excitation and the fluorescence wavelengths are not strictly equivalent. The difference is however negligible in practice (about 10%), hence our assumption that one-photon interactions occur with identical wavelengths for both the excitation and the collection.
¹⁰ A rat hippocampal neuron in culture labelled with an anti-βIV-spectrin primary and a donkey anti-rabbit Alexa Fluor 647 secondary antibodies, imaged by STORM and processed similarly to [START_REF] Leterrier | Nanoscale architecture of the axon initial segment reveals an organized and robust scaffold[END_REF].

D. Tuning the regularization parameters

The tuning of parameters α and β in (9) is a pivotal issue since inappropriate values result in deteriorated reconstructions. On the one hand, the quadratic penalty in (9) was mostly introduced to ensure that the minimizer defined by (8) is unique (via strict convexity of the criterion). However, because high-frequency components in $q_m$ are progressively damped as β increases, the latter parameter can also be adjusted in order to prevent an over-amplification of the instrumental noise. A trade-off should nevertheless be sought since large values of β prevent super-resolution from occurring. For a given SNR, β is then maintained at a fixed (usually small) value. For instance, we chose $\beta = 10^{-6}$ for all the simulations involving the star pattern in this paper since they were performed with a rather favorable SNR. On the other hand, the quality of reconstruction crucially depends on parameter α. More precisely, larger values of α will provide sparser solutions $\hat q_m$, and thus a sparser reconstructed object $\hat\rho$. Fig. 8 shows an example of under-regularized and over-regularized solutions, respectively corresponding to a too small and a too large value of α. The prediction of the appropriate level of sparsity to seek for each $q_m$, or equivalently the tuning of the regularization parameter α, is not an easy task. Two main approaches can be considered. One relies on automatic tuning. For instance, a simple method called Morozov's discrepancy principle considers that the least-squares term $\|y_m - H\hat q_m\|^2$ should be chosen in proportion to the variance of the additive noise, the latter being assumed known [START_REF] Morozov | Methods for Solving Incorrectly Posed Problems[END_REF]. Other possibilities seek a trade-off between $\|y_m - H\hat q_m\|^2$ and $\varphi(\hat q_m)$. This is the case with the L-curve [START_REF] Hansen | Analysis of discrete ill-posed problems by means of the L-curve[END_REF], but also with the recent contribution [START_REF] Song | Regularization parameter estimation for non-negative hyperspectral image deconvolution[END_REF], which deals with a situation comparable to ours. Another option relies on a Bayesian interpretation of $\hat q_m$ as a maximum a posteriori solution, which opens the way to the estimation of α marginally of $q_m$. In this setting, Markov Chain Monte Carlo sampling [START_REF] Lucka | Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors[END_REF] or variational Bayes methods [START_REF] Babacan | Variational Bayesian blind deconvolution using a total variation prior[END_REF] could be employed. An alternate approach to automatic tuning consists in relying on a calibration step. It amounts to considering that similar acquisition conditions, applied to a given type of biological samples, lead to similar ranges of admissible values for the tuning of α. The validation of such a principle is however outside the scope of this article as it requires various experimental acquisitions from biological samples with known structures (or, at least, with some calibrated test patterns).
Concerning the examples proposed in the present section, the much simpler strategy consisted in selecting the reconstruction which is visually the "best" among the reconstructed images with varying α.
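As an illustration of the automatic-tuning route, a minimal discrepancy-principle loop is sketched below in Python. Here `solve_subproblem` and `H_apply` are hypothetical helpers (a solver for (8) at a given α, and the application of H), and the noise variance is assumed known; this is a sketch of the principle, not the procedure used for the figures.

```python
import numpy as np

def tune_alpha(y, solve_subproblem, H_apply, noise_var, a_lo=1e-4, a_hi=1e2):
    """Morozov's discrepancy principle: pick alpha so that the residual
    ||y - H q(alpha)||^2 matches the expected noise energy."""
    target = noise_var * y.size
    for _ in range(20):                       # geometric bisection on alpha
        a = np.sqrt(a_lo * a_hi)
        q = solve_subproblem(y, a)
        if np.sum((y - H_apply(q))**2) > target:
            a_hi = a                          # over-regularized: decrease alpha
        else:
            a_lo = a                          # under-regularized: increase alpha
    return np.sqrt(a_lo * a_hi)
```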
IV. A NEW PRECONDITIONED PROXIMAL ITERATION
We now consider the algorithmic issues involved in the constrained optimization problem (8)-(9). For the sake of simplicity, the subscript m in $y_m$ and $q_m$ will be dropped. The reader should however keep in mind that the algorithms presented below only aim at solving one of the M sub-problems involved in the final joint Blind-SIM reconstruction. Moreover, we stress that all simulations presented in this article are performed with a convolution matrix H with a block-circulant with circulant-block (BCCB) structure. The more general case of block-Toeplitz with Toeplitz-block (BTTB) structure is briefly addressed at the end of Subsection IV-C.
At first, let us note that (8)-(9) is an instance of the more general problem
$$\min_{q \in \mathbb{R}^N}\ \left[ f(q) := g(q) + h(q) \right] \tag{10}$$
where g and h are closed-convex functions that may not share the same regularity assumptions: g is supposed to be a smooth function with an L-Lipschitz continuous gradient ∇g, but h does not need to be smooth. Such a splitting aims at solving constrained non-smooth optimization problems by proximal (or forward-backward) iterations.
The next subsection presents the basic proximal algorithm and the well-known FISTA that usually improves the convergence speed.
A. Basic proximal and FISTA iterations
We first present the resolution of (10) in a general setting, then the penalized joint Blind-SIM problem (8) is addressed as our particular case of interest.
1) General setting: Let $q^{(0)}$ be an arbitrary initial guess; the basic proximal update $k \to k+1$ for minimizing the convex criterion f is [START_REF] Combettes | Signal recovery by proximal forwardbackward splitting[END_REF]- [START_REF] Combettes | Proximal splitting methods in signal processing[END_REF]
$$q^{(k+1)} \leftarrow P_{\gamma h}\!\left( q^{(k)} - \gamma\,\nabla g(q^{(k)}) \right) \tag{11}$$
where P γh is the proximity operator (or Moreau envelope) of the function γh [37, p.339]
$$P_{\gamma h}(q) := \arg\min_{x \in \mathbb{R}^N}\ h(x) + \frac{1}{2\gamma}\,\|x - q\|^2. \tag{12}$$
Figure 9 (caption): (a) FISTA, 10 iterations. For all these simulations, the initial guess is $q^{(0)} = 0$ and the regularization parameters are set to (α = 0.3, β = $10^{-6}$). The PPDS iteration implements the preconditioner given in (33) with $C = H^t H$ and a = 1, see Sec. IV-C for details.
Although this operator defines the update implicitly, an explicit form is actually available for many of the functions met in signal and image processing applications, see for instance [36, Table 10.2].
The Lipschitz constant L granted to ∇g plays an important role in the convergence of iterations (11). In particular, global convergence toward a solution of (10) occurs as long as the step size γ is chosen such that 0 < γ < 2/L. However, the convergence speed is usually very low and the following accelerated version named FISTA [START_REF] Beck | A fast iterative shrinkage-thresholding algorithm for linear inverse problems[END_REF] is usually preferred
$$q^{(k+1)} \leftarrow P_{\gamma h}\!\left( \omega^{(k)} - \gamma\,\nabla g(\omega^{(k)}) \right) \tag{13a}$$
$$\omega^{(k+1)} \leftarrow q^{(k+1)} + \frac{k-1}{k+2}\left( q^{(k+1)} - q^{(k)} \right). \tag{13b}$$
The convergence speed toward $\min_q f(q)$ achieved by (13) is $O(1/k^2)$, which is often considered as a substantial gain compared to the $O(1/k)$ rate of the basic proximal iteration. It should be noted however that this "accelerated" form may not always provide a faster convergence speed with respect to its standard counterpart, see for instance [START_REF] Combettes | Proximal splitting methods in signal processing[END_REF], Fig. 10.2. FISTA was nevertheless found to be faster for solving the constrained minimization problem involved in joint Blind-SIM, see Fig. 11. We finally stress that convergence of (13) is granted for 0 < γ < 1/L [START_REF] Beck | A fast iterative shrinkage-thresholding algorithm for linear inverse problems[END_REF].
2) Solution of the m-th joint Blind-SIM sub-problem: For the penalized joint Blind-SIM problem considered in this paper, the minimization problem (8) [equipped with the penalty (9)] takes the form (10) with
$$g(q) = \|y - Hq\|^2 + \beta\,\|q\|^2 \tag{14a}$$
$$h(q) = \alpha \sum_n \phi(q_n) \tag{14b}$$
where $\phi : \mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ is such that
$$\phi(u) := \begin{cases} u & \text{if } u \ge 0, \\ +\infty & \text{otherwise.} \end{cases} \tag{14c}$$
The gradient of the regular part in the splitting,
$$\nabla g(q) = 2\left( H^t(Hq - y) + \beta q \right), \tag{15}$$
is L-Lipschitz-continuous with $L = 2\left(\lambda_{\max}(H^tH) + \beta\right)$, where $\lambda_{\max}(A)$ denotes the highest eigenvalue of the matrix A. Furthermore, the proximity operator (12) with h defined by (14b) leads to the well-known soft-thresholding rule [START_REF] Moulin | Analysis of multiresolution image denoising schemes using generalized Gaussian and complexity priors[END_REF], [START_REF] Figueiredo | An EM algorithm for waveletbased image restoration[END_REF]
$$P_{\gamma h}(q) = \mathrm{vect}\left( \max\{q_n - \gamma\alpha,\, 0\} \right). \tag{16}$$
From a practical perspective, both the basic iteration (11) and its accelerated counterpart (13) are easily implemented at a very low computational cost¹¹ from equations (15) and (16). For our penalized joint Blind-SIM approach, however, we observed that both algorithms exhibit similar convergence behavior in terms of visual aspect of the current estimate. The convergence speed is also significantly slow: several hundred iterations are usually required for solving the M = 200 sub-problems involved in the joint Blind-SIM reconstruction shown in Fig. 5(B). In addition, Fig. 9(a-c) shows the reconstruction built with ten, fifty and one thousand FISTA iterations. Clearly, we would like this latter quality of reconstruction to be reached in a reasonable amount of time. The next subsection introduces a preconditioned primal-dual splitting strategy that achieves a much higher convergence speed, as illustrated by Fig. 9(right).
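As an indication of how light this implementation is, a possible FISTA loop for one sub-problem is sketched below in Python, assuming (as in the rest of the paper) a BCCB convolution so that H is applied through its transfer function `otf`; the step size and iteration count are arbitrary choices.

```python
import numpy as np

def fista_subproblem(y, otf, alpha, beta, n_iter=200):
    """FISTA iteration (13) with the gradient (15) and the prox (16)."""
    F, Fi = np.fft.fft2, np.fft.ifft2
    Hty = np.real(Fi(np.conj(otf) * F(y)))          # H^t y, computed once
    L = 2.0 * (np.max(np.abs(otf))**2 + beta)       # Lipschitz constant of grad g
    gamma = 0.9 / L                                 # step size, gamma < 1/L
    q = np.zeros_like(y)
    w = q.copy()
    for k in range(1, n_iter + 1):
        grad = 2.0 * (np.real(Fi(np.abs(otf)**2 * F(w))) - Hty + beta * w)
        q_new = np.maximum(w - gamma * grad - gamma * alpha, 0.0)  # prox (16)
        w = q_new + (k - 1.0) / (k + 2.0) * (q_new - q)            # update (13b)
        q = q_new
    return q
```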
B. Preconditioned primal-dual splitting
The preconditioning technique [42, p. 69] is formally equivalent to addressing the initial minimization problem (10) via a linear transformation $q := Pv$, where $P \in \mathbb{R}^{N\times N}$ is a symmetric positive-definite matrix. There is no formal difficulty in defining a preconditioned version of the proximal iteration (11). However, if one excepts the special case of diagonal matrices P [43]- [START_REF] Raguet | Preconditioning of a generalized forwardbackward splitting and application to optimization on graphs[END_REF], the proximity operator of $H(v) := h(Pv)$ cannot be obtained explicitly and needs to be computed approximately. As a result, solving a nested optimization problem is required at each iteration, hence increasing the overall computational cost of the algorithm and raising a convergence issue since the sub-iterations must be truncated in practice [START_REF] Becker | A quasi-Newton proximal splitting method[END_REF], [START_REF] Chouzenoux | Variable metric forwardbackward algorithm for minimizing the sum of a differentiable function and a convex function[END_REF]. Despite this difficulty, the preconditioning is widely accepted as a very effective way for accelerating proximal iterations. In the sequel, the versatile primal-dual splitting technique introduced in [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF], [START_REF] Vũ | A splitting algorithm for dual monotone inclusions involving cocoercive operators[END_REF], [START_REF] Combettes | Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators[END_REF] is used to propose a new preconditioned proximal iteration, without any nested optimization problem.
This new preconditioning technique is now presented for the generic problem [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF]. At first, we express the criterion f with respect to the transformed variables
f(Pv) = G(v) + h(Pv)    (17)
11 Since H is a convolution matrix, the computation of the gradient (15) can be performed by fast Fourier transform and vector dot-products, see for instance [START_REF] Vogel | Computational Methods for Inverse Problems[END_REF]Sec. 5.2.3].
with G(v) := g(P v). Since the criterion above is a particular case of the form considered in [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF]Eq. (45)], it can be optimized by a primal-dual iteration [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF]Eq. (55)] that reads
v^(k+1) ← v^(k) − θτ d^(k)    (18a)
ω^(k+1) ← ω^(k) + θΔ^(k)    (18b)

with

d^(k) := ∇G(v^(k)) + P ω^(k)    (19a)
Δ^(k) := P_{σh*}(ω^(k) + σP(v^(k) − 2τ d^(k))) − ω^(k)    (19b)
where the proximal mapping applied to h*, the Fenchel conjugate function of h, is easily obtained from Moreau's decomposition:

P_{σh*}(ω) = ω − σ P_{h/σ}(ω/σ).    (20)
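Since (20) is Moreau's decomposition, the conjugate prox comes for free once a prox of h is available; a one-line sketch (prox_h_over_sigma is assumed to implement P_{h/σ}):

    def prox_sigma_h_conj(omega, sigma, prox_h_over_sigma):
        # Moreau decomposition, Eq. (20):
        # P_{sigma h*}(omega) = omega - sigma * P_{h/sigma}(omega / sigma)
        return omega - sigma * prox_h_over_sigma(omega / sigma)

With h given by (14b), P_{h/σ} is the soft-thresholding of the previous sketch with γ = 1/σ, and the composition above reduces to min{ω_n, α}, which is exactly the rule (26) used below.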
The primal update (18a) can also be expressed with respect to the untransformed variables q:
q^(k+1) ← q^(k) − θτ Bζ^(k)    (21)
with ζ (k) := ∇g(q (k) )+ω (k) and B := P P . Since the update ( 21) is a preconditioned primal step, we expect that a clever choice of the preconditioning matrix B will provide a significant acceleration of the primal-dual procedure. In addition, we note that the quantity
a^(k) := ω^(k) + σP(v^(k) − 2τ d^(k)) involved in the dual step via (19b) also reads

a^(k) = ω^(k) + σ(q^(k) − 2τ Bζ^(k)).    (22)
Hereafter, the primal-dual updating pair (18b) and ( 21) is called a preconditioned primal-dual splitting (PPDS) iteration. Following [START_REF] Condat | A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms[END_REF]Theorem 5.1], the convergence of these PPDS iterations is granted if the following conditions are met for the parameters (θ, τ, σ):
σ > 0, τ > 0, θ > 0    (23a)
γ_{τ,σ} ∈ [1; 2)    (23b)
γ_{τ,σ} > θ    (23c)

with γ_{τ,σ} := 2 − (τL/2) [1 − τσ λmax(B)]^{-1}
, where L is the Lipschitz-continuity constant of ∇G, see Eq. (17). Within the convergence domain ensured by (23), the practical tuning of the parameter set (θ, τ, σ) is tedious as it may impair the convergence speed. We propose the following tuning strategy, which appeared to be very efficient. At first, we note that the step length τ relates only to the primal update (18a) whereas σ relates only to the dual update (18b) via Δ^(k). In addition, the relaxation parameter θ scales both the primal and the dual steps (18). Considering only under-relaxation (i.e., θ < 1), (23c) is unnecessary and (23b) is equivalent to the following bound
σ ≤ σ̄  with  σ̄ := (1/τ − L/2) λmax(B)^{-1}.    (24)
This relation defines an admissible domain for (τ, σ) under the condition θ < 1, see Fig. 10. Our strategy defines τ as the single tuning parameter of our PPDS iteration, the parameter σ being adjusted so that the dual step is maximized:
0 < τ < τ̄,  σ = σ̄  and  θ = 0.99,    (25)
with τ̄ := 2/L. We set θ arbitrarily close to 1 since practical evidence indicates that under-relaxing θ slows down the convergence rate. The numerical evaluation of the bounds τ̄ and σ̄ is application-dependent since they depend on L and λmax(B).
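In code, the tuning rule (25) boils down to a couple of lines; a sketch (the assert enforces the admissible domain of Fig. 10):

    def ppds_tuning(tau, L, lam_max_B, theta=0.99):
        # tuning rule (25): choose 0 < tau < 2/L, then maximize the
        # dual step with sigma = sigma_bar given by (24)
        assert 0.0 < tau < 2.0 / L
        sigma = (1.0 / tau - L / 2.0) / lam_max_B
        return theta, tau, sigma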
Fig. 10. Admissible domain for (τ, σ) ensuring the global convergence of the PPDS iteration with θ ∈ (0; 1), see Equation (24).
C. Resolution of the joint Blind-SIM sub-problem
For our specific problem, the implementation of the PPDS iteration requires first the conjugate function [START_REF] Labouesse | Fluorescence blind structured illumination microscopy: a new reconstruction strategy[END_REF]: with h defined by (14b), the Fenchel conjugate is easily found and reads
P_{σh*}(ω) = vect(min{ω_n, α}).    (26)
The updating rule for the PPDS iteration then reads
q^(k+1) ← q^(k) − θτ Bζ^(k)    (27a)
ω^(k+1) ← ω^(k) + θΔ^(k)    (27b)

with Δ^(k) = vect(min{a_n^(k), α}) − ω^(k) and a_n^(k) the n-th component of the vector a^(k) defined in (22). We note that the positivity constraint is not enforced in the primal update (27a). Primal feasibility (i.e., positivity) therefore occurs only asymptotically, thanks to the global convergence of the sequence (27) toward the minimizer of the functional (8). Compared to FISTA, this behavior may be considered as a drawback of the PPDS iteration. However, we do believe that the ability of the PPDS iteration to "transfer" the hard constraint from the primal to the dual step is precisely the cornerstone of the acceleration provided by preconditioning. Obviously, such an acceleration requires that the preconditioner B is wisely chosen. For our joint Blind-SIM problem, the preconditioning matrix is derived from the Geman and Yang semi-quadratic construction [START_REF] Geman | Nonlinear image recovery with half-quadratic regularization[END_REF], [51, Eq. (6)]
B = (1/2) (C + β I_d / a)^{-1}    (28)
where I d is the identity matrix and a > 0 is a free parameter of the preconditioner. We choose C in the class of positive semidefinite matrix with a BCCB structure [START_REF] Vogel | Computational Methods for Inverse Problems[END_REF]Sec. 5.2.5]. This choice enforces that B is also BCCB, which considerably helps in reducing the computational burden: (i) B can be stored efficiently 12 and (ii) the matrix-vector product Bζ (k) in (27a) can be computed with O(N log N ) complexity by the bidimensional fast Fourier transform (FFT) algorithm. Obviously, if the observation model H is also a BCCB matrix built from the discretized OTF, the choice C = H t H in (28) leads to B = (∇ 2 g) -1 for a = 1. Such a preconditioner is expected to bring the fastest asymptotic convergence since it corrects the curvature anisotropies induced by the regular part g in the criterion [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF].
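Concretely, a BCCB preconditioner is stored and applied through its DFT eigenvalues; a NumPy sketch for the choice C = H^tH (h_hat denotes the discretized OTF, a 2-D array; names are ours):

    import numpy as np

    def bccb_preconditioner_eigs(h_hat, beta, a):
        # eigenvalues of B = (1/2) (H^t H + beta * I / a)^{-1};
        # the eigenvalues of H^t H are the squared OTF magnitudes
        gamma = np.abs(h_hat) ** 2
        return 0.5 / (gamma + beta / a)

    def apply_B(zeta, b_eigs):
        # O(N log N) matrix-vector product B @ zeta via the 2-D FFT
        return np.real(np.fft.ifft2(b_eigs * np.fft.fft2(zeta)))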
The PPDS pseudo-code for solving the joint Blind-SIM problem is given in Algorithm 1.

 1  Given quantities:
 2    PSF h, dataset {y_m}_{m=1,...,M}, average intensity I_0 ∈ R^N_+;
 3    Regularization parameters: β, α ∈ R_+;
 4    PPDS parameters: a ∈ R_+; θ ∈ (0, 1); τ ∈ (0, 2/L); kmax ∈ N;
 5  Initialization:
 6    ρ ← 0; σ ← σ̄ [see (24)];
 7    ĥ ← FFT(h); γ̄ ← ĥ* ⊙ ĥ; b̂ ← 2γ̄ + 2β/a;
 8  // The outer loop: processing each view y_m...
 9  for m = 1 ... M do
10    ŷ ← FFT(y_m); q̂^(0) ← FFT(q_m^(0)); ω̂^(0) ← FFT(ω_m^(0));
11    // The inner loop: PPDS minimization...
12    for k = 0 ... kmax do
13      // The primal step (Fourier domain)...
14      d̂^(k) ← (ω̂^(k) − 2(ĥ* ⊙ ŷ − (γ̄ + β) ⊙ q̂^(k))) ⊘ b̂;
15      q̂^(k+1) ← q̂^(k) − θτ d̂^(k);
16      // The dual step (direct domain)...
17      a^(k) ← FFT^(-1)(ω̂^(k) + σ(q̂^(k) − 2τ d̂^(k)));
18      ω^(k+1) ← (1 − θ) ω^(k) + θ vect(min{a_n^(k), α});
19      // Prepare next PPDS iteration...
20      q̂^(k) ← q̂^(k+1); ω̂^(k) ← FFT(ω^(k+1));
21    end
22    // Building-up the joint Blind-SIM estimate...
23    ρ ← ρ + (1/M) FFT^(-1)(q̂^(k)) ⊘ I_0;
24  end
25  Final result: the joint Blind-SIM estimate is stored in ρ.

Algorithm 1: Pseudo-code of the joint Blind-SIM PPDS algorithm, assuming that H is a BCCB matrix and C = H^tH. The symbols ⊙ and ⊘ denote the component-wise product and division, respectively. For the sake of simplicity, this pseudo-code implements a very simple stopping rule based on a maximum number of minimizing steps, see line 11. In practice, a more elaborate stopping rule could be used by monitoring the norm ||ζ^(k)|| defined with (21), since it tends towards 0 as q^(k) asymptotically reaches the constrained minimizer of the m-th nested problem.

This pseudo-code requires that L and λmax(B) are given for the tuning (25): we get

λmax(B) = 1/λmin(B^(-1)) = a (2β)^(-1)    (29a)

since H is rank deficient in our context, and the Lipschitz constant, which reads L = λmax(B ∇²g), can be further simplified as

L = a    if a ≥ 1
L = (γ̄max + β)(γ̄max + β/a)^(-1)    otherwise,    (29b)
with γ̄max the maximum of the square magnitude of the OTF components. From the pseudo-code, we also note that the computation of the primal update (27a) remains in the Fourier domain during the PPDS iteration, see line 14. With this strategy (possible because ∇g is a linear function), the computational burden per PPDS iteration 13 is dominated by one single forward/inverse FFT pair, i.e., PPDS and FISTA have equivalent computational burdens per iteration.

We now illustrate the performance of the PPDS iterations for minimizing the penalized criterion involved in the joint Blind-SIM reconstruction problem shown in Fig. 9 (right). These simulations were performed with a standard MATLAB implementation of the pseudo-code shown in Algorithm 1. We set a = 1 so that the preconditioner B is the inverse of the Hessian of g in (8). With this tuning, we expect that the PPDS iterations exhibit a very favorable convergence rate as long as the set of active constraints is correctly identified. Starting from the initial guess q^(0) = 0 (the dual variables being set accordingly to ω^(0) = −∇g(q^(0)), see for instance [START_REF] Bertsekas | Nonlinear programming[END_REF]Sec. 3.3]), the criterion value of the PPDS iteration depicted in Fig. 11 exhibits an asymptotic convergence rate that can be considered as super-linear. Other tunings for a (not shown here) were tested and found to slow down the convergence speed. The pivotal role of the preconditioning in the convergence speed is also underlined since the PPDS algorithm becomes as slow as the standard proximal iteration when we set B = I_d, see the "PDS" curve in Fig. 11. In addition, one can note from the reconstructions shown in Fig. 9 that the high-frequency components (i.e., the SR effect) are brought in the very early iterations. Actually, once PPDS is properly tuned, we always found that it offers a substantial acceleration with respect to the FISTA (or the standard proximal) iterates.
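For readers who prefer NumPy to MATLAB, a minimal transcription of the inner loop of Algorithm 1 (lines 12-20) is sketched below; it assumes a BCCB model with C = H^tH and is meant as an illustration, not the authors' code:

    import numpy as np

    def ppds_inner_loop(y, q0, omega0, h_hat, beta, a, alpha,
                        theta, tau, sigma, kmax):
        gamma = np.abs(h_hat) ** 2            # |OTF|^2, eigenvalues of H^t H
        b_hat = 2.0 * gamma + 2.0 * beta / a  # inverse eigenvalues of B
        y_hat = np.fft.fft2(y)
        q_hat = np.fft.fft2(q0)
        omega = np.array(omega0, dtype=float)
        omega_hat = np.fft.fft2(omega)
        for _ in range(kmax):
            # primal step (Fourier domain): d = B (grad g + omega), line 14
            d_hat = (omega_hat - 2.0 * (np.conj(h_hat) * y_hat
                                        - (gamma + beta) * q_hat)) / b_hat
            # dual step (direct domain), evaluated at the current primal point
            a_k = np.real(np.fft.ifft2(omega_hat
                                       + sigma * (q_hat - 2.0 * tau * d_hat)))
            q_hat = q_hat - theta * tau * d_hat
            omega = (1.0 - theta) * omega + theta * np.minimum(a_k, alpha)
            omega_hat = np.fft.fft2(omega)
        return np.real(np.fft.ifft2(q_hat))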
Finally, let us recall that the numerical simulations were performed with a BCCB convolution matrix H. In some cases, the implicit periodic boundary assumption 14 enforced by such matrices is not appropriate and a convolution model with a zero boundary assumption is preferable, which results in a matrix H with a BTTB structure. In such a case, the product of any vector by H^tH can still be performed efficiently in O(N log N) via the FFT algorithm, see for instance [START_REF] Vogel | Computational Methods for Inverse Problems[END_REF]Sec. 5.2.3]. This applies to the computation of ∇g(q^(k)) in the primal step (21), according to (15). In contrast, exact system solving as required by (21) cannot be implemented in O(N log N) anymore if the matrix H is only BTTB (and not BCCB). In such a situation, one can define C as a BCCB approximation of H^tH, so that the preconditioning matrix B = (C + βI_d)^(-1) remains BCCB, while ensuring that B(H^tH + βI_d) has a clustered spectrum around 1 as the size N increases [START_REF] Chan | Conjugate gradient methods for Toeplitz systems[END_REF]Th. 4.6].

14 Let us recall that the matrix-vector multiplication Hq with H a BCCB matrix corresponds to the circular convolution of q with the convolution kernel that defines H.
Finally, another practical issue arises from the numerical evaluation of L. No direct extension of (29b) is available when H is BTTB but not BCCB. However, according to (25), global convergence of the PPDS iterations is still granted if τ < 2/L̄ with L ≤ L̄. For instance, L̄ := λmax(B)(||H||∞ ||H||1 + β) is an easy-to-compute upper bound of L.
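For a convolution matrix built from a kernel h, both ||H||∞ and ||H||1 are bounded by the l1-norm of h, so this bound can be evaluated directly from the kernel; a sketch (the returned value is itself an upper bound of L̄, hence of L, so the step-size rule remains valid):

    import numpy as np

    def lipschitz_upper_bound(h, beta, lam_max_B):
        # L_bar = lambda_max(B) * (||H||_inf * ||H||_1 + beta) <= this value
        h1 = np.sum(np.abs(h))
        return lam_max_B * (h1 * h1 + beta)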
V. CONCLUSION
The speckle-based fluorescence microscope proposed in [START_REF] Mudry | Structured illumination microscopy using unknown speckle patterns[END_REF] holds the promise of a super-resolved optical imager that is cheap and easy to use. The SR mechanism behind this strategy, that was not explained, is now properly linked with the sparsity of the illumination patterns. This readily relates joint Blind-SIM to localization microscopy techniques such as PALM [START_REF] Betzig | Imaging intracellular fluorescent proteins at nanometer resolution[END_REF] where the image sparsity is indeed brought by the sample itself. This finding also suggests that "optimized" random patterns can be used to enhance SR, one example being the two-photon excitations proposed in this paper. Obviously, even with such excitations, the massively sparse activation process at work with PALM/STORM remains unparalleled and one may not expect a resolution with joint Blind-SIM greater than twice or three times the resolution of a wide-field microscope. We note, however, that this analysis of the SR mechanism is only valid when the sample and the illumination patterns are jointly retrieved. In other words, this article does not tell anything about the SR obtained from marginal estimation techniques that estimates the sample only, see for instance [START_REF] Min | Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery[END_REF]- [START_REF] Chaigne | Super-resolution photoacoustic fluctuation imaging with multiple speckle illumination[END_REF]. Indeed, the SR properties of such "marginal" techniques are rather distinct [START_REF] Idier | A theoretical analysis of the super-resolution capacity of imagers using unknown speckle illuminations[END_REF].
From a practical perspective, the joint Blind-SIM strategy should be tested shortly with experimental datasets. One expected difficulty arising in the processing of real data is the strong background level induced in the focal plane by the out-of-focus light. This phenomenon prevents the local extinction of the excitation intensity, hence destroying the expected SR in joint Blind-SIM. A natural approach would be to solve the reconstruction problem in its 3D structure, which is numerically challenging, but remains a mandatory step to achieve 3D speckle SIM reconstructions [START_REF] Negash | Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations[END_REF]. The modeling of the out-of-focus background with a very smooth function is possible [START_REF] Orieux | Bayesian estimation for optimized structured illumination microscopy[END_REF] and will be considered for a fast 2D reconstruction of the sample in the focal plane.
Another important motivation of this work is the reduction of the computational time in joint Blind-SIM reconstructions. The reformulation of the original (large-scale) minimization problem is a first pivotal step as it leads to M sub-problems, all sharing the same structure, see Sec. II-A. The new preconditioned proximal iteration proposed in Sec. IV-B is also decisive as it efficiently tackles each sub-problem. In our opinion, this "preconditioned primal-dual splitting" (PPDS) technique is of general interest as it yields preconditioned proximal iterations that are easy to implement and provably convergent. For our specific problem, the criterion values are found to converge much faster with the PPDS iteration than with the standard proximal iterations (e.g., FISTA). We do believe, however, that PPDS deserves further investigations, both from the theoretical and the experimental viewpoints. This minimization strategy should be tested with other observation models and prior models. For example, as a natural extension of this work, we will consider shortly the Poisson distribution in the case of image acquisitions with low photon counting rates. The global and local convergence properties of PPDS should be explored extensively, in particular when the preconditioning matrix varies over the iterations. This issue is of importance if one aims at defining quasi-Newton proximal iterations with PPDS in a general context.
Fig. 1 .
1 Fig. 1. [Row A] Lower-right quarter of the (160×160 pixels) groundtruth fluorescence pattern considered in [1] (left) and deconvolution of the corresponding wide-field image (right). The dashed (resp. solid) lines corresponds to the spatial frequencies transmitted by the OTF support (resp. twice the OTF support). [Row B] Positivity-constrained reconstruction from known illumination patterns: (left) M = 9 harmonic patterns and (right) M = 200 speckle patterns. The distance units along the horizontal and vertical axes are given in wavelength λ.
Fig. 2 .
2 Fig. 2. [Row A] One product image qm = vect(ρn ×Im;n) built from one of the 200 illumination patterns used for generating the dataset: (left) a positive constant is added to the standard speckle patterns so that the lowest value is much greater that zero; (right) a positive constant is subtracted to the standard speckle patterns and negative values are set to zero. [Row B] Reconstruction of the product image qm that corresponds to the one shown above. [Row C] Final reconstruction ρ achieved with the whole set of illuminations -see Subsection II-B for details.
Fig. 3 .
3 Fig. 3. Harmonic patterns: [Row A] One illumination pattern Im drawn from the set of regular (left) and distorted (right) harmonic patterns. [Row B] Corresponding penalized joint Blind-SIM reconstructions. [Row C] (left) Decreasing the number of phase shifts from 6 to 3 brings some reconstruction artifacts, see (B-left) for comparison. (right) Increasing the modulation frequency ||ν|| of the harmonic patterns above the OTF cutoff frequency prevents the super-resolution to occur. [Row D] Low-resolution image ym drawn from the dataset for a modulation frequency ||ν|| lying inside (left) and outside (right) the OTF domain-see Sec. III-A for details.
Fig. 4 .
4 Fig. 4. Speckle patterns: [Row A] One speckle illumination such that NA ill = NA (left) and its "squared" counterpart (right). [Row B] Corresponding penalized joint Blind-SIM reconstructions from M = 1000 speckle (left) and "squared" speckle (right) patterns
Fig. 5 .
5 Fig. 5. Speckle patterns (continued): Penalized joint Blind-SIM reconstructions from standard speckle (left) and "squared" speckle (right) patterns. The number of illumination patterns considered for reconstruction is M = 10 (A), M = 200 (B) and M = 10000 (C).
Fig. 6 .
6 Fig. 6. Speckle patterns (continued): The correlation length of speckle and "squared" speckle patterns drives the level of super-resolution in the penalized joint Blind-SIM reconstruction: [Rows A] reconstruction from M = 10000 speckle patterns with NA ill = 0.5 NA (left) and from the corresponding "squared" random-patterns (right). [Rows B] idem with NA ill = 2 NA. [Rows C] idem with uncorrelated patterns.
Fig. 7 .
7 Fig. 7. Processing of real and mock data: [Row A] Fluorescent beads with diameters of 100 nm are illuminated by 100 fully-developed (i.e., onephoton) speckle patterns through an illumination/collection objective (NA = 1.49). The sum of the acquisitions of the fluorescent light (left) and its Wiener deconvolution (middle) provide diffraction limited images of the beads. The joint Blind-SIM reconstruction performed with the hyper-parameters set to β = 5 × 10 -5 and α = 0.4 is significantly more resolved (right). The sampling rate used in these images is 32.5 nm, corresponding to an up-sampling factor of two with respect to the camera sampling. [Row B] STORM reconstruction of a marked rat neuron showing a lattice structure with a 190-nm periodicity (left). Deconvolution of the simulated wide-field image (middle). Joint Blind-SIM reconstruction of the sample obtained from 300 (one-photon) speckle patterns; the hyper-parameters are set to β = 2 × 10 -5 and α = 1.5 (right). The sampling rate of the STORM ground-truth image is 11.4 nm. The sampling rate of the joint Blind-SIM reconstruction is 28.5 nm, corresponding to an up-sampling factor of four with respect to the camera sampling. The distance units along the horizontal and vertical axes are given in wavelength λ coll , i.e., 520 nm in row A and 488 nm in row B.
Fig. 8 .
8 Fig.8. Penalized Blind-SIM reconstructions from the dataset used to generate the super-resolved reconstruction shown in Fig.4(B-left). The hyper-parameter β was set to 10 -6 in any case, and α was set to 10 -3 (left) and 0.9 (right). For the sake of comparison, our tuning for the reconstruction shown in Fig.4(B-left) is β = 10 -6 and α = 0.3.
Fig. 9 .
9 Fig. 9. Harmonic joint Blind-SIM reconstruction of the fluorescence pattern achieved by the minimization of the criterion (8) with 10, 50 or 1000 FISTA (a-c) or PPDS (d-f) iterations. For all these simulations, the initial guess is q^(0) = 0 and the regularization parameters are set to (α = 0.3, β = 10^-6). The PPDS iteration implements the preconditioner given in (28) with C = H^tH and a = 1, see Sec. IV-C for details.
12 Any BCCB matrix B reads B = F†ΛF with F the unitary discrete Fourier transform matrix, '†' the transpose-conjugate operator, and Λ := Diag(b̂) where b̂ := vect(b̂_n) are the eigenvalues of B, see for instance [41, Sec. 5.2.5]. As a result, the storage requirement reduces to the storage of b̂.
Fig. 11 .
11 Fig. 11. Criterion value (upper plots) and distance to the minimizer (lower plots) as a function of the PPDS iterations for the reconstruction problem considered in Fig. 9. The chosen initial guess is q^(0) = 0 for the primal variables and ω^(0) = −∇g(q^(0)) for the dual variables. The preconditioning parameter is set to a = 1 and (θ, τ, σ) were set according to the tuning rule (25). For the sake of completeness, the curves of the FISTA iterations and the PDS iterations (i.e., the PPDS equipped with the identity preconditioning matrix B = I_d) are also reported.
Whenever ρn = 0, the corresponding entry in the illumination pattern estimates (4b) can be set to Im;n = I 0;n /M for all m, hence preserving the positivity (2c) and the constraint (2b).
A constrained quadratic problem such as (3) is strictly convex if and only if the matrix H is full rank. In our case, however, H is rank deficient since its spectrum is the OTF that is strictly support-limited.
Assuming a fully-developed speckle, the fluctuation in Im;n is driven by an exponential pdf with parameter I 0 whereas the pdf of the "squared" pointwise intensity Jm;n := I 2 m,n is a Weibull distribution with shape parameter k = 0.5 and scale parameter λ = I 2 0 .
The MATLAB implementation of the PPDS pseudo-code Algorithm 1 requires less than 6 ms per iteration on a standard laptop (Intel Core M 1.3 GHz). For the sake of comparison, one FISTA iteration takes almost 5 ms on the same laptop.
ACKNOWLEDGMENTS
The authors are grateful to the anonymous reviewers for their valuable comments, and to Christophe Leterrier for the STORM image used in Section III.
Agence Nationale de la Recherche (ANR-12-BS03-0006).
01460742 | en | [
"sde.ie",
"chim.anal"
] | 2024/03/05 22:32:13 | 2017 | https://univ-rennes.hal.science/hal-01460742/file/Causse%20et%20al.%20-%20Direct%20DOC%20and%20nitrate%20determination%20in%20water%20usin.pdf | Jean Causse
Olivier Thomas
Aude-Valérie Jung
Marie-Florence Thomas
email: [email protected]
Direct DOC and nitrate determination in water using dual pathlength and second derivative UV spectrophotometry
Keywords: UV spectrophotometry, second derivative, nitrate, DOC, freshwaters, dual optical pathlength
A UV spectrophotometric measurement of raw samples (without filtration), coupling a dual pathlength for spectra acquisition with the second derivative exploitation of the signal, is proposed in this work. The determination of nitrate concentration is carried out from the second derivative of the absorbance at 226 nm, corresponding to the inflexion point of the nitrate signal decrease. A short optical pathlength can be used considering the strong absorption of the nitrate ion around 210 nm. For DOC concentration determination, the second derivative absorbance at 295 nm is proposed after nitrate correction. As organic matter absorbs only slightly in the 270-330 nm window, a long optical pathlength must be selected in order to increase the sensitivity. The method was tested on several hundred samples from small rivers of two agricultural watersheds located in Brittany, France, taken during dry and wet periods. The comparison between the proposed method and the standardised procedures for nitrate and DOC measurement gave a good adjustment for both parameters, for ranges of 2-100 mg/L NO3 and 1-30 mg/L DOC.
Introduction
Nutrient monitoring in water bodies is still a challenge. The knowledge of nutrient concentrations as nitrate and dissolved organic carbon (DOC) in freshwater bodies is important for the assessment of the quality impairment of water resources touched by eutrophication or harmful algal blooms for example. The export of these nutrients in freshwater is often characterized on one hand, by a high spatio-temporal variability regarding seasonal change, agricultural practices, hydrological regime, tourism and on the other hand, by the nature and mode of nutrient sources (punctual/diffuse, continuous/discontinuous) [START_REF] Causse | Variability of N export in water: a review[END_REF]. In this context the monitoring of nitrate and DOC must be rapid and
easy to use in the field, and UV spectrophotometry is certainly the best-suited technique for that purpose, given the great number of works, applications and systems proposed in recent decades.
Nitrate monitoring with UV sensing is a much more mature technique than DOC assessment by UV because the nitrate ion has a specific and strong absorption. Several methods are available for drawing a relationship between UV absorbance and nitrate concentration using wavelength(s) around 200-220 nm, usually after sample filtration to eliminate interferences from suspended solids. Considering the presence of potential interferences such as dissolved organic matter (DOM) in real freshwater samples, the use of at least two wavelengths increases the quality of adjustment. The absorbance measurement at 205 and 300 nm was proposed by [START_REF] Edwards | Determination of Nitrate in Water Containing[END_REF] and the second derivative absorbance (SDA) calculated from three wavelengths was promoted by Suzuki and Kuroda (1987) and [START_REF] Crumpton | Nitrate and organic N analysis with 2nd-derivative spectroscopy[END_REF]. A comparison of the two methods (two wavelengths and SDA), carried out on almost 100 freshwater samples from different stations in a 35 km² watershed, gave data comparable with ion chromatography analysis (Olsen, 2008). Other methods based on the exploitation of the whole UV spectrum were also proposed in the last decades, namely for wastewater and sea water, with the aim of a better treatment of interferences. Several multiwavelength methods were thus designed, such as the polynomial modelisation of UV responses of organic matter and colloids (Thomas et al., 1990), a semi-deterministic approach including reference spectra (nitrate) and experimental spectra of organic matter, suspended solids and colloids (Thomas et al., 1993), or the partial least square regression (PLSR) method built into a field-portable UV sensor (Langergraber et al., 2003). Kröckel et al. (2011) proposed a combined method of exploitation (multi-component analysis (MCA) integrating reference spectra and a polynomial modelisation of humic acids), associated to a miniaturized spectrophotometer with a capillary flow cell, for groundwater monitoring. More recently, a comparison between two different commercial in situ spectrophotometers, a double wavelength spectrophotometer (DWS) and a multiwavelength one (MWS) with PLSR resolution, was carried out by Huebsch et al. (2015). The findings were that the MWS offers more possibilities for calibration and error detection, but requires more expertise compared with the DWS.
Contrary to UV measurement of nitrate in water, DOC is associated with a bulk of dissolved organic matter (DOM) with UV absorption properties less known and defined than nitrate.
The study of the relation between absorbing DOM (chromophoric DOM, or CDOM) and DOC has given rise to numerous works on the characterisation of CDOM by UV spectrophotometry or fluorescence on the one hand, and on the assessment of DOC concentration from the measurement of UV parameters on the other hand. Historically, the absorbance at 254 nm (A254), 254 nm being the emission wavelength of the low-pressure mercury lamp used in the first UV systems, was the first proxy for the estimation of total organic carbon [START_REF] Dobbs | The use of ultra-violet absorbance for monitoring the total organic carbon content of water and wastewater[END_REF], and was standardised in 1995 [START_REF] Eaton | Measuring UV absorbing organics: a standard method[END_REF]. Then the specific UV absorbance, the ratio of the absorbance at 254 nm (A254) divided by the DOC value, was also standardized ten years later (Potter and Wimsatt, 2005). Among the more recent works, Spencer et al. (2012) showed strong correlations between CDOM absorption (absorbance at 254 and 350 nm namely) and DOC for a lot of samples from 30 US watersheds. [START_REF] Carter | Freshwater DOM quantity and quality from a two-component model of UV absorbance[END_REF] proposed a two-component model, one component absorbing strongly and representing aromatic chromophores and the other absorbing weakly and associated with hydrophilic substances. After calibration at 270 and 350 nm, the validation of the model for DOC assessment was quite satisfactory for 1700 filtered surface water samples from North America and the UK. This method was also used for waters draining upland catchments, and it was found that both a single-wavelength proxy (263 nm or 230 nm) and a two-wavelength model performed well for both pore water and surface water (Peacock et al., 2014). Besides these one- or two-wavelength methods, the use of chemometric ones was also proposed, at the same time as nitrate determination from a single spectrum acquisition (Thomas et al., 1993; Rieger et al., 2004). [START_REF] Avagyan | Application of high-resolution spectral absorbance measurements to determine dissolved organic carbon concentration in remote areas[END_REF] estimated DOC from the signal of a UV-vis submersible sensor, with the recommendation to create site-specific calibration models to achieve the optimal accuracy of DOC quantification.
Among the above methods proposed for UV spectra exploitation, the second derivative of the absorbance (SDA) is rather few considered even if SDA is used in other application fields to enhance the signal and resolve the overlapping of peaks [START_REF] Bosch Ojeda | Recent applications in derivative ultraviolet/visible absorption spectrophotometry: 2009-2011[END_REF]. Applied to the exploitation of UV spectra of freshwaters SDA is able to suppress or reduce the signal linked to the physical diffuse absorbance of colloids and particles and slight shoulders can be revealed (Thomas and Burgess, 2007). If SDA was proposed for nitrate [START_REF] Crumpton | Nitrate and organic N analysis with 2nd-derivative spectroscopy[END_REF], its use for DOC has not been yet reported as well as a simultaneous SDA method for nitrate and DOC determination of raw sample (without filtration). This can be explained by the difficulty to obtain a specific response of organic matter and nitrate, in particular in the presence of high concentration of nitrate or high turbidity that cause spectra saturation and interferences. In this context, the aim of this work is to propose a new method to optimize the simultaneous measurement of DOC and nitrate using both dual optical pathlength and second derivative UV spectrophotometry.
Material and methods
Water samples
Water samples were taken from the Ic and Frémur watersheds (Brittany, France) through very different conditions during the hydrological year 2013-2014. These two rural watersheds of 86 km² and 77 km² respectively are concerned by water quality alteration with risks of green algae tides, closures of some beaches and contamination of seafood at their outlet. 580 samples were taken from spot-sampling (342 samples) on 32 different subwatersheds by dry or wet weather (defined for 5 mm of rain or more, 24 h before sampling) and by auto-
sampling (233 samples) during flood events on 3 subwatersheds. For wet weather, sampling was planned by spot-sampling before the rain, or programmed according to the local weather forecast to ensure a sample collection proportional to the flow. Samples were collected in 1L polyethylene bottles (24 for auto-sampler ISCO 3700) following the best available practices.
Samples were transported to the laboratory in a cooler and stored at 5 ± 3 °C (NF EN ISO 5667-3, 2013).
Data acquisition
Nitrate concentration was analyzed according to NF EN ISO 13395 standard thanks to a continuous flow analyzer (Futura Alliance Instrument). Dissolved organic carbon (DOC) was determined by thermal oxidation coupled with infrared detection (Multi N/C 2100, Analytik Jena) following acidification with HCl (standard NF EN 1484). Samples were filtered prior to the measurement with 0.45 µm HA Membrane Filters (Millipore®).
Turbidity (NF EN ISO 7027, 2000) was measured in situ for each sample, with a multiparameter probe (OTT Hydrolab MS5) for spot-sampling and with an Odeon probe (Neotek-Ponsel, France) for auto-sampling stations.
Finally discharge data at hydrological stations were retrieved from the database of the national program of discharge monitoring.
UV measurement
Spectra acquisition
UV spectra were acquired with a Perkin Elmer Lambda 35 UV/Vis spectrophotometer, between 200 and 400 nm with different Suprasil® quartz cells (acquisition step: 1 nm, scan speed: 1920 nm/min). Two types of quartz cells were used for each sample. A short path length cell of 2 mm was firstly used to avoid absorbance saturation in the wavelength domain strongly influenced by nitrate below 240 nm (linearity limited to 2.0 a.u.). On the contrary, a
longer pathlength cell (20 mm) was used in order to increase the signal for wavelengths outside of the influence of nitrates (> 240 nm approx.). Regarding a classic UV spectrophotometer with a pathlength cell of 10 mm, these dual pathlength devices act as a spectrophotometric dilution/concentration system, adapted to a high range of variation of nitrate concentrations in particular.
Preliminary observation
Before explaining the proposed method, it is worth recalling the qualitative relation between UV spectrum shape and water quality. Figure 1 shows two spectra of raw freshwaters with the same nitrate concentration (9.8 mgNO3/L) taken among the samples of the present work. These spectra are quite typical of freshwaters. While the nitrate signal is clearly identified by the half-Gaussian shape below 240 nm, the signal of the organic matter responsible for DOC is very weak, with only a slight shoulder above 250 nm. In this context, the use of SDA, already proposed for nitrate determination [START_REF] Crumpton | Nitrate and organic N analysis with 2nd-derivative spectroscopy[END_REF] and giving a maximum for any inflexion point in the decreasing part of the signal after a peak or a shoulder, can be useful. However, given the absorbance values above 250 nm, the use of a longer optical pathlength is recommended in order to increase the sensitivity of the method.
Methodology
The general methodology is presented in Figure 2. Firstly, a UV spectrum is obtained directly from a raw sample (without filtration or pretreatment) with a 2 mm pathlength (PL) cell. If the absorbance value at 210 nm (A210) is greater than 2 a.u., a dilution with distilled water must be carried out. If not, the second derivative of the absorbance (SDA) at 226 nm is used for nitrate determination. The SDA value at a given wavelength λ is calculated according to Equation 1 (Thomas and Burgess 2007):
SDA_λ = k * (A_{λ-h} + A_{λ+h} - 2 A_λ) / h²    [1]
where A λ is the absorbance value at wavelength λ, k is an arbitrary constant (chosen here equal to 1000) and h is the derivative step (here set at 10 nm).
Given the variability between successive SDA values linked to the electronic noise of the spectrophotometer, a smoothing step of the SDA spectrum is sometimes required, particularly when the initial absorbance values are low (< 0.1 a.u.). This smoothing step is based on the Savitzky-Golay method (Savitzky and Golay, 1964).
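For illustration, Equation 1 and the smoothing step can be implemented in a few lines; the following Python sketch assumes a spectrum sampled every 1 nm (as acquired here), and the Savitzky-Golay window length and polynomial order are illustrative choices, not values from this work:

    import numpy as np
    from scipy.signal import savgol_filter

    def sda(absorbance, step_nm=1.0, h=10, k=1000):
        # Equation 1: SDA(l) = k * (A(l-h) + A(l+h) - 2*A(l)) / h**2
        n = int(round(h / step_nm))            # derivative step in samples
        A = np.asarray(absorbance, dtype=float)
        out = np.full_like(A, np.nan)          # undefined at the spectrum edges
        out[n:-n] = k * (A[:-2 * n] + A[2 * n:] - 2.0 * A[n:-n]) / h ** 2
        return out

    # optional smoothing when absorbances are low (< 0.1 a.u.)
    # sda_smoothed = savgol_filter(sda_values, window_length=11, polyorder=3)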
For DOC measurement, the SDA value at 295 nm is used if A250 is greater than 0.1 a.u. If A250 is lower than 0.1, the intensity of absorbances must be increased with the use of a 20 mm pathlength cell. After the SDA295 calculation, a correction from the value of SDA226, linked to the interference of nitrate around 300 nm, is carried out. This point will be explained in the DOC calibration section. From the results of SDA values and the corresponding concentrations of nitrate, a calibration is obtained for nitrate concentrations ranging up to 100 mgNO3/L (Figure 4). This high value of nitrate concentration is possible thanks to the use of the 2 mm pathlength cell. Deduced from the calibration line, the R² value is very close to 1 and the limit of detection (LOD) is 0.32 mgNO3/L. For DOC calibration, the procedure is different from the one for nitrate given the absence of a standard solution for DOC covering the complexity of dissolved organic matter. A test set of 49 samples was chosen among the samples described hereafter, according to their DOC concentration up to 20 mgC/L. The choice of the SDA value at 295 nm is deduced from the examination of the second derivative spectra of some samples of the test set (Figure 5). Two peaks can be observed, the first one around 290 nm, and the second one, less defined, around 330 nm (Figure 5a). The maximum of the first peak is linked to the DOC content, but its position shifts between 290 and 300 nm, because of the relation between DOC and nitrate concentration, with relatively more important SDA values when DOC is low (Thomas et al., 2014).
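Putting the acquisition rules above together, the decision flow of Figure 2 can be sketched as follows (a simple illustration with the thresholds stated in the text; function and variable names are ours):

    def acquisition_protocol(a210_2mm, a250):
        # dilution check on the short-pathlength spectrum
        if a210_2mm > 2.0:
            return "dilute the sample with distilled water, then re-acquire"
        steps = ["nitrate: SDA at 226 nm on the 2 mm spectrum"]
        # DOC channel: a longer cell is needed when the signal is too weak
        cell = "current cell" if a250 >= 0.1 else "20 mm cell"
        steps.append("DOC: nitrate-corrected SDA at 295 nm, " + cell)
        return steps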
Considering that the measurement is carried out with a long optical pathlength for DOC, and that nitrate also absorbs in this region, its presence must be taken into account. On Figure 5a the second derivative spectrum of a 50 mgNO 3 /L of nitrate presents a valley (negative peak) around 310 nm and a small but large peak around 330 nm. Based on this observation, a correction is proposed for the SDA of the different samples (equation 2):
SDA* = SDAsample -SDAnitrate [2]
where SDA* is the corrected SDA, SDA_sample is the SDA calculated from the acquired spectrum of a given sample, and SDA_nitrate is the SDA value corresponding to the nitrate concentration of that sample.
After correction, the second derivative spectra show only a slight shift for the first peak around 300 nm and the peak around 330 nm is no more present (Figure 5b). From this observation, the SDA value at 295 nm is chosen for DOC assessment.
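A sketch of the resulting DOC readout is given below; sda295_nitrate denotes the SDA value at 295 nm of a nitrate solution at the concentration found from SDA226, and the calibration coefficients are placeholders to be fitted as in Figure 6, not values from this work:

    def doc_from_sda295(sda295_sample, sda295_nitrate, slope, intercept):
        # Equation 2: remove the nitrate contribution at 295 nm...
        sda_star = sda295_sample - sda295_nitrate
        # ...then apply the linear calibration SDA*295 -> DOC (mgC/L)
        return slope * sda_star + intercept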
Samples characteristics
For this work, a great number of samples were necessary for covering the different subwatersheds characteristics and the variability of hydrometeorological conditions all along the hydrological year. 580 samples were taken from 32 stations and the majority of samples were taken in spring and summer time with regard to the principal land use of the two watersheds and the corresponding agricultural practices, namely fertilization (Figure 7).
Nitrate and DOC concentrations ranged respectively from 2.9 to 98.5 mgNO3/L and from 0.7 to 28.9 mgC/L. The river flows were between 0.8 L/s and 6299 L/s and turbidity between 0.1 and 821 NTU, after the rainy periods.
Validation on freshwater samples
The validation of the method for nitrate determination was carried out on 580 samples (Figure 8). The adjustment between measured and estimated values of nitrate concentration gave a R 2 greater than 0.99 and a RMSE of 2.32 mgNO 3 /L. The slope is close to 1 and the ordinate is slightly negative (-1.68) which will be explained in the discussion section. The validation of the method for DOC determination was carried out on 580 samples (Figure 9). The adjustment between measured and estimated values of DOC concentration gave a R 2 greater than 0.95 and a RMSE close to 1 mgC/L. The slope is close to 1 and the ordinate is low (0.086 mgC/L).
Interferences
Except for DOC assessment for which the value of SDA at 295 nm must be corrected by the presence of nitrate, different interferences have to be considered in nitrate measurement.
Since nitrate absorbs in the first exploitable window of UV spectrophotometry (measurement below 200 nm being quite impossible given the strong absorption of dioxygen), the presence of nitrite, with a maximum of absorption at 213 nm, could be a problem. However, the molar absorption coefficient of nitrite is equal to half that of nitrate (Thomas and Burgess, 2007) and usual concentrations of nitrite are much lower in freshwaters than nitrate ones (Raimonet et al., 2015). For other interferences linked to the presence of suspended solids or colloids (for raw samples) and organic matter for nitrate determination, Figure 10 shows some examples of spectral responses for samples of the test set taken under contrasted conditions (dry and wet weather) and corrected from nitrate absorption (i.e. the contribution of nitrate absorption is deduced from the initial spectrum of each sample). The spectral shape is mainly explained by the combination of physical (suspended solids and colloids) and chemical (DOM) responses. Suspended solids are responsible for a linear decrease of absorbance up to 350 nm and more, colloids for an exponential decrease between 200 and 240 nm, and the main effect of the presence of DOM is the shoulder shape between 250 and 300 nm, whose intensity is linked to the DOC content. Thus the spectral shape is not linear around the inflexion point of the nitrate spectrum (226 nm), and the corresponding second derivative values, being low at 226 nm, give a theoretical concentration under 2 mgNO3/L at maximum. This observation explains the slight negative ordinate of the validation curve (Figure 8). In order to confirm the need for nitrate correction of SDA at 295 nm for DOC determination, the adjustment between DOC and SDA at 295 nm without nitrate correction was carried out on the same set of samples as for DOC calibration (Figure 6). Compared to the characteristics of the corrected calibration line, the determination coefficient is lower (0.983 against 0.996) and the slope is greater (1.2 times), as well as the ordinate (7.9 times), the RMSE (5.6 times) and the LOD (5.4 against 1.1 mgC/L). These observations can be explained by the shift of the peak (around 290-295 nm) and the hypochromic effect of nitrate on the SDA value of the sample at 295 nm (see Figure 5), showing the importance of the nitrate correction for DOC determination.
Another interfering substance can be free residual chlorine absorbing almost equally at 200nm and 291nm with a molar absorption coefficient of 7.96*10 4 m 2 /mol at 291nm (Thomas O. and Burgess C., 2007), preventing the use of the method for chlorinated drinking waters.
Optical pathlength influence for NO3 and DOC
Two optical pathlengths are proposed for the method: a short one (2 mm) for nitrate determination and a longer one (20 mm) for DOC (Figure 2). However, considering that the optimal spectrophotometric range of UV spectrophotometers, between 0.1 and 2.0 a.u. (Thomas and Burgess, 2007), must be respected, other optical pathlengths can be chosen for some water samples depending on their UV response. If the absorbance value of a sample is lower than 0.1 a.u. at 200 nm with the 2 mm optical pathlength, a 20 mm quartz cell must be used. Respectively, if the absorbance value is lower than 0.1 at 300 nm with the proposed optical pathlength of 20 mm, a 100 mm one must be used. This can be the case when nitrate or DOC concentration is very low, given the inverse relationship often existing between these two parameters (Thomas et al., 2014). A comparison of the use of different optical pathlengths for DOC estimation gives an R² value of 0.70 for the short pathlength (2 mm) against 0.96 for the recommended one (20 mm). Finally, the choice of a dual pathlength measurement was recently proposed by [START_REF] Chen | Development of variable pathlength UV-vis spectroscopy combined with partial-least-squares regression for wastewater chemical oxygen demand (COD) monitoring[END_REF] to successfully improve the chemical oxygen demand estimation in wastewater samples, by using a PLS regression model applied to the two spectra.
Conclusion
A simple and rapid method for the UV determination of DOC and nitrate in raw freshwater samples, without filtration, is proposed in this work:
- Starting from the acquisition of UV absorption spectra with two optical pathlengths (2 and 20 mm), the second derivative values at 226 and 295 nm are respectively used for nitrate and DOC measurement.
- After a calibration step with standard solutions for nitrate and samples of known DOC content for DOC, LODs of 0.3 mgNO3/L for nitrate and 1.1 mgC/L for DOC were obtained, for ranges up to 100 mgNO3/L and 0-25 mgC/L.
- Given its simplicity, this method can be handled without chemometric expertise and adapted on site with field-portable UV sensors or spectrophotometers.
It is the first UV procedure based on the use of the second derivative absorbance at 295nm for DOC determination, and calculated after correction of nitrate interference from the acquisition of the UV absorption spectrum with a long optical pathlength (20mm or more). This is a simple way to enhance the slight absorption shoulder around 280-300nm due to the presence of organic matter. Moreover the interferences of suspended matter and colloids being negligible on the second derivative signal, the measurement can be carried out for both parameters on raw freshwater samples without filtration. Finally, even if the validation of the method was carried out on a high number of freshwater samples covering different hydrological conditions, further experimentations should be envisaged in order to check the applicability of the method to the variability of DOM nature.
Figure 1 :
1 Figure 1: Example of UV spectra of raw freshwaters with the same concentration of nitrate
Figure 2 : General methodology
Figure 3
3 Figure 3 shows spectra and second derivatives of standard nitrate solutions. Nitrate strongly absorbs around 200-210 nm with a molar absorption coefficient of 8.63*10 5 m 2 /mol at 205.6
Figure 3 :
3 Figure 3 : Spectra of standard solutions of nitrate (raw absorbances left, and SDA right)
Figure 4 :
4 Figure 4 : Calibration line for nitrate determination from SDA at 226 nm (R 2 being equal to 1,
Figure5: Second derivative spectra of water samples without nitrate correction (5a with a
Fig 6 :
6 Fig 6: Calibration between SDA at 295nm corrected by nitrate (SDA* 295 ) and DOC
Figure 7 :
7 Figure 7: Relevance of samples in relation to land use, seasonality, land use and physic-
Fig 8 :
8 Fig 8: Relation between measured and estimated (from SDA 226 ) NO 3 concentrations for 580
Fig 9 :
9 Fig 9: Relation between measured and estimated (by SDA* 295 ) DOC concentrations for 580
Figure 10 :
10 Figure 10: Spectra of freshwater samples corrected from nitrate absorbance. Nitrate, DOC and
The peak around 295nm for the second derivative spectra reveals the existence of an inflexion point at the right part of the slight shoulder of the absorbance spectrum, between 250 and 300 nm. This observation can be connected with the use of the spectral slope between 265 and 305 nm(Galgani et al., 2011) to study the impact of photodegradation and mixing processes on the optical properties of dissolved organic matter (DOM) in the complement of fluorescence in two Argentine lakes. Fichot and Benner (2012) also used the spectral slope between 275 and 295 nm for CDOM characterisation and its use as tracers of the percent terrigenous DOC in river-influenced ocean margins.Helms et al. (2008) propose to consider two distinct spectral slope regions (275-295 nm and 350-400 nm) within log-transformed absorption spectra in order to compare DOM from contrasting water types. The use of the logtransformed spectra was recently proposed byRoccaro et al. (2015) for raw and treated drinking water and the spectral slopes between 280 and 350nm were shown to be correlated to the reactivity of DOM and the formation of potential disinfection by-products. Finally a very recent study(Hansen et al., 2016) based on the use of DOM optical properties for the discrimination of DOM sources and processing (biodegradation, photodegradation), focused on the complexity of DOM nature made-up of a mixture of sources with variable degrees of microbial and photolytic processing and on the need for further studies on optical properties of DOM. Thus, despite the high number of samples considered for this work and the contrasted hydrological conditions covered, the relevance of DOM nature as representing all types of DOM existing in freshwaters is not ensured. The transposition of the method, at least for DOC assessment, supposes to verify the existence of the second derivative peak at 290-300 nm and the quality of the relation between the SDA value at 295 (after nitrate correction), and the DOC content.
-
The method validation was carried out for around 580 freshwater samples representing different hydrological conditions in two agricultural watersheds.
Acknowledgement
The authors wish to thank the Association Nationale de la Recherche et de la Technologie (ANRT), and Coop de France Ouest for their funding during the PhD of Jean Causse (PhD grant and data collection), the Agence de l'Eau Loire-Bretagne and the Conseil Régional de Bretagne for their financial support (project C&N transfert).
Etheridge, J.R., Birgand, F., Osborne, J. a., Osburn, C.L., Burchell Ii, M.R., Irving, J., 2014. Using in situ ultraviolet-visual spectroscopy to measure nitrogen, carbon, phosphorus, and suspended solids concentrations at a high frequency in a brackish tidal marsh. Limnol. Oceanogr. Methods 12, 10-22.
Fichot, C.G., Benner, R., 2012. The spectral slope coefficient of chromophoric dissolved organic matter (S275-295) as a tracer of terrigenous dissolved organic carbon in river-influenced ocean margins. Limnol. Oceanogr. 57, 1453-1466. Galgani, L., Tognazzi, A., Rossi, C., Ricci, M., Angel Galvez, J., Dattilo, A.M., Cozar, A., Bracchini, L., Loiselle, S.A., 2011. Assessing the optical changes in dissolved organic
01765751 | en | [
"sdv"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01765751/file/Ferrarini2018.pdf | M G Ferrarini
S G Mucha
D Parrot
G Meiffren
J F R Bachega
G Comt
A Zaha
email: [email protected]
M F Sagot
Hydrogen peroxide production and myo-inositol metabolism as important traits for virulence of Mycoplasma hyopneumoniae
Keywords:
Mycoplasma hyopneumoniae is the causative agent of enzootic pneumonia. In our previous work, we reconstructed the metabolic models of this species along with two other mycoplasmas from the respiratory tract of swine: Mycoplasma hyorhinis, considered less pathogenic but which nonetheless causes disease and Mycoplasma flocculare, a commensal bacterium. We identified metabolic differences that partially explained their different levels of pathogenicity. One important trait was the production of hydrogen peroxide from the glycerol metabolism only in the pathogenic species. Another important feature was a pathway for the metabolism of myo-inositol in M. hyopneumoniae. Here, we tested these traits to understand their relation to the different levels of pathogenicity, comparing not only the species but also pathogenic and attenuated strains of M. hyopneumoniae. Regarding the myo-inositol metabolism, we show that only M. hyopneumoniae assimilated this carbohydrate and remained viable when myo-inositol was the primary energy source. Strikingly, only the two pathogenic strains of M. hyopneumoniae produced hydrogen peroxide in complex medium. We also show that this production was dependent on the presence of glycerol. Although further functional tests are needed, we present in this work two interesting metabolic traits of M. hyopneumoniae that might be directly related to its enhanced virulence.
Introduction
The notion that the lungs are sterile is frequently stated in textbooks; however, no modern studies have provided evidence for the absence of microorganisms in this environment [START_REF] Dickson | The lung microbiome: New principles for respiratory bacteriology in health and disease[END_REF]. Several bacteria colonize the respiratory tract of swine.
Mycoplasma hyopneumoniae, Mycoplasma flocculare, and Mycoplasma hyorhinis are some of the most important species identified so far [START_REF] Mare | New species: Mycoplasma hyopneumoniae; a causative agent of virus pig pneumonia[END_REF][START_REF] Meyling | Serological identification of a new porcine Mycoplasma species, Mycoplasma flocculare[END_REF][START_REF] Rose | Taxonomy of some swine Mycoplasmas: Mycoplasma suipneumoniae goodwin et al. 1965, a later, objective synonym of Mycoplasma hyopneumoniae mare and switzer 1965, and the status of Mycoplasma flocculare meyling and friis 1972[END_REF][START_REF] Siqueira | Microbiome overview in swine lungs[END_REF]. M. hyopneumoniae is widespread in pig populations and is the causative agent of enzootic pneumonia [START_REF] Maes | Enzootic pneumonia in pigs[END_REF]; M. hyorhinis, although not as pathogenic as M. hyopneumoniae, has already been found as the sole causative agent of pneumonia, polyserositis and arthritis in pigs [START_REF] Kobisch | Swine mycoplasmoses[END_REF][START_REF] Davenport | Polyserositis in pigs caused by infection with Mycoplasma[END_REF][START_REF] Whittlestone | Porcine mycoplasmas[END_REF][START_REF] Thacker | Mycoplasmosis[END_REF]. M. flocculare, on the other hand, has high prevalence in swine herds worldwide, but up to now, is still considered a commensal bacterium [START_REF] Kobisch | Swine mycoplasmoses[END_REF].
Because of the genomic resemblance of these three Mycoplasma species [START_REF] Stemke | Phylogenetic relationships of three porcine mycoplasmas, Mycoplasma hyopneumoniae, Mycoplasma flocculare, and Mycoplasma hyorhinis, and complete 16S rRNA sequence of M. flocculare[END_REF][START_REF] Siqueira | New insights on the biology of swine respiratory tract mycoplasmas from a comparative genome analysis[END_REF], it remains unclear why M. hyopneumoniae can become highly virulent if compared with the other two. It is also essential to understand that the simple presence or absence of each species is not in itself a determinant factor in the development of enzootic pneumonia: most piglets are thought to be vertically infected with M. hyopneumoniae at birth [START_REF] Maes | Enzootic pneumonia in pigs[END_REF][START_REF] Fano | Assessment of the effect of sow parity on the prevalence of Mycoplasma hyopneumoniae in piglets at weaning[END_REF][START_REF] Sibila | Current perspectives on the diagnosis and epidemiology of Mycoplasma hyopneumoniae infection[END_REF] and many can become carriers of the pathogen throughout their entire life without developing acute pneumonia. Moreover, M. hyopneumoniae also persists longer in the respiratory tract, either in healthy animals or even after successful treatment of the disease [START_REF] Thacker | Interaction between Mycoplasma hyopneumoniae and swine influenza virus[END_REF][START_REF] Ruiz | Mycoplasma hyopneumoniae colonization of pigs sired by different boars[END_REF][START_REF] Fano | Dynamics and persistence of Mycoplasma hyopneumoniae infection in pigs[END_REF][START_REF] Overesch | Persistence of Mycoplasma hyopneumoniae sequence types in spite of a control program for enzootic pneumonia in pigs[END_REF].
To make matters more complex, different strains of each species display different levels of pathogenicity, or even lack it entirely. For instance, M. hyopneumoniae has six sequenced strains, two of which are known to have been attenuated by culture passages [START_REF] Zielinski | Effect of growth in cell cultures and strain on virulence of Mycoplasma hyopneumoniae for swine[END_REF][START_REF] Liu | Comparative genomic analyses of Mycoplasma hyopneumoniae pathogenic 168 strain and its high-passaged attenuated strain[END_REF]. These strains cannot cause the clinical symptoms of pneumonia in vivo, and it is still not clear why.
In contrast to other pathogenic bacteria, and as revealed by the analysis of the sequenced genomes from several mycoplasmas [START_REF] Himmelreich | Complete sequence analysis of the genome of the bacterium Mycoplasma pneumoniae[END_REF][START_REF] Chambaud | The complete genome sequence of the murine respiratory pathogen Mycoplasma pulmonis[END_REF][START_REF] Minion | The genome sequence of Mycoplasma hyopneumoniae strain 232, the agent of swine mycoplasmosis[END_REF][START_REF] Vasconcelos | Swine and poultry pathogens: the complete genome sequences of two[END_REF][START_REF] Siqueira | New insights on the biology of swine respiratory tract mycoplasmas from a comparative genome analysis[END_REF], pathogenic Mycoplasma species seem to lack typical primary virulence factors such as toxins, invasins, and cytolysins [START_REF] Pilo | A metabolic enzyme as a primary virulence factor of Mycoplasma mycoides subsp. mycoides small colony[END_REF][START_REF] Maes | Update on Mycoplasma hyopneumoniae infections in pigs: Knowledge gaps for improved disease control[END_REF]. For this reason, classical concepts of virulence genes are usually problematic, and a broader concept of virulence is used for these species. In this way, a virulence gene in mycoplasmas is described as any gene that is non-essential for conventional in vitro growth but essential for optimal survival (colonization, persistence or pathology) inside the host [START_REF] Browning | Identification and characterization of virulence genes in mycoplasmas[END_REF].
There have been many different types of virulence factors described so far in several Mycoplasma species, most of them related to adhesion [START_REF] Razin | Mycoplasma adhesion[END_REF], invasion [START_REF] Burki | Virulence, persistence and dissemination of Mycoplasma bovis[END_REF], cytotoxicity [START_REF] Vilei | Genetic and biochemical characterization of glycerol uptake in Mycoplasma mycoides subsp. mycoides SC: its impact on H(2)O(2) production and virulence[END_REF][START_REF] Hames | Glycerol metabolism is important for cytotoxicity of Mycoplasma pneumoniae[END_REF], host-evasion [START_REF] Simmons | How Some Mycoplasmas evade host immune responses[END_REF] and host-immunomodulation [START_REF] Katz | Comparison of mitogens from Mycoplasma pulmonis and Mycoplasma neurolyticum[END_REF][START_REF] Waites | Mycoplasma pneumoniae and its role as a human pathogen[END_REF].
As for M. hyopneumoniae and M. hyorhinis, adhesion factors such as antigen surface proteins and the ability of these organisms to produce a capsular polysaccharide have already been described in the literature [START_REF] Whittlestone | Porcine mycoplasmas[END_REF][START_REF] Tajima | Interaction of Mycoplasma hyopneumoniae with the porcine respiratory epithelium as observed by electron microscopy[END_REF][START_REF] Citti | Elongated versions of Vlp surface lipoproteins protect Mycoplasma hyorhinis escape variants from growth-inhibiting host antibodies[END_REF][START_REF] Djordjevic | Proteolytic processing of the Mycoplasma hyopneumoniae cilium adhesin[END_REF][START_REF] Seymour | Mhp182 (P102) binds fibronectin and contributes to the recruitment of plasmin(ogen) to the Mycoplasma hyopneumoniae cell surface[END_REF]. However, while the diseases caused by these swine mycoplasmas have been extensively studied, only recently has their metabolism been explored from a mathematical and computational point of view, by our group [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF]. We are well aware that metabolism does not fully explain the pathologies caused by any of them. However, adhesion proteins, classically related to virulence in mycoplasmas, cannot be associated with the different levels of pathogenicity between M. hyopneumoniae and M. flocculare. Both species harbor similar sets of adhesion proteins [START_REF] Siqueira | Unravelling the transcriptome profile of the swine respiratory tract mycoplasmas[END_REF] and have been shown to adhere to cilia in a similar way [START_REF] Young | A tissue culture system to study respiratory ciliary epithelial adherence of selected swine mycoplasmas[END_REF]. Thus, it remains unclear what prevents M. flocculare from causing disease in this context.
In our previous work [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF], we compared the reconstructed metabolic models of these three Mycoplasma species, and pointed out important metabolic differences that could partly explain the different levels of pathogenicity between the three species. The most important trait was related to the glycerol metabolism, more specifically the turnover of glycerol-3-phosphate into dihydroxyacetone-phosphate (DHAP) by the action of glycerol-3-phosphate oxidase (GlpO, EC 1.1.3.21), which was only present in the genomes of M. hyorhinis and M. hyopneumoniae. This would allow the usage of glycerol as a primary energy source, with the production of highly toxic hydrogen peroxide in the presence of molecular oxygen. The metabolism of glycerol and the subsequent production of hydrogen peroxide by the action of GlpO are essential for the cytotoxicity of lung pathogens Mycoplasma pneumoniae [START_REF] Hames | Glycerol metabolism is important for cytotoxicity of Mycoplasma pneumoniae[END_REF] and Mycoplasma mycoides subsp. mycoides [START_REF] Vilei | Genetic and biochemical characterization of glycerol uptake in Mycoplasma mycoides subsp. mycoides SC: its impact on H(2)O(2) production and virulence[END_REF]. Moreover, the Mycoplasma hominis group is not the only one where hydrogen peroxide production via glpO has been reported. In some Spiroplasma species (specifically Spiroplasma taiwanense) and within the pneumoniae group (for instance in Mycoplasma penetrans), the presence of this enzyme was also associated with virulence [START_REF] Kannan | Hemolytic and hemoxidative activities in Mycoplasma penetrans[END_REF][START_REF] Lo | Comparison of metabolic capacities and inference of gene content evolution in mosquitoassociated Spiroplasma diminutum and S. taiwanense[END_REF].
Another major difference between our previous models was related to the presence of a complete transcriptional unit (TU) encoding proteins for the uptake and metabolism of myo-inositol in M. hyopneumoniae (with the exception of one enzyme). This could be another important trait for the enhanced virulence of this species if compared with the other two. Here, we studied this pathway in more detail to try to find this missing enzyme and the possible reasons as to why natural selection kept these genes only in this Mycoplasma species.
In a recent review, Maes and collaborators [START_REF] Maes | Update on Mycoplasma hyopneumoniae infections in pigs: Knowledge gaps for improved disease control[END_REF] emphasize the need for further investigation of the role of glycerol and myo-inositol metabolism and their contribution to virulence in M. hyopneumoniae. Here, we experimentally tested these two traits to show how they might be related to the different levels of pathogenicity, by comparing not only the species themselves but also different strains of M. hyopneumoniae. Contrary to what we anticipated, only the two pathogenic strains of M. hyopneumoniae were able to produce hydrogen peroxide in complex medium, and we confirmed that this production was dependent on the presence of glycerol. The myo-inositol metabolism, in turn, was tested with the aid of deuterated myo-inositol in Friis medium. We were able to detect by mass spectrometry (MS) a slight decrease in the marked myo-inositol concentration over time, indicating the ability of M. hyopneumoniae to take up this carbohydrate. We also show here that only the M. hyopneumoniae strains remained viable when myo-inositol was the primary energy source.
We present here two metabolic traits specific to M. hyopneumoniae that might be directly related to its enhanced virulence, especially in its ability to successfully overgrow the other two Mycoplasma species in the respiratory tract of swine, persist longer in this environment and possibly cause disease.
Results
Comparative genomics of glpO from glycerol metabolism
Highly conserved homolog genes to glpO from M. mycoides subsp. mycoides (EC 1.1.3.21) were found only in the genomes of M. hyopneumoniae and M. hyorhinis. Despite the annotation as a dehydrogenase in both M. hyopneumoniae and M. hyorhinis, we propose this enzyme to act as glycerol-3-phosphate oxidase (GlpO), using molecular oxygen as the final electron acceptor and producing DHAP and hydrogen peroxide. We therefore refer to the encoded protein in M. hyopneumoniae and M. hyorhinis as GlpO, rather than GlpD. The high similarity between these predicted proteins (Supplementary Figure S1A) may be an indication that this trait might be essential for the pathogenicity of these Mycoplasma species.
Particularly, the cytotoxicity of M. mycoides subsp. mycoides is considered to be related to the translocation of hydrogen peroxide into the host cells [START_REF] Bischof | Cytotoxicity of Mycoplasma mycoides subsp. mycoides small colony type to bovine epithelial cells[END_REF]. This is presumably possible because of the close proximity to the host cells along with the integral membrane protein GlpO [START_REF] Pilo | A metabolic enzyme as a primary virulence factor of Mycoplasma mycoides subsp. mycoides small colony[END_REF][START_REF] Pilo | Molecular mechanisms of pathogenicity of Mycoplasma mycoides subsp. mycoides SC[END_REF]. Several transmembrane prediction programs [START_REF] Hofmann | TMbase -A database of membrane spanning proteins segments[END_REF][START_REF] Krogh | Predicting transmembrane protein topology with a hidden Markov model: application to complete genomes[END_REF][START_REF] Combet | NPS@: network protein sequence analysis[END_REF][START_REF] Kahsay | An improved hidden Markov model for transmembrane protein detection and topology prediction and its applications to complete genomes[END_REF] identified putative transmembrane portions in the GlpO proteins from M. hyopneumoniae and M. hyorhinis (Supplementary Figure S1B). Similar results were reported for the homolog enzyme in M. mycoides subsp. mycoides [START_REF] Pilo | A metabolic enzyme as a primary virulence factor of Mycoplasma mycoides subsp. mycoides small colony[END_REF], and a recent proteomic study has detected GlpO from M. hyopneumoniae in surface-enriched extracts through LC-MS/MS (Personal communication from H. B. Ferreira, [START_REF] Machado | Comparative surface proteomic approach reveals qualitative and quantitative differences of two Mycoplasma hyopneumoniae strains and Mycoplasma flocculare[END_REF]).
Pathogenic M. hyopneumoniae strains produce hydrogen peroxide from glycerol
Contrary to what we had anticipated, we were only able to detect the production of hydrogen peroxide from the two pathogenic strains of M. hyopneumoniae (7448 and 7422) in Friis medium, as can be seen in Figure 1A. The attenuated strain from the same species (M. hyopneumoniae strain J), along with M. hyorhinis and M. flocculare did not produce detectable quantities of this toxic product. In order to verify if the amount of hydrogen peroxide produced was comparable between strains, we also counted the number of cells for each replicate. In this way, the two pathogenic strains produced approximately the same amount of hydrogen peroxide and had cell counts of the same order of magnitude (available in Supplementary Table S1).
We also show (Figure 1B) that the hydrogen peroxide produced by the M. hyopneumoniae strains 7448 and 7422 was dependent on the presence of glycerol in the incubation buffer.
Levels of glpO transcripts do not differ from pathogenic to attenuated strains of M. hyopneumoniae
We tested the three M. hyopneumoniae strains (7448, 7422 and J) in order to compare the mRNA expression levels of glpO gene by RT-qPCR. Since the transcript levels of normalizer genes were not comparable between strains, we used relative quantification normalized against unit mass; in our case, the initial amount of RNA. We chose one of the replicates from strain 7448 as the calibrator, and we were able to show (Figure 2 and Supplementary Table S2) that there was no significant difference in the transcript levels of glpO in all tested strains from M. hyopneumoniae.
Enzymes for the uptake and catabolism of myo-inositol are specific to M. hyopneumoniae strains
M. hyopneumoniae is the only reported species among the Mollicutes that contains genes involved in the catabolism of myo-inositol. Since Mycoplasma species seem to maintain a minimum set of essential metabolic capabilities, we decided to further investigate this pathway and the influence of its presence on the metabolism and pathogenicity of M. hyopneumoniae. The degradation of inositol can feed glycolysis with DHAP and also produces an acetyl coenzyme-A (AcCoA) (Figure 3). A TU for the myo-inositol catabolism is present in all M. hyopneumoniae strains, with the exception of the gene that codes for the enzyme 6-phospho-5-dehydro-2-deoxy-D-gluconate aldolase (IolJ, EC 4.1.2.29), responsible for the turnover of 6-phospho-5-dehydro-2-deoxy-D-gluconate (DKGP) into malonate semialdehyde (MSA).
The gene encoding IolJ in other organisms is similar to the one coding for enzyme fructose-bisphosphate aldolase (Fba) from glycolysis (EC 4.1.2.13).
There are two annotated copies of the gene fba in M. hyopneumoniae (fba and fba-1, Supplementary Table S3). We performed homology and gene context analyses, 3D comparative modeling and protein-ligand interaction analysis to check if either of them would be a suitable candidate for this activity.
The gene context and protein sequence alignment for 15 selected Fba homologs in Mollicutes can be seen in Supplementary Figures S2 and S3. Comparative models for both copies of Fba from M. hyopneumoniae and for the previously characterized IolJ and Fba from Bacillus subtilis [START_REF] Yoshida | myo-Inositol catabolism in Bacillus subtilis[END_REF] were constructed based on available structures of Fba in the PDB [START_REF] Berman | The Protein Data Bank[END_REF] (Figure 4 and Supplementary Table S4).
Fba structures from Escherichia coli and Giardia intestinalis were used to gather more information about substrate binding (Supplementary Figure S3). The alignment shows a highly conserved zinc binding site (residues marked as '*'), essential for substrate binding and catalysis. Positions 'a', 'b', 'c', 'd' and 'e' surround the substrate cavity. The structural analysis suggests that the interaction mode of DKGP (substrate of IolJ) with the zinc ion of the active site is similar to that observed for FBP (fructose-1,6-bisphosphate, substrate of Fba).
Nevertheless, the substrate specificity is strongly dependent on the residues that form the substrate cavity.
While there seems to be several common features between Fba and IolJ (residues 'c', 'd', 'e' and '*'), residue 'a' appears to be essential for the substrate interaction with IolJ. This residue is generally occupied by an arginine (R52) in several putative IolJs from other organisms (Supplementary Figure S4), and absent in all predicted Fbas analysed in this study. From the predicted structures, the presence of this positively charged arginine in IolJ seems to disfavour the interaction with the phosphate group of FBP whilst it is complementary to the carboxyl group from DKGP.
In this way, the predicted structure of Fba-1 from M. hyopneumoniae resembles more the Fba structures from the experimentally solved Fbas in B. subtilis, E. coli and G. intestinalis. The annotated Fba from M. hyopneumoniae, on the other hand, seems to be more similar to the IolJ structure from B. subtilis. Although functional studies are needed to test this hypothesis, we propose that all enzymes needed for the myo-inositol catabolism are present in M. hyopneumoniae.
M. hyopneumoniae is able to uptake myo-inositol from the culture medium
In order to ascertain the ability of different bacteria to uptake myo-inositol, we used two different approaches. The first was the use of marked myo-inositol in complex medium and analysis by MS, and the second was to check the viability of cells (through ATP production) whenever myo-inositol was used as primary energy source.
When we tested whether cells were able to take up the marked myo-inositol over the course of 48 h, we found no significant difference for M. flocculare and M. hyorhinis compared to the control medium (CTRL), as observed in Figure 5A. As expected, the concentrations of myo-inositol for both strains of M. hyopneumoniae after 48 h of growth were lower than in the control medium. We also collected two extra time points for M. hyopneumoniae strain 7448 and CTRL: 8 h and 24 h of growth (Figure 5B). At all time points, there is a significant difference between the residual marked myo-inositol and the control medium, which implies that M. hyopneumoniae is able to take up this carbohydrate from the medium. MS peak data are available in Supplementary Table S5.
Since we had glucose and glycerol present in this complex medium analysed by MS, we also wanted to check if the viability of the different strains and species changed when myo-inositol was the primary energy source. For this, we incubated cells in myo-inositol defined medium (depleted of glucose and glycerol) for 8 hours and measured the amount of ATP these cells were able to produce. This, in turn, was directly related to the amount of viable cells after cultivation in the specific medium tested. Considering we do not know the energetic yield and efficiency of each strain and species, we could not directly compare the amount of ATP produced between different organisms. For this reason, growth in regular defined medium (with glucose) for each strain was used as a normalization control, and the ratio of ATP production in the two media was used to compare the viable cells between strains. Since there was no other energy source available in the medium, and in accordance with our previous predictions and results, only M. hyopneumoniae cells remained viable (ranging from 75% to 280%) when compared to their control growth in the regular defined medium (Figure 6 and Supplementary Table S6). The viability of the other species in this medium was 11.5% for M. hyorhinis and 0.2% for M. flocculare. We also achieved similar results when comparing growth in myo-inositol defined medium versus Friis medium (Supplementary Figure S5).
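To illustrate this normalization logic, the short Python sketch below computes viability ratios from hypothetical ATP readings; the numeric values are invented so that the resulting percentages match those reported above, and are not measured data:

```python
# Viability as the ratio of ATP production in myo-inositol medium vs.
# glucose medium, per strain (values are illustrative, not measured data).
atp = {
    #  strain:  (myo-inositol medium, glucose medium)  [nM ATP]
    "MHP_7448": (1.6, 2.1),
    "MHP_7422": (0.9, 1.2),
    "MHP_J":    (5.6, 2.0),
    "MHR":      (0.23, 2.0),
    "MFL":      (0.004, 2.0),
}

for strain, (atp_inositol, atp_glucose) in atp.items():
    # The intra-strain ratio removes strain-specific energetic yield, so
    # viability percentages become comparable across species.
    viability = 100 * atp_inositol / atp_glucose
    print(f"{strain}: {viability:.1f}% viable cells vs. glucose control")
```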
Discussion
In this study, we wanted to find possible differences between pathogenic and attenuated strains of M. hyopneumoniae and also compare them with M. hyorhinis and M. flocculare and assess possible links to the enhanced virulence of M. hyopneumoniae. While M. hyopneumoniae strains 7422 and 7448 are considered pathogenic, strain J became attenuated after serial passages of in vitro culture; M. hyorhinis strain ATCC 17981 was isolated from swine but, to our knowledge, its level of pathogenicity has not been tested in vivo; and even though M. flocculare is not considered pathogenic, strain ATCC 27399 was isolated from a case of swine pneumonia (strain ATCC 27716 is derived from this strain). In our previous study [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF], through mathematical modeling, we predicted two traits of M. hyopneumoniae in silico that could be associated with its enhanced virulence: the myo-inositol catabolism and the link between the glycerol and the glycolysis metabolism, with the production of highly toxic hydrogen peroxide (by the activity of the GlpO enzyme). In this work, we tested whether these species indeed differed from each other regarding their ability (i) to produce hydrogen peroxide in vitro and whether this was related to the availability of glycerol, (ii) to uptake myo-inositol, and (iii) to remain viable in a defined medium with myo-inositol as the primary energy source. While the uptake of myo-inositol might be a general feature of M. hyopneumoniae, the production of hydrogen peroxide in complex medium seems to be specific to pathogenic strains of this species.
Glycerol metabolism and hydrogen peroxide production
Even though the GlpO enzyme was previously detected in proteomes from both pathogenic and attenuated strains of M. hyopneumoniae (232 and J) [START_REF] Pinto | Comparative proteomic analysis of pathogenic and non-pathogenic strains from the swine pathogen Mycoplasma hyopneumoniae[END_REF][START_REF] Pendarvis | Proteogenomic mapping of Mycoplasma hyopneumoniae virulent strain 232[END_REF], only the pathogenic strains tested in our study (7448 and 7422) were able to produce detectable amounts of hydrogen peroxide in Friis medium (Figure 1). To our knowledge, no other study up to now was able to show that M. hyopneumoniae strains were able to produce this toxic product in vitro [START_REF] Maes | Update on Mycoplasma hyopneumoniae infections in pigs: Knowledge gaps for improved disease control[END_REF]. We also show here that the production of hydrogen peroxide in the pathogenic strains of M. hyopneumoniae is dependent on the presence of glycerol (Figure 1B).
The metabolism of glycerol and the formation of hydrogen peroxide were described as essential for the cytotoxicity of lung pathogens M. mycoides subsp. mycoides [START_REF] Vilei | Genetic and biochemical characterization of glycerol uptake in Mycoplasma mycoides subsp. mycoides SC: its impact on H(2)O(2) production and virulence[END_REF] and M. pneumoniae [START_REF] Hames | Glycerol metabolism is important for cytotoxicity of Mycoplasma pneumoniae[END_REF]. Moreover, although both M. hyopneumoniae and M. flocculare can adhere to the cilia of tracheal epithelial cells in a similar way, only the adhesion of M. hyopneumoniae causes tissue damage [START_REF] Young | A tissue culture system to study respiratory ciliary epithelial adherence of selected swine mycoplasmas[END_REF].
We showed that the difference in enzyme activity was not related to the expression levels of the glpO gene in the strains tested (Figure 2). We did not find any extreme differences in their amino acid sequences either (Supplementary Figure S3). This could indicate that this enzyme undergoes posttranslational modifications in order to be active and/or that the intracellular availability of the substrate (glycerol) is a limiting step for its activity. Posttranslational modifications have been extensively reported experimentally in several proteins of M. hyopneumoniae [START_REF] Djordjevic | Proteolytic processing of the Mycoplasma hyopneumoniae cilium adhesin[END_REF][START_REF] Burnett | P159 is a proteolytically processed, surface adhesin of Mycoplasma hyopneumoniae: defined domains of P159 bind heparin and promote adherence to eukaryote cells[END_REF][START_REF] Pinto | Proteomic survey of the pathogenic Mycoplasma hyopneumoniae strain 7448 and identification of novel posttranslationally modified and antigenic proteins[END_REF][START_REF] Seymour | A processed multidomain Mycoplasma hyopneumoniae adhesin binds fibronectin, plasminogen, and swine respiratory cilia[END_REF][START_REF] Tacchi | Post-translational processing targets functionally diverse proteins in Mycoplasma hyopneumoniae[END_REF]. From transcriptomic and proteomic literature data, we were not able to find any enlightening differences in this pathway between strains or species (Supplementary Table S7).
As for the availability of intracellular glycerol, in our previous metabolic models we predicted differences in the metabolism of glycerol among the three Mycoplasma species (Supplementary Figure S6). While M. hyopneumoniae has five different routes for acquiring glycerol (dehydrogenation of glyceraldehyde, ABC transport of glycerol and of glycerol-phosphate, and import of glycerophosphoglycerol and of glycerophosphocholine), the other two species lack at least two of these reactions.
This might also limit the rate of production of hydrogen peroxide in each species.
The enhanced pathogenicity of M. hyopneumoniae over M. hyorhinis and M. flocculare may therefore also be due to hydrogen peroxide formation resulting from a higher uptake of glycerol as an energy source. Similarly, one reason that could partially explain why M. mycoides subsp. mycoides is highly pathogenic in comparison with the less pathogenic M. pneumoniae might be the greater intracellular availability of glycerol, due to the presence of a specific and very efficient ABC transporter in M. mycoides subsp. mycoides.
Since the production of hydrogen peroxide was not reported as essential to the in vivo virulence of Mycoplasma gallisepticum [START_REF] Szczepanek | Hydrogen peroxide production from glycerol metabolism is dispensable for virulence of Mycoplasma gallisepticum in the tracheas of chickens[END_REF], more studies are needed to better understand the importance of this metabolism in M. hyopneumoniae. Moreover, future biochemical and functional studies are needed to prove that GlpO is indeed responsible for the activity proposed here and to check if the enzyme in attenuated strains/species is functional.
Myo-inositol uptake and catabolism
M. hyopneumoniae is the only Mycoplasma species with a sequenced genome that has the genes for the catabolism of myo-inositol. Myo-inositol is an essential precursor for the production of inositol phosphates and inositol phospholipids in all eukaryotes [START_REF] Gonzalez-Salgado | Myo-Inositol uptake is essential for bulk inositol phospholipid but not glycosylphosphatidylinositol synthesis in Trypanosoma brucei[END_REF]. Myo-inositol is also widespread in the bloodstream of mammals [START_REF] Reynolds | Strategies for acquiring the phospholipid metabolite inositol in pathogenic bacteria, fungi and protozoa: making it and taking it[END_REF], which would make it a suitable energy source for bacteria in the extremely vascularized respiratory system. Previously, Mycoplasma iguanae was described to produce acid from inositol [START_REF] Brown | Mycoplasma iguanae sp. nov., from a green iguana (Iguana iguana) with vertebral disease[END_REF], but the methods used in that paper are not clear, and no complete genome is available for this organism from which to draw conclusions. Based on sequence homology, orthology, synteny and tridimensional analyses, we proposed a possible candidate for the missing enzyme IolJ in M. hyopneumoniae, namely a duplication of the fba gene from glycolysis. This functional divergence after duplication is particularly interesting in bacteria whose evolution was mostly driven by genome reduction. Another reported example of this event is the duplication of the trmFO gene in Mycoplasma capricolum and, more recently, in Mycoplasma bovis. The duplicated TrmFO in M. capricolum was reported to catalyze the methylation of 23S rRNA [START_REF] Lartigue | The flavoprotein Mcap0476 (RlmFO) catalyzes m5U1939 modification in Mycoplasma capricolum 23S rRNA[END_REF] while the duplicated copy in M. bovis has been described to act as a fibronectin-binding adhesin [START_REF] Guo | TrmFO, a Fibronectin-Binding Adhesin of Mycoplasma bovis[END_REF].
We showed here that M. hyopneumoniae was able to take up marked myo-inositol from a complex culture medium (Figure 5); in addition, this was the only species that remained viable when myo-inositol was used as the primary energy source (Figure 6). From our metabolic model predictions [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF], the use of myo-inositol would be much more costly than the uptake and metabolism of glucose, which corroborates the small uptake of myo-inositol in Friis medium (glucose-rich) (Figure 5). This basal uptake of myo-inositol could also be an indication that this pathway is important not only for energetic yield. Supporting this idea, microarray studies on strain 232 showed that several genes (if not all) from the myo-inositol catabolism were differentially expressed during stress treatments: heat shock (downregulated) [START_REF] Madsen | Transcriptional profiling of Mycoplasma hyopneumoniae during heat shock using microarrays[END_REF], iron depletion (upregulated) [START_REF] Madsen | Transcriptional profiling of Mycoplasma hyopneumoniae during iron depletion using microarrays[END_REF], and norepinephrine (downregulated) [START_REF] Oneal | Global transcriptional analysis of Mycoplasma hyopneumoniae following exposure to norepinephrine[END_REF]. Moreover, a previous transcriptome profiling of M. hyopneumoniae [START_REF] Siqueira | Unravelling the transcriptome profile of the swine respiratory tract mycoplasmas[END_REF] showed that all genes from the myo-inositol catabolism were transcribed under normal culture conditions. Furthermore, three genes from the pathway (iolB, iolC and iolA) belonged to the list of the 20 genes with the highest number of transcript reads. Besides the transcription of these genes, proteomic studies of M. hyopneumoniae strains 232 [START_REF] Pendarvis | Proteogenomic mapping of Mycoplasma hyopneumoniae virulent strain 232[END_REF], 7422, 7448 and J [START_REF] Pinto | Comparative proteomic analysis of pathogenic and non-pathogenic strains from the swine pathogen Mycoplasma hyopneumoniae[END_REF][START_REF] Reolon | Survey of surface proteins from the pathogenic Mycoplasma hyopneumoniae strain 7448 using a biotin cell surface labeling approach[END_REF] (Supplementary Table S7) showed that several enzymes from this pathway were present under normal culture conditions. Indeed, myo-inositol has been extensively reported in several organisms as a signaling molecule [START_REF] Downes | Myo-inositol metabolites as cellular signals[END_REF][START_REF] Gillaspy | The cellular language of myo-inositol signaling[END_REF]. Moreover, the myo-inositol catabolism has been experimentally described as a key pathway for competitive host nodulation in the plant symbiont and nitrogen-fixing bacterium Sinorhizobium meliloti [START_REF] Kohler | Inositol catabolism, a key pathway in Sinorhizobium meliloti for competitive host nodulation[END_REF]. Host nodulation is a specific symbiotic event between a host plant and a bacterium. Kohler and collaborators (2010) showed that whenever inositol catabolism is disrupted (by single gene knockouts from the inositol operon), the mutants are outcompeted by the wild type for nodule occupancy. This means that genes for the catabolism of inositol are required for successful competition in this particular symbiosis. Moreover, the authors were not able to find a suitable candidate for the IolJ activity.
In our case, we proposed that the activity of the missing enzyme IolJ is taken over by a duplication of fba. We were able to find a similar duplication (also not inside the myo-inositol cluster) in the genome of S. meliloti 1021 (SM_b21192 and SM_b20199, both annotated as fructose-bisphosphate aldolase, EC 4.1.2.13). This means that in at least one other symbiont that has the myo-inositol catabolism genes, there could exist a putative IolJ not close to the myo-inositol cluster, just as we proposed here.
Whether this entire pathway is functional in M. hyopneumoniae is yet to be tested and further experiments should take place to support this hypothesis. However, the ability of M. hyopneumoniae to persist longer in the swine lung if compared to the other two mycoplasmas might come from the fact that this species is able to uptake and process myo-inositol. Furthermore, the ability of M. hyopneumoniae to grow in diverse sites [START_REF] Carrou | Persistence of Mycoplasma hyopneumoniae in experimentally infected pigs after marbofloxacin treatment and detection of mutations in the parC gene[END_REF] if compared to M. flocculare might also be due to this specific trait.
Concluding remarks
It is important to remember that even though M. hyopneumoniae is considered highly pathogenic, the three Mycoplasma species studied here are widespread in pig populations and can easily be found in healthy hosts [START_REF] Fano | Dynamics and persistence of Mycoplasma hyopneumoniae infection in pigs[END_REF][START_REF] Pieters | An experimental model to evaluate Mycoplasma hyopneumoniae transmission from asymptomatic carriers to unvaccinated and vaccinated sentinel pigs[END_REF]. However, the main question permeating this fact is: what causes the switch from a non-pathogenic Mycoplasma community to a pathogenic one? And what makes some strains pathogenic while others inflict no harm on the host cells? Some strains of M. hyopneumoniae become less pathogenic in broth culture and, after serial passages, they lose their ability to produce gross pneumonia in pigs [START_REF] Whittlestone | Porcine mycoplasmas[END_REF]. In a proteomic study comparing strains 232 and J, researchers described that the attenuated strain J switches its focus to metabolism: it has developed better capabilities to profit from the rich culture medium, while the ability to infect host cells becomes less important and adhesion-related genes are downregulated [START_REF] Li | Proteomic comparative analysis of pathogenic strain 232 and avirulent strain J of Mycoplasma hyopneumoniae[END_REF]. This might be related to the fact that here we detected a higher production of ATP in this attenuated strain when compared to the pathogenic strains 7448 and 7422. Liu and collaborators [START_REF] Liu | Comparative genomic analyses of Mycoplasma hyopneumoniae pathogenic 168 strain and its high-passaged attenuated strain[END_REF] investigated genetic variations between M. hyopneumoniae strain 168 and its attenuated derivative 168-L and found that almost all reported Mycoplasma adhesins were affected by mutations. Tajima and Yagihashi [START_REF] Tajima | Interaction of Mycoplasma hyopneumoniae with the porcine respiratory epithelium as observed by electron microscopy[END_REF] reported that capsular polysaccharides from M. hyopneumoniae play a key role in the interaction between pathogen and host. Indeed, in several bacterial species it has been reported that the amount of capsular polysaccharide is a major factor in their virulence [START_REF] Corbett | The role of microbial polysaccharides in hostpathogen interaction[END_REF] and that it decreases significantly with in vitro passages [START_REF] Kasper | Capsular polysaccharides and lipopolysaccharides from two Bacteroides fragilis reference strains: chemical and immunochemical characterization[END_REF]. In this way, it is likely that the difference in pathogenicity between M. hyopneumoniae strains does not solely depend on their metabolism, but also on their ability to adhere to the host.
A recent metagenomic analysis of community composition [START_REF] Siqueira | Microbiome overview in swine lungs[END_REF] has described that M. hyopneumoniae is by far the most prevalent species in both healthy and diseased hosts. The difficult isolation of Mycoplasma species from diseased lung extracts is due to the fact that, in culture, fast-growing bacteria will outgrow the slow-growing mycoplasmas [START_REF] Mckean | Evaluation of diagnostic procedures for detection of mycoplasmal pneumonia of swine[END_REF]. This means that, in vitro, the competition for an energy source between fast- and slow-growing bacteria usually ends with an overpopulation of the fast-growing ones. Given the fact that mycoplasmas survive for longer periods inside the host even in competition with other bacteria [START_REF] Fano | Dynamics and persistence of Mycoplasma hyopneumoniae infection in pigs[END_REF][START_REF] Overesch | Persistence of Mycoplasma hyopneumoniae sequence types in spite of a control program for enzootic pneumonia in pigs[END_REF], we must assume that other factors exist that are usually not mimicked in cell culture.
While M. hyopneumoniae might cause no harm, depending mostly on the environment, the characteristics of the host, and the composition of this dynamic lung microbiome, any imbalance in this system is probably capable of turning a non-pathogenic community into a pathogenic one. The final conclusion is that the disease is a multifactorial process depending on several elements that include intra-species mechanisms, community composition, host susceptibility and environmental factors. One possibility is that the competition with fast-growing species could result in a lower carbohydrate concentration, and that M. hyopneumoniae might have to overcome this environmental starvation with the uptake of glycerol or myo-inositol. Since the uptake of myo-inositol does not lead to the production of any toxic metabolite, it is more interesting for persistence in the long run. Other bacteria will strongly compete for glucose and other related carbohydrates, while M. hyopneumoniae will have the entire supply of myo-inositol for itself. The uptake of glycerol as an energy source, on the other hand, will probably lead to the production of toxic hydrogen peroxide, as reported in other Mycoplasma species. This toxic product, combined with other toxins from the external bacteria in the system, would most probably recruit immune system effectors. Since M. hyopneumoniae has efficient mechanisms of host evasion [START_REF] Fano | Dynamics and persistence of Mycoplasma hyopneumoniae infection in pigs[END_REF][START_REF] Maes | Update on Mycoplasma hyopneumoniae infections in pigs: Knowledge gaps for improved disease control[END_REF], the newly introduced and fast-growing bacteria might be eliminated faster, and M. hyopneumoniae would in this way be able to persist longer than other species inside the host (as reported in vivo).
As mentioned before, virulence factors in Mycoplasma species cover a broader concept if compared to other species: they are genes not essential for in vitro conventional growth that are instead essential for optimal survival in vivo. From our M. hyopneumoniae metabolic models, neither the GlpO activity nor the uptake and metabolism of myo-inositol seem to be essential features for in vitro growth. However, we were able to show that they might be two metabolic traits important for the enhanced virulence of M. hyopneumoniae when compared to M. hyorhinis and M. flocculare and could be essential for its survival in vivo and directly affect its pathogenicity.
Experimental Procedures
Mycoplasma cultivation
We used the following strains for experimental validation: M. hyopneumoniae strains 7448, 7422 (field isolates) and J (ATCC 25934), M. hyorhinis ATCC 17981 and M. flocculare ATCC 27716. Cells were cultivated in Friis medium [START_REF] Friis | Some recommendations concerning primary isolation of Mycoplasma suipneumoniae and Mycoplasma flocculare a survey[END_REF] at 37 °C for varying periods of time with gentle agitation in a roller drum.
Hydrogen peroxide detection
Hydrogen peroxide was detected in culture medium by the Amplex® Red Hydrogen Peroxide/Peroxidase Assay Kit (Invitrogen Cat. No A22188), according to the manufacturer's manual. M. hyopneumoniae, M. hyorhinis and M. flocculare were cultivated for 48 h in modified Friis medium (with no Phenol Red) and thereafter centrifuged. The supernatant was used for the hydrogen peroxide readings compared to a standard curve (Supplementary Figure S7). The medium without bacterial growth was used as negative control. We used biological and technical triplicates to infer the average amount of hydrogen peroxide produced, and the concentration was standardized based on the average number of cells from each culture. Statistical analyses were performed using GraphPad Prism 6 software by one-way ANOVA followed by Dunnett's multiple comparison test considering M. flocculare as a control (p < 0.05).
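As a sketch of how such readings can be converted against a standard curve and standardized per cell count, consider the Python example below; all numeric values (curve points, fluorescence readings, cell density) are invented for illustration, and the linear fit is an assumption about the curve's working range:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical Amplex Red standard curve: H2O2 (uM) vs. fluorescence units.
std_conc = np.array([0.0, 0.5, 1.0, 2.5, 5.0])
std_fluo = np.array([12.0, 160.0, 310.0, 770.0, 1520.0])
fit = linregress(std_fluo, std_conc)  # regress conc on signal to invert the curve

sample_fluo = np.array([420.0, 455.0, 402.0])      # technical triplicate
conc_uM = fit.slope * sample_fluo + fit.intercept  # interpolated H2O2

cells_per_ml = 2.4e8                               # from flow cytometry
# Standardize by cell number so strains with different growth are comparable.
normalized = conc_uM.mean() / (cells_per_ml / 1e8)
print(f"{conc_uM.mean():.2f} uM H2O2 -> {normalized:.2f} uM per 1e8 cells/mL")
```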
In order to determine if the hydrogen peroxide production was dependent on the glycerol metabolism, we used the Merckoquant Peroxide Test (Merck Cat. No 110011) with a detection range of 0.5 to 25 μg of peroxide per mL of solution (as described in [START_REF] Hames | Glycerol metabolism is important for cytotoxicity of Mycoplasma pneumoniae[END_REF]). Fifteen mL of M. hyopneumoniae 7448 and 7422 strains were grown for 48 h in Friis medium, harvested by centrifugation at 3360 g and washed twice in the incubation buffer (67.7 mM HEPES pH 7.3, 140 mM NaCl, 7 mM MgCl 2 ). Cells were resuspended in 4 mL of incubation buffer and aliquots of 1 mL were incubated for 1 h at 37 °C. To induce hydrogen peroxide production, either glycerol or glucose (final concentration 100 μM or 1 mM) was added to the cell suspension and samples were incubated at 37 °C for an additional 2 h. Hydrogen peroxide levels were measured using colorimetric strips according to the manufacturer's instructions. Aliquots without any added carbon source served as an incubation control. The statistical significance of the results was calculated using one-way ANOVA followed by Dunnett's multiple comparison test (p < 0.05). The results represent four biological replicates with at least two technical replicates each.
Mycoplasma cell count with flow cytometry
Mycoplasma cells cultivated for hydrogen peroxide detection were sedimented at 3360 g for 20 min at 4 °C and washed three times with 0.9% NaCl (1× 3360 g for 20 min and 2× 3360 g for 4 min). Cells were resuspended in 1 mL of 0.9% NaCl and diluted 1:30 for flow cytometry readings in a Guava EasyCyte cytometer (Millipore, USA). Cells were characterized by side-angle scatter (SSC) and forward-angle scatter (FSC) on a four-decade logarithmic scale. Absolute cell counting was performed up to 5000 events and the samples were diluted until the cell concentration was below 500 cells/μL. The number of counts obtained was then converted to cells/mL.
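The final conversion step can be sketched as below; the `cells_per_ml` helper and the acquired volume are illustrative assumptions, since the protocol fixes only the 1:30 dilution:

```python
def cells_per_ml(events: int, acquired_volume_ul: float, dilution: float) -> float:
    """Convert cytometer event counts to cell concentration in the original sample."""
    return events / acquired_volume_ul * dilution * 1_000  # uL -> mL

# e.g. 4,200 events in 20 uL of a 1:30 dilution (acquired volume is an assumed setting);
# 210 cells/uL in the diluted sample is below the 500 cells/uL threshold.
print(f"{cells_per_ml(4_200, 20.0, 30.0):.2e} cells/mL")
```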
Transcript levels of glpO with the use of real-time quantitative RT-PCR
Total RNA was isolated from 20 mL cultures of M. hyopneumoniae strains 7448, 7422 and J grown at 37 °C for 24 h. Cells were harvested by centrifugation at 3360 g for 15 min, resuspended in 1 mL of TRIzol (Invitrogen, USA) and processed according to the manufacturer's instructions, followed by DNA digestion with 50 U of DNaseI (Fermentas, USA). Absence of DNA in the RNA preparations was monitored by PCR assays. The extracted RNA was analysed by gel electrophoresis and quantified with the Qubit™ system (Invitrogen, USA).
A first-strand cDNA synthesis reaction was conducted by adding 500 ng of total RNA to 500 ng of pd(N)6 random hexamer (Promega, USA) and 10 mM deoxynucleotide triphosphates. The mixture was heated at 65 °C for 5 min and then incubated on ice for 5 min. First-strand buffer (Invitrogen, USA), 0.1 M dithiothreitol and 200 U of M-MLV RT (Moloney Murine Leukemia Virus Reverse Transcriptase - Invitrogen, USA) were then added to a total volume of 20 μL. The reaction was incubated at 25 °C for 10 min and at 37 °C for 50 min, followed by 15 min at 70 °C for enzyme inactivation. A negative control was prepared in parallel, differing only by the absence of the RT enzyme. The quantitative PCR (qPCR) assay was performed using 1:2.5-diluted cDNA as template and Platinum SYBR Green qPCR SuperMix-UDG with ROX (Invitrogen, USA) with specific primers for glpO (5'GGTCGGGAACCTGCTAAAGC3' and 5'CCAGACGGAAACATCTTAGTTGG3') on a StepOne Real-Time PCR System (Applied Biosystems, USA). The qPCR reactions were carried out at 90 °C for 2 min and 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min. A melting curve analysis was done to verify the specificity of the synthesized products and the absence of primer dimers. The amplification efficiency was calculated with the LinRegPCR software application [START_REF] Ruijter | Amplification efficiency: linking baseline and bias in the analysis of quantitative PCR data[END_REF].
Comparative modeling and protein-ligand interaction analysis of Fba and IolJ
The SWISS-MODEL server [START_REF] Schwede | SWISS-MODEL: An automated protein homology-modeling server[END_REF][START_REF] Biasini | SWISS-MODEL: modelling protein tertiary and quaternary structure using evolutionary information[END_REF] was used for template search and the comparative modeling for all Fba and IolJ proteins in this study. The best homology models were selected according to coverage, sequence identity, Global Model Quality Estimation (GMQE) and QMEAN statistical parameters [START_REF] Benkert | QMEAN server for protein model quality estimation[END_REF][START_REF] Benkert | Toward the estimation of the absolute quality of individual protein structure models[END_REF]. The Fba from M. hyopneumoniae, along with IolJ and Fba from B. subtilis, were modeled using the crystal structure of fructose 1,6-bisphosphate aldolase from Bacillus anthracis in complex with 1,3-dihydroxyacetonephosphate (PDB 3Q94), while Fba-1 from M. hyopneumoniae was modeled using the fructose-1,6-bisphosphate aldolase from Helicobacter pylori in complex with phosphoglycolohydroxamic acid (PDB 3C52). Both selected templates have the same resolution (2.30 Å). Fba structures experimentally solved from E. coli [START_REF] Hall | The crystal structure of Escherichia coli class II fructose-1, 6-bisphosphate aldolase in complex with phosphoglycolohydroxamate reveals details of mechanism and specificity[END_REF] and G. intestinalis [START_REF] Galkin | Structural insights into the substrate binding and stereoselectivity of Giardia fructose-1,6-bisphosphate aldolase[END_REF] were used to include information about substrate binding in the active site. The DKGP and FBP ligands were drawn in Avogadro version 1.1.1 [START_REF] Hanwell | Avogadro: an advanced semantic chemical editor, visualization, and analysis platform[END_REF] by editing the tagatose-1,6-biphosphate (TBP) molecule complexed with the Fba structure of G. intestinalis (PDB 3GAY). Each model was submitted to 500 steps of an energy minimization protocol using the universal force field (UFF). The DKGP and FBP molecules were inserted into the substrate binding sites of the resulting models, obtained by superposition with the Fba structure of G. intestinalis.
Detection of marked myo-inositol through mass spectrometry
Sample preparation
All samples were filtered and concentrated with the use of Amicon Ultra 3 kDa filters (Merck Millipore Cat. No. UFC200324). After this step, samples were dried in a miVac sample concentrator (Genevac, Ipswich, UK) for approximately 45 min at 50 °C. All samples were resuspended in ultrapure water to a final concentration of 10 g/L and were subsequently submitted to mass spectrometry.
Mass spectrometry
Aqueous extracts of Mycoplasma sp. and commercial deuterated myo-inositol-1,2,3,4,5,6-d6 were analysed using an Accurate-Mass Q-TOF LC-MS 6530 with an LC 1290 Infinity system and a Poroshell 120 HILIC column (3 × 100 mm, 2.7 μm) (Agilent Technologies, Santa Clara, USA). The extracts were dissolved in water (10 g/L) and the injection volume was 3 μL. A binary mobile phase system (A: 0.4% formic acid in milliQ-water and B: acetonitrile) was pumped at a flow rate of 0.9 mL/min with the following gradient: 0-3.5 min, 90% B; 3.5-7 min, 90% to 0% B; 7-9.5 min, 0% B; 9.5-10 min, 0% to 90% B; 10-15 min, 90% B (total run: 15 min).
Determination of cell viability of M. hyopneumoniae in myo-inositol defined medium
All available strains were grown in Friis medium at 37 °C for 48 h, sedimented by centrifugation at 3360 g for 20 min at 4 °C, washed twice with ice-cold PBS and inoculated in regular defined medium with glucose (described in [START_REF] Ferrarini | Insights on the virulence of swine respiratory tract mycoplasmas through genome-scale metabolic modeling[END_REF], supplemented with 5 g/L of succinate) or in myo-inositol defined medium (regular defined medium depleted of glucose and glycerol and supplemented with 0.5 g/L of myo-inositol). Viability of cells was measured by ATP production with live cells recovered after 8 h of growth in either medium with a BacTiter-Glo™ Microbial Cell Viability Assay Kit (Promega, USA) according to the manufacturer's manual.
Luminescence was recorded in a SpectraMax MiniMax 300 Imaging Cytometer (Molecular Devices, USA) with an integration time of 0.5 s in an opaque-walled multiwell plate. Average ATP production was calculated with biological duplicates and technical triplicates. The ATP production of each strain was compared between regular defined medium and myo-inositol defined medium to determine the ratio of viable cells and to allow a comparison between strains. A 10-fold serial dilution of ATP was used as a standard curve (Supplementary Figure S8).
Statistical analyses were performed using GraphPad Prism 6 software by one-way ANOVA followed by Tukey's multiple comparison test (p < 0.05).

Chemicals

Acetonitrile and formic acid (Optima LC/MS Grade) were purchased from Fisher Scientific (Loughborough, UK). MilliQ water was obtained from a Direct-Q 5UV system (Merck Millipore, Billerica, Massachusetts, USA). Deuterated myo-inositol-1,2,3,4,5,6-d6 was purchased from CIL (C/D/N Isotopes Inc. Cat No. D-3019, Canada).

Cultivation in the presence of marked myo-inositol

Cells were cultivated in Friis medium supplemented with 0.25 g/L of deuterated myo-inositol-1,2,3,4,5,6-d6 (C/D/N Isotopes Inc. Cat No. D-3019). Cultures were interrupted after 8 h, 24 h and 48 h of cultivation for mass spectrometry analysis.

Fig. 1 Hydrogen peroxide production by swine mycoplasmas. A. In Friis medium after bacterial growth: hydrogen peroxide was only detected in growth media from the pathogenic strains (field isolates) of M. hyopneumoniae 7448 (MHP_7448) and 7422 (MHP_7422). Neither the attenuated strain J (MHP_J) nor the other species M. hyorhinis (MHR) and M. flocculare (MFL) produced detectable amounts of this toxic product. The concentration was also standardized based on the average number of cells from each culture. Data are presented as mean and standard deviation of three independent samples and statistical analysis was performed considering M. flocculare as a control strain (since it lacks the glpO gene). B. In the presence of different carbon sources: pathogenic M. hyopneumoniae strains were used to test hydrogen peroxide production in incubation buffer supplemented with either glycerol or glucose after 2 h of incubation. Both strains were able to produce significant amounts of the toxic product only in the presence of glycerol.

Fig. 2 Expression levels of the glpO gene in M. hyopneumoniae strains. We did not find any significant difference in the transcript levels of glpO among all tested strains. Bars show the average relative quantification normalized against unit mass (500 ng of total RNA); replicate 2 from strain 7448 was used as the calibrator. Average expression levels were calculated with independent biological triplicates (p < 0.05).

Fig. 3 Myo-inositol catabolism pathway in all M. hyopneumoniae strains and its transcriptional unit in M. hyopneumoniae strain 7448. Metabolites are depicted in dark green and enzymatic activities present in M. hyopneumoniae can be seen in pink. Metabolite abbreviations are as follows: MI (myo-inositol), 2KMI (2-keto-myo-inositol), THcHDO (3D-(3,5/4)-trihydroxycyclohexane-1,2-dione), 5DG (5-deoxy-D-glucuronate), DKG (2-deoxy-5-dehydro-D-gluconate), DKGP (6-phospho-5-dehydro-2-deoxy-D-gluconate), MSA (malonate semialdehyde), AcCoA (acetyl coenzyme-A), DHAP (dihydroxyacetone phosphate). EC 1.2.1.27: methylmalonate-semialdehyde dehydrogenase (IolA).

Fig. 4 Substrate cavity prediction for Fba and Fba-1 from M. hyopneumoniae strain 7448. Cavities from the comparative models of Fba and Fba-1 from M. hyopneumoniae in comparison to the models constructed for Fba and IolJ from B. subtilis. The specificity for DKGP in IolJ seems to be strongly associated with the presence of a conserved arginine in position 'a' (R52 in Fba-1 from M. hyopneumoniae). In contrast, Fbas generally bear glycines in this position (for a complete explanation see Supplementary Figures S3 and S4). While Fba-1 from M. hyopneumoniae resembles more the experimentally solved Fba enzymes from B. subtilis, E. coli and G. intestinalis, the predicted structure of Fba from M. hyopneumoniae is more similar to the IolJ structure from B. subtilis.

Fig. 5 Deuterated myo-inositol-1,2,3,4,5,6-d6 uptake in complex medium. A. Comparison after 48 h of growth of M. hyopneumoniae J ATCC 25934 (MHP_J) and field isolate 7448 (MHP_7448), M. flocculare ATCC 27716 (MFL) and M. hyorhinis ATCC 17981 (MHR). While there is no significant difference in the concentrations between MFL, MHR and the control medium (CTRL), both M. hyopneumoniae strains seem to be able to take up myo-inositol. B. We also collected two extra time points for MHP_7448 and CTRL: 8 h and 24 h of growth. At all time points there is a significant difference between residual marked myo-inositol and the control medium. Data are presented as mean and standard deviation of 4 independent biological replicates. Asterisks indicate statistically significant differences in residual marked myo-inositol (*p < 0.05; **p < 0.01).

Fig. 6 Viability of M. hyopneumoniae, M. hyorhinis and M. flocculare after 8 hours of incubation in myo-inositol defined medium. The viability of cells in myo-inositol defined medium was measured by ATP production in comparison to inoculation in regular defined medium (glucose-containing medium). Data are represented as the ratio between ATP production in the two media. There is a significant decrease of ATP production in M. hyorhinis and M. flocculare, whereas at least 75% of the cells from M. hyopneumoniae remained viable after cultivation in the myo-inositol defined medium (***p < 0.001; ****p < 0.0001).
For relative quantification of glpO transcripts, expression data were normalized against unit mass (500 ng of total RNA) and analysed with the equation Ratio(test/calibrator) = 2^(ΔCT), where ΔCT = CT(calibrator) − CT(test) [80], and MHP_7448 (replicate 2) was chosen as the calibrator. Statistical analyses were performed using GraphPad Prism 6 software by one-way ANOVA followed by Tukey's multiple comparison test (p < 0.05).
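A minimal sketch of this calculation in Python, with invented CT values; as above, replicate 2 of strain 7448 serves as the calibrator and no normalizer gene is used because the input RNA mass is identical across samples:

```python
import numpy as np

# Hypothetical CT values for glpO, one per biological replicate.
ct = {"MHP_7448": [22.8, 22.5, 23.1],
      "MHP_7422": [22.9, 23.0, 22.6],
      "MHP_J":    [22.7, 23.2, 22.9]}
ct_calibrator = ct["MHP_7448"][1]  # replicate 2 of strain 7448

for strain, values in ct.items():
    # Ratio(test/calibrator) = 2**(CT_calibrator - CT_test)
    ratios = [2 ** (ct_calibrator - c) for c in values]
    print(f"{strain}: mean relative expression {np.mean(ratios):.2f}")
```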
MS and MS/MS spectra were obtained in negative mode, with the following conditions: nebulization gas (nitrogen) at 310 °C, at a flow of 10 L/min and a pressure of 40 psig. The capillary voltage was 3600 V and the ionisation energy was 100 eV. In targeted MS/MS mode, collision energy was set at 18 eV. The acquisition range was m/z 50-500. MassHunter Qualitative Analysis Software (version B.07.00) was used for data analysis.
Data analysis
Deuterated myo-inositol-1,2,3,4,5,6-d6 was quantified in all aqueous extracts by HPLC-MS. For that, a calibration curve (based on peak area) of commercial myo-inositol was performed from 0.001 g/L to 0.05 g/L in replicate (4 times during the batch analysis). Statistical analyses were performed using GraphPad Prism 6 software. One-way ANOVA followed by Dunnett's multiple comparison test was used to test for differences in residual marked myo-inositol in culture after bacterial growth of all tested strains for 48 h (p < 0.05). A two-tailed unpaired t-test was used to compare the residual marked myo-inositol between M. hyopneumoniae 7448 and the control medium at two extra time points: 8 and 24 h (p < 0.05).
Acknowledgments
This work was supported by grants from CAPES-COFECUB 782/13 and Inria. MGF was granted a postdoctoral fellowship funded by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. [247073]. SGM was the recipient of a CAPES doctoral fellowship. DP was granted a postdoctoral fellowship funded by the European Union Framework Program 7, Project BacHbERRY number FP7-613793. JFRB is a recipient of a CAPES postdoctoral fellowship. The mass spectrometry analysis was carried out in the Centre d'Etude des Substances Naturelles at the University of Lyon.
Authors' contributions
MGF, SGM, MFS and AZ conceived and designed the work. MGF and SGM performed most of the experimental work. DP, GM and GC collaborated in the mass spectrometry experiments and analysis. JFB performed tridimensional analysis of proteins. All authors collaborated in the analysis of all data. MGF and SGM wrote the manuscript with inputs from the other authors. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Supplementary material
Table S1: Cytometry cell counts and replicate readings for the calculation of hydrogen peroxide production. Table S2: Replicate readings of real-time RT-qPCR for glpO transcript relative expression. Table S3: Gene locus tags for the genes from the uptake and metabolism of myo-inositol in M. hyopneumoniae strains. Table S4: Comparative modeling summary. Table S5: Average peak surface for marked myo-inositol from mass spectrometry experiments. Table S6: ATP production replicates and average for each sample. Table S7: Literature experimental data available for genes important for the glycerol and myo-inositol metabolism. File S1: Statistical analyses results. |
01467553 | en | [
"sdv.bibs",
"sdv.bbm.bc"
] | 2024/03/05 22:32:13 | 2017 | https://univ-rennes.hal.science/hal-01467553/file/FDR-controlled%20metabolite%20annotation--accepted.pdf |
Andrew Palmer
Prasad Phapale
Ilya Chernyavsky
Regis Lavigne
Dominik Fay
Artem Tarasov
Vitaly Kovalev
Jens Fuchser
Sergey Nikolenko
Charles Pineau
Michael Becker
Theodore Alexandrov
email: [email protected]
FDR-controlled metabolite annotation for high-resolution imaging mass spectrometry
High mass-resolution (HR) MS that discriminates metabolites differing by a few mDa promises to achieve unprecedented reliability of metabolite annotation. However, no bioinformatics exists for automated metabolite annotation in HR imaging MS. This has restricted this powerful technique mainly to targeted imaging of a few metabolites only [START_REF] Spengler | Mass spectrometry imaging of biomolecular information[END_REF] . Existing approaches either need visual examination or are based on exact mass filtering, known to produce false positives even for ultra-HR MS [START_REF] Kind | Metabolomic database annotations via query of elemental compositions: mass accuracy is insufficient even at less than 1 ppm[END_REF] . This gap can be explained by the novelty of the field and the high requirements on the algorithms, which should be robust to strong pixel-to-pixel noise and efficient enough to mine 10-100 gigabyte datasets.
An additional obstacle is the lack of a metabolomics-compatible approach for estimating False Discovery Rate (FDR) [START_REF] Benjamini | Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing[END_REF][START_REF] Storey | A direct approach to false discovery rates[END_REF] . FDR is defined as the ratio of false positives in a set of annotations. FDR is a cornerstone of quantifying quality of annotations in genomics, transcriptomics, and proteomics [START_REF] Käll | Assigning significance to peptides identified by tandem mass spectrometry using decoy databases[END_REF] . The proteomics target-decoy FDR-estimation is not directly applicable in metabolomics where there is no equivalent of a decoy database of implausible peptide sequences. An FDR-estimate in metabolomics was proposed earlier [START_REF] Matsuda | Assessment of metabolome annotation quality: a method for evaluating the false discovery rate of elemental composition searches[END_REF] but is limited to phytochemical metabolites, has not found widespread use and cannot be applied to imaging MS as it does not allow incorporating spatial information. An alternative approach to estimate FDR is to use a phantom sample with controlled molecular content but it is inherently complex and narrowed to a specific protocol.
We have addressed this double challenge and developed a comprehensive bioinformatics framework for FDR-controlled metabolite annotation for HR imaging MS. Our open-source framework (https://github.com/alexandrovteam/pySM) is based on the following principles: database-driven annotation by screening for metabolites with known sum formulas, an original Metabolite-Signal Match (MSM) score combining spectral and spatial measures, a novel target-decoy FDR-estimation approach with a decoy set generated by using implausible adducts.
Our framework takes as input: 1) an HR imaging MS dataset in the imzML format, 2) a database of metabolite sum formulas in a CSV format (e.g., HMDB [START_REF] Wishart | HMDB 3.0--The Human Metabolome Database in 2013[END_REF] ), 3) an adduct of interest (e.g., +H, +Na, +K). For a specified FDR level (e.g., 0.1), the framework provides metabolite annotations: metabolites from the database detected as present in the sample. The framework cannot resolve isomeric metabolites; the provided putative molecular annotations are on the level of sum formulas [START_REF] Sumner | Proposed minimum reporting standards for chemical analysis Chemical Analysis Working Group (CAWG) Metabolomics Standards Initiative (MSI)[END_REF] .
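To make the input contract concrete, a minimal Python sketch of assembling the target ions is given below. The CSV column name "sum_formula" and the function name are illustrative assumptions, not the actual pySM interface:

import csv

def load_target_ions(database_csv, adduct="+H"):
    # Pair each unique sum formula from the database export with the
    # adduct of interest; assumes an illustrative "sum_formula" column.
    with open(database_csv) as f:
        formulas = sorted({row["sum_formula"] for row in csv.DictReader(f)})
    return [(formula, adduct) for formula in formulas]

# Example: load_target_ions("hmdb_v2_5.csv", "+H") -> [("C10H12N2O", "+H"), ...]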
Our novel MSM score quantifies the likelihood of the presence of a metabolite with a given sum formula in the sample (Figure 1; Supplementary Note 1, Figure S2). For an ion (sum formula plus ion adduct, e.g., +H), we generate its isotopic pattern accounting for the instrument resolving power with isotopic fine structure if resolvable. Then, we sample from the imaging MS dataset an ion signal, namely, the ion images for all isotopic peaks with predicted intensity greater than 0.01% of the principal peak (Supplementary Note 1, Figure S1). MSM is computed by multiplying the following measures. (1) Measure of spatial chaos quantifies spatial informativeness within the image of the principal peak [START_REF] Alexandrov | Testing for presence of known and unknown molecules in imaging mass spectrometry[END_REF] . We introduce an improved measure of spatial chaos (Algorithm OM1) which outperforms earlier proposed measures [START_REF] Alexandrov | Testing for presence of known and unknown molecules in imaging mass spectrometry[END_REF][START_REF] Wijetunge | EXIMS: an improved data analysis pipeline based on a new peak picking method for EXploring Imaging Mass Spectrometry data[END_REF] in both speed and accuracy (Supplementary Note 1). (2) Spectral isotope measure quantifies spectral similarity between a theoretical isotopic pattern and relative sampled isotopic intensities. (3) Spatial isotope measure quantifies spatial co-localization between isotopic ion images. The MSM score of 1 indicates the maximal likelihood of the signal to correspond to the ion.
Our novel FDR-estimate helps select an MSM cutoff so that the ions with MSM scores above the cutoff will confidently correspond to metabolites from the sample (Figure 1; Supplementary Note 1, Figure S2). According to the target-decoy approach [START_REF] Käll | Assigning significance to peptides identified by tandem mass spectrometry using decoy databases[END_REF] , we propose to construct a decoy set as follows. We define a target set as ions from a metabolite database with a given ion adduct (e.g., +H). We define the decoy set as ions for the same sum formulas but with the following implausible adducts. For each sum formula, we randomly select an implausible adduct from the CIAAW 2009 list of the elements (e.g., +B, +Db, +Ag) excluding plausible adducts. MSM scores are calculated for target and decoy ions. For any MSM cutoff, FDR is estimated as the ratio between the numbers of decoy false positives (the decoy ions with MSM scores above the cutoff, FP_D) and target positives (the target ions with MSM scores above the cutoff). Here, we approximate the number of target false positives (FP_T) by FP_D, assuming the target and decoy sets to be similar. The sampling of implausible adducts is repeated, averaging the resulting FDR-estimate. FDR-controlled metabolite annotation is performed by specifying the desired value of FDR (e.g., 0.1) and choosing the smallest MSM cutoff providing the desired FDR (Figure 1; Supplementary Note 1, Figure S2). FDR-controlling provides annotations of a given confidence independently of the MSM cutoff, dataset, MS settings and operator, and can be used for comparative and inter-lab studies.
We evaluated the proposed FDR-estimation (Supplementary Note 1). First, we studied the similarity between the decoy and target ions required to fulfill FP_D ≈ FP_T. Relative intensities of isotopic patterns for target and decoy ions were found to be similar (Figure 2a), although the decoy ions have higher relative intensities for heavier isotopic peaks due to more complex isotopic patterns. The target and decoy ions were also found to be similar in the m/z- and mass defect-space (Figure 2b), with a positive offset in m/z for decoy adducts, which typically have heavier elements. Second, we compared the estimated and true FDR for a simulated dataset with a known ground truth (Figure 2c; Supplementary Note 1). Although there is some difference in the low-values region, the estimated FDR follows the true FDR overall. Finally, negative control experiments using each of the implausible adducts as a target one showed that FDR values for implausible adducts are characteristically higher (Figure 2d; Supplementary Note 1).
We showcased our framework on HR imaging MS datasets from two (a1 and a2) female adult wild-type mice (Supplementary Note 1). The brains were extracted, snap-frozen, and sectioned using a cryostat. Five coronal sections were collected from each brain: 3 serial sections (s1-s3) at the Bregma 1.42 mm, s4 at -1.46 mm and s5 at -3.88 mm. The sections were imaged using a 7T MALDI-FTICR mass spectrometer solariX XR (Bruker Daltonics) in the positive mode with 50 µm raster size. Each dataset was 20-35 gigabytes in size. FDR-controlled annotation was performed with the desired level of FDR=0.1 for metabolites from HMDB with +H, +Na, +K adducts, and m/z-tolerance of 2.5 ppm (Figure 2e-i). Venn diagrams of annotated metabolites (Figure 2e) show a high reproducibility between sections from the same animal (especially between the serial sections from a2 where 51 of 73 sum formulas were annotated in all three sections), and between the animals (only two sum formulas were annotated in the animal a1 only). The numbers of detected adducts were similar (Figure 2f). Exemplary molecular images of annotations illustrate a high reproducibility between technical replicates and animals (Figure 2g). Mostly phospholipids were detected (PCs, PEs, SMs, PAs; Supplementary Note 1, Table S5 and Figure S10), which is typical for MALDI imaging MS of brain tissue using the HCCA matrix [START_REF] Gode | Lipid imaging by mass spectrometry --a review[END_REF] . From overall 103 annotations, 16 representative ones were validated with LC-MS/MS by either using authentic standards or assigning fragment structures to MS/MS data (Supplementary Note 3).
We demonstrated the potential of using FDR curves in two examples. First, we showed that MSM outperforms the individual measures (Figure 2h; Supplementary Note 1, Figure S8). The exact mass filtering performs significantly worse, achieving the lowest FDR=0.25 for 10 annotations (vs. FDR=0 for the same number of annotations when using MSM). Second, we demonstrated that the number of FDR-controlled annotations decreases with the decreasing mass resolving power (Figure 2i; Supplementary Note 1, Figure S9). For this, we artificially reduced mass resolving power by using different m/z-tolerances when sampling m/z-signals: 1, 2.5 (default), 5, 30, 100, 1000, and 5000 ppm. This indicates that a high mass accuracy and resolution are essential for confident metabolite annotation.
Our framework is directly applicable to other types of HR imaging MS with FTICR or Orbitrap analyzers (MALDI-, DESI-, SIMS-, IR-MALDESI-, etc.; with proper adducts to be selected for each source) and other types of samples (plant tissue, cell culture, agar plate, etc.) for which a proper metabolite database can be selected.
Accession Codes
MTBLS313: imaging mass spectrometry data from mouse and rat brain samples, MTBLS317: simulated imaging mass spectrometry data and MTBLS378: LC-MS/MS data from mouse brain samples.
Tables N/A
Online Methods
Imaging mass spectrometry
1.1 Imaging mass spectrometry data from mouse brain samples
Samples
Two female adult wild-type C57 mice (a1, a2) were obtained from Inserm U1085 - Irset Research Institute (University of Rennes 1, France). Animals were aged 60 days and were reared under ad-lib conditions. Care and handling of all animals complied with EU directive 2010/63/EU on the protection of animals used for scientific purposes. The whole brain was excised from each animal immediately post-mortem, loosely wrapped in aluminum foil to preserve its morphology and snap-frozen in liquid nitrogen. Frozen tissues were stored at -80 °C until use to avoid degradation.
Sample preparation
For each animal, five coronal 12 µm-thick brain sections were collected on a cryomicrotome CM3050S (Leica, Wetzlar, Germany) as follows. Three consecutive sections were acquired at the Bregma distance of 1.42 mm (sections s1, s2, s3) and two further sections were acquired at the Bregma distances of -1.46 and -3.88 mm (datasets s4 and s5). The sections were thaw-mounted onto indium tin oxide (ITO) coated glass slides (Bruker Daltonics, Bremen, Germany) and immediately desiccated. Alpha-cyano-4-hydroxycinnamic acid (HCCA) MALDI matrix was applied using the ImagePrep matrix deposition device (Bruker Daltonics). The method for matrix deposition was set as follows: after an initialization step consisting of 10-15 cycles with a spray power of 15%, an incubation time of 15 s and a drying time of 65 s, 3 cycles were performed under sensor control with a final voltage difference of 0.07 V, a spray power of 25%, an incubation time of 30 s, a drying time under sensor control at 20% and a safe dry of 10 s; then 6 cycles were performed under sensor control with a final voltage difference of 0.07 V, a spray power of 25%, an incubation time of 30 s, a drying time under sensor control at 20% and a safe dry of 15 s; 9 cycles were performed under sensor control with a final voltage difference of 0.2 V, a spray power of 15%, an incubation time of 30 s, a drying time under sensor control at 20% and a safe dry of 50 s; finally 20 cycles were performed under sensor control with a final voltage difference of 0.6 V (+/-0.5 V), a spray power of 25%, an incubation time of 30 s, a drying time under sensor control at 40% and a safe dry of 30 s.
Imaging mass spectrometry
For MALDI-MS measurements the prepared slides were mounted into a slide adapter (Bruker Daltonics) and loaded into the dual source of a 7T FTICR mass spectrometer solariX XR (Bruker Daltonics) equipped with a ParaCell, at the resolving power R=130000 at m/z 400. The x-y raster width was set to 50 µm using smartbeam II laser optics with the laser focus setting 'small' (20-30 µm). For each pixel, a spectrum was accumulated from 10 laser shots. The laser was running at 1000 Hz and the ions were accumulated externally (hexapole) before being transferred into the ICR cell for a single scan. For animal a1, each spectrum was internally calibrated by one-point correction using a known phospholipid with the ion C42H82NO8P+K+, at m/z 798.540963. For animal a2, every spectrum was internally calibrated by several-point correction using: the matrix cluster of HCCA [C20H14N2O6+H+, m/z 379.092462] if present and known phospholipids present in the mouse brain [C40H80NO8P+H+, m/z 734.569432] and [C42H82NO8P+K+, m/z 798.540963]. Data was acquired for the mass range 100 < m/z < 1200 followed by a single zero filling and a sine apodization. Online feature reduction was performed in the ftmsControl software, version 2.1.0 (Bruker Daltonics) to return only the peak centroids and intensities.
Signal processing
Centroid data was exported into the imzML format by using the SCiLS Lab software, version 2016a (SCiLS, Bremen, Germany). Ion images were generated with the tolerance ±2.5 ppm. A hot-spot removal was performed for each image independently by setting the value of 1% highest-intensity pixels to the value of the 99'th percentile followed by an edgepreserving denoising using a median 3x3-window filter.
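A minimal sketch of this per-image post-processing, assuming ion images are given as 2-D numpy arrays (the clipping percentile and window size follow the text):

import numpy as np
from scipy.ndimage import median_filter

def postprocess_ion_image(img):
    # Hot-spot removal: set the 1% highest-intensity pixels to the value
    # of the 99th percentile, then denoise with a 3x3 median filter.
    img = np.asarray(img, dtype=float)
    clipped = np.minimum(img, np.percentile(img, 99))
    return median_filter(clipped, size=3)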
Data availability
The imaging mass spectrometry data is publicly available at the MetaboLights repository under the accession number MTBLS313.
Simulated imaging mass spectrometry data
An imaging MS dataset was simulated that contained 300 sum formulas from the HMDB metabolite database, version 2.5, and 300 randomly generated formulas not contained in HMDB. To each sum formula, either a +H, +Na, or +K adduct was randomly assigned. Random sum formulas were generated such that the probability distributions of the number of CHNOPS atoms, the C-H ratio, and the C-O ratio are the same as for all formulas from HMDB. Isotope patterns were generated for each formula at a resolving power of R=140000 at m/z 400. Each isotope pattern was multiplied by a random intensity in the range [0.2-1.0]. The patterns were assigned to one of two partially overlapping square regions: one with sum formulas from HMDB, the other with sum formulas not from HMDB. Additionally, 700 peaks at randomly selected m/z-values were added independently to each spectrum so that a spectrum inside one of the squares would have 3500 ± 127 peaks. The resulting line spectra were then convolved with a Gaussian function with the sigma equal to 0.015.
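The noise model can be sketched as follows, assuming centroided line spectra given as m/z and intensity arrays; the noise-peak count and Gaussian width follow the text, while the amplitudes of the noise peaks are an assumption:

import numpy as np

rng = np.random.default_rng(0)

def add_noise_and_blur(peak_mz, peak_int, mz_axis, n_noise=700, sigma=0.015):
    # Add randomly placed noise peaks to a simulated line spectrum and
    # convolve the result with a Gaussian of the given sigma.
    noise_mz = rng.uniform(mz_axis.min(), mz_axis.max(), n_noise)
    noise_int = rng.uniform(0.0, 1.0, n_noise)  # assumed noise amplitudes
    all_mz = np.concatenate([peak_mz, noise_mz])
    all_int = np.concatenate([peak_int, noise_int])
    profile = np.zeros_like(mz_axis)
    for m, a in zip(all_mz, all_int):
        profile += a * np.exp(-0.5 * ((mz_axis - m) / sigma) ** 2)
    return profile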
Data availability
The simulated imaging mass spectrometry data is publicly available at the MetaboLights repository under the accession number MTBLS317.
Metabolite-Signal Match score
Individual measures used in the Metabolite-Signal Match (MSM) score were defined based on the ion images generated from each peak within the isotope pattern for a particular sum formula and adduct. Isotope envelopes were predicted for an ion (sum formula plus adduct) at the mass resolution of the dataset and peak centroids were detected.
Measure of spatial chaos
The measure of spatial chaos (Algorithm OM1) quantifies whether the principal ion image is informative (structured) or non-informative (noise). This approach was previously proposed by us for image-based peak picking [START_REF] Alexandrov | Testing for presence of known and unknown molecules in imaging mass spectrometry[END_REF] but here we developed an improved measure based on the concept of level sets earlier applied for image segmentation [START_REF] Vese | A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model[END_REF] . For an ion image, its range of intensities is split into a number of levels. For each level, a level set is calculated as a 0-or-1-valued indicator set having 1-values for pixels with intensities above the level. Then, the number of closed 1-valued objects (connected areas of 1-valued pixels) in the produced level set is computed. Images with structure tend to exhibit a small number of objects that simply shrink in size as the threshold increases whilst images with a noisy distribution produce a great number of objects as the pixels above the threshold level are randomly spatially distributed (see Figure S3a). The algorithm was inspired by a concept of computational topology called persistent homology [START_REF] Edelsbrunner | Persistent homology-a survey[END_REF] . The proposed measure of spatial chaos returns a value between zero and one which is high for spatially-structured images and low for noisy images. The object-counting step uses the label function from scipy [25] with 4-connectivity and returns the number of disconnected objects in an image.
The computational complexity of the level-sets algorithm is O(n_levels · p), where p is the number of pixels. The parameter n_levels controls the smoothness of the curve seen in Figure S3b and above a certain granularity the value of the measure stabilises to a constant for a particular image. A fixed number of levels was found to be sufficient to provide stable results for both the test images and random noise (data not shown).
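A minimal Python sketch of the measure is given below; the number of levels and the final normalisation of the object counts are illustrative choices, and the published implementation may differ in these details:

import numpy as np
from scipy.ndimage import binary_closing, label

def measure_of_spatial_chaos(img, n_levels=30):
    # Scale intensities to [0, 1], then for each level threshold the
    # image, fill single-pixel holes and count connected objects
    # (scipy's default structuring element gives 4-connectivity).
    img = np.asarray(img, dtype=float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    counts = []
    for n in range(1, n_levels + 1):
        level_set = img > n / (n_levels + 1)
        level_set = binary_closing(level_set)   # fill isolated holes
        _, n_objects = label(level_set)
        counts.append(n_objects)
    # Few objects per level -> structured image -> score near 1;
    # many objects -> noise -> score near 0 (illustrative normalisation).
    return 1.0 - np.mean(counts) / img.size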
Spatial isotope measure
The spatial isotope measure quantifies the spatial similarity between the ion images of isotopic peaks, composing a signal for a sum formula. It is calculated as a weighted average linear correlation between the ion image from the most intense isotope peak (x_1) and all others (x_i, i = 2, …, n), where n is the number of theoretically predicted isotope peak centroids for a particular sum formula and adduct with an intensity greater than 1% of the principal (largest) peak. Each image is weighted by the relative abundance a_i of the theoretical isotope peak height. Negative values are set to zero so the spatial isotope measure returns a value between zero and one; higher values imply a better match.

$\rho_{spatial} = \frac{\sum_{i=2}^{n} a_i \, \max\left(0, \mathrm{corr}(x_1, x_i)\right)}{\sum_{i=2}^{n} a_i}$

Equation OM1. Spatial isotope measure quantifying the spatial similarity of each isotope peak to the principal peak, where corr(·,·) returns Pearson's correlation coefficient and x_i is a vector of intensities from the ion image of the i'th isotope peak.
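A direct Python transcription of Equation OM1, assuming the isotope ion images are 2-D numpy arrays ordered by decreasing theoretical intensity:

import numpy as np

def spatial_isotope_measure(images, abundances):
    # Weighted average Pearson correlation between the principal-peak
    # image and each remaining isotope image; negative correlations
    # are clipped to zero, so the result lies in [0, 1].
    x1 = np.asarray(images[0], dtype=float).ravel()
    num = den = 0.0
    for img, a in zip(images[1:], abundances[1:]):
        xi = np.asarray(img, dtype=float).ravel()
        r = np.corrcoef(x1, xi)[0, 1]
        if np.isnan(r):   # constant image: treat as uncorrelated
            r = 0.0
        num += a * max(r, 0.0)
        den += a
    return num / den if den > 0 else 0.0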
Spectral isotope measure
The spectral isotope measure quantifies the spectral similarity between a predicted isotope pattern and the measured spatial intensities. It is calculated as the average difference between normalised predicted isotope ratios and normalised measured intensities, reported so that larger values imply a better match.

$\rho_{spectral} = 1 - \frac{1}{n} \sum_{i=1}^{n} \left| \frac{a_i}{\lVert a \rVert} - \frac{f_i}{\lVert f \rVert} \right|$

Equation OM2. Spectral isotope measure quantifying the spectral similarity between a predicted isotope pattern and the measured intensities of a signal. In Equation OM2, f is a vector containing the mean image intensity from the ion images x_i over the pixels of x_1 with non-zero intensity values, and a is the vector of theoretical isotope intensities. This can be considered as projecting both theoretical and empirical isotope patterns onto a sphere and then calculating one minus the average coordinate difference.
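Equation OM2 translates to a few lines of Python (both patterns are unit-normalised, i.e. projected onto the sphere, before comparison):

import numpy as np

def spectral_isotope_measure(mean_intensities, abundances):
    # One minus the average coordinate difference between the
    # unit-normalised measured and theoretical isotope patterns.
    f = np.asarray(mean_intensities, dtype=float)
    a = np.asarray(abundances, dtype=float)
    f = f / (np.linalg.norm(f) + 1e-12)
    a = a / (np.linalg.norm(a) + 1e-12)
    return 1.0 - float(np.mean(np.abs(a - f)))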
Metabolite-Signal Match score
The Metabolite-Signal Match (MSM) score quantifies the similarity between the theoretical signal of a sum formula and its measured counterpart, with a higher value corresponding to higher similarity. It is calculated according to Equation OM3, as a product of the individual measures (measure of spatial chaos, spatial isotope measure and spectral isotope measure). This puts an equal weighting on all measures whilst penalizing any annotation that gets a low value for any of the measures.

$MSM = \rho_{chaos} \cdot \rho_{spatial} \cdot \rho_{spectral}$

Equation OM3. Metabolite-Signal Match (MSM) score quantifying similarity between a theoretical signal of a sum formula and its counterpart sampled from the dataset.
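In code, the composition is a one-line product of the three measures sketched above:

def msm_score(rho_chaos, rho_spatial, rho_spectral):
    # Each factor lies in [0, 1]; a low value of any single measure
    # penalises the overall score.
    return rho_chaos * rho_spatial * rho_spectral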
Section OM3. False Discovery Rate-controlled metabolite annotation
Molecular annotation
First, we consider all unique sum formulas from a metabolite database of interest. We used the Human Metabolome Database (HMDB), v. 2.5, considering only 7708 carbon-containing sum formulas [START_REF] Wishart | HMDB 3.0--The Human Metabolome Database in 2013[END_REF] . Then, we select a list of potential ion adducts. The adducts +H, +Na and +K were used as the adducts commonly detected during tissue MALDI imaging MS in the positive mode [27]. Then, we perform molecular annotation of an imaging MS dataset for each ion (combination of a sum formula and an adduct) independently as described in Algorithm OM2. Note that in this algorithm the MSM threshold needs to be specified; for the updated algorithm selecting the MSM threshold in an FDR-controlled way, please see Algorithm OM3.
Calculation of the False Discovery Rate
To calculate the False Discovery Rate among the molecular annotations provided using Algorithm OM2 with an MSM threshold t, we developed a target-decoy approach similar to (Elias and Gygi 2007) [28]. The innovative part of this development is in applying the target-decoy approach in the spatial metabolomics context by defining a decoy set appropriate for metabolomics.
A target set was defined as a set of molecular ions for the sum formulas from a metabolite database (e.g. HMDB), with a given ion adduct type (e.g. +H, +Na, +K). A decoy search was defined as a set of implausible ions for the same sum formulas but with implausible ion adduct types. For each sum formula, an implausible elemental adduct is randomly chosen from the CIAAW 2009 list of isotopic compositions of the elements [START_REF] Berglund | Isotopic compositions of the elements 2009 (IUPAC Technical Report)[END_REF] excluding the plausible adducts, namely from He, Li, Be, B, C, N, O, F, Ne, Mg, Al, Si, P, S, Cl, Ar, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Ge, As, Se, Br, Kr, Rb, Sr, Y, Zr, Nb, Mo, Ru, Rh, Pd, Ag, Cd, In, Sn, Sb, Te, I, Xe, Cs, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Ir, Th, Pt, Pu, Os, Yb, Lu, Bi, Pb, Re, Tl, Tm, U, W, Au, Er, Hf, Hg, Ta. Once the target and decoy sets are defined, the MSM scores are calculated for all target and decoy ions.
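One decoy sampling can be sketched as follows (the adduct list is abbreviated here for brevity; the full list of implausible elemental adducts is given above):

import random

DECOY_ADDUCTS = ["+He", "+Li", "+Be", "+B", "+Sc", "+Ti", "+V", "+Cr",
                 "+Mn", "+Zr", "+Nb", "+Mo", "+Ru", "+Rh", "+Pd", "+Ag"]

def build_decoy_set(sum_formulas, seed=0):
    # For each sum formula, pair it with one randomly chosen implausible
    # adduct; repeated calls with different seeds give the repeated
    # samplings used for the median FDR estimate.
    rng = random.Random(seed)
    return [(sf, rng.choice(DECOY_ADDUCTS)) for sf in sum_formulas]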
The MSM cutoff t is a key parameter of the molecular annotation. Setting the MSM cutoff changes the number of molecular annotations made. For any MSM cutoff, we define positives as the ions with MSM scores above the cutoff and negatives as the ions with MSM scores below the cutoff. We define FP_D as positive hits from the decoy. Since any decoy ion is constructed to be implausible, all decoy ions detected as positive are false positives. Then, we estimate FDR with FDR' according to Equation OM4.

$\mathrm{FDR} = \frac{FP_T}{FP_T + TP_T}, \qquad \mathrm{FDR}' = \frac{FP_D}{P_T}$

Equation OM4. Definition of FDR and the proposed estimate of FDR (FDR'). FP and TP are False Positives and True Positives, respectively; P_T and FP_D are the numbers of annotations from the target and decoy sets for the MSM cutoff t.

Similar to the approach of FDR calculation in genome-wide studies proposed by (Storey & Tibshirani, 2003) [START_REF] Storey | Statistical significance for genomewide studies[END_REF] and picked up later in proteomics, Equation OM4 proposes an approximation of the true FDR defined as FP_T/(FP_T + TP_T). This approach relies on having a high similarity between false positives in the target set and the decoy set. The decoy set must be the same size as the target set and share the same statistical distributions as used by the measures used in the annotation. If these assumptions are satisfied then the number of false positives from the decoy (FP_D) approximates the number of false positives from the target (FP_T), while the denominator (P_T = FP_T + TP_T) is equal between FDR and FDR'.
As the decoy generation is a randomized process, with one decoy search formed by a sampling of implausible adducts from all possible implausible adducts, FDR calculation is a repeated sampling process. We propose to repeat it (20 times for the presented results) and calculate the median of the observed FDR values. We favored median over mean for its general robustness to outliers and for providing integer values that can be translated into the numbers of annotations.
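Putting Equation OM4 and the repeated sampling together, the estimate can be sketched as:

import numpy as np

def estimate_fdr(target_msm, decoy_msm_samplings, cutoff):
    # FDR' = FP_D / P_T at the given MSM cutoff, taken as the median
    # over the repeated decoy samplings.
    p_t = int(np.sum(np.asarray(target_msm) >= cutoff))
    if p_t == 0:
        return 0.0
    fdrs = [np.sum(np.asarray(d) >= cutoff) / p_t
            for d in decoy_msm_samplings]
    return float(np.median(fdrs))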
FDR-controlled molecular annotation
The term FDR-controlled molecular annotation means that parameters of molecular annotation are optimized so that the set of provided annotations has a desired level of FDR. This is the most widely used approach in proteomics for choosing parameters of molecular identification [START_REF] Choi | Significance analysis of spectral count data in label-free shotgun proteomics[END_REF] . We employed this approach in Algorithm OM3 for selecting a key parameter of the molecular annotation, the MSM cutoff. This was performed similarly to (Zhang et al., 2012) [START_REF] Zhang | De Novo Sequencing Assisted Database Search for Sensitive and Accurate Peptide Identification[END_REF] by simultaneously sorting the MSM values for the target and decoy ions, decreasing the MSM cutoff thus one-by-one increasing the number of target ions annotated, recalculating the FDR after every new ion is annotated, and selecting the maximal number of annotations that provide FDR below the desired value (see Figure 1 in the main text). This process is repeated 20 times, with the decoy adducts every time randomly sampled from the set of all considered implausible adducts, and the observed cutoff is recorded. After all repetitions, the final MSM cutoff value is set at the median of the observed values. The final set of molecular annotations is the set of target ions with MSM scores above the median cutoff value.
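For a single decoy sampling, the cutoff selection reduces to a sweep over the sorted target scores; a sketch is given below (the median over repeated samplings is then taken as described in the text):

import numpy as np

def fdr_controlled_cutoff(target_msm, decoy_msm, desired_fdr=0.1):
    # Lower the cutoff one target ion at a time and keep the smallest
    # cutoff (i.e. the largest annotation set) whose estimated FDR
    # stays at or below the desired level.
    target = np.sort(np.asarray(target_msm))[::-1]
    decoy = np.asarray(decoy_msm)
    best = None
    for i, t in enumerate(target, start=1):   # i = number of annotations
        fp_d = int(np.sum(decoy >= t))
        if fp_d / i <= desired_fdr:
            best = t
    return best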
LC-MS/MS validation of annotations
5.1 Samples
Mouse brain sample
One female adult wild-type C57 mouse aged 10 weeks was obtained from the European Molecular Biology Laboratory animal resource (EMBL-LAR, Heidelberg, Germany). The animal was reared under ad-lib conditions within the specific pathogen-free facility. Care and handling of the animal complied with EU directive 2010/63/EU on the protection of animals used for scientific purposes. The whole brain was excised immediately post-mortem and rapidly cryo-frozen in CO2-cooled isopentane. Tissue was stored at -80 °C until use.
Authentic lipid standards and chemicals
All lipid standards used for validation of annotations were purchased from Sigma Chemicals (Sigma-Aldrich Co., St. Louis, MO) and Avanti Polar Lipids (Alabaster, AL, USA). The LC-MS grade buffers and other reagents were purchased from Sigma Chemical. Mass spectrometry grade solvents and MilliQ grade water were used throughout the analysis.
Sample preparation
20 mg of brain tissue was extracted using the Bligh and Dyer extraction method [START_REF] Bligh | A rapid method of total lipid extraction and purification[END_REF] . The dried extract was reconstituted with 100 µL of methanol and isopropanol (1:1) and 10 µL of this sample solution was injected into the LC-MS system for each run. Lipid standards were prepared in the same solvent at a concentration of 100 ng/mL each.
LC-MS/MS methods
The separation of lipids was carried out on an Agilent 1260 liquid chromatography (LC) system with an Ascentis® Express C18 column (100 x 2.1 mm; 2.7 µm) and detected with high-resolution mass spectrometry (Q Exactive Plus MS, Thermo Scientific).
Three LC-MS/MS methods were used: Positive: ESI positive mode using 'buffer 1'. Negative 1: ESI negative mode method using 'buffer 1'. Negative 2: ESI negative mode method using 'buffer 2'. LC was run with a flow rate of 0.25 mL/min with solvent A consisting of acetonitrile-water (6:4) and solvent B of isopropyl alcohol-acetonitrile (9:1), which were buffered with either 10 mM ammonium formate and 0.1% formic acid (buffer 1) or 10 mM ammonium acetate (buffer 2). MS parameters (Tune, Thermo Scientific) were set as: spray voltage of 4 kV, sheath gas 30 and auxiliary gas 10 units, S-Lens 65 eV, capillary temperature 280 °C and vaporisation temperature of auxiliary gas 280 °C. Data was acquired in full scan mode in the mass range of 150-900 m/z (resolving power R=70000) and data-dependent tandem mass spectra (MS/MS) were obtained for all precursors from an inclusion list (resolving power R=35000). Tandem mass spectra (MS/MS) were acquired using higher-energy collisional dissociation (HCD) with normalized collision energies of 10, 20 and 30 units. The inclusion list was composed of all annotations provided from imaging MS analysis and detected in all three serial sections (s1, s2, s3 at the Bregma 1.42) for either of two animals. We considered adducts relevant for LC-MS (+H, +NH4, +Na for the Positive method; -H, -H+HCOOH for the Negative methods).
LC-MS/MS validation strategy
LC-MS/MS validation of lipid annotations was performed differently for annotations for which lipid standards were available and for other annotations. When lipid standards were available, LC-MS/MS information, in particular the LC retention time (RT), MS and MS/MS (MS2) data, was used to compare the data from a standard with the data from a sample (both acquired using exactly the same LC-MS method and precursor selection range). First, extracted ion chromatograms (XICs) were evaluated for all possible adducts to confirm the presence of the ion of the sum formula obtained from imaging data. As for the tolerance value for XICs: for data with standards we used 5 ppm; for data with no standards we selected the best-fitting tolerance value from 2, 3, and 5 ppm. We considered possible adducts for each metabolite (+H, +Na, +NH4 for the 'Positive' method; -H, +FA-H for the 'Negative' methods, where FA stands for formic acid) and selected the best matching adduct as follows. The precursor delta m/z was calculated for the sample both in MS1 and MS/MS data. The matching MS/MS spectrum was searched within the elution profile and manually interpreted for fragments corresponding to the head-group and fatty acid side chains. Only precursors and fragments with accuracy <6 ppm were considered for structural interpretation to identify possible lipid species. The lipid class was confirmed by the presence of a head-group fragment or its neutral loss (e.g. the MS/MS fragment with m/z 184.0735 corresponds to the phosphocholine head-group). Since lipids from the classes of phosphatidylcholines (PC) and sphingomyelins (SM) have the same head-group (m/z 184.0735), given a sum formula, we searched in HMDB and SwissLipids to rule out the possibility of the sum formula corresponding to a lipid from a class other than the one annotated by our framework. Further, to confirm the fatty acid side chains, the 'Negative' LC-MS methods were used (e.g. fatty acid fragments for phosphocholines were obtained after fragmentation of formate ion precursors using the 'Negative' LC-MS method). The collision energy was selected as best representing the precursor and the expected fragments. When standards were available, the RT, precursor m/z and MS/MS fragments corresponding to head-groups and fatty acid chains from the sample were matched with spectra from the corresponding standard. When standards were not available, the fragments were manually interpreted. Finally, structural annotation of the matching peaks in the MS/MS spectra was performed with the help of the HighChem MassFrontier software (Thermo Scientific). The MS, MS/MS and RT (for standards) data are presented in Supplementary Note 3 and summarized in Table S5.
Figures
Figure 1. The proposed framework for metabolite annotation for HR imaging MS;
Figure 2. Evaluation of the proposed framework: a) intensities of highest four peaks in the target and decoy sets, … e) overlaps between the annotations (see Supplementary Note 1, Table S5 for breakdown about the annotations), f) overlaps between adducts of the annotations, g) examples of molecular ion images for annotations validated using LC-MS/MS (cf. Supplementary Note 2, Figures S11 and S12; Supplementary Note 3), as well as FDR curves illustrating h) superiority of MSM as compared to individual measures for a2s3, +K (see Supplementary Note 1, Figure S8 for other datasets and adducts), and i) decrease of number of annotations when simulating lower mass resolution/accuracy for a1s3, +K (cf. Supplementary Note 1, Figure S9).
Input: real-valued image I, number of levels n_levels
Output: measure of spatial chaos
Algorithm:
// scale image intensity range to [0 1]
1. I ← (I − min(I)) / (max(I) − min(I))
// main part
2. For n in 1..n_levels:
// threshold image at a current level
3. t_n ← n / (n_levels + 1)
4. B_n ← (I > t_n)
// fill single-pixel holes
5. B_n ← fill(B_n)
// count separate objects with 4-connectivity
6. o_n ← objects(B_n)
7. aggregate the object counts o_1, …, o_{n_levels}
8. return the measure of spatial chaos computed from the aggregated counts
Algorithm OM1. The level-sets based algorithm for calculating the measure of spatial chaos of an ion image. fill(·) is a hole-filling operation to 'fill in' isolated missing pixels that can happen in HR imaging MS (and to avoid overestimating the number of objects). It consists of a sequence of morphological operations with structuring elements [24].
Input: metabolite sum formula, adduct, charge, resolving power of the spectra, imaging MS dataset, MSM threshold t
Output: decision whether the ion is present in the dataset
Algorithm:
// Predict isotopic patterns
1. Predict the isotope envelope for the ion (sum formula plus adduct) at the resolving power
2. Detect centroids of the isotope envelope, exact m/z's (m_1, …, m_n) and relative intensities (a_1, …, a_n)
// Generate and score signals from the dataset
3. For i in 1..n:
4. Generate an ion image x_i for the i'th isotopic peak at m/z m_i
5. Calculate the measure of spatial chaos from x_1, and the spatial and spectral isotope measures from (x_i) and (a_i), according to Algorithm OM1, Equation OM1, and Equation OM2, respectively
6. Calculate the MSM score according to Equation OM3
// Annotate the data
7. If MSM ≥ t:
8. the ion is annotated as being present in the dataset
Algorithm OM2. MSM-based molecular annotation determining whether a metabolite ion is present in an imaging MS dataset.
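Tying the pieces together, the per-ion decision of Algorithm OM2 can be sketched with the helper functions defined in the earlier sketches (assuming numpy ion images ordered by decreasing theoretical intensity):

import numpy as np

def annotate_ion(isotope_images, abundances, msm_threshold):
    # Compute the three measures for one ion's sampled signal and
    # compare the resulting MSM score against the threshold.
    principal = np.asarray(isotope_images[0], dtype=float)
    rho_chaos = measure_of_spatial_chaos(principal)
    rho_spatial = spatial_isotope_measure(isotope_images, abundances)
    mask = principal > 0
    mean_int = [np.asarray(img, dtype=float)[mask].mean()
                for img in isotope_images]
    rho_spectral = spectral_isotope_measure(mean_int, abundances)
    return rho_chaos * rho_spatial * rho_spectral >= msm_threshold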
Input: metabolite database D, resolving power of the mass spectrometer used, imaging MS dataset, ion charge, target adduct, decoy adducts, desired FDR level FDR_desired, number of decoy samplings
Output: a set of molecular annotations (ions from the metabolite database detected as present in the dataset)
Algorithm:
// Predict and score all metabolite signals
1. For each sum formula sf in D:
2. form the target ion (sf, target adduct)
3. Calculate MSM_target(sf) according to Algorithm OM2 (scoring steps)
4. form the decoy ion (sf, decoy adduct), where the decoy adduct is randomly chosen from the list of decoy adducts
5. Calculate MSM_decoy(sf) according to Algorithm OM2 (scoring steps)
// Calculate the MSM cutoff corresponding to the desired FDR level
6. Form a combined vector of the target and decoy MSM values
// Find the maximal number of annotations providing FDR below FDR_desired
7. Sort the MSM values in descending order.
8. initialise the counts of target and decoy positives
9. While MSM values remain:
10. lower the cutoff to the next MSM value, annotating one more ion
11. update the counts P_T and FP_D
12. Calculate FDR' according to Equation OM4
13. if FDR' ≤ FDR_desired, record the current cutoff as admissible
14. keep the smallest admissible cutoff for this sampling
15. Repeat steps 1-11 according to the number of decoy samplings
16. set the final MSM cutoff at the median of the cutoffs observed over the samplings
// Perform the MSM-based molecular annotation with the calculated cutoff
17. For each sum formula sf in D:
a. If MSM_target(sf) ≥ cutoff then add (sf, target adduct) into the list of molecular annotations
Algorithm OM3. FDR-controlled molecular annotation that screens for metabolite ions present in an imaging MS dataset, with the desired FDR level.
Acknowledgements
We thank Olga Vitek (Northeastern University), Alexander Makarov (ThermoFisher Scientific) and Mikhail Savitski (EMBL) for discussions on FDR and Dmitry Feichtner-Kozlov (University of Bremen) for discussions on computational topology. We acknowledge funding from the European Union's Horizon2020 and FP7 programmes under the grant agreements No. 634402 (AP, RL, AT, VK, SN, CP, TA), 305259 (IC, RL, CP), and from the Russian Government Program of Competitive Growth of Kazan Federal University (SN). We thank EMBL Core Facilities for instrumentation for LC-MS/MS analysis. TA thanks Pieter Dorrestein (UCSD) and Peter Maass (University of Bremen) for providing a stimulating environment as well as for discussions on mass spectrometry and image analysis during the years of this work.
Data Availability Statement
The data is publicly available at the MetaboLights repository under the following accession numbers: MTBLS313: imaging mass spectrometry data from mouse and rat brain samples, MTBLS378: LC-MS/MS data from mouse brain samples, and MTBLS317: simulated imaging mass spectrometry data.
Code availability
The reference implementation of the developed framework is freely available at https://github.com/alexandrovteam/pySM as open source under the permissive license Apache 2.0.
Data availability
The LC-MS/MS data from mouse brain samples is publicly available at the MetaboLights repository under the accession number MTBLS378.
Author Contributions
AP and TA conceived the study, AP, IC, DF, AT, VK implemented the algorithms, RL, JF, CP, MB provided imaging data, AP and TA analyzed imaging data, PP collected LC-MS/MS data, PP and TA performed LC-MS/MS validation, AP and TA wrote manuscript, with feedback from all other coauthors, TA coordinated the project.
Competing Financial Interest Statements
Theodore Alexandrov is the scientific director and a shareholder of SCiLS GmbH, a company providing software for imaging mass spectrometry. During the work presented in the paper, Michael Becker was an employee of Bruker Daltonik GmbH, a company providing instrumentation and software for imaging mass spectrometry. |
01766112 | en | [
"sdu.stu.pg",
"sdv.bid.evo",
"sdv.bid.spt"
] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01766112/file/Croitor2016Bison.pdf | Keywords:
BISON
Abstract. Remains of Bison (Eobison) |
01766144 | en | [
"shs.anthro-se",
"shs.geo"
] | 2024/03/05 22:32:13 | 2011 | https://hal.science/hal-01766144/file/1.%20TJSEAS-LH%28%C2%8C%21%29-20110916.pdf | Laurence Husson
email: [email protected]
…in addition to analyzing the 'push factors' that compel people to seek a living abroad, we must not overlook the migration systems that arose from political factors, particularly in the three Asian island nations discussed above.
Is a Unique Culture of Labour Migration Emerging in the Island Nations of Asia?
Keywords: female migrant workers, Indonesia, Philippines, Sri Lanka
In the space of three decades, three countries, the Philippines, Indonesia and Sri Lanka, have become the main exporters of labor on a worldwide scale.
Are island nations, such as the archipelagos of the Philippines and Indonesia and the island of Sri Lanka, predisposed to the current large exodus of female migrant workers? Another record shared by these three countries is the very high percentage of women making up these migrant workers. This paper will analyze the principal factors of geography, population and the international labour market that explain this massive exportation of female migrant workers, together with the state policies that are actively encouraging female migration.
Beyond the determining geographical factor and the need to leave an overpopulated land to earn a living, we should indeed take into consideration the presence of a political will that contributed to the formation of a system of migration which is possibly particular to the island nations of Asia.
The emergence of a globalised labour market has encouraged the free movement of people. Instead of weakening the links between the place of origin and the place of living and working, migratory movements reinforce connections. Real networks are created that organize the way people relocate. These networks also contribute to maintaining the collective identity links beyond national borders. Considered from the identity point of view, labour-based migrations illustrate the delicate connection between local and global contexts and show how individuals practice their dual relationship between their country of origin and the country where they find employment.
Migrations due to work, voluntary or forced, supervised or spontaneous, have a long tradition in Asia. After the abolition of slavery in the late 1800s, European colonial powers introduced the indentured labour contracts that led to the extensive Chinese and Indian diasporas.
The vast continent of Asia comprises 60% of the world's population and two-thirds of the world's workforce. This labour market is expected to remain a very mobile zone for a long time [START_REF] Hugo | The Demographic Underpinnings of Current and Future International Migration in Asia[END_REF]. The two archipelagos of Indonesia and the Philippines are at the crossroads of the trade routes between China and India. It appears that this geographic region, made of archipelagos, peninsulas and straits, is an eastern equivalent of the "Mediterranean sea" that encourages mobility and flows in all kinds of exchanges.
In the space of three decades, the Philippines, Indonesia and Sri Lanka have become the world's leading exporters of labour. Another feature that these two archipelagos and one island nation share is the record high percentage of women making up these migrant workers. These two striking facts have been the catalyst for this paper, in which we will analyze the principal factors that explain this massive exportation of female migrant workers.
Hania [START_REF] Zlotnik | The Global Dimensions of Female Migration[END_REF] estimates that women represent 47% of migrants in Asia.
However, in Sri Lanka, the Philippines and Indonesia, the proportion is higher than 70%. Are the islands of Asia predisposed to such a large exodus? Why are so many women leaving for foreign countries?
Beyond the determining geographical factors and the need to leave overpopulated islands to earn a living, state policies have played a determining role in the gender pattern of migration and have contributed to the formation of a system of labour migration which is possibly unique to Asia. The Philippines and Indonesia form the primary focus of this paper. The consideration of Sri Lankan worker migration is used as a comparison.
A Significant Recent Development
The rise of Asian migrations has followed global trends in migration. In 1965 the world accounted for 75 million international migrants. Twenty years later it was 105 million, and in 2000 it was 175 million. From the 1980s, the growth rate of the world population declined to 1.7% per year while international migration rose considerably to 2.59% per year (IOM 2003).
It was not until 1973, at the time of the extreme petroleum price escalations, that the large scale immigration of workers to the Gulf States began, firstly from Southern Asia and then from South-East Asia. The oil-rich states of the Arabian Peninsula … (2000, 2004).
The Gulf War (1990-1991) as well as the Asian financial crisis of 1997 provoked a massive return, albeit temporary, of migrant workers. Since then migrant flows have resumed. Maruja Asis noticed that "unlike male migration, the demand for female migration is more constant and resilient during economic swings. The 1997 crisis in Asia was instructive in this regard. While the demand for migrant workers in the construction and manufacturing sectors declined, no such change was observed for domestic workers" [START_REF] Asis | When Men and Women Migrate: Comparing Gendered Migration in Asia[END_REF].
However, available statistical figures seem to be contradictory and are therefore difficult to analyze with confidence. For example, [START_REF] Stalker | Workers without Frontiers -The Impact of Globalization on International Migration[END_REF] stated that in 1997 there were up to 6.5 million Asian migrant workers in Japan, South Korea, Malaysia, Singapore, Thailand, Hong Kong and Taiwan. Meanwhile, [START_REF] Huguet | International Migration and Development: Opportunities and Challenges for Poverty Reduction[END_REF] estimated that at the end of 2000 approximately 5.5 million foreign workers were living in a host East and Southeast Asian country. The misleading implication from comparing these two estimates is that the number of Asian migrants decreased during those three years.
However, a more careful consideration of these estimates suggests that the collection and analysis of migration statistics in this part of the world are not yet sufficiently reliable due to movement complexities. What may be discerned is that the global circulation of information, capital, ideas and labor, and wider access to air travel, have increased mobility and overcome the problem of large geographical distances. During this time, the main destination for female Asian migrants shifted from the Middle East to other Asian countries whose booming economies needed additional migrant workers to fill labor shortages.
A Growing Feminization
Since the 1980s, the massive participation of women in the international labour market has been a phenomenon without precedent in the history of human migrations.
While most researchers agree that global restructuring increasingly forces a larger number of women in developing countries to participate in the international labour market, Nana [START_REF] Oishi | Women in Motion. Globalization, State Policies and Labor Migration in Asia[END_REF] demonstrated the need to investigate the differential impacts of globalization, state policies, individual autonomy and social factors.
In past migration flows, women have been the wives, mothers, daughters or sisters of male migrants. In contrast, since the 1990s women, with or without work contracts, have become active participants in the international labour market, and not just to join or accompany a male migrant. Since this period, these female migrants have become fully integrated into the host country's job market. This phenomenon is referred to as "the feminization of migration".
In addition to the feminisation of migration, the other significant change has been a new level of awareness on the part of migration scholars and policy-makers as to the significance of female migration, the role of gender in shaping migratory processes and, most importantly, the increasingly important role of women as remittance senders (Instraw 2007). The trend now seems irreversible. The inclusion of the gender perspective in the analysis of migration has illuminated the new geographic mobility of women. The development in recent years of feminist studies has allowed female migrations to be understood as a social phenomenon different from the mobility of men. Applying a gender lens to migration patterns can help identify ways to enhance the positive aspects of migration and to reduce the negative ones.
In 1990, the United Nations estimated the total number of female migrants living outside their native countries at 57 million, that is to say, 48% of all migrants on the global scale.
According to an estimate by the ILO (International Labour Organisation) in 1996, at least 1.5 million female Asian workers were employed outside their country of origin.
Each year, almost 800,000 Asian women leave their own country for employment under contract in the UAE (United Arab Emirates), Singapore, Hong Kong, Taiwan, Korea or Malaysia, where they will reside for a minimum of two years [START_REF] Lim | International Labor Migration of Asian Women: Distinctive Characteristics and Policy Concerns[END_REF]. The migrations of female workers henceforth constitute the majority of the migrant work-flow under contract. Indeed, the Philippines and Indonesia export the largest number of migrant workers in Southeast Asia and are also the world's top exporters of female workers. The females of these two archipelagos are far more numerous than their male counterparts, as women represent 60% to 70% of workers sent abroad by these two countries. Sri Lanka created an office for foreign employment (SLBFE) with the express objective to promote and develop its export of workers, and especially female workers. The number of Sri Lankan women leaving under contract to work abroad, in particular to Saudi Arabia, the United Arab Emirates and Kuwait, grew from 9,000 in 1988 to 42,000 in 1994 and to 115,000 in 1996 (UNO 2003). Even though Sri Lanka started sending its domestic assistants to the Gulf States later than Bangladesh, Pakistan or India, it remains the only country to continue doing so. The proportion is one male migrant to three female migrants, of which more than 60% work as domestics almost exclusively in one of the six member states of the Gulf Cooperation Council. Besides the numerical importance of these flows and their visibility, another feature of these Asian female migrations is their disproportionate concentration in a very limited number of jobs.
It seems that female labor is mainly related to medical care, private domestic services and commercial sexual services. The categories of typical female employment return to the stereotyped roles of women as maids, waitresses, prostitutes and nurses [START_REF] Chin | Service and Servitude: Foreign Female Domestic Workers and the Malaysian Modernity Project[END_REF][START_REF] Parrenas | Servants of Globalization: Women, Migration and Domestic Work[END_REF]. In the eyes of employers, Asian women are traditionally perceived to be discreet, subservient, docile, gentle, ready to please and serve, and particularly suited to these subordinate employments as carers, nurses, domestic servants and sex workers. The female migrants are forced into a limited number of trades due to a clear segregation of the sexes in the international labor market. They are concentrated in the service sector, domestic house-work, and a large number of entertainment trades that are a thinly disguised form of prostitution.
We will now compare the female migrations of Indonesia and the Philippines and consider the initiatives introduced in the Philippines and Sri Lanka to protect vulnerable female migrant workers who emigrate to the Arabian Peninsula and Eastern Asia.
With 7.8 million migrant workers, the Philippines is an example of how to improve and defend the rights of migrant workers. Indonesia has the largest total number of migrant workers and the majority of these are women who are recruited as maids. Indonesia has tried to protect them by providing training and by fighting against non-official recruitment agencies. Repeated press articles on the vulnerability of foreign workers, where migrant workers and locals are rarely treated on an equal footing, mean that the states concerned can no longer remain insensitive to this problem.
We will use the following abbreviations: TKW (tenaga kerja wanita) in Indonesia to designate the female migrants workers and OFW (Overseas Filipino Workers) for the Philippines.
Over-representation of female Asian migrants in the international labor market
The reasons for the strong participation of Asian women in the international labor flow are numerous and of different orders: psychological, religious, economic, and political. In general, the offer of employment for female workers was a result of supply and demand. These women were able to leave their families at short notice and for a set time in order to earn income at a time when the demand for male workers had diminished in the Gulf States. They profited from a demand for female workers.
This was even easier as the jobs for women generally did not need a diploma or qualifications and appeared very attractive due to the big differences in wages and salaries between the countries of departure and the countries of arrival.
In South East Asia, women have a certain level of freedom of movement and autonomy in decision making, and are used to working away from home. In several ethnic groups in Indonesia, women have traditionally had a significant role in the generation of household income, through productive work both within and outside the household (Williams 1990: 50). In 2005 the ILO (2006: 13) stated that up to 53% of women in Indonesia participate in the work force compared to 87.1% of men.
Whereas in the Philippines, the proportions were 56.7% for women and 70.7% for men. In families that need supplementary income, the women know how to cope with bringing money home, which would obviously favor emigration. However, many Islamic countries, including Pakistan and Bangladesh, have forbidden sending female workers overseas, as it was considered that young women travelling without a male escort outside their homes was against the teachings of the Koran. India has also put a brake on the export of female workers following too many denunciations by the national press.
To explain this massive participation of women in labour migration, it is tempting to invoke traditional Asian values and the notion of family responsibility.
Family responsibility is a foundational concept and is not a question of filial devotion.
The sending of a woman abroad to earn money is related to the idea that she will remain committed to her family and that she will willingly sacrifice her own wellbeing and send all her savings home. A man in the same position may be more inclined to gamble, drink and spend his savings. Overall, men remit more than women because they earn more, though women tend to remit a larger proportion of their earnings. A study carried out in Thailand confirms that despite their salaries being lower than men's, the female migrants who work abroad to help their families managed to save more and sent most of their savings to their families [START_REF] Osaki | Economic Interactions of Migrants and their Households of Origin: Are Women More Reliable Supporters?[END_REF]. Women, especially if they are mothers, do not leave home easily and conversely the families certainly preferred to see the men go abroad rather than the women. In general, an extended family always found it more difficult to replace a mother than a father. She would often carry out a great number of tasks and is hard to replace, especially when it concerns the needs of very young children and elderly parents. But there is little choice when the only employment offered is for females. The woman, whether she is a mother, daughter or sister, accepts temporary emigration and the family left behind will have to deal with her staying abroad.
Another possible factor, which needs to be verified with specific studies, is the incidence of bad conjugal relationships preceding female emigration. Gunatillake and Perera (1995: 131) showed that in Sri Lanka, female worker migration is often a form of disguised divorce, in a society where divorce is still negatively perceived. The divorce is not officially announced, but the emigration of the wife marks her economic independence and the couple's physical separation. In Indonesia, while married women constitute the largest proportion of migrants, divorced women, single mothers and widows are over-represented. It is clear that departure is a way to survive, to forget, or to escape a situation of failure.
Women can also be "trapped" into migration. Low wages, financial difficulties, irresponsible spending, spousal infidelity, estrangement from children, and many other personal factors at home may compel women to stay in, or return to, the same country, or to find another labor contract elsewhere.
It is necessary to add that working abroad has become more commonplace and less harrowing as the costs of transport and of overseas communication via phone, Internet, MSN or Skype have been considerably reduced. With quicker and cheaper exchanges, effective distances have been shortened and the social-emotional separation from family has become more acceptable to the women. In certain Javanese villages, for example, the migration of women is so frequent and so usual that it has become the norm.
An additional religious motive is often cited by Indonesians wanting to work in the Arabian Peninsula, especially in Saudi Arabia, as it allows them to make the pilgrimage to Mecca at a reduced cost. Researchers have attested that Sri Lankan Muslim female workers were able to gain respectability and self-esteem by working in the Arabian Peninsula, acquiring material assets, adopting Arab customs in fashion, cooking and interior decoration, and obtaining religious education for themselves and their relatives (Thangarajah 2003: 144). As pertinent as they are, these reasons alone cannot explain these growing flows.
It is also necessary to take into account the actions of governments seeking to augment foreign-exchange revenues, and of private organizations that promote the export of female labour and apply pressure on family and social networks. Authorised Asian female migration would never have known such growth without an initial worker-migration industry and the setting up of labour-market channels and networks.
The Multiple Channels And Networks
Migrant workers have become commercial objects that constitute a valuable resource. In Asia, the recruitment of foreign workers has become a lucrative activity.
The agencies, whether public or private, legal or illegal, cover the financial cost of migration in order to maximise the profits on each departing candidate. Manolo Abella (2005) has emphasized the role played by private fee-charging job brokers in organizing labour migration in those countries. According to him, "recruitment and placement have been left largely in the hands of commercially-motivated recruitment agencies because few labour-importing states in the region have shown any interest in organizing labour migration on the strength of bilateral labour agreements. As a consequence, over the years, the organization of migration has emerged as a big business in both countries of origin and of employment". Nevertheless, the governments of labour-exporting countries play a major role. Sending migrant workers abroad is a solution to national unemployment, a way to avoid social unrest, and a means to gain foreign-exchange reserves. As early as 1974, the Philippine government recognized the importance of labour migration to the national economy by establishing the Philippine Overseas Employment Administration in order to promote labour export. Sri Lankan and Indonesian authorities have followed this example. Female labor migration is a demand-driven, rather than a supply-driven, phenomenon. To respond to demand patterns in the host countries, labor-exporting countries have to promote both male and female overseas contract workers and to face increased competition for a good position in the labor-export market. To achieve this goal, they have created, within their Ministries of Labor, offices or agencies whose aim is to promote, control and organize the recruitment and export of workers: AKAN in Indonesia, the Philippine Overseas Employment Administration (POEA) in the Philippines, and the Sri Lanka Bureau for Foreign Employment (SLBFE) in Sri Lanka. The AKAN Indonesian Office of Foreign Employment was created in 1984 under the supervision of the Ministry of Labor; a year later, in 1985, the SLBFE adopted the same objectives. The government objectives were to reduce national unemployment and to increase the savings of migrants. The transfer of funds from migrant workers made up 8.2% of GNP in the Philippines, or more than $7 billion, 6.3% in Sri Lanka (CESAP), and 4.7% in Indonesia. The three million Indonesians who work abroad bring in approximately $1 billion (ILO 2006b). In the Philippines, there are approximately 2,876 foreign employment agencies, of which 1,400 are regarded as reliable (POEA 2004). Because of the competition, these agencies tend to specialize in particular destinations or types of employment.
In Indonesia, 412 employment/placement agencies were listed in 2000 (Cohen 2000). Sri Lanka had 524 licensed recruitment agencies by the end of 2002 (plus many more illegal operators), which placed 204,000 workers abroad in that year (Abella 2005). These numbers clearly show the commercial and profitable character of worker migrations. A large number of temporary work migrations are thus orchestrated by these paid recruitment agencies. The ILO (International Labour Organization 1997) points out "their intervention in 80% of all movements of the Asian work-force to the Arab States, one of the biggest migrant flows in the world". In the same press release, the ILO added: "In Indonesia and in the Philippines, the private agencies dominate the organization of migrant workers, placing 60% to 80% of migrants". Kassim has pointed out that both the heavy burden of the formal bureaucratic procedure and the high financial costs involved may induce Indonesian migrant workers to look for irregular recruitment channels in order to get a job in Malaysia.
The Journey Of The Migrant Through Legal Or Illegal Channels
For example, an Indonesian who wants to work in Malaysia has three possible choices. One is to contact a legal placement agency (PJTKI, Perusahaan Jasa Tenaga Kerja Indonesia, Office for Indonesian Workers) situated in town and endure a long and costly administrative procedure, made worse by civil servants demanding bribes.
The second way is to go to a local intermediary/recruiter (known locally as calo, boss, taikong, mandor or patron/sponsor), who is often a notably rich and respected man, having performed the hadj (pilgrimage to Mecca), and who will serve as intermediary between the candidate and an official agency based in Jakarta. His services are equally costly, but they have the advantage of reassuring the candidate, as the calo is a well-known person.
In the embarkation ports to foreign countries, situated mostly in Sumatra (for example Medan, Tanjung Pinang, Dumai, Batam, Tanjung Balai, and Pekanbaru), migrant candidates may go to a third kind of intermediary: unscrupulous ferrymen who try to transport them to Malaysia on the first available boat. In the Indonesian archipelago, the official legal procedure is generally badly perceived and does not prove to be more secure, less expensive or any more effective than the private recruitment agencies. Corruption exists at every stage of the migratory cycle.
Despite these factors, these intermediaries do enable low-qualified and less well-informed women to go through the procedures for finding foreign work. Their beneficial services are that they connect employee and employer, give training, find board and lodging, supervise work contracts, organize the trip, lend money, and organize the migrant's return journey.
The negative side is that these agencies have a tendency to claim excessively high and unjustified fees, commissions and gratuities that force the applicants into debt.
This debt can bring about a relationship of dependence and abuse between the female migrant worker and her recruitment agency. Such abuses are most obvious where the female migrant worker is involved in illegal or Mafia networks.
Outside these channels of recruitment, migrant workers forge their own networks through which information circulates, sometimes allowing supply and demand to meet. The combination of formal agencies and informal networks ends up creating a chain migration.
Indonesians prove to be more exposed than Filipino workers to extortion and abuse before their departure. This system, along with the tariffs applied by the agents and their high rates of interest, means that female Indonesians are often in debt for several months of their pay. In 2003, the Indonesian Ministry of Work and Emigration recognized that 80% of the problems, such as falsification of documents and the various extortions undergone by migrants, take place before departure (Dursin).
The periods immediately before departure and immediately after return are the critical moments when Indonesian and Sri Lankan female migrants are most at risk of being robbed.
Every day, almost 800 migrants pass through Terminal 3 of Jakarta's airport, while in Sri Lanka around 300 women a day return to their home country. These female migrants, returning loaded with packages, presents and money, are targeted by civil servants, servicemen, policemen, porters and bus-drivers who seek to take their money.
Do The Philippines And Sri Lanka Provide Two Female Labour-Export Models For Indonesia?
Massive labor-exporting countries like the Philippines, Indonesia and Sri Lanka are confronted with the dilemma between promoting female labour emigration and protecting their national workers abroad. The Philippines has substantial experience in labour export. The government, conscious of the need to protect its "new heroes" (Bagong Bayani in Tagalog) who allow the country to prosper, created two distinct institutions: the POEA, whose mission was to promote the export of the work-force, and the Overseas Workers Welfare Administration (OWWA), established to defend and protect the rights of migrants. In the same spirit, the archipelago established an official charter, the Migrant Workers and Overseas Filipinos Act, voted in in June 1995, so that migrants are aware of their rights and their duties. The government enacted this charter to slow down the export of the less qualified workers, who were the most vulnerable. Filipino NGOs, civil society and the Catholic Church have a long history of activism, campaigns and debates to improve the lives, conditions and rights of migrant workers. Civil society is better organized in the Philippines: more than a hundred NGOs are very active in the fight to protect migrant workers, whereas Indonesia accounts for only about 15 NGOs, with a similar number in Sri Lanka. Cassettes, training modules, self-defense courses, handbooks and information booklets produced by Filipino NGOs inform migrants of the dangers of looking for employment abroad, and Sri Lankan NGOs are quickly following their example with pre-departure orientation and training programs. One of the major problems is the lack of clear, precise and reliable information explaining each stage of the migration process. This information is often not provided to Indonesian workers, resulting in frequent misdirections, errors and fraud, with damaging consequences. In general, female Filipino migrants, with better education and training and a good command of English, have fewer communication problems than Indonesian migrants. To benefit from the Filipino example, Indonesia should:
-increase the level of general education of its population;
-"clean up" its recruiter system; -train the migrant candidates better, particularly in language;
-diversify the countries of destination.
Following the example of the NGOs that advocate for Filipino women migrant workers, Indonesian NGOs could similarly pressure the Indonesian government to help the migrant labor-force through better national and international coordination, information networking to provide accurate information on all aspects of migration, and stricter regulation of the recruitment industry to help prevent abuses and malpractice.
Towards An Industry And A Culture Of Migration In Asia
As we have seen, the global trend towards the feminization of migratory flows and the demand for female migrant workers are likely to increase. This phenomenon is more accentuated in Asia, where the proportion of women in the total number of migrant workers approaches 70%. Even though we cannot speak of a migratory system peculiar to the island nations of Asia, the migratory flows of these three countries present common characteristics that set them apart.
Within three decades, Indonesia, the Philippines and Sri Lanka have come to account for the majority of the world's migrant workers.
These nations have put in place policies favoring the emigration of workers in order to reduce poverty and unemployment and to increase foreign-currency remittances from migrant workers.
These two archipelagos and the island nation of Sri Lanka share high unemployment rates and chronic under-employment. The respective unemployment rates in Indonesia and the Philippines remain high, at 9.9% and 10.1%, and under-employment is considerable. As long as there remains a large inequality in wages between men and women, women will continue to comprise the majority of poor workers.
These countries, via their recruitment agencies, filled an opportune labour-market niche left vacant by others. The agencies could satisfy the growing demand for domestic personnel, nannies and home nurses. These positions had the advantage, for South East Asian migrant women, of not requiring any particular qualifications.
Encouraged by the state, a culture of migration has emerged in Indonesia, the Philippines and Sri Lanka, with the establishment of a solid "migration industry" and a network of agents and intermediaries. Advisors, recruiters, travel agents and trainers started to work together at all stages of the migration process to find as much labor as possible for the greatest number of foreign employers. The Philippines enjoys a great deal of experience in this field.
Another common characteristic of these three labor exporters resides in the creation of formal and informal migrant networks and channels through which important information is disseminated to future migrant workers. Maruja M. B. Asis (2005: 36) suggests that the numerical growth of Filipino domestic personnel in various countries partly reflects the "multiplying effect" of the informal migrant information network, with relatives and friends going to the same destination and working in the same niche markets.
This suggests the possibility of a direct job appointment without having to go through the employment agencies. Moreover, the conditions for migrant workers could progressively improve. The International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families provides a normative framework for transnational labor migration; it has been ratified by 34 countries, including the Philippines and Sri Lanka, and came into effect in 2003. These three countries are pulled between the desire to increase the export of their work-force and their duty to protect it. By sending so many female workers abroad, in conditions that could in some ways put them in danger, they have highlighted the global question of human rights within the general framework of migration and labor legislation. This positive outcome puts into perspective a criticism sometimes addressed to them: that by exporting their own work-force in such great numbers, they do not create local jobs and thereby avoid the internal problem of unemployment. Perhaps the years to come will show that returning migrants can fulfill the role of providing local employment thanks to their remittances, their ideas and their new skills.
Table 1. Estimated numbers of Asian (origin) workers in the Middle East

Nationality    Year   Number
Filipinos      2003   1,471,849
Indonesians    2000   425,000*
Sri Lankans    2003   900,000

* For Saudi Arabia only.
Source: Hugo (2005: 10).
The Gulf States sought Asian workers for construction and general laboring because they were thought to be more docile and cheaper.
The number of migrant workers in Saudi Arabia, the United Arab Emirates (UAE) and Kuwait rose from 1 million to 5 million between 1975 and 1990, and to almost 10 million in 2000. Table 1 above illustrates the attraction that the Gulf States continue to exert on Asian labourers.
After 1980, South-East Asian work migrations became more diversified. The Gulf States continued to absorb a large number of mostly South-East Asian laborers, notably women (Filipinos, Sri Lankans and Indonesians), to meet the increased demand for house-workers (maids and domestics) and the growth in service jobs.
By the middle of the 1980s, inter-Asian mobility developed rapidly. Japan, Taiwan, South Korea, Malaysia, Singapore, Hong Kong and Brunei became preferred migrant-worker destinations. During this period, migrant flows became more complex as numerous countries became simultaneous importers and exporters of labor (cf. Table 2).
Table 2. Estimated number of annual departures

Country of origin   Year   Annual departures   Estimated number of illegal migrants   Destinations
Indonesia           2002   480,393             + 50,000                               Malaysia, UAE
Philippines         2002   265,000             + 25,000                               Asia, OECD, UAE
Sri Lanka           2003   192,000             + 16,000                               UAE, Singapore

Source: ILO (2006).
Table 3. Proportion of female labor migrants

Origin        Year   Migrants under contract   Percentage of women
Philippines   2003   651,938                   72.5%
Indonesia     2003   293,674                   72.8%
Sri Lanka     2003   203,710                   65.3%

Source: Hugo (2005: 18).
Table 4. Female migration in Asia: women as a percentage of the total number of migrants

Region             1960    1970    1980    1990    2000
South Asia         46.3%   46.9%   45.9%   44.4%   44.4%
East and SE Asia   46.1%   47.6%   47.0%   48.5%   50.1%
Western Asia       45.2%   46.6%   47.2%   47.9%   48.3%

Source: Jolly and Narayanaswany (2003: 7).
Keiko Yamanaka and Nicola Piper (2005: 9) have calculated the number of women as a percentage of total migrant workers in the Asian labour-importing countries for the early 2000s (Singapore: 43.8%; Malaysia: 20.5%; Thailand: 43%; Hong Kong SAR: 95%; Taiwan: 56%; Korea: 35.1%).
One must emphasize that these statistics apply only to official work migrants who are legally permitted to work abroad. They do not take into account people who leave their own country to study, travel or marry and who subsequently work in the destination country, nor illegal entrants or people who work without a work permit. It is probable, therefore, that the number of migrant workers would be much higher if we included those who migrate clandestinely. It is estimated that at least a third of all labor migration in Asia is unauthorized.
Roman Croitor
email: [email protected]
Systematical position and paleoecology of the endemic deer Megaceroides algericus Lydekker, 1890 (Cervidae, Mammalia) from the late Pleistocene-early Holocene of North Africa

Keywords: Cranial morphology, Pachyostosis, Evolution, Ecomorphology, Taxonomy, Paleobiogeography
Introduction
Cervidae represents a successful family of ruminants that arose in the mid-Tertiary in the Eurasian tropics; however, because of its specific evolutionary and ecological strategy, this family, rich in species and ecological forms, failed, apart from a few exceptions, to colonize the African continent. According to Geist (1998), cervids, with their low forage habit specialization, are poor food competitors with other groups of herbivores, such as bovids and equids, in old species-rich ecosystems among coevolved ecological specialists. Ecologically opportunistic cervids are most successful in young ecosystems with a large amplitude of environmental fluctuations (Geist, 1998). The paleontological record and the modern fauna give only two examples of successful evolutionary survival of cervids on the African continent: Megaceroides algericus (Lydekker, 1890) and Cervus elaphus barbarus Bennet, 1833 (Gentry, 2010).
The origin and systematical position of the mysterious North African fossil deer Megaceroides algericus have been a subject of debate and contradiction in the scientific literature for more than a century. The extinct species M. algericus represents an exceptional zoogeographic instance of an endemic, extremely specialized form of deer that evolved on the African continent. The second African cervid, Cervus elaphus barbarus, is a primitive small-sized subspecies of red deer, which has survived until the present day and does not show unusual or particular evolutionary specializations, possibly with the exception of some traits of paedomorphosis (Geist, 1998). The isolated and very restricted North African distribution of M. algericus represents a very interesting, but still poorly understood, evolutionary and paleozoogeographic case. The present article proposes a taxonomic, morphological, morpho-functional, paleobiological, and phylogenetic study of the "thick-jawed deer" M. algericus that aims to contribute to a better understanding of this rare zoogeographic instance of an endemic North African cervid.
Historical background
The first description of the species belongs to Lydekker (1890). He described a maxilla with an upper tooth series comprising P4-M3 of a medium-sized deer from Hammam Meskoutin (Algeria) as Cervus algericus, noting a strongly developed cingulum, and assumed a possible phylogenetic relationship of the new species with the giant deer Megaloceros giganteus. Somewhat later, Pomel (1892) created another species, Cervus pachygenys, based on very pachyostotic and quite bizarre ("pathological", according to Pomel, 1892) lower mandibles from the Neolithic of Algeria. The sample described by Pomel (1892) also included an isolated upper molar without a lingual cingulum. Joleaud (1914, 1916) brought Lydekker's and Pomel's species together in synonymy and stressed the affinity between the African deer and the European giant deer, assuming for the African form an intermediate position between Megaloceros and Dama. Joleaud (1914, 1916) placed the North African deer in his new subgenus Megaceroides within the genus Cervus in order to underline its assumed archaic character and transitional systematic position. Arambourg (1932, 1938) elevated Megaceroides to the genus level and reported on some new important findings of cranial remains of Megaceroides algericus from Algeria (Guyotville) and Morocco (Ain Tit Mellil). Arambourg (1938) provided figures of those findings, but did not describe them in detail.
The studies of Italian researchers published in the second half of the 20th century gave a new impetus to the debates on the taxonomy and systematical position of the endemic African cervid. Azzaroli (1953) proposed a new evolutionary and systematic model of the genus Megaloceros, which included all giant and some smaller plesiometacarpal Old World cervids, including presumably descendant Late Pleistocene dwarfed forms from the Mediterranean islands and Megaceroides algericus. Azzaroli (1953) divided the genus Megaloceros Brookes, 1828 (the genus name Megaceros Owen, 1843 was applied in the cited work) into two informal evolutionary branches called the "giganteus group" and the "verticornis group", after the best known species representing each stock. Megaceroides algericus, according to Azzaroli (1953), is a terminal form of the "verticornis group" with signs of evolutionary "degeneration", such as a small body size, an extreme degree of hyperostosis, and a very marked shortening of the muzzle. Azzaroli (1953: page 48) recognized that the relationship of M. algericus with the European forms is not clear, and therefore avoided using the name Megaceroides in his evolutionary model of giant deer. Nonetheless, Azzaroli (1953) indicated some morphological characters of Megaceroides algericus, such as the flattened shape of the frontlet and traits of "stunting" in the antler morphology and overall size, which permitted him to include the North African cervid in his "verticornis group". Azzaroli (1953) noticed that M. algericus coincides in some features with Sinomegaceros pachyosteus (placed by Azzaroli, 1953, in the "giganteus group"): its smaller body size, the extreme degree of hyperostosis, and the shortening of the muzzle. Ambrosetti (1967) accepted Azzaroli's opinion and placed all "verticornis-like" deer from Europe, together with the Algerian endemic deer, in the subgenus Megaceros (Megaceroides). Later, Azzaroli and Mazza (1992) elevated Megaceroides to the generic rank. Azzaroli's (1953) suggestion of a morphological affinity between Megaceroides algericus and Sinomegaceros pachyosteus was later supported by Thomas and by Hadjouis (1990).
Finally, Azzaroli (1979, and in later works) assumed that Megaceroides algericus and Praemegaceros dawkinsi (= Megaceroides dawkinsi according to Azzaroli, 1979) resulted from similar evolutionary processes of dwarfing caused by geographical isolation in unfavorable conditions. The flat shape of the frontal bones, the similarly diminished body size, and the disproportionately thin antler beams with respect to the relatively large antler burrs and robust pedicles are regarded as stunting traits shared by M. algericus and P. dawkinsi (Azzaroli and Mazza, 1992). Kahlke (1965) proposed the old genus name Praemegaceros Portis, 1920 (substituting the genus name Orthogonoceros Kahlke, 1956, with type species Cervus verticornis Dawkins, 1872) for the European deer of the "verticornis group", thus disregarding Azzaroli's suggestion of a close phylogenetic relationship between Megaceroides algericus and the "verticornis group". Somewhat later, Radulesco and Samson (1967) published a detailed taxonomical study of Pleistocene large-sized deer and confirmed the validity of the genus name Praemegaceros for the "verticornis group", acting as first revisers. The endemic British deer Cervus dawkinsi Newton, 1882 was designated as the type species of the genus Praemegaceros (Radulesco and Samson, 1967). From that point, debates on the taxonomy of large-sized deer from the Pleistocene of the Western Palearctic became very confusing, since the disputed genera Praemegaceros and Megaceroides were typified by the poorly known endemic and morphologically odd species Praemegaceros dawkinsi and Megaceroides algericus. Vislobokova (2012a: page 687; 2012b: page 58; 2013: page 911) regards Cervus verticornis Dawkins, 1872 as the type species of Praemegaceros and grants to Kahlke (1965) the title of first reviser of the genus. Nonetheless, Vislobokova (2012b: page 61; 2013: page 913) also proposes Cervus dawkinsi as the type species of the nominotypical subgenus Praemegaceros (Praemegaceros).
It is necessary to keep in mind that Praemegaceros Portis, 1920 was originally based on Cervus dawkinsi, while Cervus verticornis, together with Cervus savini Dawkins, 1887 and Cervus falconeri Dawkins, 1868, was included in Praedama (Portis, 1920; Radulesco and Samson, 1967; Azzaroli, 1979) and, therefore, cannot be used as the type species of Praemegaceros. According to Article 44 of the ICZN, a genus and its nominotypical subgenus are denoted by the same type species. Hadjouis (1990) regarded Megaceroides as a subgenus of Megaceros Owen and proposed an improved neodiagnosis for Megaceroides and a synonymy list for M. algericus. In the opinion of Hadjouis (1990), the morphology of the dentition (first of all, the strongly developed cingulum on the upper molars) and the extremely strong mandibular pachyostosis approach M. algericus to the Asian large-sized deer Sinomegaceros pachyosteus, thus supporting once more Azzaroli's (1953) earlier observation. The missing posterior tine in the antlers of M. algericus was regarded by Hadjouis as one of the most important characters distinguishing the African endemic deer from the European giant deer. Nonetheless, the viewpoint of Hadjouis has been contested by Azzaroli and Mazza (1992), who put in question the taxonomical value of the cingulum in upper molars and of the mandibular pachyostosis, which, according to the Italian authors, are quite variable in large-sized deer. Abbazzi (2004) pointed out the resemblance of the neurocranium shape of Megaceroides algericus to Praemegaceros solilhacus (Robert, 1829) and P. dawkinsi; however, she did not discuss the phylogenetic position of M. algericus and, following the opinion of Hadjouis (1990), restricted Megaceroides to the type species. Gentry (2010) included Megaceroides in the synonymy of Megaloceros. Vislobokova (2009, 2012a, 2012b, 2013) included Megaceroides in the tribe Megacerini Viret, 1961, which contains a large number of continental and insular Late Miocene to Pleistocene cervids presumably closely related to the genera Megaloceros and Praemegaceros.
Vislobokova (2012b, 2013) suggests that Megaceroides is a monotypic taxon that includes a single peculiar cervid form closely related to the European Praemegaceros, and that it may possibly be included in the latter genus as a subgenus if its belonging to Praemegaceros is demonstrated. Vislobokova (2012b, 2013) also regards Azzaroli's informal verticornis group and giganteus group as the subtribes Praemegacerina and Megacerina within the tribe Megacerini. Since the phylogenetic relationships among the so-called "giant deer" (including also some smaller continental forms and insular dwarfs) are not well founded (Croitor, 2006), the new taxonomical units proposed by Vislobokova are most probably polyphyletic.
A taxonomical revision of the genus Praemegaceros and a preliminary account of the systematical position, morphology, and paleoecology of Megaceroides algericus were published in our previous reports (Croitor, 2004, 2006, 2014; Croitor and Bonifay, 2001; Croitor and Kostopoulos, 2004; Croitor et al., 2006). We pointed out that the morphology of the dentition (the presence of the cingulum in upper molars, the relatively short lower premolar series, and the brachyodonty) approaches the Algerian deer to Megaloceros giganteus from the moderate latitudes of Central and Western Eurasia (Croitor and Bonifay, 2001). Therefore, we joined the opinion of Radulesco and Samson (1967) on the validity of the genus name Praemegaceros for the "verticornis group". Later, a direct phyletic relationship between the dwarfed Middle Pleistocene Praemegaceros dawkinsi and the larger Early Pleistocene Praemegaceros obscurus was suggested (Croitor, 2006). This point of view is supported, inter alia, by the presence of vestigial basal antler tines in P. dawkinsi, which are homologous with the long and strong basal tines of P. obscurus.
Regarding the size and proportions of the braincase from Ain Tit Mellil discovered by Arambourg (1938) and the pachyostotic mandibles from various North African sites, I pointed out the disproportion between the relatively broad and large braincase and the short and weak anterior part of the mandibles, presuming a mixed character of the material ascribed to Megaceroides algericus, and therefore proposed to exclude the African material from the taxonomical debates on the European large-sized cervid forms (Croitor, 2004). Later, I had the opportunity to study the complete skull of Megaceroides algericus from Guyotville (figured by Arambourg, 1932), which represents a poorly understood and aberrant morphological specialization for Cervidae (Croitor, 2006). In my previous publication, only a general description and some measurements of the cranial and mandibular material of M. algericus were given; however, even that brief overview provides arguments against its use as a type species for the giant and dwarfed deer arbitrarily placed in the "verticornis group" and now included in the genus Praemegaceros (Croitor, 2006, 2014).
Despite the fine available cranial and dental material, the antlers and postcranial bones of Megaceroides algericus are little known. Pomel (1892) described and figured a damaged, but obviously very robust and relatively short, cervid radius from Berrouaghia (Algeria), characterized by a comparatively broad bone shaft (the mid-shaft measurement amounts to 40 mm, exceeding the analogous measurements of Megaloceros giganteus), and two fragments of antler tines slightly compressed from the sides (latero-medially). Hadjouis (1990) described several shed antlers of Megaceroides algericus from Phacocheres (Algeria), with the distal part of the palmation and the anterior (middle) tine missing, as well as a fragment of a narrow distal palmation. These specific although very incomplete data on the antler and postcranial morphology of Megaceroides algericus suggest peculiar eco-morphological adaptations, but contribute little to the understanding of the paleoecology and evolution of this species.
Nonetheless, despite the long-lasting debates on its systematical position and phylogenetic relationships, even the fine available cranial material of Megaceroides algericus is still rather superficially described. In the present paper, a detailed morphological description of the cranial remains and dentition of Megaceroides algericus is provided, together with a discussion of its paleoecology and phylogenetic relationships.
Material and methods
The described fossil material comes from the old historical collections stored in the National Museum of Natural History in Paris. All the fossil remains were yielded by Paleolithic archaeological sites; however, their exact stratigraphic provenance and absolute age remain unclear. Nonetheless, a detailed morphological description of the material included in this study has never been published before; this lack represents a significant information gap that impedes the advance of our knowledge of the taxonomy, systematics and phylogeny of Eurasian large-sized and endemic Mediterranean deer. The studied material (Tab. 1) comes from the following sites (Fig. 1): Guyotville, Algeria. The site was regarded (Arambourg, 1932) as a Middle Paleolithic assemblage due to the presence of Rhinoceros mercki and Hippopotamus amphibius. The better preserved antlered skull (the distal portions of the antlers are not preserved; no collection number) from Guyotville was excavated and briefly described by Arambourg (1932: fig. 3) and has been mentioned by Azzaroli and Mazza (1992) with regard to its forehead shape. Hadjouis (1990) briefly quotes some cranial characters based on the specimen from Guyotville and published measurements of its dentition. Only an approximate condylobasal length of this skull could be measured (Croitor, 2006), since its occipital condyles and foramen magnum were destroyed, apparently by ancient hunters who extracted the brain tissue from the braincase. The sample from Guyotville also includes two hemimandibles (Nr. 336, Nr. 337, "Collection of Arambourg"), which remained unpublished.
Ain Tit Mellil (= Tit Mellil: Vaufrey, 1955), Morocco. The exact stratigraphic origin of the fossil remains is unknown, and their age was generally assumed to be "the beginning of the Würm glaciation" (Vaufrey, 1955). The braincase MOC148 from Ain Tit Mellil (figured in Arambourg, 1938: pl. II, fig. 2) was briefly discussed by Abbazzi (2004: fig. 6) and Vislobokova (2013: fig. 56, a, b).
Grotte de la Madeleine (= Taza 1: Fernandez et al., 2015), Algeria. The Paleolithic site Taza 1 includes three layers dated from >39,000 to 13,800 ± 130 y BP (uncalibrated: Medig et al.). Therefore, a Late Pleistocene age is assumed for the historical collection of fossils yielded by this site (Fernandez et al., 2015). The studied material includes two fragmented mandibles: a well preserved right hemimandible (figured in Croitor, 2006: fig. 2 A-B; no collection number) and another specimen with a malformation in the area of the processus angularis (no collection number). The museum label provides the following information: "Cervus algericus - figuré: Pl. IV, Fig. 4"; however, this label does not contain any bibliographic information. Filfila, Algeria. A Würmian age was assumed for the fauna from Filfila (Ginsburg). The sample of Megaceroides algericus from Filfila (Thomas) includes a fragment of the right upper jaw FIL169 with M2 and M3 and three hemimandibles (FIL166, FIL167, and the juvenile FIL160). Only the better preserved specimen FIL166 was figured by Thomas and by Abbazzi (2004: fig. 7).
The specific character of the fossil material (fragmentary skeletal remains, a limited number of fossils) restricted the choice of methodological approach. The safest assessment of the cranial and dental morphological characters of Megaceroides was possible only by involving a few "typical" evolutionary and ecological cervid forms, such as Dama dama (apparently one of the species closest to Megaceroides, which maintains a generalized cervid cranial morphology), Megaloceros giganteus (a giant species characterized, like Megaceroides algericus, by pachyostosis), Muntiacus muntjak (a tropical forest dweller, which possibly maintains the cranial morphology and proportions basic for Cervinae), and Hydropotes inermis (which belongs to the subfamily Capreolinae, but represents an example, rare among cervids, of ecological specialization connected to the periaquatic ecological niche). The comparative craniological material includes a series of skulls and mandibles of the modern fallow deer Dama dama, Muntiacus muntjak, and Hydropotes inermis stored in the osteological collection of the Zoological Museum "La Specola" (ZMS, Florence, Italy) and in the Natural History Museum of London (NHML), red deer Cervus elaphus stored in the zoological collection of the National Museum of Natural History in Paris (NMNH), and Megaloceros giganteus from various Late Pleistocene sites of Ireland (NHML). The main measurements of the comparative material are presented in Tables 2 and 3. Statistical processing of the data was not possible because of the restricted number of fossils, but also because of the quality and mixed character of the comparative osteological material at my disposal: the cranial material of some species (M. muntjak and H. inermis) was not numerous and, besides, many specimens were obtained from parks and zoological gardens and did not represent natural populations, so statistical processing of such data would have been meaningless. Therefore, a single male skull of each species was selected for the comparative study.
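The body mass estimation described in the following paragraph is based on allometric regressions of dental variables after Janis (1990), of the general form log10(body mass) = a x log10(dental measurement) + b. The sketch below is a minimal illustration of this form of calculation only: the slope, intercept and input measurement are placeholder values chosen for demonstration, not the coefficients published by Janis (1990), which differ by tooth position and ungulate group.

```python
import math

def estimate_body_mass_kg(tooth_length_mm: float,
                          slope: float = 2.97,
                          intercept: float = -2.0) -> float:
    """Allometric body-mass estimate from a dental measurement.

    Implements the generic regression form used in Janis (1990)-style
    analyses: log10(mass in kg) = slope * log10(measurement in mm) +
    intercept. The default slope and intercept here are placeholders
    for illustration; they are not the published coefficients.
    """
    log10_mass = slope * math.log10(tooth_length_mm) + intercept
    return 10.0 ** log10_mass

# Hypothetical example: a lower first molar crown length of 20 mm.
print(f"Estimated body mass: {estimate_body_mass_kg(20.0):.0f} kg")
```

In practice, the appropriate published coefficients are selected according to the tooth measured and the taxonomic group under study, and the resulting estimate is treated as approximate.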
The lengths of the dental series are taken at the crown bases or at the alveoli. The length of a tooth crown is taken as the maximal measurable value; in upper cheek teeth it is measured on the labial side of the grinding surface. The breadth of a tooth crown is measured at the crown base. The terminology of dental morphology is adapted from Heintz (1971). The applied methodology of cranial measurements is adapted from Vislobokova (1990). The applied terminology of antler tines follows the homology of tines according to Azzaroli and Mazza (1992) and Croitor (2006). The body mass estimation used here is based on dental variables according to Janis (1990). Abbreviations used in the text: PP, premolar series; MM, molar series; L, length; H, height; D, width/breadth.

Holotype: the left maxilla with P3-M3 (Lydekker, 1890; figured on page 602); P2 is completely destroyed, while M2 and M3 are damaged. A cast is stored at the Natural History Museum of London (Lydekker, 1890: p. 604), collection number M10647 (Gentry, 2010). The length of the upper molar series M1-M3 amounts to 58.5 mm (measured from the figure). The location of the original fossil specimen is unknown. The holotype is characterized by the strong development of a basal enamel cingulum in the upper molars.
Type locality and horizon: Late Pleistocene from Hammam Meskoutin, Guelma (Algeria).
Occurrence: Late Pleistocene to Holocene (ca. 24,000 to 6641-6009 yr BP; Fernandez et al., 2015).

Original diagnosis (Lydekker, 1890: page 603): Somewhat smaller than Cervus cashmirianus, with brachyodont molars, having a very large inner cingulum, and the external surface complicated by the excessive development and reflection of the lateral ridges of the outer crescents so as to form distinct pockets on this surface at the base of the ridges in question.
Emended diagnosis (this work):
A cervid species of medium size, slightly larger than the modern fallow deer and smaller than the red deer. The skull is very broad: the skull breadth attains more than 60% of the condylobasal length. The splanchnocranium is relatively short: the length measured from the anterior edge of the orbits to the prosthion is shorter than half of the condylobasal length. The skull bones, with the exception of the zygomatic arches, are very thick. The braincase is moderately flexed: the angle between the parietal bones and the face profile amounts to ca. 135°; the parietal bones are flat. The pedicles are moderately long (their length approximately equals their transversal diameter) and deflected sideward and somewhat backward. The frontal bones are flat and very broad. The orbits are comparatively large; their anterior edges lie at a level between M2 and M1. The ethmoidal vacuities are completely closed. Preorbital fossae are not developed. The basioccipitale is broad and bell-shaped. Upper canines are missing. The cingulum in the upper molars is variable: it may be well developed or almost completely missing. The lower fourth premolar (P4) is molarized: its metaconid is fused with the paraconid. The mandible is very pachyostotic, with a low anterior part. The transversal section of the anterior portion of the hemimandible is circular. The antlers terminate in a palmation. The proximal part of the antler beam has a circular transversal section and lacks basal tines. The tine inserted on the anterior side of the beam (homologous with the middle tine of Megaloceros giganteus) is situated at a distance from the burr of about twice the diameter of the antler base.
Description:
SKULL: The cranium from Guyotville belongs to a rather aged individual with completely obliterated sutures and a deeply worn upper dentition (Figs. 2, 3). The area of the left eye socket is damaged. The basioccipital part and the anterior part of the premaxillary bones are destroyed, so the condylobasal length and some other measurements of the skull are given with approximation (Tab. 4). The overall shape of the cranium is atypical: the relatively short and very broad skull of Megaceroides algericus is unique among fossil and living cervids (Fig. 2). Interestingly enough, the length proportions of the cranium are modified insignificantly: the eye sockets are in a normal position for a deer of such a size, and although the relative length of the facial part in front of the eye sockets is the shortest among the deer involved in the comparison (even somewhat shorter than in the insular dwarf Praemegaceros cazioti), the difference is not significant (Fig. 4), and the length proportions may be regarded as normal for a deer of this size within the subfamily Cervinae. The position of the bregma between the posterior edges of the pedicles, and the position of the nasion slightly caudal with respect to the anterior edges of the eye sockets, are similar to the morphological condition found in Megaloceros giganteus. The orbito-frontal portion of the cranium is rather short, as in Dama and Megaloceros: the anterior edge of the orbit is situated above the M2-M3 border. The eye sockets are relatively large, as in Dama.
The relative length of the upper tooth row with respect to the basal length of the skull amounts to 29.5%, fairly close to the ratio found in Megaloceros, Axis and Dama. Nonetheless, the anteriorly shifted position of the upper cheek tooth row represents a specific character of M. algericus (Fig. 5). The anterior displacement of the upper tooth row in Megaceroides algericus apparently resulted from the strong reduction of the predental length of the skull (the distance between P2 and the prosthion). The parietal bones are flat. The face profile is straight. The braincase of Megaceroides algericus may be considered rather flexed: the angle between the parietal plane and the facial profile amounts to 135°, an intermediate condition between Dama and Megaloceros (Fig. 6).
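The cranial proportions compared above reduce to simple percentage indices of linear measurements. A minimal sketch of this arithmetic, with hypothetical input values chosen only so that the resulting indices match those cited in the text:

```python
def index_percent(measurement_mm: float, condylobasal_mm: float) -> float:
    """Express a skull measurement as a percentage of condylobasal length."""
    return 100.0 * measurement_mm / condylobasal_mm

# Hypothetical values (mm), for illustration only.
condylobasal_length = 330.0
upper_tooth_row = 97.4    # yields ~29.5%, the ratio cited above
skull_breadth = 200.0     # yields >60%, as in the emended diagnosis

print(f"Tooth row / condylobasal length: {index_percent(upper_tooth_row, condylobasal_length):.1f}%")
print(f"Skull breadth / condylobasal length: {index_percent(skull_breadth, condylobasal_length):.1f}%")
```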
The cranial bones are very thick, recalling the cranial hyperostosis described in Megaloceros. However, unlike in Megaloceros, the vomer apparently is not affected by hyperossification (Fig. 3). The zygomatic arches are markedly thin and feeble, contrasting with the overall robustness of the skull.
The pedicles are rather long, set obliquely on the skull and somewhat deflected toward the rear and the sides. The pedicles are slightly compressed in the antero-posterior direction; however, this compression is not as strong as in the advanced species of Praemegaceros (P. verticornis, P. dawkinsi, and P. solilhacus). The frontal bones are very broad. The area for the attachment of the musculus masseter on the upper maxilla is situated above the anterior edge of M1 and the posterior edge of P4. The predental portion of the skull (the anterior parts of the maxillae and praemaxillary bones) is very broad and relatively short.
The braincase MOC148 from Ain Tit Mellil is similar in morphology and proportions to the previous specimen, but is characterized by a slightly smaller size and by a more convex profile of the forehead (Figs. 7A, 8). The basioccipital bone in MOC148 is broad and bell-shaped (Fig. 7B), with a transversal extension in the area of the pharyngeal tubercles (the tubercles for the attachment of the musculus rectus capitis ventralis major). The breadth of the basioccipitale at the tubercles amounts to 52.1 mm. The preserved left bulla tympani is rather large, rounded, projecting outside (as in Dama), compressed in the medio-lateral direction, with the following dimensions: 35.0 × 20.1 mm. The anterior bony spine of the bulla tympani is not present in Megaceroides, unlike in some Cervinae (Cervus, Rucervus). The foramina ovale are comparatively small, with an irregular shape approaching a triangular outline. The dimensions of the foramina ovale are the following: 6.6 × 6.0 mm (sin); 7.0 × 5.4 mm (dx). The nasal bones extend behind the line connecting the anterior edges of the orbits (Fig. 9). I did not have an opportunity to make a direct comparison of the crania of Megaceroides algericus and Sinomegaceros pachyosteus from China; nonetheless, it is useful to compare at least the general skull shape of Megaceroides algericus with the skull of Sinomegaceros from Choukoutien figured by Young (1932). It seems that the skull of Sinomegaceros pachyosteus is broadest at the level of the orbits, recalling Megaceroides algericus, though its broadening is not as extreme as in the African deer. One can notice that the skull of Sinomegaceros pachyosteus is broadest at the posterior edge of the orbits, while the skull of Megaceroides algericus is broadest at the anterior edge of the orbits. This difference is conditioned, apparently, by the orientation of the orbits, which are more forward-oriented in Sinomegaceros pachyosteus. One can assume that this difference in orbit orientation represents an adaptation to a forested environment in Sinomegaceros pachyosteus; the sideward orientation of the orbits in Megaceroides algericus should therefore be regarded as a typical open-landscape adaptation of hoofed mammals, allowing the widest possible field of view in order to escape approaching predators. M. algericus is characterized by a somewhat more flexed braincase than S. pachyosteus (the angle between the parietal plane and the face profile line, measured from the specimen figured by Young, 1932, amounts to ca. 145°), and both cervids are more advanced in this character than M. giganteus, which is characterized by a weak flexion of the braincase (see Vislobokova, 1990, for the progressive change of this character in Cervidae). S. pachyosteus shows a different position of the orbit with respect to the upper tooth row: according to the figure of Young (1932), the anterior edge of the orbit is situated above the anterior part of M2 (not above the M2-M3 border, as in Megaceroides algericus). Possibly, this difference is caused by the relatively diminished size of the teeth in M. algericus and their oral "migration".

UPPER TEETH: The anterior parts of the maxillae are preserved and show that there were no canines (their alveoli are not present) in the specimen from Guyotville. The cheek teeth are relatively small (Tab. 5).
The relative size of the upper third molar is visibly reduced; therefore M 2 is noticeably larger than M 3 . Only a moderately developed entostyle is present on the lingual side of the upper molars. The entostyle of the upper molars is flattened and well expressed in the studied additional material. It may extend and partially fringe the lingual base of the tooth crown; however, a continuous large (antero-linguo-posterior) cingulum is not developed. There is no hypoconal spur, and other enamel folds are absent on the upper molars. The lingual side of P 4 is not split into protocone and hypocone, and is not even grooved. The lingual side of P 4 is bordered with a weak cingulum-like enamel fold.
The fragment of a maxilla with M 2 -M 3 FIL-169 belongs to an older individual, as indicated by the advanced stage of tooth crown wear (Fig. 10). The angle between the labial and lingual walls of the upper molars (Fig. 10) amounts to 37º, as in Dama dama. The hypoconal fold is present only in M 3 . Two small enamel folds are found on the external side of the anterior hypoconal wing in M 2 .
It is necessary to indicate that the additional material on the upper dentition described in the present paper does not fully correspond to the morphology and measurements of the holotype of Megaceroides algericus. Unlike the holotype from Hammam Meskoutin, the additional material of M. algericus represents a deer form with somewhat smaller upper cheek teeth (the length of the M 1 -M 3 tooth series amounts to 54.1 mm in the specimen from Guyotville against 58.5 mm in the holotype of M. algericus); the cingulum in the upper molars of the additional material is not developed, while M 3 is significantly reduced in size (this specific size reduction is not observable in the specimen from Hammam Meskoutin). It is not yet clear whether we are observing broad individual variation in dental morphology or a true evolutionary process (see discussion).
LOWER MANDIBLE: The body of the lower mandible is very low and thick (Fig. 11). The symphysal portion of the mandible is high (Fig. 12, Tab. 6). The diastemal part of the mandible is relatively very short. The anterior portion of the mandible from M 1 to the symphysis has a cylindrical shape. Behind M 1 , the mandible becomes higher and more robust. The maximal thickness of the mandible is behind M 3 , in the area of the musculus masseter insertion. The available fossil material does not display the clear sexual dimorphism of mandibular pachyostosis observed in M. giganteus. The juvenile mandible FIL 160 is already pachyostotic, although it is less thick than the mature specimens. The lower side of the horizontal part of the mandible is convex. The processus angularis is moderately expressed. The ascending part of the mandible slopes backward and forms an angle of 60º with the horizontal body of the mandible. The posterior side of the ascending ramus is concave. The processus coronoideus is short and cone-shaped. The articular condyle is cylinder-shaped. The distance between the cranio-mandibular articulation condyle and M 3 is relatively large compared to the majority of deer involved in the comparative study. This morphological trait is in accordance with the forward displacement of the upper tooth rows. The lower tooth row is displaced orally due to the very short diastema and the obliquely set ascending portion of the mandible (Fig. 13).
LOWER TEETH: The crowns of the lower cheek teeth are relatively small and rather short and broad (Tab. 7). At the initial stage of wear, the protoconid and hypoconid of P 4 may not be completely fused (Fig. 11B); however, the fourth premolar usually shows a complete molarization, with full conjunction of the protoconid and hypoconid, at a more advanced stage of wear. The size of the crown of P 2 is much reduced, so it remains unworn even in a deeply worn dentition, as may be seen in the specimen FIL166 (Fig. 12).
The specific proportions of the lower tooth row are characterized by the relatively reduced size of M 3 compared to the larger and broader M 2 and M 1 . The premolar series is comparatively short; however, broad variation is observed here. The premolar/molar length ratio amounts to 60.5% in the mandible FIL166, while the same tooth series ratio in the two specimens from Phacocheres amounts to 45.0% and 52.9% (Hadjouis, 1990).
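For clarity, the premolar/molar length ratio used here and in the comparative discussion below can be stated explicitly (a minimal formal restatement; the notation is mine, not part of the original descriptions):

$$\mathrm{PP/MM} \;=\; \frac{L(\mathrm{P}_{2}\text{-}\mathrm{P}_{4})}{L(\mathrm{M}_{1}\text{-}\mathrm{M}_{3})} \times 100\%$$

where $L(\mathrm{P}_{2}\text{-}\mathrm{P}_{4})$ and $L(\mathrm{M}_{1}\text{-}\mathrm{M}_{3})$ are the crown lengths of the premolar and molar series of one hemimandible. The value of 60.5% for FIL166 thus means that its premolar series is roughly three fifths as long as its molar series; according to the evolutionary trends in Cervidae cited further below, lower values of this index correspond to a relatively shorter, evolutionarily more advanced premolar row.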
DENTAL WEAR: The dental wear in Megaceroides algericus reveals interesting details concerning some previously overlooked anatomical and paleoecological peculiarities of this species. The entire lower tooth row is worn evenly (with the exception of P 2 , which is not worn) in all studied specimens of M. algericus, unlike in the majority of deer, which normally show a more advanced wear of M 1 . A statistical processing of mesowear traits is not possible because of the poorly preserved dental material; however, some of the observations are interesting and worth mentioning. The character of the tooth row wear varies, suggesting a rather broad range of food habits in Megaceroides algericus. Generally, the dental cusps are very low and rounded; nonetheless, the wear surface of the enamel in the majority of specimens available for observation is finely polished, suggesting that dental attrition predominated. However, the grinding surface of the mandible FIL166 is striated by transverse traces of wear caused by a coarse forage material. The direction of the wear traces forms an angle of 60º with the tooth row axis. This observation suggests a comparatively wide angle formed by the hemimandibles, which apparently attained 60º (Fig. 14; a short geometric sketch of this inference is given after the antler description below). Such a broad angle between the hemimandibles is in accordance with the particularly broad skull.

ANTLERS: The complete antlers of Megaceroides algericus are unknown. The cranium from Guyotville preserved only the proximal parts of the antlers. The left antler is broken just a few centimeters above the burr, while the right antler is broken at 20 cm above the burr. The antlers are normally developed (the beam diameter is not disproportionally thin with respect to the burr size and the diameter of the pedicle) and do not show any sign of the "degeneration" reported by [START_REF] Azzaroli | Critical Remarks on some Giant Deer (genus Megaceros Owen) from the Pleistocene of Europe[END_REF]. The proximal portion of the right antler beam is straight and directed sideward, backward and slightly upward. The antler beam is cylinder-shaped and somewhat more robust than the supporting pedicle. The antero-posterior diameter of the right antler beam above the burr amounts to 53.0 mm; the latero-medial diameter amounts to 56.0 mm. The same measurements of the left antler amount to 53.3 mm and 55.5 mm respectively. The basal tine is not present in Megaceroides algericus. The next, middle (or anterior) tine is inserted on the anterior side of the beam. The cross-section of the basal part of the middle tine is ellipse-shaped (its maximal diameter amounts to 40.6 mm, its minimal diameter to 22.0 mm). The distance between the antler burr and the base of the middle tine amounts to 96 mm. The antero-posterior diameter of the antler beam between the burr and the middle tine amounts to 42.4 mm. The height of the middle tine ramification is 140 mm. The antler becomes flattened in the area of the middle tine insertion, and the distal portion of the antler situated above it extends into a palmation: the maximal diameter of the antler above the middle tine (where the antler is broken) amounts to 59.3 mm; the minimal diameter at the same level is 41.3 mm.
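The hemimandible angle mentioned under DENTAL WEAR can be reconstructed with a minimal geometric sketch, assuming a purely transverse power stroke, i.e. a jaw movement directed perpendicular to the median plane (this simplifying assumption is mine and only approximates ruminant mastication). If each tooth row deviates from the median plane by half the angle θ between the hemimandibles, the wear striations cross the tooth row axis at an angle α = 90° − θ/2, so that

$$\theta = 180^{\circ} - 2\alpha .$$

With the measured α = 60°, this gives θ = 60°, in agreement with the angle between the hemimandibles inferred above.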
Discussion
Evolutionary significance of pachyostosis
The extreme cranial pachyostosis of Megaceroides algericus requires a special discussion here. There are few examples of pachyostosis among mammals. Most of the cases are known in ruminants, and the cranial and mandibular pachyostosis in cervids is one of them [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF]. The pachyostosis of limb bones recorded in the Lower Miocene giraffoid Lorancameryx pachyostoticus from Spain represents another phenomenon of bone thickening recorded in ruminants [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF]. Although the character of the pachyostosis in Lorancameryx differs histologically and physiologically from the cranial bone thickening in cervids, [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF] regard both cases as different manifestations of a similar physiological and evolutionary process. [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF] noticed that the limb bone pachyostosis in Lorancameryx occurred in the same geological epoch in which several groups of ruminants evolved horns and horn-like cranial appendages. Therefore, according to [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF], the pachyostosis and the bony cranial appendages represented a similar physiological response to certain environmental changes and acted as "bone sinks" where excess tissue was stored during the vegetation growth seasons rich in nutrition. According to [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF], the pachyostosis of cervids could also represent a similar secondary metabolic response to exogenic factors, primarily a marked seasonality.
The inert bone tissue was deposited in Lorancameryx on the limb bone diaphyses (especially on the radius and ulna) every year, starting from the subadult age [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF], while in Megaloceros giganteus the mandibular pachyostosis developed through the deposition of additional lamellar bone tissue during early adult age, and no visible changes in the state of pachyostosis were recorded during the subsequent adult life [START_REF] Lister | The evolution of the giant deer, Megaloceros giganteus (Blumenbach)[END_REF]. According to [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF], the tissue of the pachyostotic bone in Megaceroides algericus and Sinomegaceros pachyosteus shows an annual cyclic rhythm, as in the case of Lorancameryx.
Therefore, it seems that the pachyostosis of Megaceroides algericus has a physiological and ontogenetic background different from that of the pachyostosis of Megaloceros giganteus.
Several authors repeatedly reported the development of mandibular pachyostosis in
Praemegaceros and some other large-sized cervid forms [START_REF] Kahlke | On the evolution of pachyostosis in jaw-bones of Choukoutien giant-deer Megaceros pachyosteus (Young)[END_REF][START_REF] Kahlke | Die Cerviden-Reste aus dem Tonen von Voigtstedt in Thüringen[END_REF][START_REF] Azzaroli | Critical Remarks on some Giant Deer (genus Megaceros Owen) from the Pleistocene of Europe[END_REF][START_REF] Azzaroli | Forest Bed elks and giant deer revisited[END_REF][START_REF] Vislobokova | The fossil deer of Eurasia[END_REF][START_REF] Vislobokova | A new species of Megacerini (Cervidae, Artiodactyla) from the late miocene Taralyk-Cher, Tuva (Russia), and remarks on the relationships of the group[END_REF] (Vislobokova, 2012a, 2012b) [START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF], which, according to the mentioned authors, represents a specific side effect of gigantism in cervids and is regarded as an important taxonomic character distinguishing the phylogenetic branch of giant deer from other phylogenetic branches within the subfamily Cervinae. However, a simple scatter diagram of mandible proportions shows that the mandible shape in large-sized Praemegaceros is very similar to the morphological condition found in Eucladoceros and Dama [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF]. Van der [START_REF] Made | Phylogeny of the giant deer with palmate brow tines Megaloceros from west and Sinomegaceros from east Eurasia[END_REF] found signs of mandibular pachyostosis in a wide variety of cervids, assumed that this specific character evolved among cervids several times in parallel, and denied its plesiomorphic significance for the phylogenetic group of giant deer. Therefore, the sporadic occurrence of mandibular pachyostosis in various cervid lineages cannot be used as a meaningful taxonomic character at the tribe level. A well-expressed cranial and mandibular pachyostosis is recorded only in very few cervid genera, such as Sinomegaceros from Eastern Asia, Megaloceros from Central and Western Eurasia, and Megaceroides from North Africa. [START_REF] Vislobokova | A new species of Megacerini (Cervidae, Artiodactyla) from the late miocene Taralyk-Cher, Tuva (Russia), and remarks on the relationships of the group[END_REF] also reports mandibular thickening in Late Miocene medium-sized forms of the genus Praesinomegaceros from South Siberia. [START_REF] Kahlke | On the evolution of pachyostosis in jaw-bones of Choukoutien giant-deer Megaceros pachyosteus (Young)[END_REF] studied the variation of the cross-sections of mandibles in Sinomegaceros pachyosteus from Choukoutien and suggested that the mandible thickening in this deer is a dimorphic character. [START_REF] Kahlke | On the evolution of pachyostosis in jaw-bones of Choukoutien giant-deer Megaceros pachyosteus (Young)[END_REF] also assumed that the increased mandible thickening in S. pachyosteus was a gradual evolutionary process. Nonetheless, the mandibular pachyostosis in Sinomegaceros evolved much earlier, in another, much smaller form with small antlers. [START_REF] Tleuberdina | Late Neogene fauna of South-East of Kazakhstan[END_REF] reported a rather small-sized Late Neogene species, Sinomegaceros robustus, from the South-East of Kazakhstan (its estimated body mass, based on dental measurements, did not exceed 50 kg; the general form of such dental body-mass estimates is sketched in a note below). The roe-deer sized S.
robustus is characterized by a primitive unmolarized P 4 , small antlers with distal palmations (the burr diameters amount to 18.0 and 16.2 mm), and a pachyostotic lower mandible with an almost circular cross-section [START_REF] Tleuberdina | Late Neogene fauna of South-East of Kazakhstan[END_REF]. According to [START_REF] Shikama | Megacerid remains from Gunma Prefecture, Japan[END_REF], a certain degree of pachyostosis is also recorded in Sinomegaceros yabei. Three mandible specimens of S. yabei, two of which certainly belong to males, are characterized by a rather moderate degree of pachyostosis, similar to the specimens of M. giganteus tentatively ascribed to females by [START_REF] Lister | The evolution of the giant deer, Megaloceros giganteus (Blumenbach)[END_REF] and [START_REF] Croitor | Giant deer Megaloceros giganteus Blumenbach, 1799 (Cervidae, Mammalia) from Palaeolithic of Eastern Europe[END_REF]. [START_REF] Lister | The evolution of the giant deer, Megaloceros giganteus (Blumenbach)[END_REF] supposed that pachyostosis represents an adaptation that enhanced the skeletal calcium store, related to the large size of the antlers. [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF] supported this point of view, noticing that the enhanced mineral storage in the head skeleton is an important physiological adaptation permitting the fast growth of large antlers during the relatively short vegetation season. Perhaps the pachyostosis in Megaloceros giganteus was physiologically connected with such morphological characters specific to the giant deer as an ossified vomer, the complete and early obliteration of the cranial sutures, the diminished size of the foramen ovale, and the development of additional enamel folds (cingulum) at the base of the molars in some of the evolutionarily most advanced populations of giant deer [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF]. Sanchez-Villagra (2010) reported an exceptionally high number of cranial suture fusions for Cervidae in Megaloceros giganteus (20 fused cranial sutures in the giant deer against 10 in the modern elk Alces alces); nonetheless, he excluded a simple mechanical adaptation of the advanced bone suture fusion to the large and heavy antlers. The high number of suture fusions in the giant deer contrasts with the general trend of ruminants toward a diminished number of fused cranial sutures, which is not correlated with body size and apparently represents a specific biomechanical adaptation to rumination [START_REF] Bärmann | A Phylogenetic Study of Late Growth Events in a Mammalian Evolutionary Radiation -The Cranial Sutures of Terrestrial Artiodactyl Mammals[END_REF]. Therefore, one can assume that the high number of cranial suture fusions in Megaloceros giganteus represents another specific consequence of pachyostosis. However, Bärmann and Sanchez-Villagra (2011) report a high number of cranial suture fusions also for some other ruminant genera (Okapia, Tragelaphus, Kobus, and Antilocapra), seeking the explanation in biomechanical factors. Van der [START_REF] Made | Phylogeny of the giant deer with palmate brow tines Megaloceros from west and Sinomegaceros from east Eurasia[END_REF] remarked that the function of temporary storage of minerals should be followed by signs of resorption in pachyostotic mandibles.
Actually, even non-pachyostotic bones represent a dynamic system constantly undergoing resorption and deposition of minerals, and no particular "scars" of resorption can be seen on bone tissue, apart from cases of pathology [START_REF] Alberts | The molecular biology of the cell[END_REF]. Vislobokova (2009, 2012b, 2013) regards the cranial pachyostosis of Megaloceros giganteus as a mechanical adaptation (comparable to the cranial pneumatization in Rangifer and Bison) correlated with large and heavy antlers, and reports a comparatively weak development of cranial pachyostosis in females of the giant deer. This hypothesis is questionable for several reasons. The lower mandible is a suspended structure that is not actually exposed to the weight load of the antlers and cannot have any function of weight support in the skull. It is not clear in this case what biomechanical advantage a pachyostotic mandible could bring, since the low-crowned and relatively small cheek teeth, the low corpus mandibulae, and the relatively small area of insertion of the musculus masseter in Megaloceros giganteus and Megaceroides algericus suggest that their thick lower mandibles cannot represent any particular advantage of mechanical reinforcement [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF]. Besides, pachyostosis is also recorded in Sinomegaceros robustus, a small-sized cervid with tiny antlers.
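A side note on the body mass estimates from dental measurements cited above (such as the bound of about 50 kg for S. robustus): these are conventionally obtained from log-log allometric regressions calibrated on living ruminants. A generic sketch of such a regression is given below; the coefficients a and b are taxon-specific fitted values and are deliberately left symbolic here, since the actual published coefficients are not reproduced in this paper:

$$\log_{10} M = a + b \,\log_{10} x ,$$

where M is the body mass (in kg) and x is a dental measurement, for example the occlusal length of the lower first molar (in mm). Because the relation is allometric, even small uncertainties in x propagate multiplicatively into M, which is one reason why such estimates are best treated as approximate bounds rather than exact values.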
It seems that pachyostosis and accessory cranial bone structures in ruminants (here we can mention horns and horn-like cranial appendages) changed their functional significance during the large-scale evolutionary process [START_REF] Janis | Evolution of horns in ungulates: ecology and palaeoecology[END_REF]. Like cervid antlers, pachyostosis could originally have served as a "bone sink" where excess bone tissue was stored [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF]. It is interesting that all known pachyostotic cervids belong to the subfamily Cervinae and evolved and lived in the most peripheral and extreme parts of the area of distribution of the subfamily: this is the case of the periglacial, cursorial, open-landscape giant Megaloceros giganteus, of the forest dwellers of the genus Sinomegaceros, which must have been susceptible to the repeatedly advancing arid zones of Central Asia, and of Megaceroides algericus, which evolved in environments very unusual for cervids (discussed below) that strongly modified its skeletal morphology. Those species were exposed to the most stressing seasonal environmental conditions among Cervinae, therefore supporting the hypothesis of [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF]. Another argument in favour of the hypothesis of the similar physiological-evolutionary origin of pachyostosis and antlers in cervids may be sought in the comparison of the subfamilies Cervinae and Capreolinae. It seems that pachyostosis is a peculiar physiological property of the subfamily Cervinae (not of the tribe Megacerini), which sporadically appears under some specific environmental conditions. Pachyostosis is not known among Capreolinae; however, some members of this subfamily are known to have antlers in females. The best known example is Rangifer tarandus, inhabiting the most extreme environmental conditions among Capreolinae, but the occasional presence of small antlers in normally developed females was also reported for Odocoileus and Capreolus [START_REF] Wislocki | Antlers in Female Deer, with a report of Three Cases in Odocoileus[END_REF].
Therefore, the cranial pachyostosis in Megaceroides algericus initially represented a specific physiological mechanism characteristic of some lineages of Cervinae that evolved in extreme seasonal environments. The cervid pachyostosis is not correlated with body mass or with the size of antlers, since it is recorded in small-sized (Sinomegaceros robustus), medium-sized (Megaceroides algericus), and large-sized (Megaloceros giganteus) cervids. However, all known cases of pachyostosis are combined with palmated antlers (the converse is not true: many cervids with palmated antlers have no pachyostosis), suggesting that palmated antlers have not only a social evolutionary significance, as was suggested by [START_REF] Geist | Deer of the World: Their Evolution[END_REF], but also a specific environmental and physiological background.
Paleoecology of Megaceroides algericus
Regarding the pachyostosis of Megaceroides algericus, I would like to point out some of its peculiar traits distinguishing it from the pachyostosis in Megaloceros. Not all parts of the skull in Megaceroides are equally pachyostotic: the zygomatic arches and the anterior part of the mandible are comparatively weak and not reinforced by pachyostosis. The weak (or, better, normal) zygomatic arches are needed to ensure the movements of the lower jaw, and probably this is why they are not affected by distorting pachyostosis. The bony rims of the orbits and the very broad forehead are particularly pachyostotic and ensure a protective shelter for the weak zygomatic arches. The other parts of the skull (rostrum, braincase) and the posterior part of the mandible are also strongly pachyostotic and create a sort of bony helmet. At present, it is difficult to affirm whether the cranial pachyostosis in Megaceroides algericus is a matter of sexual dimorphism. The only two known well-preserved skulls belong to males. The rather small series of available lower mandibles does not show any visible dimorphism. Ontogenetically, the pachyostosis of Megaceroides algericus is also specific. Taking into account the juvenile mandible FIL160 from Filfila and the pachyostotic mandible with deciduous teeth from the grotto of Chenoua (Algeria) figured by [START_REF] Arambourg | Mammifères fossiles du Maroc[END_REF] (fig. 8A), the deposition of additional bone tissue started in Megaceroides algericus from an early juvenile age, unlike in Megaloceros giganteus. As already mentioned above, the tissue of the pachyostotic bone in Megaceroides algericus shows an annual cyclic rhythm and therefore increases with age, at least during some period of the animal's life [START_REF] Morales | Pachyostosis in a Lower Miocene giraffoid from Spain, Lorancameryx pachyostoticus nov. gen. nov. sp. and its bearing on the evolution of bony appendages in artiodactyls[END_REF].
Whatever the initial significance of the cranial pachyostosis was, it seems that it acquired a new function in Megaceroides algericus. Apparently, the unusual cranial specialization of M. algericus is a result of adaptation to an ecological niche that was unavailable to the ecologically highly competitive (fide Geist, 1998) African bovids. There are no morphological analogies among modern species of deer or other ruminants that could be helpful in the paleoecological interpretation of M. algericus. The anterior part of the mandible, the zygomatic arches and the dentition remain comparatively weak, contrasting with the pachyostotic bones of the cranium and the posterior part of the mandible. The processus coronoideus of the lower jaw is short and cone-shaped, and the area of insertion of the musculus masseter is rather small. Taking into account the studies of cranio-dental adaptations in ruminants [START_REF] Caloi | Functional aspects and ecological implications in Pleistocene endemic cervids of Sardinia, Sicily and Crete[END_REF][START_REF] Janis | Correlations between craniodental morphology and feeding behavior in ungulates: reciprocal illumination between living ans fossil taxa[END_REF][START_REF] Palombo | Food habits of "Praemegaceros" cazioti (Depéret, 1897) from Dragonara Cave (NW Sardinia, Italy) inferred from cranial morphology and dental wear. Proceedings of the International Symposium "Insular Vertebrate Evolution: the Palaeontological Approach[END_REF], the listed characters suggest that M. algericus had quite low mastication abilities and was not adapted to process hard fibrous forage. The rather weak mastication ability is also suggested by the oral (anterior) shift of the cheek tooth row, which reduces the mechanical leverage of mastication. The flat and broad skull with a broad muzzle of M. algericus is vaguely reminiscent of the specific cranial shape of such semiaquatic herbivorous mammals as hippopotamuses. The weak mastication abilities and the small low-crowned cheek teeth suggest semiaquatic or periaquatic habits of M. algericus and a specialization for foraging on soft water herbage. The reduced preorbital fossae could be another adaptation to a periaquatic habitat, since the preorbital fossae are very small in the modern Chinese water deer Hydropotes inermis [START_REF] Flerov | Musk deer and deer. The Fauna of USSR 1(2), Mammals[END_REF]. Another specific morphological character of Megaceroides algericus may be found in the images of this animal in Paleolithic art. [START_REF] Camps | Le cerf en Afrique du Nord[END_REF] published several Paleolithic images of this deer that show a very long tail, unusual for a cervid. Among modern cervids of similar body size, a relatively long tail is characteristic of Elaphurus davidianus, which is specialized to humid swamp habitats [START_REF] Flerov | Musk deer and deer. The Fauna of USSR 1(2), Mammals[END_REF].
The assumption of semiaquatic or periaquatic habits of M. algericus is supported by the grinding surfaces of the cheek teeth, finely polished by attrition [START_REF] Fortelius | Functional characterization of ungulate molars using abrasion attrition wear gradient[END_REF], in the majority of the studied specimens. However, the low dental cusps and the grinding surface of the lower mandible FIL166, striated by transversal traces of wear caused by a coarse forage material, suggest that the animals were regularly exposed to a stressing shortage of forage, apparently during the dry seasons, as was shown, for instance, for the modern plains zebra Equus burchelli [START_REF] Kaiser | Tooth wear gradients in zebras as an environmental proxy-a pilot study[END_REF].
The cranial helmet-like pachyostosis in M. algericus could have had a function of passive defense against such water predators as, for instance, crocodiles, which represent the most frequent danger in African periaquatic biotopes [START_REF] Croitor | Deer from Late Miocene to Pleistocene of Western Palearctic: matching fossil record and molecular phylogeny data[END_REF]. Crocodiles were present in the area of distribution of Megaceroides algericus in the past and survived in the region until the middle of the twentieth century [START_REF] Brito | Crocodiles in the Sahara desert: an update of distribution, habitats and population status for conservation planning in Mauritania[END_REF]. The head of a foraging deer is the part of the body most exposed to predators' attacks. The early ontogenetic development of pachyostosis in M. algericus supports this hypothesis, since this character must have been vitally important for juveniles too. The thick cranial and mandibular bones must have protected the animal's head from deep lethal wounding and therefore increased the animal's chances to escape from a predator. The cranial pachyostosis inherited by Megaceroides from its forerunners should be regarded here as a good example of a preadaptation that was maintained by natural selection in new conditions as an adaptation of passive defense. The robust radius and ulna described by [START_REF] Pomel | Caméliens et Cervidés. Carte géologique de l'Algerie[END_REF] could be a part of such an adaptation, permitting the deer to resist the attempts of a crocodile to drag its prey under the water. [START_REF] Abbazzi | Remarks on validity of the generic name Praemegaceros Portis 1920, and an overview on Praemegaceros species in Italy[END_REF], taking into account the brachyodont dentition and the shallow mandibular body combined with sideward-oriented antlers and large tympanic bullae, arrived at the rather contradictory conclusion that the small-sized M. algericus was an open-landscape browser (see Janis, 1995 and references therein). Although [START_REF] Abbazzi | Remarks on validity of the generic name Praemegaceros Portis 1920, and an overview on Praemegaceros species in Italy[END_REF] reports a large size for the upper and lower teeth, this is not the case: the cheek teeth of M. algericus are relatively small, marked by a particular size reduction of the premolars and the third upper and lower molars (M 3 and M 3 ). Nonetheless, the brachyodont first and second upper and lower molars (M 1 , M 2 , M 1 , and M 2 ) are relatively broad, representing, in my opinion, a sort of grinding millstones for soft water plants.
The composition of the faunas associated with M. algericus is ecologically quite heterogeneous and may be regarded as a mammal assemblage that inhabited an ecotone near a water body. The faunas from Ain Tit Mellil (Morocco) and Filfila (Alger) contain semiaquatic species like Hippopotamus amphibius, forest dwellers like Sus scrofa algeriensis, woodland species like Bos primigenius and Taurotragus, and open-landscape species like Connochaetes taurinus, Crocuta spelaea, and Camelus sp. [START_REF] Arambourg | La Faune Fossile de l'Ain Tit Mellil (Maroc)[END_REF][START_REF] Hadjiouis | Megaceroides algericus (Lydekker, 1890), du gisement des Phacochères (Alger, Algérie). Etude critique de la position systematique de Megaceroides[END_REF].
Systematic position of the genus Megaceroides
Determining the systematic place of such an odd and very specialized species as Megaceroides algericus within Cervinae is not an easy task. However, even if the cranial morphology of M. algericus shows some highly specialized traits, a correct assessment of plesiomorphic and apomorphic characteristics allows the phylogenetic relationships of the North African endemic lineage and its systematic position to be revealed.
Megaceroides shows the most significant morphological differences from the genus Cervus and allied forms. Unlike deer of the Cervus group (genera Cervus, Hyelaphus, Rusa, and Panolia), M. algericus is characterized by a broad bell-shaped basioccipitale (as in modern Axis, Rucervus, Dama, and the majority of extinct genera of Western Eurasia), a relatively large and rounded bulla tympani (as in Dama), missing upper canines (as in Dama and Megaloceros), and long nasal bones that extend behind the anterior edges of the orbits. Excluding pachyostosis as a peculiar specialization, one can notice that Megaceroides possesses an advanced cranial morphology compared to Cervus and the genera of Southern and Eastern Asia allied to Cervus. The cranio-dental morphological differences suggest that the lineages of Megaceroides and Cervus diverged as early as the Late Miocene.
Obviously, Megaceroides does not belong to the phylogenetic branch of the genus Praemegaceros, as was suggested by [START_REF] Azzaroli | Critical Remarks on some Giant Deer (genus Megaceros Owen) from the Pleistocene of Europe[END_REF] and [START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF]. Unlike Praemegaceros, Megaceroides is characterized by a relatively longer braincase (a primitive character), cylinder-shaped pedicles (not compressed antero-posteriorly, or dorso-ventrally if we take into consideration their strong inclination backward and sideward on the skull, as in some advanced species of Praemegaceros), the cranial and mandibular pachyostosis, and the long nasal bones [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF]. The long nasal bones of Megaceroides, extended behind the imaginary line connecting the anterior edges of the orbits, represent a good taxonomic character, but they are of little interest for the systematic and phylogenetic study, since they probably represent here an advanced apomorphic character. However, the relatively long braincase of Megaceroides is an important primitive character that rules out a direct phyletic relationship with the geologically older but, in this character, more advanced Praemegaceros. The cylindrical shape of the pedicles in Megaceroides also suggests that this genus is not related to the advanced Middle Pleistocene species of Praemegaceros (P. verticornis and P. solilhacus), which evolved the biomechanically more advantageous dorso-ventrally compressed and latero-medially extended pedicles that acted as a reinforced support with an increased cross-section area for large and heavy antlers [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF]. The peculiar dorso-ventrally compressed shape of the pedicles is maintained even in the secondarily dwarfed P. dawkinsi with diminished antlers.
The cranial and dental morphology of Megaceroides and Dama shows more resemblances (Tab. 8). Megaceroides shares with Dama the broad bell-shaped basioccipitale, the large orbits, the large rounded bulla tympani, the flexed braincase, the long nasal bones (however, this character is also apomorphic in Dama: [START_REF] Croitor | Deer from Late Miocene to Pleistocene of Western Palearctic: matching fossil record and molecular phylogeny data[END_REF]), the missing upper canines, and similar proportions of the lower tooth row: PP/MM in Megaceroides varies between 45.0% and 60.5%, while in modern Dama dama it varies between 46.0% and 61.6% [START_REF] Croitor | Deer from Late Miocene to Pleistocene of Western Palearctic: matching fossil record and molecular phylogeny data[END_REF]. Unlike in Dama, the braincase of Megaceroides is somewhat less flexed; the parietal and frontal bones are flat; the pachyostosis is strongly pronounced (it is completely absent in Dama); the rather long pedicles are set obliquely on the skull (not short and vertically oriented as in Dama); the ethmoidal vacuities are completely closed; and the upper molars are supplemented with a variable lingual cingulum. Such cranial characters as the less flexed braincase and the obliquely set frontal pedicles define M. algericus as a more primitive cervid form than the fallow deer. The closed ethmoidal vacuities in M. algericus apparently resulted from the pachyostosis of the facial bones.
Megaceroides and Megaloceros share the developed cranial pachyostosis, the bell-shaped basioccipitale, the missing upper canines, the presence of a variable cingulum in the upper molars, and long nasal bones extended behind the imaginary line connecting the anterior edges of the orbits (Tab. 8). The shape and relative length of the braincase, the position of the antler pedicles, the developed cingula in the upper molars, and the cranial hyperostosis of Megaceroides algericus suggest its greater affinity with Megaloceros giganteus. Unlike Megaloceros, Megaceroides is characterized by an enlarged rounded bulla tympani, a flexed braincase, the lack of a basal tine, and relatively larger orbits with respect to the condylo-basal length. However, the relatively large orbits may be a secondary effect caused by the shortened splanchnocranium. It seems that the proportions of the lower tooth series in Megaceroides (PP/MM: 45.0%, 52.9%, 60.5%; Croitor, 2014) tend to be more advanced than in Megaloceros giganteus from Ireland (PP/MM: 53.6-61.1%, based on the sample stored in NHML), and significantly more advanced (according to the evolutionary trends in Cervidae described by [START_REF] Vislobokova | The fossil deer of Eurasia[END_REF]) than in the primitive form of Megaloceros giganteus from Bisnik, Poland (PP/MM: 61.9-65.6%; [START_REF] Croitor | Giant deer Megaloceros giganteus Blumenbach, 1799 (Cervidae, Mammalia) from Palaeolithic of Eastern Europe[END_REF]). The analysis of cranial characters of various Eurasian cervids presented here confirms the old assumption of [START_REF] Joleaud | Sur le Cervus (Megaceroides) algericus Lydekker, 1890[END_REF] on the intermediate morphological and systematic position of Megaceroides between Megaloceros and Dama. The peculiar combination of cranial and dental characteristics of the African deer confirms the soundness of the elevation of Megaceroides to the generic level proposed by [START_REF] Arambourg | Note préliminaire sur un nouvelle grotte à ossements des environs d'Alger[END_REF][START_REF] Arambourg | Mammifères fossiles du Maroc[END_REF].
Molecular phylogenetic studies have revealed that the fallow deer is the modern cervid closest to Megaloceros giganteus, although the evolutionary divergence between Megaloceros and Dama occurred very early, 4-5 Myr [START_REF] Lister | The phylogenetic position of the 'giant deer' Megaloceros giganteus[END_REF] or even 10.7 Myr ago [START_REF] Hughes | Molecular phylogeny of the extinct giant deer, Megaloceros giganteus[END_REF]. The close phylogenetic relationship between M. giganteus and D. dama is also supported by some shared characteristics of the cranial morphology: both species share a relatively long braincase, long nasal bones (a synapomorphy), a relatively short orbito-frontal portion of the skull (the anterior edge of the orbit is situated above M 2 ), missing upper canines, and a similar shape of the basioccipitale, broadened at the level of the pharyngeal tuberosities. The suggestion of [START_REF] Pfeiffer | Die Stellung von Dama (Cervidae, Mammalia) im System plesiometacarpaler Hirsche des Pleistozäns -Phylogenetische Rekonstruktion -Metrische Analyse[END_REF] on a close relationship between the giant deer and the red deer, based on an analysis of postcranial morphology, is disputable. The use of postcranial morphology in phylogenetic reconstructions is unsafe, since limb bones are greatly influenced by various environmental and biomechanical factors like landscape character, ground surface character, type of locomotion, body mass, social behavior, etc. [START_REF] Köhler | Skeleton and Habitat of recent and fossil Ruminants[END_REF]. The lack of studies analyzing plesiomorphic and apomorphic characters of the postcranial skeleton represents the main methodological problem in this case. Therefore, a cluster analysis of postcranial morphology reveals ecological and biomechanical similarity rather than a genuine phylogenetic relationship. The molecular phylogeny research carried out by [START_REF] Kuehn | Molecular phylogeny of Megaloceros giganteus -the Giant Deer or just a Giant Red Deer?[END_REF] suggests a close relationship between the giant deer and the red deer, but, according to [START_REF] Hughes | Molecular phylogeny of the extinct giant deer, Megaloceros giganteus[END_REF], these results come from a wrong determination of fossil specimens or from contamination during the DNA amplification process. The genetic evidence of a close relationship between the giant deer and the red deer obtained by [START_REF] Kuehn | Molecular phylogeny of Megaloceros giganteus -the Giant Deer or just a Giant Red Deer?[END_REF] is "almost certainly the result of contamination" (A. Lister, pers. comm.).
The palmate antlers of Megaceroides algericus with the lacking basal tine (Fig. 15) are of minor significance for the present study, since palmations may have evolved independently in each phyletic lineage, while basal tines tend to be reduced in cervid forms with secondarily reduced body size, as for instance in Praemegaceros dawkinsi and numerous dwarfed insular species [START_REF] Azzaroli | Il nanismo nei cervi insulari[END_REF][START_REF] Azzaroli | Critical Remarks on some Giant Deer (genus Megaceros Owen) from the Pleistocene of Europe[END_REF][START_REF] Caloi | Il Cervo Pleistocenico di Sardegna[END_REF][START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF][START_REF] Croitor | Origin and evolution of the late Pleistocene island deer Praemegaceros (Nesoleipoceros) cazioti (Depéret) from Corsica and Sardinia[END_REF]. [START_REF] Azzaroli | The Deer of the Weybourn Crag and Forest Bed of Norfolk[END_REF], [START_REF] Thomas | La faune quaternaire d'Algerie[END_REF] and [START_REF] Hadjiouis | Megaceroides algericus (Lydekker, 1890), du gisement des Phacochères (Alger, Algérie). Etude critique de la position systematique de Megaceroides[END_REF] noticed that the shape of the lower mandible and skull of Megaceroides algericus is similar to the mandibles of Sinomegaceros pachyosteus from Choukoutien (China). At least superficially, the skull shape of Sinomegaceros pachyosteus, with its weakly flexed braincase, broad orbito-frontal part, and short splanchnocranium, seems to be a less accentuated version of Megaceroides algericus. Megaceroides is characterized by the presence of a well-developed middle tine, which normally is lacking in Sinomegaceros [START_REF] Shikama | Quaternary cave and fissure deposits and their fossils in Akiyosi District[END_REF][START_REF] Shikama | Megacerid remains from Gunma Prefecture, Japan[END_REF][START_REF] Made | Phylogeny of the giant deer with palmate brow tines Megaloceros from west and Sinomegaceros from east Eurasia[END_REF]. Van der [START_REF] Made | Phylogeny of the giant deer with palmate brow tines Megaloceros from west and Sinomegaceros from east Eurasia[END_REF] regards the absence of the middle tine as an essential diagnostic character of Sinomegaceros that is shared with Arvernoceros. However, a small knob-like vestige of the middle tine is present in some specimens of S. pachyosteus [START_REF] Kahlke | On the evolution of pachyostosis in jaw-bones of Choukoutien giant-deer Megaceros pachyosteus (Young)[END_REF]; therefore, caution is needed. The area of distribution of the genus Sinomegaceros in Eastern Asia was latitudinally limited to between 50° and 25°. According to Vislobokova (2012a, 2012b) [START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF], Sinomegaceros was a southern ecological counterpart of Alces and, therefore, could not disperse to the north and west because of the ecological competition with Alces. Van der [START_REF] Made | Phylogeny of the giant deer with palmate brow tines Megaloceros from west and Sinomegaceros from east Eurasia[END_REF], taking into account the combination of dental, antler, cranial, and postcranial characters in large-sized deer from the west and east of Eurasia, excludes the possibility of migrations of large-sized deer from the eastern part of Asia to the west.
Considering this, the cranial shape affinity between Sinomegaceros and Megaceroides could be a curious example of convergence. Ultimately, only an extensive morpho-functional study of the cranial and overall skeletal morphology of Sinomegaceros may clarify the nature of this convergence.
Taxonomical context of Megaceroides algericus
The taxonomic significance of the rather variable dental morphology of Megaceroides algericus, such as the varying cingulum in the upper molars and the broad variation of the tooth row proportions, is still unclear. Possibly, we are dealing in this case with chronological forms of the endemic North African deer: an older, larger form from Hammam Meskoutin with a strong cingulum and a normally developed M 3 , and a more specialized descendant form with a smaller dentition, a reduced cingulum, and a marked reduction of M 3 and M 3 . One can assume that the thick-jawed deer described by [START_REF] Pomel | Sur deux Ruminants de l'époque néolithique en Algérie : Cervus pachygenys et Antilope maupasi[END_REF] and figured in the later publication [START_REF] Pomel | Caméliens et Cervidés. Carte géologique de l'Algerie[END_REF] represents a late form distinguished by a relatively smaller M 3 (Tab. 7). If this is true, the name Cervus pachygenys proposed by [START_REF] Pomel | Sur deux Ruminants de l'époque néolithique en Algérie : Cervus pachygenys et Antilope maupasi[END_REF][START_REF] Pomel | Caméliens et Cervidés. Carte géologique de l'Algerie[END_REF] for the African Neolithic cervid form with pachyostotic mandibles and upper molars without a cingulum could be used at least for a subspecies, Megaceroides algericus pachygenys [START_REF] Pomel | Sur deux Ruminants de l'époque néolithique en Algérie : Cervus pachygenys et Antilope maupasi[END_REF]. In this case, the lower mandible from Berrouaghia (Algeria) described and figured by [START_REF] Pomel | Sur deux Ruminants de l'époque néolithique en Algérie : Cervus pachygenys et Antilope maupasi[END_REF][START_REF] Pomel | Caméliens et Cervidés. Carte géologique de l'Algerie[END_REF] (pl. VII-VIII) should be regarded as the holotype of M. algericus pachygenys.
The taxonomic position of the genus Megaceroides needs to be clarified too. [START_REF] Viret | Artiodactyla. Traite de Paleontologie[END_REF] proposed to place the giant deer in a separate tribe Megacerini, based on the genus Megaceros Owen, 1844. This point of view is accepted and defended by [START_REF] Vislobokova | The fossil deer of Eurasia[END_REF][START_REF] Vislobokova | A new species of Megacerini (Cervidae, Artiodactyla) from the late miocene Taralyk-Cher, Tuva (Russia), and remarks on the relationships of the group[END_REF] (Vislobokova, 2012a, 2012b) [START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF] and by di Stefano and Petronio (2000-2002) [START_REF] Di Stefano | The mesopotamian fallow deer (Dama, Artiodactyla) in the Middle East Pleistocene[END_REF]. According to [START_REF] Lister | Case 2606: Megaloceros Brookes, 1828 (Mammalia, Artiodactyla): proposed emendation of the original spelling[END_REF], the genus name Megaloceros Brookes, 1828 has priority over Megaceros Owen, 1844, and this viewpoint was accepted uncritically. Somewhat later, [START_REF] Lister | The evolution of the giant deer, Megaloceros giganteus (Blumenbach)[END_REF] substituted Megacerini Viret with Megalocerini and supposed that Dama could be the only surviving genus of this tribe. [START_REF] Abbazzi | Megaceroides solilhacus and other deer from the middle Pleistocene site of Isernia La pineta (Molise, Italy)[END_REF] quoted Lister's tribe name as Megalocerini [START_REF] Viret | Artiodactyla. Traite de Paleontologie[END_REF]. [START_REF] Vislobokova | A new species of Megacerini (Cervidae, Artiodactyla) from the late miocene Taralyk-Cher, Tuva (Russia), and remarks on the relationships of the group[END_REF] accepts the genus name Megaloceros; however, she continues to use the tribe name Megacerini based on Megaceros. [START_REF] Grubb | Valid and invalid nomenclature of living and fossil deer[END_REF] pointed out that Megacerini Viret, 1961 is a junior synonym of Megalocerotinae Brookes, 1828. Therefore, the correct name of the tribe should be Megalocerotini Brookes, 1828. According to [START_REF] Grubb | Valid and invalid nomenclature of living and fossil deer[END_REF], the genera Praemegaceros, Megaceroides, Megaloceros, and Sinomegaceros belong to the tribe Cervini Goldfuss, 1820; therefore, Megalocerotini Brookes, 1828 is a synonym of Cervini Goldfuss, 1820. [START_REF] Vislobokova | The fossil deer of Eurasia[END_REF][START_REF] Vislobokova | A new species of Megacerini (Cervidae, Artiodactyla) from the late miocene Taralyk-Cher, Tuva (Russia), and remarks on the relationships of the group[END_REF] (Vislobokova, 2012a, 2012b) [START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF] considers that the tribe of giant deer includes eight genera: Megaloceros, Praemegaceros, Sinomegaceros, Praesinomegaceros, Praedama, Orchonoceros, Arvernoceros, and Neomegaloceros. In my opinion [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF], the quoted list of genera represents a rather eclectic group including forms that belong to several different phylogenetic stocks, as well as poorly known cervid forms, such as Neomegaloceros gracilis and Praedama savini (see the discussion below).
[START_REF] Radulesco | Sur un nouveau cerf megacerin du pleistocene moyen de la depression de Brasov (Roumanie)[END_REF] and Azzaroli and Mazza (1992, 1993) regard Eucladoceros as a primitive forerunner of Praemegaceros, seeking support for this hypothesis in the homologous general construction of the antlers. Moreover, Praemegaceros shares essential cranial characteristics and antler morphology with Eucladoceros, and the direct phylogenetic relationship between these two genera is accepted by many authors [START_REF] Radulesco | Sur un nouveau cerf megacerin du pleistocene moyen de la depression de Brasov (Roumanie)[END_REF][START_REF] Azzaroli | Critical Remarks on some Giant Deer (genus Megaceros Owen) from the Pleistocene of Europe[END_REF][START_REF] Azzaroli | Large early Pleistocene deer from Pietrafitta lignite mine, Central Italy[END_REF][START_REF] Abbazzi | Remarks on validity of the generic name Praemegaceros Portis 1920, and an overview on Praemegaceros species in Italy[END_REF][START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF][START_REF] Croitor | Deer from Late Miocene to Pleistocene of Western Palearctic: matching fossil record and molecular phylogeny data[END_REF]. Earlier, I proposed to include Praemegaceros and Eucladoceros, together with Orchonoceros, in the tribe Eucladocerini [START_REF] Croitor | The Plio-Pleistocene deer of the Republic of Moldova. Their biostratigraphic and paleogeographic significance[END_REF], but, according to the current state of knowledge, this taxon also falls into synonymy with Cervini Goldfuss, 1820.
The assumed direct phylogenetic relationship between Megaloceros and Praedama is based on a single character, the flattened basal tine [START_REF] Azzaroli | The Deer of the Weybourn Crag and Forest Bed of Norfolk[END_REF][START_REF] Vislobokova | The fossil deer of Eurasia[END_REF][START_REF] Made | Phylogeny of the giant deer with palmate brow tines Megaloceros from west and Sinomegaceros from east Eurasia[END_REF], which could be just an apomorphic character. The fine series of antler material from the Forest Bed (stored in NHML) and the complete antler from Suessenborn, Germany (Kahlke, 1969: tab. XXXIV) show a series of characters that suggests a significant morphological distance between Praedama savini and Megaloceros giganteus: unlike in the giant deer, the antler base in Praedama savini is characterized by a specific quadrangular cross-section, while the laterally compressed antler as a whole and the dichotomous pattern of bifurcation of the crown tines are rather reminiscent of Eucladoceros [START_REF] Croitor | Deer from Late Miocene to Pleistocene of Western Palearctic: matching fossil record and molecular phylogeny data[END_REF]. The flattened proximal part of the basal tine of Praedama savini (complete basal tines are unknown), which is regarded as a good argument for a close relationship between Megaloceros and Praedama, is also characteristic of Eucladoceros dicranios. The dental and mandibular remains of Praedama from Cueva Victoria do not show any trace of a cingulum in the upper molars and have no clearly expressed mandibular pachyostosis (van der [START_REF] Made | The latest Early Pleistocene giant deer Megaloceros novocarthaginiensis n. sp. and the fallow deer Dama cf. vallonnetensis from Cueva Victoria (Murcia, Spain). In: Geología y Paleontología de Cueva Victoria[END_REF]). The skull morphology of Praedama is unknown; therefore, well-founded arguments on the phylogenetic relationships of this endemic European genus are missing for the present moment. We need more fossils and better arguments that will help to reveal the systematic position and phylogenetic relationships of Praedama.
Arvernoceros ardei is another species that was regarded as a forerunner of M. giganteus, because of its palmated basal tine and the isolated upper molars with a cingulum associated with its antlers [START_REF] Heintz | Les Cervidés Villafranchiens de Franse et d'Espagne[END_REF]. The very large-sized deer Arvernoceros verestchagini from the Early Pleistocene of Eastern Europe and Greece maintained a rather simple antler construction and a simple unmolarized lower fourth premolar, and lacks a cingulum, hyperostosis, and other characters that could undoubtedly indicate a relationship with Megaloceros [START_REF] Croitor | On the systematic position of the large-sized deer from Apollonia, Early Pleistocene, Greece[END_REF]. The general pattern of antler construction of Arvernoceros recalls that of Sinomegaceros (van der [START_REF] Made | Phylogeny of the giant deer with palmate brow tines Megaloceros from west and Sinomegaceros from east Eurasia[END_REF]) and of modern Rucervus [START_REF] Croitor | Systematical position and evolution of the genus Arvernoceros (Cervidae, Mammalia) from Plio-Pleistocene of Eurasia. Oltenia[END_REF]. According to genetic data, Rucervus duvaucelii has a detached position among modern Cervinae, having diverged very early (Late Miocene) from the main group of Old World deer, together with Axis axis [START_REF] Pitra | Evolution and phylogeny of old world deer[END_REF].
The last problematic deer, Neomegaloceros gracilis from the Late Miocene of Ukraine, was proposed by [START_REF] Korotkevich | A new deer form from Neogene deposits of Southern Ukraine[END_REF] as a forerunner of Praemegaceros verticornis, since its antler is characterized by a distal palmation and an additional tine, which was interpreted as homologous to the posterior tine in Praemegaceros. The first tine of Neomegaloceros is situated very high on the anterior side of the beam. The antler beam in Neomegaloceros is not curved in the areas of the first and posterior tines as in Praemegaceros. The general antler shape of Neomegaloceros does not show any similarity with the antlers of Praemegaceros. The distal palmation of the antler is regarded by [START_REF] Korotkevich | A new deer form from Neogene deposits of Southern Ukraine[END_REF] as an important character proving the direct phyletic relationship between Neomegaloceros and Praemegaceros. However, palmate antlers appear only in the most advanced forms of Praemegaceros, while the earlier forms Praemegaceros obscurus, P. pliotarandoides, and P. verticornis dendroceros bear antlers without palmations (Azzaroli and Mazza, 1992, 1993; [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF]). The so-called "posterior tine" appears in several cervid lineages, represented by such genera as Rangifer, Megaloceros, Praedama, and some Sinomegaceros. It seems that the "posterior tine" developed several times independently in cervid forms adapted to open environments and possibly had a function of removing flying parasites from the back of rutting males, thus increasing their combat capacities. Neomegaloceros is rather a junior synonym of Cervavitus and belongs to the subfamily Capreolinae Brookes, 1828 [START_REF] Croitor | Late Neogene and Quaternary biodiversity and evolution: Regional developments and international correlations. Volume I[END_REF][START_REF] Croitor | Deer from Late Miocene to Pleistocene of Western Palearctic: matching fossil record and molecular phylogeny data[END_REF]. Therefore, the tribe of so-called giant deer sensu lato proposed by [START_REF] Vislobokova | The fossil deer of Eurasia[END_REF][START_REF] Vislobokova | A new species of Megacerini (Cervidae, Artiodactyla) from the late miocene Taralyk-Cher, Tuva (Russia), and remarks on the relationships of the group[END_REF][START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF] is a polyphyletic group that includes various lineages not only from the subfamily Cervinae, but also from the subfamily Capreolinae.
Most of the cervids included by Vislobokova ([START_REF] Vislobokova | The fossil deer of Eurasia[END_REF][START_REF] Vislobokova | A new species of Megacerini (Cervidae, Artiodactyla) from the late miocene Taralyk-Cher, Tuva (Russia), and remarks on the relationships of the group[END_REF]; 2012a, 2012b; [START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF]) in the tribe of giant deer represent a peculiar eco-evolutionary type or "constellation" (if we apply the term used by [START_REF] Geist | Deer of the World: Their Evolution[END_REF]) of open-landscape giants with very large antlers (Megaloceros, Praemegaceros) and forest/woodland giants with smaller antlers (Arvernoceros, Sinomegaceros) approaching the eco-evolutionary type of modern Alces alces.
The tribe Megalocerotini Brookes, 1828 sensu stricto should be restricted only to the genera Megaloceros, Megaceroides, and Dama; however, it is difficult in this case to propose a reliable differential diagnosis of the tribe. [START_REF] Grubb | Valid and invalid nomenclature of living and fossil deer[END_REF] grouped all Old World deer with small or missing upper canines and large complicated antlers into the single tribe Cervini Goldfuss, 1820. Possibly, the tribe Cervini Goldfuss, 1820 could be restricted to the phylogenetic branch of Cervus and the related genera (or subgenera) Hyelaphus, Rusa, Panolia, and Przewalskium, since this cervid group shares similar cranial and dental characteristics (the presence of small upper canines, the narrow triangular basioccipitale) and genetic analysis has revealed its monophyly [START_REF] Pitra | Evolution and phylogeny of old world deer[END_REF]. However, the craniodental morphology of the Plio-Pleistocene Eurasian Cervinae is still imperfectly studied, and the data available for the classification of the subfamily Cervinae at the tribal level are insufficient.
Zoogeographic context and origin
The area of distribution of Megaceroides algericus, limited by the Atlas Mountains from the south, has a refugial character, as has been reported for many other African species of Mediterranean affinity [START_REF] Brito | Crocodiles in the Sahara desert: an update of distribution, habitats and population status for conservation planning in Mauritania[END_REF]. [START_REF] Joleaud | Cervus (Megaceroides) algericus Leydekker, 1890[END_REF] assumed that Megaceroides algericus dispersed into Northern Africa through the Strait of Gibraltar. According to [START_REF] Thomas | La faune quaternaire d'Algerie[END_REF], the most plausible migration path for African deer is the "Libyan-Egyptian" way, i.e. via the south and south-east coast of the Mediterranean Sea. [START_REF] Fernandez | The last occurrence of Megaceroides algericus Lydekker, 1890 (Mammalia, Cervidae) during the middle Holocene in the cave of Bizmoune (Morocco, Essaouria region)[END_REF] consider that the way of dispersal across the Strait of Gibraltar seems more probable than the hypothetical arrival via the Libyan-Egyptian or Sicilian-Tunisian routes, which requires more evidence. The remains of Megaloceros, the most probable forerunner of Megaceroides algericus, are not known from southern Italy [START_REF] Lister | The evolution of the giant deer, Megaloceros giganteus (Blumenbach)[END_REF]. The undeniable remains of Megaloceros giganteus are reported only from the northern part of the Iberian Peninsula [START_REF] Lister | The evolution of the giant deer, Megaloceros giganteus (Blumenbach)[END_REF]. The presence of giant deer in the area of Madrid (Sesé and Soto, 2000, 2002: page 332, fig. 16) and in the fauna of Bolomor Cave near Valencia (Peris et al., 1997: page 26) is based on poorly diagnostic material that needs confirmation. Van der Made (2014) described a new species, Megaloceros novocarthaginiensis, from the end of the Early Pleistocene of Cueva Victoria, Spain. The new species is very close to Praedama savini (Dawkins, 1887) and differs from the latter only by a somewhat larger size and a higher position of the basal tine; therefore I prefer to include this species, if its taxonomic status is confirmed, in the genus Praedama. Among its noteworthy characters, the absence of any trace of cingulum in the upper molars should be mentioned (van der [START_REF] Made | The latest Early Pleistocene giant deer Megaloceros novocarthaginiensis n. sp. and the fallow deer Dama cf. vallonnetensis from Cueva Victoria (Murcia, Spain). In: Geología y Paleontología de Cueva Victoria[END_REF]).
The close phylogenetic relationships between Megaloceros, Megaceroides, and Dama are also supported by paleobiogeographic data. The Mediterranean basin is the area of evolutionary radiation of the genus Dama (up to 7 fossil and modern species, including the present-day Dama dama from Anatolia and Dama mesopotamica from the Near East), which is known only from Western Eurasia [START_REF] Croitor | Deer from Late Miocene to Pleistocene of Western Palearctic: matching fossil record and molecular phylogeny data[END_REF]. Until now, the highest taxonomical diversity of the genus Megaloceros has been described from Western Europe ([START_REF] Azzaroli | The Deer of the Weybourn Crag and Forest Bed of Norfolk[END_REF]; Vislobokova, 2012a, 2012b; [START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF]). Therefore, from the zoogeographic point of view, Megaceroides algericus is part of the evolutionary radiation of the Megaloceros-Dama lineage that took place in Western Eurasia and the Mediterranean area. The medium-sized cervid from the Middle Pleistocene of the Near East described by di [START_REF] Di Stefano | The mesopotamian fallow deer (Dama, Artiodactyla) in the Middle East Pleistocene[END_REF] as Dama clactoniana mugarensis is of special interest for the present discussion. This cervid form is characterized by a flattened antler beam, a flattened basal tine terminated by a bifurcation, and a well-developed cingulum in the upper molars [START_REF] Di Stefano | The mesopotamian fallow deer (Dama, Artiodactyla) in the Middle East Pleistocene[END_REF]. The lower premolar series seems to be relatively long (66%, measured from the photo in di Stefano, 1996) and represents a primitive condition similar to Megaloceros giganteus from Bisnik (Poland). The relatively robust radius of di Stefano's deer (radius length is ca. 215-220 mm, mid-shaft breadth is ca. 29 mm: di Stefano, 1996: fig. 8) is another character that distinguishes this cervid from the equal-sized representatives of the genus Dama and recalls the morphological condition described by [START_REF] Pomel | Caméliens et Cervidés. Carte géologique de l'Algerie[END_REF] in Megaceroides. Taking into account such specific characters as the cingulum in the upper molars, the low-positioned, flattened, and bifurcated first antler tine, and the flattened antler beam, one can assume that the medium-sized cervid from the Near East is a primitive or dwarfed form of giant deer, Megaloceros mugarensis [START_REF] Di Stefano | The mesopotamian fallow deer (Dama, Artiodactyla) in the Middle East Pleistocene[END_REF], and may represent a transitional form between Megaloceros and Megaceroides.
The medium-sized Megaloceros mugarensis [START_REF] Di Stefano | The mesopotamian fallow deer (Dama, Artiodactyla) in the Middle East Pleistocene[END_REF] from the Middle Pleistocene of Near East is the most probable forerunner of Megaceroides algericus. The indirect support for this assumption is provided by [START_REF] Thomas | La faune quaternaire d'Algerie[END_REF], who reports the earliest scanty and poorly preserved fossil cervid remains from the Middle Pleistocene of North Africa.
Conclusions
Megaceroides algericus from the Late Pleistocene - Early Holocene of Northern Africa is a highly specialized cervid that evolved in the geographical area and environmental conditions that are the most extreme for the family Cervidae. The genus Megaloceros from the Middle - Late Pleistocene of the boreal latitudes of Eurasia is phylogenetically the nearest to Megaceroides. The medium-sized Megaloceros mugarensis [START_REF] Di Stefano | The mesopotamian fallow deer (Dama, Artiodactyla) in the Middle East Pleistocene[END_REF] from the Middle Pleistocene of the Near East is the most probable forerunner of Megaceroides algericus and the linking form between Megaloceros giganteus and Megaceroides algericus. The most probable way of dispersal of Megaceroides algericus to Africa is along the south-east coast of the Mediterranean Sea, the so-called "Libyan-Egyptian" way.
The unusual cranial morphology of Megaceroides algericus is regarded here as a combination of specific ancestral morphological and physiological characteristics (first of all, cranial pachyostosis) and new apomorphic characters (a relatively diminished dentition with reduced size of P², P₂, M³, and M₃, weak zygomatic arches and anterior part of the mandible, a short and broad splanchnocranium with cheek tooth rows shifted orally) that represent an adaptation to a new ecological niche, which permitted it to avoid direct competition with ecologically highly competitive bovids: this is the niche of a periaquatic or semiaquatic herbivore that feeds on soft water plants. This explains the generally weak dentition and other cranial structures involved in forage processing. The unusually thick, helmet-like cranial bones acquired the new function of passive defense against the predators that usually wait for their prey at watering places. Unlike in Megaloceros, the pachyostosis in Megaceroides evolved ontogenetically earlier, in juvenile individuals with deciduous cheek teeth, and had several cycles of seasonal growth. Phylogenetically, Megaceroides belongs to the Megaloceros-Dama branch and stands closer to Megaloceros.
The taxonomy of the subfamily Cervinae is poorly developed at the tribal level. The tribe Megalocerotini Brookes, 1828 sensu lato (= Megacerini [START_REF] Viret | Artiodactyla. Traite de Paleontologie[END_REF] fide Vislobokova, 1990, 2009, 2012a, b; [START_REF] Vislobokova | Morphology, Taxonomy, and Phylogeny of Megacerines (Megacerini, Cervidae, Artiodactyla)[END_REF]) is a polyphyletic group that includes a specific large-sized eco-evolutionary type of cervids, which evolved independently in several lineages, and some poorly known forms that share similar apomorphic antler characters. The tribe Megalocerotini Brookes, 1828 sensu stricto, with only the genera Megaloceros, Megaceroides, and Dama, represents a real phylogenetic branch; however, it is difficult to give an adequate taxonomical definition for this restricted group containing extremely specialized forms representing various eco-evolutionary types, first of all because similar cervid eco-evolutionary types can be found in other phylogenetic branches of the subfamily Cervinae Goldfuss, 1820 [START_REF] Geist | Deer of the World: Their Evolution[END_REF].
The local fauna from Guyotville associated with the Mousterian industry was characterized by
Figure 1. Fossiliferous sites considered in the present study: 1, Ain Tit Mellil (Morocco); 2, Berrouaghia (Algeria), the type locality of Cervus pachygenys Pomel, 1892; 3, Guyotville (Algeria); 4, Phacocheres (Algeria); 5, Grotte de la Madeleine (Algeria); 6, Filfila (Algeria); 7, Hammam Meskoutin (Algeria), the type locality of Cervus algericus Lydekker, 1890.

Figure 2. Megaceroides algericus (Lydekker, 1890): the male skull from Guyotville (now Ain-Benian, Algeria) stored in Paris (NMNH, "Collection of Arambourg", no number): A, side view; B, frontal view; C, palatal view. Scale bars: 5 cm.

Figure 3. Megaceroides algericus (Lydekker, 1890): the semischematic drawing of the palatal view of the male skull from Guyotville showing the damaged parts (shaded). Scale bar: 5 cm.

Figure 4. The ratio between the length of face (measured from the anterior edge of orbit to prosthion) to the condylo-basal length of the skull of Megaceroides algericus from Guyotville compared to Dama dama (47.1.1.4, NHML), Praemegaceros obscurus (IGF4024, adapted from Croitor, 2014), Praemegaceros cazioti (adapted from Caloi and Malatesta, 1974), Megaloceros giganteus (M28968, NHML), and Cervus elaphus (Nr. 1927-58, NMNH).

Figure 5. The position of the upper tooth row in Megaceroides algericus from Guyotville compared to large-sized deer (Megaloceros giganteus and Praemegaceros obscurus), an insular dwarfed deer (Praemegaceros cazioti), and medium-sized continental deer (Dama dama and Cervus elaphus). The provenance of specimens involved in the comparison is indicated in the Figure 4.

Figure 6. The angle between facial and neural parts of skull in Megaceroides algericus from Guyotville (A) compared to Megaloceros giganteus ruffi from Bruhl, Germany (Stuttgart Museum, adapted from Vislobokova, 2012b) and Dama dama (ZMS, coll. 451, c.12058). Scale bars: 5 cm.

Figure 7. Megaceroides algericus (Lydekker, 1890): the braincase MOC148 (NMNH) from Ain Tit Mellil (Morocco); A, side view; B, basal view. Scale bar: 5 cm.

Figure 8. Megaceroides algericus (Lydekker, 1890): the semi-schematic drawing of the specimen MOC148 (NMNH) from Ain Tit Mellil (Morocco) showing damaged (shaded) and missing (dashed line) parts from the side view. Scale bar: 5 cm.

Figure 9. Megaceroides algericus (Lydekker, 1890): the frontal view of the skull fragment MOC148 (NMNH) from Ain Tit Mellil (Morocco); nas., posterior parts of nasal bones. Scale bar: 5 cm.

Figure 10. Megaceroides algericus (Lydekker, 1890): fragment of right upper jaw FIL169 (NMNH) with M2 and M3 from Filfila (Algeria). Scale bar: 3 cm.

Figure 11. Megaceroides algericus (Lydekker, 1890): the lower mandible (dx, no number, NMNH) from Grotte de la Madeleine (Algeria); A, lateral view of mandible with transversal cross-sections taken in front of P4 and behind of M3; B, occlusion view of P4. Scale bar: 5 cm.

Figure 12. Megaceroides algericus (Lydekker, 1890): the lower mandible (sin, FIL166, NMNH) from Filfila (Algeria), lateral view and dental grinding surface. Scale bar: 5 cm.

Figure 13. Proportions of lower mandible (FIL166, MNMH) of Megaceroides algericus compared to Muntiacus muntjak (ZMS, c.780), Hydropotes inermis (ZMS, c.1441), Dama dama (ZMS, c.12061), and Praemegaceros cazioti (COS19040, adapted from Croitor et al., 2006); M3-art., the distance between M3 and the mandibular articulation; M1M3, the length of lower molar series; P2P4, the length of lower premolar series; C-P2, the length of diastema (distance measured between lower canine and P2).

Figure 14. The reconstruction of angle between hemimandibles of Megaceroides algericus (Lydekker, 1890) based on the specimen FIL166 (MNMH). The arrows indicate the direction of the wearing traces caused by coarse forage.

Figure 15. Comparison of antler morphology of giant cervids and their endemic small-sized relatives: A, Megaceroides algericus (Allo. 61.12) from Late Pleistocene of Phacochères (Algeria; reversed image adapted from Hadjiouis, 1990); B, Megaloceros giganteus from Lough Gur, Limerick (Ireland; adapted from Reynolds, 1929); C, Praemegaceros dawkinsi from Middle Pleistocene of Mundesley, Norfolk (Great Britain; M18706, NHML, reversed); D, Praemegaceros obscurus from Early Pleistocene of Salcia (Moldova, Institute of Zoology of the Academy of Sciences of Moldova, no number); b., basal tine; sb., sub-basal tine; ds., dorsal tine; m., middle tine; p., posterior tine; cr., crown tine; pl., palmation. Scale bars: 10 cm.
Table 1. Fossil material of Megaceroides algericus (Lydekker, 1890) from the National Museum of Natural History in Paris studied in the present work.

Collection number and additional information | Specimen | Site | Original citation
No number; here indicated as Fil/nn | The fragment of mandible with P4 | Filfila | unpublished
No number; labeled as "Cervus algericus, figuré: Pl. IV, Fig. 4", no bibliographic reference; here indicated as GM/1 | The right hemimandible with P4-M3 | Grotte de la Madeleine | Described in Croitor (2006: Fig. 2 A-B, p. 94) as Megaceroides algericus (Lydekker)
No number; here indicated as GM/2 | The hemimandible with M2-M3, showing a pathologic malformation on processus angularis | Grotte de la Madeleine | unpublished
No number, "Collection of Arambourg"; here indicated as "skull from Guyotville" | The almost complete skull with the proximal parts of antlers | Guyotville | Cervus (Megaceroides) algericus Lydekker (Arambourg, 1932: Fig. 3, p. 137)
Nr. 336, "Collection of Arambourg" | The left hemimandible with M2 and M3 | Guyotville | unpublished
Nr. 337, "Collection of Arambourg" | The left hemimandible with P4-M3 | Guyotville | unpublished
MOC148, "Mission Arambourg" | The damaged neurocranium with frontal bones and a right basal part of antler | Ain Tit Mellil | Cervus (Megaceroides) algericus Lydekker (Arambourg, 1938: Pl. II, Figs. 2, 2a)
Fil-160 | The juvenile hemimandible | Filfila | unpublished
Fil-166 | The left complete hemimandible with P2-M3 | Filfila | Megaceroides algericus (Lydekker) (Thomas, 1979; figured)
Fil-167 | The fragment of hemimandible with M2 and M3 | Filfila | unpublished
Fil-169 | The right maxilla with M2 and M3 | Filfila | unpublished
Table 2. Cranial measurements of the modern and fossil deer involved in the present comparative study. CBL, condylo-basal length; P2-M3, length of upper cheek tooth row; M3-oc., distance between M3 and posterior edge of occipital condyle; M1-M3, length of upper molar series; P2-P4, length of upper premolar series; P2-pr., distance between P2 and prosthion; or-pr., distance between orbit and prosthion; Dor., horizontal diameter of orbit; or-oc., distance between orbit and posterior edge of occipital condyle.

species | source/collection | CBL | P2-M3 | M3-oc. | M1-M3 | P2-P4 | P2-pr. | or-pr. | Dor. | or-oc.
Praemegaceros cazioti | Caloi and Malatesta (1974) | 300.0 | 101.0 | 132.7 | 62.0 | 45.0 | 73.5 | 149.0 | 44.0 |
Praemegaceros obscurus | IGF4024, Croitor (2014) | 470.0 | 141.8 | 200.0 | 84.0 | 61.3 | 143.0 | 270.0 | | 140.0
Dama dama | 47.1.1.4 (NHML) | 270.0 | 77.0 | 116.8 | 47.8 | 31.0 | 77.0 | 144.0 | 52.8 | 93.0
Dama dama | c.12058 (ZMS) | 270.0 | 81.7 | 112.8 | 49.8 | 34.2 | 76.0 | 145.7 | 42.8 | 84.7
Megaloceros giganteus | M28968 (NHML) | 505.0 | 150.0 | 223.0 | 90.0 | 60.0 | 137.4 | 290.0 | 56.1 | 185.2
Cervus elaphus | 1927-58 (MNHN) | 353.0 | 109.8 | 131.4 | 68.7 | 45.2 | 111.8 | 207.0 | |
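The craniofacial proportions discussed with Figure 4 and summarized in row 1 of Table 8 follow directly from the Table 2 measurements. The short Python sketch below is an illustrative calculation, not part of the original study: it recomputes the face-length ratio or-pr./CBL for the comparative specimens, while the 48.4% value quoted for Megaceroides algericus itself comes from the Guyotville skull, which is not listed in Table 2.

```python
# Face-length ratio (or-pr. / CBL) recomputed from Table 2 (mm).
TABLE2 = {
    "Praemegaceros cazioti":  (300.0, 149.0),
    "Praemegaceros obscurus": (470.0, 270.0),
    "Dama dama (NHML)":       (270.0, 144.0),
    "Dama dama (ZMS)":        (270.0, 145.7),
    "Megaloceros giganteus":  (505.0, 290.0),
    "Cervus elaphus":         (353.0, 207.0),
}

for species, (cbl, or_pr) in TABLE2.items():
    print(f"{species:24s} face/CBL = {100.0 * or_pr / cbl:4.1f} %")
# Megaloceros giganteus -> 57.4 % and Dama dama (NHML) -> 53.3 %, matching
# row 1 of Table 8; the 48.4 % of Megaceroides algericus is based on the
# Guyotville skull, which is not listed in Table 2.
```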
Table 3. Mandibular measurements of the modern and fossil deer involved in the present comparative study. C-P2, length of diastema (distance between C and P2); P2-P4, length of lower premolar series; M1-M3, length of lower molar series; M3-art., distance between M3 and mandibular articulation; gn.-M1, distance between gnation and M1; M1-art., distance between M1 and mandibular articulation; art.-gn., distance between mandibular articulation and gnation.

species | source/collection | C-P2 | P2-P4 | M1-M3 | M3-art. | gn.-M1 | M1-art. | art.-gn.
Muntiacus muntjak | c.780 (ZMS) | 40.0 | 24.5 | 38.0 | 34.4 | 67.3 | 64.3 | 135.6
Hydropotes inermis | c.1441 (ZMS) | 42.0 | 20.2 | 32.3 | 32.8 | 70.3 | 57.1 | 126.4
Dama dama | c.12061 (ZMS) | 56.5 | 34.3 | 56.0 | 73.7 | 105.0 | 120.0 | 222.0
Praemegaceros cazioti | COS19040, Croitor et al. (2006) | 50.4 | 37.6 | 62.0 | 80.0 | 115.0 | 126.2 | 235.0
Table 6. Megaceroides algericus (Lydekker, 1890): measurements of lower mandibles; GM/1, unnumbered specimen from Grotte de la Madeleine with P4-M3; GM/2, unnumbered specimen from Grotte de la Madeleine with malformation. Specimen columns: FIL166 (sin), GM/1 (dx), GM/2 (dx), GTV337 (sin), GTV336 (sin), FIL167, FIL160; values are listed in this column order.

L P2-M3: 95.2
L P2-P4: 35.0
L M1-M3: 57.5; 65.8; 61.6
L M2-M3: 42.1; 45.2; 41.0; 42.8; 46.0; 43.7
L horizontal ramus: 215.+
L diastema: 45.+
L P2-for. mentale: 20.3
H at 1/2 diastema: 20.0
H under P2: 18.7
D under P2: 15.4
H under M1: 21.5; 19.0; 20.3; 19.8
D under M1: 21.8; 22.8; 20.4; 21.2; 19.0
H under M2/M3: 30.8; 34.6; 33.4; 35.0; 34.3; 32.5
D under M2/M3: 27.6; 31.1; 33.8; 29.3; 32.1; 29.2; 23.0
D maximal: 37.2; 34.3; 35.5; 36.3; 39.0; 36.2
H ascending ramus: 100.6; 96.0; 108.4; 109.2
L articulation - gnation: 230.0
L articulation - M3: 88.3; 75.2; 92.7; 84.5
Table 7. Megaceroides algericus (Lydekker, 1890): measurements of lower cheek teeth. Specimen columns: Fil-166, Fil-167, Fil/nn, GM/1, GM/2, Nr.336, Nr.337, Pomel (1892); values are listed in this column order.

P3: L 13.8; D 9.3
P4: L 13.1; 13.6; 15.5; 13.7 / D 11.4; 10.2; 11.7; 10.4
M1: L 18.2; 17.2; 19.2 / D 14.3; 13.9; 14.0
M2: L 18.4; 19.2; 19.8; 18.0; 20.2; 19.0; 20.0 / D 13.8; 14.0; 15.0; 14.1; 14.0; 12.9; 15.0
M3: L 23.9; 24.1; 23.0; 23.0; 25.8; 23.0; 22.0 / D 11.7; 11.7; 13.2; 12.1; 14.0; 12.4; 10.0
Table 8. The comparative account of cranial characters and proportions (with respect to condylo-basal length, CBL) of Megaceroides algericus, Megaloceros giganteus and Dama dama.

Character | Megaceroides algericus | Megaloceros giganteus | Dama dama
1. Length of face before orbits | quite short (48.4% of CBL) | relatively long (57.4% of CBL) | moderately short (53.3% of CBL)
2. Relative breadth of skull | 62.4% of CBL | 39.6-43.1% of CBL | 35.7-41.9% of CBL
3. Angle between axes of face and braincase | 135º (moderately flexed neurocranium) | 155º (little flexed neurocranium) | 120º (flexed neurocranium)
4. Relative size of orbits | relatively large (18.6% of CBL) | relatively small (11.1% of CBL) | relatively large (19.6% of CBL)
5. Development of pachyostosis | pachyostosis of cranial bones and lower mandible strongly developed | pachyostosis of cranial bones and lower mandible strongly developed | no pachyostosis
6. Shape of parietal bones | flattened parietal bones | flattened parietal bones | convex parietal bones
7. Shape of frontal bones | flattened frontal bones | concave frontal bones | convex frontal bones
8. Orientation of pedicles | pedicles deflected caudally and sideward | pedicles deflected caudally and sideward | vertically oriented pedicles
9. Position of nasal bones | posterior edge of nasal bones extends behind the anterior line of orbits | posterior edge of nasal bones extends behind the anterior line of orbits | posterior edge of nasal bones extends behind the anterior line of orbits
10. Position of orbits | anterior edge of orbit situated above M2 | anterior edge of orbit situated above M3 | anterior edge of orbit situated above M2
11. Length of naso-premaxillary suture | naso-premaxillary suture is long | naso-premaxillary suture is long | naso-premaxillary suture is short
12. Development of preorbital fossae | reduced | reduced or well-developed | well-developed
13. Development of ethmoidal orifice | closed | reduced or completely closed | very large
14. Size and shape of bulla tympani | rather large and rounded | intermediate size | very large and rounded
15. Shape of mandible | ascending part of mandible is sloped backward | ascending part of mandible is set vertically | ascending part of mandible is set vertically
16. Development of cingulum | a varying cingulum is present | a varying cingulum is present | no cingulum
17. Angle between lingual and labial sides of M2 | 37º | 45º | 37º
Acknowledgements:
This research was supported by CNRS (2003) and Aix-Marseille University (2006). I thank Dr. Jean-Philippe Brugal (Maison Méditerranéenne des Sciences de l'Homme, Aix-en-Provence) for the provided possibility to carry out this study. Many thanks to Prof. Pascal Tassy (National Museum of Natural History in Paris) for the provided access to the fossil material, Dr. Paola Jenkins (Natural History Museum of London) and Dr. Paolo Agnelli (Zoological Museum "La Specola", University of Florence) for access to the osteological material stored in the collections under their care. I also thank Dr. Gertrud Rößner and Dr. Jan van der Made for their valuable comments and criticism that helped to improve the article. I am grateful to Dr. Philippe Fernandez (Maison Méditerranéenne des Sciences de l'Homme, Aix-en-Provence) for the provided missing references and suggestions on the age of the fossiliferous sites considered in the paper.
01764226 | en | ["phys.phys.phys-space-ph", "phys.phys.phys-ins-det"] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01764226/file/2018-058.pdf
Carole Lecoutre-Chabot, Samuel Marre, Yves Garrabos, Daniel Beysens, Inseob Hahn, C. Lecoutre (email: [email protected])
Near-critical density filling of the SF6 fluid cell for the ALI-R-DECLIC experiment in weightlessness
Keywords: slightly off-critical sulfur-hexafluoride, liquid-gas density diameter, liquid-gas coexisting densities
Introduction
Thermodynamic and transport properties show singularities asymptotically close to the critical points of many different systems. The current theoretical paradigm on critical phenomena, using the renormalization group (RG) approach [START_REF] Wilson | The renormalization group: Critical phenomena and the Kondo problem[END_REF], has ordered these systems in well-defined universality classes [START_REF] Zinn-Justin | Quantum Field Theory and Critical Phenomena[END_REF] and has characterized the asymptotic singularities in terms of power laws of only two relevant scaling fields [START_REF] Fisher | Correlation Functions and the Critical Region of Simple Fluids[END_REF], in conformity with the scaling hypothesis. Simple fluids are then assumed similar [START_REF] Garrabos | Crossover equation of state models applied to the critical behavior of xenon[END_REF] to the O(1)-symmetric (Φ²)² field theory and the N=1 vector model of three-dimensional (3D) Ising-like systems ([START_REF] Zinn-Justin | Quantum Field Theory and Critical Phenomena[END_REF], [START_REF] Barmatz | Critical phenomena in microgravity: Past, present, and future[END_REF]). Studying them under weightlessness conditions is highly recommended to test the two-scale-factor universality on approaching their critical point. However, for the case of the gas-liquid critical point of simple fluids, additional difficulties can occur, as the order parameter (the fluctuating local density) shows a noticeable asymmetry, for instance the well-known rectilinear diameter form of the liquid-gas coexisting density curve first evidenced by Cailletet and Mathias [START_REF] Cailletet | Recherches sur les densités des gaz liquéfiés et de leurs vapeurs saturées[END_REF]. This linear asymmetry was largely confirmed in the subsequent literature (see for instance Ref. [START_REF] Singh | Rectilinear diameters and extended corresponding states theory[END_REF]). Such asymmetrical effects cannot be accounted for by the symmetrical uniaxial 3D Ising model and its induced standard fluid-like version, i.e., the symmetrical lattice-gas model.
An alternative theoretical way to introduce the asymmetric nature of fluids into the scaling approach consists in extending the number of physical fields contributing explicitly to the relevant scaling fields, the so-called complete scaling phenomenological hypothesis ([START_REF] Fisher | The Yang-Yang anomaly in fluid criticality: experiment and scaling theory[END_REF]-[START_REF] Wang | Nature of vapor-liquid asymmetry in fluid criticality[END_REF]). For example, in a recent work [START_REF] Cerdeirina | Soluble model fluids with complete scaling and Yang-Yang features[END_REF], Yang-Yang and singular diameter critical anomalies arise in exactly soluble compressible cell gas models where complete scaling includes pressure mixing. The predictions of complete scaling have been tested against experiments and simulations at finite distance from the critical point, which increases the complexity of the fundamental quest for the true asymptotic fluid behavior. The latter remains a conundrum for the scientists whose objective is to check it by performing experiments closer and closer to the critical point with the required precision. De facto, the asymmetrical contributions, the analytical backgrounds, and the classical-to-critical crossover behavior due to the mean-field-like critical point further hinder the test of the asymptotic Ising-like fluid behavior. Such difficulties are intrinsically ineludible, even along the true critical paths, where the crossover contribution due to one additional non-relevant field [START_REF] Wegner | Corrections to scaling laws[END_REF] can be accounted for correctly in the field theory framework ([15]-[START_REF] Garrabos | Master crossover functions for one-component fluids[END_REF]).
Moreover, the experiments are never exactly on these critical paths, which paradoxically adds a new opportunity to investigate the theoretical expectations related to the non-symmetrical behaviors. Indeed, even though the temperature can be made very close to T_c, the mean density of the fluid cell is never at its exact critical value [START_REF] Lecoutre | Weightless experiments to probe universality of fluid critical behavior[END_REF]. The error bar related to this latter critical parameter has never contributed to the discussion of the Earth-based results in terms of the true experimental distance to the critical point. Nevertheless, from the above experimental facts and the theoretical expectations, it appears that the related non-symmetrical effects can be unambiguously viewed in a slightly off-critical (liquid-like) cell. Indeed, in such a closed liquid-like cell, the meniscus position is expected to cross the median volumetric plane at a single finite temperature distance below the coexistence temperature. From the symmetrical lattice-gas model, we recall that the meniscus of any liquid-like cell is expected to remain always above this median volumetric plane in the two-phase temperature range.
Therefore, the use of the gravity field acceleration to horizontally stabilize the position of the liquid-gas meniscus in eight different cell positions is of academic interest and is precisely investigated here, during the pre-flight determination of the off-critical mean density of a fluid cell before its use in a weightlessness environment. More specifically, we would like to check whether or not SF6 remains similar to the 1974 standard SF6 fluid ([18]-[START_REF] Ley-Koo | Revised and extended scaling for coexisting densities of SF6[END_REF]), which supports a fluid asymmetry resulting from the complete scaling hypothesis ([10]-[START_REF] Wang | Nature of vapor-liquid asymmetry in fluid criticality[END_REF]). Our experimental challenge is then to detect the previously observed significant hook (of 0.5% amplitude) in the rectilinear density diameter, when the relative uncertainty in the filling density value is controlled with 0.1% precision along a non-critical path which exceeds the exact critical isochore by ∼0.2%. This experimental challenge is illustrated in Fig. 1. The high optical and thermal performances of the ALI-R insert used in the DECLIC facility Engineering Model allow the precise observation of the meniscus position behavior, governed by the density diameter behavior, as the temperature of highly symmetrical test cells is changed. Each of these test cells consists in a quasi-perfect disk-shaped cylindrical fluid volume observed in light transmission, surrounded by two opposite, small, and similar dead volumes. The latter volumes define the single remaining transverse (non-cylindrical) axis of the fluid volume due to the cell in-line filling setup. Here, the selected test cell ALIR5 [START_REF]ALIR5 test cell was selected among a series of 10 identical ALIR n cell (with n=1 to 10). The series have provided statistical evaluation of the fluid volume and fluid mass uncertainties (0.05% and 0.1%, respectively)[END_REF] was filled at a liquid-like mean density ⟨δρ̃⟩ = (ρ/ρ_c) - 1 very close to the critical density ρ_c of SF6. The relative off-critical density ⟨δρ̃⟩ = +0.20 ± 0.04 % of ALIR5 was measured with great accuracy from our Earth-based filling and checking processes (see below § 6 and Ref. [START_REF] Morteau | Proceedings of 2nd European Symposium Fluids in Space[END_REF]). The fluid under study is SF6 of electronic quality, corresponding to 99.995% purity (from Alpha Gaz - Air Liquide). The meniscus behavior could be analyzed in eight cell configurations. Such analyses provide an accurate experimental evaluation of the relative effects of (i) the complete cell design, (ii) the cell displacement in front of the CCD camera, (iii) the meniscus optical observations through gravitational stratification and liquid wettability, (iv) the cell filling mean density, and finally, (v) the coexisting density diameter behavior. Only the evaluations of (iv) and (v) are treated hereafter.
Experimental set-up and methods
Highly-symmetrical cell design
The essential characteristic of the ALIR5 cell (see Fig. 2(a)) is its highly symmetrical design with respect to any median plane of the observed cylindrical fluid volume. The main part of the fluid sample consists in a fluid layer of thickness e_f = (2.510 ± 0.002) mm and diameter d_f = 2R = (10.606 ± 0.005) mm. This fluid layer is confined between two flat, parallel, and transparent sapphire windows of thickness e_w = 8.995 mm and external diameter d_w = 12 mm. An engraved circle of 10 mm diameter, 30 μm thickness, is deposited on each sapphire external surface. Such a pancake cell design [START_REF] Zappoli | Heat Transfers and Related Effects in Supercritical Fluids[END_REF] leads to the ALIR5 viewed cylindrical fluid volume of V_fv = πR²e_f = (221.7 +0.20/-0.70) mm³ and to the ALIR5 cross-sectional area of A_fv = 2Re_f = (26.621 ± 0.020) mm² for any viewed median volumetric plane, except around the direction of the fill-line setup, as detailed below. The cell body is made of a machined CuCo2Be parallelepipedic block of external dimensions L(=25) × l(=27) × h(=24) mm³. This body contains two similar fill-line dead volumes, each one in the Γ-like form of two perpendicularly crossed cylindrical holes, located in the median plane of the fluid thickness. These two Γ-like volumes are symmetrical about the central optical axis used as the rotation axis of the cell. The resulting windowless fluid volume is (1/2)V_fd = (7.0 ± 0.2) mm³ on each side of the observed cylindrical fluid volume. Therefore, V_f = V_fv + V_fd = (235.70 +0.5/-1.0) mm³ is the total fluid sample volume. All the above dimensional values are from mechanical measurements performed at 20 °C. The common axis of the two small opposite cylindrical holes opened in the main cylindrical fluid volume defines the single particular direction of the common median plane. In this latter plane occurs the maximum fluid area (A_fv + A_fd = A_fv + (17.8 ± 1.0) mm²) crossing the complete fluid volume. The horizontal position of this median plane is chosen as the zero angle (θ = 0°) of the cell rotation (or θ = 180°, equivalently, for the opposite configuration of the cell versus the direction of the gravity vector). From reference to this cell direction, the maximum tilted angle θ_m that overlaps the Γ-like configuration of the dead volume is θ_m ≳ 28°. The ±θ_m directions are not equivalent versus the liquid (or gas) gravity positioning inside each dead volume (see below § 6). The experiment is then performed in four cell directions θ = {-23.2°; 0°; +22.9°; +90°} of the fill-line axis, each with respect to the two reverse orientations of the Earth's gravity vector. This permits the analysis of potential systematic errors associated with the cell dead volume. Throughout the paper, each cell configuration is labeled ⟦i, X⟧, where the digit i represents the two reverse gravity orientations (g↓ for i=1 and g↑ for i=2) and the letter X describes the four directions of the fill-line axis of the fluid cell (X=H for θ = 0°, X=V for θ = +90°, X=T for θ = +22.9°, and X=Z for θ = -23.2°). The corresponding cross-sectional shape of the cell is schematically pictured in Fig. 2(b), illustrating the relative positions of the meniscus and the dead fluid volumes with respect to the Earth's gravity vector.
Phase transition temperature.
The laser light transmission measurements of the EM-DECLIC facility [24] and the wide field-of-view observation of the fluid sample are combined to observe the phase separation process during the cell cooling down. Each temperature quench step crossing the transition temperature (noted T_coex) is -1 mK. The exact value of T_coex is not essential for the following discussion. The temperature results for each experimental configuration are then reported with reference to the lowest temperature (noted T_1φ) of the monophasic range. The resulting true SF6 coexistence temperature T_coex is such that 0 < T_1φ - T_coex < 1 mK, noting in addition the high reproducibility (2 mK range) of T_1φ (here from 318.721 K to 318.723 K) for the eight experimental runs. Moreover, the T_c - T_coex ≃ 0.4 µK shift [START_REF]For ALIR5, 〈𝛿𝛿𝜌𝜌 �〉 𝑇𝑇 𝑐𝑐 = 0.20% and[END_REF] due to the off-density criticality of the test cell is neglected, i.e., T_c ≈ T_coex. Finally, a wide field-of-view image of the meniscus position is recorded when thermal equilibration is achieved at each temperature difference (T_1φ - T) ≈ (T_c - T). Then T_c - T follows a logarithmic-like scale to cover the experimental temperature range 0 < T_c - T ≤ 15000 m°C (here with T_1φ > T_c = 45573 ± 1 m°C).
Cell imaging and Image processing
Cell imaging of the meniscus position.
The liquid-gas meniscus is observed by optical transmission imaging through the cell, using LED illumination and cell view observation with a CCD camera (1024×1024 pixels). The pixel size corresponds to 12 μm in the wide field-of-view imaging, well controlled from the two engraved circles on the external surface of each sapphire window. Additional small field-of-view (microscopy) imaging with 1.0 μm pixel resolution of a typical object area of 1×1 mm² in the central part of the fluid sample is also performed but not reported here. The cell images are also made by tuning the focal plane between the two window internal surfaces to control small optical perturbative effects related to any nonlinear light bending situations, e.g., due to a non-parallelism between tilted windows, wetting-layer lensing effects, compressibility effects, or a displacement of the optical axis of the imaging lenses versus the exact center axis of the cylindrical fluid cell volume.
Image processing of the cell position.
Before the determination of the meniscus position, the image processing needs the exact pixel coordinates of the viewed fluid cell volume to be determined inside the images recorded for each ⟦i, X⟧ configuration. The picture given in Fig. 3 for the ⟦1, V⟧ configuration at T = 45473 m°C is chosen to briefly summarize the method, which uses the line profile analysis provided by the NI Vision Assistant 2012 software. Each pixel point is characterized by its x (horizontal) and y (vertical) raw coordinates, where the axis origin is the top-left corner of the picture. The line profiles therefore provide the x-y coordinates of the selected borderline points between the fluid and cell body (see A, B, C, T, & R points in Fig. 3). The resulting position of the cell borderline (here a quasi-circle of ∼10.380-10.464 mm diameter, i.e., ∼865-872 pixels) can be controlled by comparison with the position of the two engraved circles (10 ± 0.01 mm, or 833.3/833.4 pixels, in diameter) on the external surface of the input and output windows. As an essential result, the (horizontal and vertical) pixel coordinates of the apparent center point O are the intrinsic characteristic parameters of the fluid volume position in each cell picture. The resulting estimate of the maximum error on the absolute position of any characteristic point of each profile line of each picture is ±0.5 pixels. The next step compares similar line profiles obtained at different temperatures to probe the absence of thermal effects at the pixel level during the temperature timeline of each facility configuration. The last step optimizes the matching of the selected characteristic points for two reversed similar configurations (⟦1, V⟧ and ⟦2, V⟧ for the chosen case). Indeed, the changes of the facility positions under the Earth's gravitational acceleration field induce small mechanical relative displacements due to the intrinsic clearance between the different (optical and mechanical) components. The present concern involves the cell (housed in the insert) in front of the video camera (located in the optical box of the facility). Therefore, this cell image matching step leads to the determination of the (horizontal and vertical) pixel shifts (∼2 to 6 pixels, typically) between two reversed images of the viewed fluid cell volume.
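The borderline points are read off gray-level line profiles. The actual processing used the NI Vision Assistant 2012 software; the Python sketch below is only a minimal stand-in showing how a steep intensity step along a one-pixel-wide profile can be located at the ±0.5 pixel level quoted above. The mid-level threshold choice and the synthetic profile are assumptions made for illustration.

```python
import numpy as np

def edge_positions(profile, threshold=None):
    """Sub-pixel crossings of a 1-D gray-level line profile.

    A minimal stand-in for the NI Vision line-profile tool: the
    fluid/cell-body borderline appears as a steep intensity step,
    located here by linear interpolation around a mid-level threshold.
    """
    p = np.asarray(profile, dtype=float)
    if threshold is None:
        threshold = 0.5 * (p.min() + p.max())
    above = p >= threshold
    idx = np.flatnonzero(above[1:] != above[:-1])   # pixel before each crossing
    return [i + (threshold - p[i]) / (p[i + 1] - p[i]) for i in idx]

# Synthetic vertical profile: bright fluid, dark cell wall from pixel 120 on.
rng = np.random.default_rng(0)
y = np.r_[np.full(120, 200.0), np.full(80, 40.0)] + rng.normal(0.0, 2.0, 200)
print(edge_positions(y))   # one crossing near 119.5, i.e. the +/-0.5 px level
```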
Image processing of the meniscus position.
For each ⟦i, X⟧ case, the line profile analyses are then applied to the horizontal (or vertical) lines that are closest to the related O point (see for example the line DE in Fig. 3). The details of these analyses are not reported here; only the main results of these line profile analyses (for the ⟦1, V⟧ and ⟦2, V⟧ configurations) are illustrated in Fig. 4. The line profiles along DE give access to the position and shape of the meniscus at each temperature. Taking then reference from the x-y position of a characteristic point of the viewed cell volume (such as the point B in the selected case of Fig. 3), the bare pixel distance of the meniscus position can be estimated. The temperature dependences of these bare distances are reported in Fig. 4, which illustrates (i) the well-defined crossing of the meniscus at a finite temperature distance from the transition temperature, and (ii) the well-defined position of the volumetric median plane of the fluid cell.
Additional important results are also obtained, such as the amplitude and shape of the capillary rise effect, the symmetrical matching of the meniscus position by fine tuning (±0.1 pixel, typically) of the apparent median plane for the two (slightly shifted) apparent cells, and the resulting noticeable symmetrical behavior of the capillary rise over the complete temperature range. Finally, the essential feature of the image analyses reflects the combination of the highly symmetrical cell design, the small off-density criticality of the cell filling, and the wide field-of-view cell imaging. Such a combination leads to an estimate of the absolute position of the volumetric median plane of the cell and of the meniscus position with ±0.5 pixel (i.e., ±6 μm) resolution. Such a resolution is obtained whatever the ⟦i, X⟧ configuration, thanks to the similarity of the meniscus behavior and temperature crossing for two reverse positions under the gravity field acceleration. One noticeable remark concerns the (gas or liquid) filling of one half part of the dead volume (i.e., 7.0 mm³). Such a non-viewed fluid volume is thus equivalent to a viewed fluid median layer of thickness ⟨δh⟩_fd = (1/2)V_fd/A_fv ≃ 263 µm (i.e., ≃21.91 pixels). The possible non-symmetrical effects related to the phase behavior in each windowless fluid volume can then easily be detected from the related viewed change of the meniscus position (see § 4), while its minimum ±0.5 pixel variation only corresponds to ±0.0675% of the total fluid volume, in conformity with the density precision requirements.
Results
Figure 4 shows that the pixel coordinate of the symmetrized meniscus positions, i.e., one half (noted h_{i,X}) of the differences between the related bare pixel distances, can be estimated with reference to the volumetric median plane of the cell in the selected configurations. The temperature behaviors of h_{i,X} are reported in Fig. 5 for the eight ⟦i, X⟧ configurations. Except for the ⟦i, T⟧ and ⟦i, H⟧ cases (see below), the temperature crossing of the volumetric median plane of the cell occurs in the range 44673 ≤ T_cross (m°C) ≤ 44973 (i.e., T = T_c - {600; 900} m°C, accounting for the ±0.5 pixel ∼ |δh_{i,X}| ≤ ±6 µm uncertainty). The meniscus behavior for the ⟦i, T⟧ cases is clearly affected by a significant non-symmetrical effect of the wetting liquid phase inside the dead cell volume. Indeed, that is the only configuration where the expected gas-like dead volume appears in a position which can easily be connected to the liquid side of the cell by capillary effects. Accounting for the above remark about ⟨δh⟩_fd, it seems that 1/3 of this gas-like dead volume can be filled by liquid around T_cross. Obviously, this excess liquid trapping decreases with temperature, since the meniscus position is lowered more and more below the corresponding fill-in channel.
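As a sketch of this symmetrization step, the hypothetical helper below combines the two bare pixel distances measured in reverse gravity orientations (Fig. 4) into the single height h_{i,X}; the half-difference convention and the prior image-matching correction are assumptions based on the description above, not code from the original study.

```python
PX_UM = 12.0  # wide field-of-view pixel size (um)

def h_symmetrized(d_down, d_up):
    """h_{i,X} in micrometers from the two bare pixel distances d_down
    and d_up measured in the two reverse gravity orientations.

    Assumes the ~2-6 px camera/cell matching shifts were already removed;
    the half-difference cancels any residual common offset, so the
    median-plane reference drops out of the estimate.
    """
    return 0.5 * (d_down - d_up) * PX_UM
```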
Such plausible non-symmetrical liquid wetting effects can also occur for the ⟦i, H⟧ cases, particularly observed at low temperatures (T ≃ 43573 m°C) where capillary condensation can exist in the small fill-in channel. Conversely, only a very small part (2/10) of the dead volumes seems to be responsible for the h_{i,H} differences compared to the ⟦i, V⟧ or ⟦i, Z⟧ configuration cases.
Modeling
The following modeling starts from the initial result given in Ref. [START_REF] Morteau | Proceedings of 2nd European Symposium Fluids in Space[END_REF] for an ideal constant cylindrical volume of the fluid sample with radius R, filled at a small liquid-like off-critical density ⟨δρ̃⟩ > 0. The horizontal position h ≪ R of the liquid-gas meniscus with reference to the horizontal cell median plane is written as follows:

$$\frac{h}{R} = \frac{\pi}{4}\,\frac{\langle\delta\tilde{\rho}\rangle - \Delta\tilde{\rho}_d}{(\Delta\tilde{\rho})_{LV}} \qquad (1)$$

where

$$\Delta\tilde{\rho}_d = \frac{\rho_L + \rho_V}{2\rho_c} - 1 \qquad (2)$$

$$(\Delta\tilde{\rho})_{LV} = \frac{\rho_L - \rho_V}{2\rho_c} \qquad (3)$$

ρ_L and ρ_V are the coexisting liquid and vapor densities at temperature T < T_c, respectively. In this ideal cylindrical cell, the fluid compressibility and capillary effects are neglected, while only simple geometrical considerations are used to define the liquid-vapor distribution which results from the fluid mass conservation at any T.
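A direct transcription of Eqs. (1)-(3) is given below as a sketch, assuming the coexisting densities are supplied externally (e.g., from an equation of state or tabulated data). The numeric example is illustrative only; the SF6-like densities are assumptions, and only the +0.20% filling value comes from the text above.

```python
import math

def meniscus_height_ideal(rho_L, rho_V, rho_c, drho_mean, R):
    """Eq. (1): meniscus height h (same unit as R) in the ideal
    cylindrical cell, valid for |h| << R and neglecting fluid
    compressibility and capillary effects."""
    drho_d = (rho_L + rho_V) / (2.0 * rho_c) - 1.0    # Eq. (2), diameter term
    drho_LV = (rho_L - rho_V) / (2.0 * rho_c)         # Eq. (3), order parameter
    return (math.pi / 4.0) * R * (drho_mean - drho_d) / drho_LV

# Illustrative numbers only (kg/m^3 and mm); drho_mean = +0.20 % as for ALIR5.
print(meniscus_height_ideal(rho_L=880.0, rho_V=610.0, rho_c=742.0,
                            drho_mean=0.0020, R=5.303))  # ~ -0.047 mm
```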
For the present ALIR5 case, the additional Γ-like symmetrical windowless fluid volume is accounted for by rewriting the total volume V_f as V_f = πR²e_f(1 + x), with x = V_fd/V_fv ≃ 0.063. As a direct consequence, the fluid (gas or liquid) filling of the half part of the dead volume is measured by the ratio ⟨δh⟩_fd/R = ±πx/4 ≃ ±0.0496. ⟨δh⟩_fd is the above viewed change of the meniscus position around the fluid median plane (i.e., ≃±21.91 pixels or ≃±263 μm).
The thermal effects are accounted for by exchanging the ⟨δρ̃⟩ term of Eq. (1) with

$$\langle\delta\tilde{\rho}\rangle_T = \langle\delta\tilde{\rho}\rangle_{T_c}\left(1 + 3\alpha_T T_c \Delta\tau^*\right) + 3\alpha_T T_c \Delta\tau^* \qquad (4)$$

The above temperature dependence of ⟨δρ̃⟩_T is obtained from a linear change of the cell mean density written as ⟨ρ⟩_T = ⟨ρ⟩_{T_c}(1 + 3α_T T_c Δτ*). α_T = 1.8 × 10⁻⁶ K⁻¹ is the thermal dilatation coefficient of the CuCo2Be alloy. In the temperature range T_c - T ≤ 2 K, the cell thermal dilatation effect is lower than 5.5%. These effects reach 29% at T_c - T ≃ 10 K. Such effects need to be accounted for in the cell filling process at the laboratory temperature (they are ≃60% in the temperature range T_lab ≃ 20-25 °C).
The liquid wettability effects are estimated in the form of an equivalent lowering ⟨δh⟩_cl of the meniscus position:

$$\langle\delta h\rangle_{cl} \propto l_{cl}^{2}\left(1 - \frac{\pi}{4}\right)\frac{2R + e_f}{R\,e_f} \qquad (5)$$

⟨δh⟩_cl corresponds to the thickness of a horizontal liquid planar layer having a volume similar to the total wetted liquid volume on the sapphire and cell body. To derive Eq. (5), an ellipsoidal shape is assumed for the meniscus capillary rise, such that the product of its characteristic size parameters is proportional to the squared capillary length l_cl² (the so-called capillary constant a²), i.e., l_cl² = l_0²|Δτ*|^(2ν-β), with 2ν - β = 0.935 and asymptotic amplitude l_0² ≃ 3.84 mm² [START_REF] Garrabos | Master singular behavior for the Sugden factor of one-component fluids near their gas-liquid critical point[END_REF]. The l_cl² behavior compares well with the effective singular behavior a² = (3.94 mm²)|Δτ*|^0.944 [27] [START_REF] Moldover | Capillary rise, wetting layers, and critical phenomena in confined geometry[END_REF]. The validity of l_cl² ∝ a² is controlled from the apparent size of the meniscus thickness due to the meniscus capillary rise. Finally, only the proportionality amplitude (of value 1.44, see § 6) of Eq. (5) remains as an adjustable parameter at large temperature distances from T_c. However, in the temperature range T_c - T ≤ 3 K, it is noticeable that ⟨δh⟩_cl remains lower than one half pixel (<6 µm), i.e., ⟨δh⟩_cl/R < 1.2 × 10⁻³. In such a case, it is also important to note the large value of the ratio ⟨δh⟩_fd/⟨δh⟩_cl ≃ 50. The final functional form of h then writes as follows:

$$\frac{h}{R} = \frac{\pi}{4}\,(1 + x)\,\frac{\langle\delta\tilde{\rho}\rangle_T - \Delta\tilde{\rho}_d}{(\Delta\tilde{\rho})_{LV}} - \frac{\langle\delta h\rangle_{cl}}{R} \qquad (6)$$

where only the fluid compressibility effects still remain neglected. These latter effects can be observed from the grid deformation and the related local turbidity on both sides of the vapor-liquid meniscus and are only noticeable in the temperature range T_c - T ≤ 5 m°C.
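Collecting Eqs. (4)-(6) gives a directly computable model of the meniscus trajectory. The sketch below is an assumption-laden reconstruction: it uses the effective top-shape law for (Δρ̃)_LV quoted later in the Discussion, the (1+x) placement of Eq. (6) as reconstructed from its surviving fragments, the ALIR5 dimensions in mm, and the 1.44 capillary amplitude quoted after Eq. (5).

```python
import math

def drho_LV_SF6(tau):
    """Effective top-shape law for SF6 quoted in the Discussion (Ref. [20])."""
    return 1.7147 * tau**0.3271 + 0.8203 * tau**0.8215 - 1.4396 * tau**1.2989

def h_over_R(tau, drho_d, drho_Tc=0.0020, x=0.063, alpha_T=1.8e-6,
             Tc=318.72, l0_sq=3.84, amp=1.44, R=5.303, e_f=2.510):
    """Meniscus position h/R for tau = (Tc - T)/Tc > 0, Eqs. (4)-(6);
    lengths in mm, fluid compressibility neglected."""
    tau_star = -tau                                   # signed Delta tau* below Tc
    drho_T = (drho_Tc * (1.0 + 3.0 * alpha_T * Tc * tau_star)
              + 3.0 * alpha_T * Tc * tau_star)        # Eq. (4)
    l_cl_sq = l0_sq * tau**0.935                      # squared capillary length
    dh_cl = amp * l_cl_sq * (1.0 - math.pi / 4.0) * (2.0 * R + e_f) / (R * e_f)  # Eq. (5)
    return (math.pi / 4.0) * (1.0 + x) * (drho_T - drho_d) / drho_LV_SF6(tau) \
           - dh_cl / R                                # Eq. (6)

for dT_mK in (600.0, 750.0, 900.0):
    tau = dT_mK * 1e-3 / 318.72
    print(dT_mK, h_over_R(tau, drho_d=0.84 * tau))  # sign change inside 600-900 mK
```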
Discussion
When the capillary rise effects are negligible (i.e., in the temperature range T_c - T ≤ 3 K), Eq. (6) shows that the meniscus in a cell with a finite positive value of ⟨δρ̃⟩_T crosses the cell median plane at a single temperature T_cross where h = 0, i.e., ⟨δρ̃⟩_{T_cross} = Δρ̃_d. A first approach considers the linear functional form of ρ̃_d such that

$$\tilde{\rho}_d = 1 + a_d\,|\Delta\tau^*| \qquad (7)$$
where the value a_d = 0.84 ± 0.015 of the slope of the rectilinear diameter results from the coexisting density data over the complete two-phase range. The above central estimation of T_cross = 44823 m°C then leads to ⟨δρ̃⟩_{T_cross} ≅ ⟨δρ̃⟩_{T_c} = 0.20%. The Earth-based visualization of the meniscus behavior in the ALIR5 cell is thus an academic benchmark experiment where the resolution of the image processing at the pixel level is of prime interest for an accurate determination of the mean filling density of the fluid cell. This experiment allows a preliminary check (without accounting for the compressibility effects) of the validity of the expected singular top shape of the coexisting density curve and of its related singular diameter, presumably satisfying the different theoretical functional forms issued from the literature. The singular top shape of the coexistence curve (Δρ̃)_LV(|Δτ*|) for |Δτ*| ≤ 10⁻² can be predicted without adjustable parameter [START_REF] Garrabos | Crossover equation of state models applied to the critical behavior of xenon[END_REF] from the theoretical master crossover functions estimated from the massive renormalization scheme. Nevertheless, any other effective power law describing (Δρ̃)_LV(|Δτ*|) of SF6 (such as for instance (Δρ̃)_LV = 1.7147|Δτ*|^0.3271 + 0.8203|Δτ*|^0.8215 - 1.4396|Δτ*|^1.2989 from Ref. [START_REF] Ley-Koo | Revised and extended scaling for coexisting densities of SF6[END_REF]) does not modify the following analysis, especially considering the temperature range 0.03 ≤ T_c - T < 3 K where |δh_{i,X}| ≤ 100 µm (i.e., |δh_{i,X}| ≤ 8 pixels).
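Setting h = 0 in Eq. (6) with the rectilinear diameter of Eq. (7) gives the crossing temperature in closed form. The one-line estimate below neglects the thermal dilatation and capillary terms (an assumption justified above for T_c - T ≤ 3 K) and lands inside the measured 600-900 mK window.

```python
Tc, a_d, drho_Tc = 318.723, 0.84, 0.0020   # K, diameter slope, +0.20 % filling

tau_cross = drho_Tc / a_d                  # h = 0  <=>  <drho>_Tc = a_d * |tau*|
print(f"Tc - Tcross = {tau_cross * Tc * 1e3:.0f} mK")   # -> ~759 mK
```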
The second approach thus introduces the singular functional form of Δρ̃_d as follows:

$$\Delta\tilde{\rho}_d = \frac{A_\beta\,|\Delta\tau^*|^{2\beta} + A_\alpha\,|\Delta\tau^*|^{1-\alpha} + A_1\,\Delta\tau^* + A_\Delta\,|\Delta\tau^*|^{x_\Delta}}{1 + a_\Delta\,|\Delta\tau^*|^{\Delta}} \qquad (8)$$

Equation (8) results from the various complete field mixing (CFM) models predicting the singular asymmetry with adjustable amplitudes. The amplitude sets obtained from the fitting of Weiner's data [START_REF] Kim | Singular coexistence-curve diameters: Experiments and simulations[END_REF] are given in Table 1, with α = 0.109, β = 0.326, Δ = 0.52, and x_Δ = 1 - α + Δ. Any adjustment of (at least three) free amplitudes appears compatible with Weiner's data whatever the additive forms of the power laws and exponents involved in Eq. (8). The corresponding estimations of (h/R) as a function of T_c - T are illustrated in Fig. 6, where the value ⟨δρ̃⟩_{T_c} = 0.20% is fixed. Only the experimental results for the ⟦i, V⟧ (full blue circles) and ⟦i, Z⟧ (full green triangles) configurations are used. For the rectilinear diameter case, the dotted and full blue curves correspond to the use of Eqs. (6)-(7), without or with the capillary correction term, where the corresponding predictions of Ref. [START_REF] Garrabos | Crossover equation of state models applied to the critical behavior of xenon[END_REF] (see above) are introduced. For the critical hook case, the brown, green, and pink full curves correspond to the use of Eqs. (6) and (8) with the parameters of columns 2, 3, and 5 (without visible difference between the green curve (column 3 case) and the green circles (column 4 case)). In addition, the published ρ_L/ρ_c and ρ_V/ρ_c Weiner's data (Table X of [START_REF] Weiner | Breakdown of the law of rectilinear diameter[END_REF]), earlier supporting the diameter deviation of SF6, can directly be used to estimate the various terms of Eq. (6). In Fig. 6, the orange full diamonds mark the corresponding meniscus positions at the Weiner's experimental T_c - T values. Clearly, only the (h/R) calculations for the linear density diameter case are in good agreement with the experimental data, especially in the two temperature decades 25 ≤ T_c - T < 2500 mK of prime interest regarding the neglected effects. In addition to an intrinsic questioning of the Weiner's measurements of the SF6 diameter deviation [START_REF] Moldover | Capillary rise, wetting layers, and critical phenomena in confined geometry[END_REF], the noticeable inconsistency observed in Fig. 6 can be attributed to a non-realistic estimation of the uncertainty on the (ρ_L/ρ_c) + (ρ_V/ρ_c) Weiner values (at least one decade larger than the maximum amplitude (0.5%) of the hook-like deviation), especially close to the critical temperature (see Fig. 1). More generally, the systematic large data dispersion over the complete temperature range of the Weiner's data mainly seems due to the Weiner's values of the critical parameters ρ_c (density), ε_c (dielectric constant), and then CM_c = (1/ρ_c)(ε_c - 1)/(ε_c + 2) (Clausius-Mossotti constant), which are significantly different (-1.5%, -10.9%, and -3.3%, respectively) from the literature ones [START_REF]Weiner's values are 𝜌𝜌 𝑐𝑐 = (0.731 ± 0.001) g.cm -3 , 𝜀𝜀 𝑐𝑐 = 0.262 ± 0.010[END_REF].
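For comparison, Eq. (8) can be evaluated with the amplitude sets of Table 1. The sketch below treats all power-law terms with |Δτ*| (a convention assumed from the fitting context) and shows that, close to T_c, the hook-like deviation of the singular diameter exceeds the rectilinear term several-fold, which is what makes the two scenarios of Fig. 1 distinguishable at the pixel level.

```python
ALPHA, BETA, DELTA = 0.109, 0.326, 0.52
X_DELTA = 1.0 - ALPHA + DELTA

# (A_beta, A_alpha, A_1, A_Delta, a_Delta): columns 2-5 of Table 1
SETS = {
    "col 2 [7]": (0.0,         6.365,      -10.13,      8.080,  0.0),
    "col 3 [7]": (1.124,      -9.042,       11.37,     -3.354,  0.0),
    "col 4 [7]": (1.0864,     -7.990,       9.770,      0.0,    3.318),
    "col 5 [9]": (0.46028392, -0.6778981,   0.13516245, 0.0,    0.0),
}

def drho_d_singular(tau, A_b, A_a, A_1, A_D, a_D):
    """Delta rho~_d from Eq. (8); all terms taken with tau = |Delta tau*|."""
    num = (A_b * tau**(2 * BETA) + A_a * tau**(1 - ALPHA)
           + A_1 * tau + A_D * tau**X_DELTA)
    return num / (1.0 + a_D * tau**DELTA)

tau = 1e-3   # Tc - T ~ 0.32 K
for name, p in SETS.items():
    print(name, f"{drho_d_singular(tau, *p):+.5f}")
print("linear Eq. (7)", f"{0.84 * tau:+.5f}")
# Columns 3 and 4 coincide (as noted for the green curve/circles in Fig. 6),
# and all hooks (~+0.004) dwarf the rectilinear value (+0.00084) at this tau.
```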
Conclusions
The (h/R) modeling from Eqs. (6) and (7) is comparable (in amplitude and uncertainty) with the Earth-based measurements. Along the off-critical thermodynamic path of ⟨δρ̃⟩ = +0.20 ± 0.04 %, the careful imaging analysis of the SF6 two-phase domain appears well understood without the supplementary addition of any singular hook-shaped deviation in the rectilinear density diameter. The main part of the uncertainty in the rectilinear density diameter remains due to the actual level of precision (0.21%) of the SF6 critical density value. Within such an uncertainty range, the cell thermal dilatation, the fluid compressibility, the fluid coexisting densities and the liquid wettability effects can be well controlled thanks to the highly symmetrical ALIR5 cell design. The slope of the SF6 linear density diameter seems the only remaining adjustable parameter, leading to a questionable applicability of the complete field mixing to the simple fluid case. Future modeling approaches [START_REF] Garrabos | Liquid-vapor rectilinear diameter revisited[END_REF] will be focused on the estimation of the fluid compressibility effects using an upgraded version of the universal parametric equation of state. Moreover, ongoing experimental work will be performed to account for the eventual contribution of the SF6 purity to the critical parameters, before definitively closing the debated situation about the density diameter behavior close to the critical point.
Fig. 1 .
1 Fig. 1. Experimental critical hook of the linear density diameter of SF6 reported in [10] and related temperature 𝑇𝑇 𝑐𝑐 -𝑇𝑇 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 where the meniscus position must be collapsed on the volumetric median plane of a cylindrical cell filled at the liquid-like off-critical density of 0.20%. Green lines: Expected from the singular hook of the density diameter. Red lines: expected from the rectilinear density diameter
leads to the ALIR5 viewed cylindrical fluid volume of 𝑉𝑉 𝑓𝑓𝑓𝑓 = 𝜋𝜋𝑅𝑅 2 𝑒𝑒 𝑓𝑓 = (221.7 -0.70 +0.20 ) mm 3 and the ALIR5 cross-sectional area of 𝐴𝐴 𝑓𝑓𝑓𝑓 = 2𝑅𝑅𝑒𝑒 𝑓𝑓 = (26.621 ± 0.020) mm 2 for any viewed median volumetric plane, except around the direction of the fill-line setup, as detailed below. The cell body is made of a machined CuCo2Be parallelepipedic block of external dimensions 𝐿𝐿(= 25) × 𝑙𝑙(= 27) × ℎ(= 24) mm 3 . This body contains two similar fill-line dead volumes -each one in the Γ-like form of two perpendicularly crossed cylindrical holes -located in the median plane of the fluid thickness. These two Γ-like volumes are symmetrical from the central optical axis used as the rotation axis of the cell. The resulting windowless fluid volume is (1 2 ⁄ )𝑉𝑉 𝑓𝑓𝑓𝑓 = (7.0 ± 0.2) mm 3 on each side of the observed cylindrical fluid volume. Therefore, 𝑉𝑉 𝑓𝑓 = 𝑉𝑉 𝑓𝑓𝑓𝑓 + 𝑉𝑉 𝑓𝑓𝑓𝑓 = (235.70 -1.0 +0.5 ) mm 3 is the total fluid sample volume. All the above dimensional values are from mechanical measurements performed at 20°C. The common axis of the two small opposite cylindrical holes opened in the main cylindrical fluid volume defines the single particular direction of the common median plane. In this latter plane occurs the maximum fluid area ( 𝐴𝐴 𝑓𝑓𝑓𝑓 + 𝐴𝐴 𝑓𝑓𝑓𝑓 = 𝐴𝐴 𝑓𝑓𝑓𝑓 + (17.8 ± 1.0) mm 2 ) crossing the complete fluid volume. The horizontal position of this median plane is chosen as the zero angle ( 𝜃𝜃 = 0 °) of the cell rotation (or 𝜃𝜃 = 180 °, equivalently, for the opposite configuration of the cell versus the direction of the gravity vector). From reference to this cell direction, the maximum tilted angle 𝜃𝜃 𝑚𝑚 that overlaps the Γ-like configuration of the dead volume is 𝜃𝜃 𝑚𝑚 ≳ 28°. The ±𝜃𝜃 𝑚𝑚 -directions are not equivalent versus the liquid (or gas) gravity positionning inside each dead volume (see below § 6).
Fig. 2 .
2 Fig. 2. (a) Picture of the ALIR5 cell. (b) Schematic cross section of the ALIR5 cell where are illustrated the four relative directions of the meniscus 𝜃𝜃 = {-23.2°; 0°; +22.9°; +90°} and the direction 𝜃𝜃 𝑚𝑚 ≳ 28° that overlaps the Γ-like forms of the two symmetrical dead volumes associated to the fill-line direction. Red area: fluid volume; blue area: diffusion windows, filling screws and stoppers; green area: cell body; external circle : dimensional scale .
Fig. 3. Video picture of the ALIR5 cell for the ⟦1, V⟧ configuration at temperature T = 45473 m°C. Selected borderline points A, B, C, T, & R between the fluid and cell body are used in the image processing to define the fluid sample cell position (especially the apparent center point O) inside the picture. Vertical line DE is used to analyse the meniscus position and shape as functions of temperature.

The next step compares similar line profiles obtained at different temperatures to probe the absence of thermal effects at the pixel level during the temperature timeline for each facility configuration. The last step optimizes the matching of the selected characteristic points for two reversed similar configurations (⟦1, V⟧ and ⟦2, V⟧ for the chosen case). Indeed, the changes of the facility positions under the Earth's gravitational acceleration field induce small relative mechanical displacements due to the intrinsic clearance between the different (optical and mechanical) components. The present concern involves the cell (housed in the insert) in front of the video camera (located in the optical box of the facility). Therefore, this cell image matching step leads to the determination of the (horizontal and vertical) pixel shifts (∼2 to 6 pixels, typically) between two reversed images of the viewed fluid cell volume.
Fig. 4. Bare pixel distances of the meniscus as functions of the temperature for the ⟦1, V⟧ (open red circles) and ⟦2, V⟧ (open blue circles) configurations, together with the related bare pixel distance of the volumetric median plane of the cell, referenced to the y pixel coordinate of point B in Fig. 3.
Fig. 5. Temperature dependence of the symmetrical pixel shift of the meniscus position with reference to the corresponding volumetric median plane, for the eight ⟦i, X⟧ configurations. 1 pixel = 12 µm.
Fig. 6. Comparison between experimental and modelling results of (h/R) as functions of Tc − T, fixing ⟨δρ̄⟩Tc = 0.20%.
Table 1. SF6 parameters for Eq. (8).

Parameter   [7]       [7]       [7]       [9]
Aβ          0         1.124     1.0864    0.46028392
Aα          6.365     -9.042    -7.990    -0.6778981
A1          -10.13    11.37     9.770     0.13516245
A∆          8.080     -3.354    0         0
a∆          0         0         3.318     0
Acknowledgements
We thank all the CNES, CNES-CADMOS, NASA, and associated industrial teams involved in the DECLIC facility project. CL, SM, YG, DB are grateful to CNES for the financial support. They are also grateful to Philippe Bioulez and Hervé Burger for their operational support at CADMOS. The research of I.H. was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. |
01695557 | en | [
"chim.poly",
"chim.cata"
] | 2024/03/05 22:32:13 | 2017 | https://univ-rennes.hal.science/hal-01695557/file/Experimental%20and%20Computational%20Investigations%20on%20Highly%20Syndioselective_accepted.pdf | Elisa Louyriac
Eva Laur
Alexandre Welle
Aurélien Vantomme
Olivier Miserque
Jean-Michel Brusson
Laurent Maron
email: [email protected]
Jean-François Carpentier
email: [email protected]
Evgueni Kirillov
email: [email protected]
Experimental and Computational Investigations on Highly Syndioselective Styrene-Ethylene Copolymerization Catalyzed by Allyl ansa-Lanthanidocenes
Introduction
Syndiotactic polystyrene (sPS) is an attractive engineering plastic potentially usable for many industrial applications due to its fast crystallization rate, low permeability to gases, low dielectric constant and good chemical and temperature resistance. [START_REF] Ishihara | Stereospecific Polymerization of Styrene Giving the Syndiotactic Polymer[END_REF][START_REF] Malanga | Syndiotactic Polystyrene Materials[END_REF][START_REF] Schellenberg | Syndiotactic Polystyrene: Process and Applications[END_REF] However, its high melting point (270 °C) and its brittleness are the two main drawbacks limiting its processability. To tackle this issue, several strategies have been envisaged: blending or post-modification of sPS, polymerization of functionalized styrene derivatives, or copolymerization of styrene with other monomers. [START_REF] Zinck | Functionalization of Syndiotactic Polystyrene[END_REF][START_REF] Jaymand | Recent Progress in the Chemical Modification of Syndiotactic Polystyrene[END_REF] The latter approach was found effective and versatile to fine-tune the properties of sPS, [START_REF] Laur | Engineering of Syndiotactic and Isotactic Polystyrene-Based Copolymers via Stereoselective Catalytic Polymerization[END_REF] more particularly via syndioselective copolymerization of styrene with ethylene. [START_REF] Rodrigues | Groups 3 and 4 Single-Site Catalysts for Styrene-Ethylene and Styrene-α-olefin Copolymerization[END_REF] The copolymerization of those two monomers is quite challenging due to their strikingly different reactivity. As a result, most of the group 4 catalysts active for sPS production only provided "ethylene-styrene interpolymers" (ESI), featuring no stereoregularity and amounts of incorporated styrene below 50 mol%. Those issues were overcome by the development of group 3 catalysts, independently disclosed by our group [START_REF] Rodrigues | Allyl ansa-lanthanidocenes: Single-Component, Single-Site Ccatalysts for Controlled Syndiospecific Styrene and Styrene-Ethylene (Co)Polymerization[END_REF] and by Hou and co-workers. [START_REF] Luo | Scandium Half-Metallocene-Catalyzed Syndiospecific Styrene Polymerization and Styrene-Ethylene Copolymerization: Unprecedented Incorporation of Syndiotactic Styrene-Styrene Sequences in Styrene-Ethylene Copolymers[END_REF] Yet, the number of effective catalytic systems for sPSE synthesis remains quite limited to date. [START_REF] Li | Aluminum Effects in the Syndiospecific[END_REF] Very recently, we reported on the synthesis and catalytic investigations of a new series of neutral ansa-lanthanidocene catalysts for the production of sPS; 11 a thorough DFT study of these systems highlighted the different factors governing the formation of sPS. 11 In this new contribution, we describe the syndioselective copolymerization of styrene with ethylene using this latter series of complexes and demonstrate that some of them feature improved catalytic performances as compared to the current state-of-the-art (Scheme 1). For the first time, the parameters that control syndioselective styrene-ethylene copolymerization were also investigated by DFT computations. These calculations contributed to a better understanding of the copolymerization process.
Results and Discussion
Styrene-Ethylene Copolymerizations Catalyzed by Allyl Ansa-Lanthanidocenes.
Styrene/ethylene copolymerizations catalyzed by complexes 1-Nd-K-allyl, 2-Nd–7-Nd, 2-Sc, 2-La, 2-Sm and 2-Pr were first screened under similar conditions (Table 1, entries 1–11). As already described for styrene homopolymerization, 11 the reactions were best conducted using a few equiv of (nBu)2Mg as scavenger, to prevent catalyst decomposition by trace impurities, especially at low catalyst loading and high temperature (vide infra). This dialkylmagnesium appeared to be a poor chain transfer agent under those conditions and did not affect the reaction mechanism nor the properties of the produced sPSE copolymers. Compared to 2-Nd, complexes 6-Nd and 7-Nd showed lower productivities and afforded sPSE copolymers with a higher ethylene content (thus affecting the calculation of syndioselectivity, which appeared, at first sight, lower due to more abundant St-E enchainments) (entries 2 and 6–7). The latter observation suggests that introduction of substituents bulkier than tBu, namely cumyl or Ph2MeC-, at the 2,7-positions of the fluorenyl ligand favors insertion of the smaller ethylene monomer rather than styrene.
The nature of the metal center also played a key role. Complex 2-Sc was nearly inactive whereas 2-Pr and 2-La afforded sPSE copolymer with productivities of ca. 300 kg·mol(Ln)⁻¹·h⁻¹. Under those non-discriminating conditions, full styrene conversion was reached when using complex 2-Sm, as observed with its neodymium analogue 2-Nd. Substantial improvement of the productivity values was obtained under more forcing and demanding copolymerization conditions (entries 12–17). Increasing both the temperature of polymerization up to 140 °C and the monomer-to-catalyst ratio up to 40 000 allowed productivities above 1,000 kg(sPSE)·mol(Ln)⁻¹·h⁻¹ to be reached. 2-Nd gave 1 400–1 700 kg(sPSE)·mol(Nd)⁻¹·h⁻¹, affording a highly syndiotactic copolymer ([r]5 = 54–61%) with a relatively narrow dispersity value (ÐM = 2.4), despite the elevated polymerization temperature (entries 12 and 13). Similar results were observed using 2-Pr, even though it appeared to be somewhat less active and stereoselective than its Nd analogue. Better productivities in the range 1 867–2 265 kg(sPSE)·mol(Sm)⁻¹·h⁻¹ were observed with 2-Sm but the syndiotacticity of the copolymer significantly dropped ([r]5 = 32–35%). Such a marked discrepancy between the stereoselectivity of 2-Nd and 2-Sm was not observed for the copolymerizations performed at 60 °C (compare entries 2, 10 and 11 with entries 12–17). Overall, these results are in line with those already described for syndioselective styrene homopolymerization, 11 and highlight the remarkable stability of 2-Ln catalytic systems under such drastic conditions (thanks to (nBu)2Mg as scavenger). The productivities of these systems are comparable with those of the most active cationic scandium-based systems reported for syndiospecific styrene/ethylene copolymerization. [START_REF] Rodrigues | Allyl ansa-lanthanidocenes: Single-Component, Single-Site Ccatalysts for Controlled Syndiospecific Styrene and Styrene-Ethylene (Co)Polymerization[END_REF][START_REF] Li | Aluminum Effects in the Syndiospecific[END_REF] The most productive and syndioselective catalyst, 2-Nd, was tested on a 10-fold larger production scale (i.e., on a half-kg styrene) in bulk conditions at 100 °C in a closed reactor;
five different experiments with variable amounts of ethylene (vide infra) were conducted and returned improved productivities in the range 2 730–5 430 kg(sPSE)·mol(Nd)⁻¹·h⁻¹ (entries 18–22). Under these bulk conditions, molecular weights of the resulting copolymers were somewhat higher than those obtained at a lower (bench) scale in 50:50 v/v mixtures of styrene/hydrocarbon solvent (Mn = 43 000–62 000 g·mol⁻¹ vs. Mn = 33 000 g·mol⁻¹, respectively) and the polydispersities were also narrower (ÐM = 1.4–2.5). These data highlight the significant impact of the process conditions on both the catalytic system productivities and characteristics of the polymers.
The initial styrene-to-ethylene ratio was also varied by changing the amount of ethylene introduced at the beginning of the polymerization (entries 18–22; see Experimental part). The ethylene content in the copolymer can hence be easily tuned, allowing the production of a range of sPSE materials. The 13C{1H} NMR spectra were recorded using an inverse-gated-decoupling sequence in order to accurately determine the amount of ethylene incorporated. As only isolated units of ethylene were detected, the amount of ethylene incorporated was determined by integrating the signal of the ipso carbon (polystyrene sequences, δ 145.8 ppm) and the signals at 37–38 ppm corresponding to the secondary carbons Sαγ.
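One plausible form of this quantification, written as a sketch under the assumption (not stated explicitly above) that each isolated ethylene unit contributes two Sαγ carbons while each styrene unit contributes one ipso carbon, is:

```latex
\mathrm{mol\%\;E} \;=\; \frac{I(S_{\alpha\gamma})/2}{I(C_{ipso}) + I(S_{\alpha\gamma})/2} \times 100
```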
Table 1 columns (continued): Prod.b [kg·mol⁻¹·h⁻¹]; C2 inc.c [mol%]; Tm d [°C]; Tc d [°C]; Tg d [°C]; ΔHm d [J·g⁻¹]; Mn×10³ [g·mol⁻¹] e; ÐM e; [r]5.
The analysis of the spectral region corresponding to the resonance of the secondary carbon Sα of PS sequences also allowed quantifying the syndiotacticity at the hexad level (Figure 3). The relative intensity of the rrrrr hexad signal was obtained after deconvolution and integration of all the signals in this area. This means that not only the presence of the other hexads mrrrr, rmrrr and rrmrr but also the presence of other unassigned sequences (in particular, those that are the consequence of S-E junctions, and presumably as well hexads with meso diads) were considered for the calculation of [r]5. The values measured in the present case ([r]5 = 32–78%) are similar to those previously reported in the case of sPSE materials obtained with I (Pr > 0.81, [r]5 > 35%; depending on the ethylene content). [START_REF] Rodrigues | Allyl ansa-lanthanidocenes: Single-Component, Single-Site Ccatalysts for Controlled Syndiospecific Styrene and Styrene-Ethylene (Co)Polymerization[END_REF]
DFT investigation of styrene-ethylene copolymerization catalyzed by {Me2C(C5H4)(Flu)}Nd(C3H5) (I).
Complex I, which is highly effective at copolymerizing styrene with ethylene while maintaining a high syndiotacticity, [START_REF] Rodrigues | Allyl ansa-lanthanidocenes: Single-Component, Single-Site Ccatalysts for Controlled Syndiospecific Styrene and Styrene-Ethylene (Co)Polymerization[END_REF] was selected as a benchmark for our theoretical study. This will subsequently allow us to highlight the influence of the catalyst substituents present on the allyl and fluorenyl ligands on styrene-ethylene copolymerization.
i) First styrene vs. ethylene insertion. Energy profiles were computed for the first ethylene (3-E) and 2,1-down-re (3d-re) styrene 11 insertions (Figure 4). From a kinetic point of view, transition state 3-E is more stable (by 3.3 kcal·mol⁻¹) than 3d-re, but this difference lies within the error range of the method. 14,15 The first insertion is thus more likely thermodynamically controlled, in favor of styrene insertion by 4.6 kcal·mol⁻¹. The energy difference between 3d-re and 3-E is mainly due to the steric hindrance around the metal center (Figure S9). Product 4-E obtained following the intrinsic reaction coordinate is further stabilized by a resulting interaction between the terminal double bond of the allyl ligand and the metal center (Figure S10). The relaxation of the polymer chain leads to an endothermic product 4-E-relaxed (by 0.2 kcal·mol⁻¹), which is consistent with the fact that formation of an alkyl complex from an allyl complex is thermodynamically unfavorable.
ii) Second styrene vs. ethylene insertion. Since, as in the first step, the 2,1-down-re styrene insertion is thermodynamically favored, second insertions were computed from the product 4d-re. The energy profiles were calculated for the stationary (6-E) and migratory (6-E') ethylene insertions and for the 2,1-up-si (6u-si) stationary styrene insertion (Figure 5).
Chart 2. Numbering used for carbon atoms in the allyl ligand.
As regards the second insertion products, the presence of a π-coordination between the phenyl ring of the first inserted styrene and the metal center makes the migratory ethylene insertion product 7-E' more stable (by 5.0 kcal·mol⁻¹) than the stationary insertion product 7-E (Figure S12). Hence, at the second insertion stage, the ethylene monomer will be inserted preferentially, according to the "migratory" mechanism.
iii) Third styrene vs. ethylene insertion. Energy profiles were computed for the ethylene (9-E) insertion and for the 2,1-down-re (9d-re) and 2,1-up-si (9u-si) styrene insertions (Figure 6). At the third insertion step, there is a slight kinetic preference for insertion of ethylene (9d-re/9-E = 4.1 kcal·mol⁻¹) (dark green), probably related to the decrease of the steric hindrance around the metal center (Figure S13). In all products, the growing chains feature the same orientation, which may explain the same range of their energies (Figure S14).
To obtain further information about the nature of the resulting copolymer, it was crucial to investigate the reaction pathways after insertion of two ethylene units and, more generally, after two units of the same monomer were consecutively inserted.
Ethylene-Ethylene-Ethylene (E-E-E) vs. Ethylene-Ethylene-Styrene (E-E-S).
The energy profiles were computed for the third ethylene (9-E) and 2,1-down-re (9d-re) styrene
insertions, in the case where two ethylene monomers were inserted according to the "migratory" insertion mechanism (Figure 7). At this stage, there is no significant kinetic preference between styrene and ethylene insertions (9d-re/9-E = 3.2 kcal·mol⁻¹). The energy difference between the two insertion products 10-E and 10d-re is 2.5 kcal·mol⁻¹, which is also within the error range of the method. This is reflected in the product structures, in which the growing chains are similarly oriented (Figure S16).
Figure 7. Energetic profiles for the third ethylene insertion in {Me2C(C5H4)(Flu)}Nd(C3H5) (I), after two ethylene insertions. The third 2,1-down-re styrene insertion is plotted in blue.
The same study was similarly performed for the third 2,1-down-re (9d-re) styrene and ethylene (9-E) insertions, after insertion of two styrene monomers (Figure 8). The results match those obtained for the above E-E-E vs. E-E-S study: (i) the energy difference 9d-re/9-E (1.3 kcal·mol⁻¹) is included within the error range of the method, and (ii) the growing chains appear to be similar in the transition state and product structures (Figures S17 and S18). This is confirmed by the lack of thermodynamic preference between the two monomers (10d-re/10-E = 0.5 kcal·mol⁻¹).
Overall, the above calculations indicate that, once two units of the same monomer have been inserted, there will be no selectivity at the next stage. In other words, I tends to form random styrene-ethylene copolymers.
Figure 8. Energetic profiles for the third ethylene insertion in {Me2C(C5H4)(Flu)}Nd(C3H5) (I), after two styrene insertions according to the "stationary" mode. The third 2,1-down-re styrene insertion (the most stable found in the homopolymerization case) is plotted in blue.
DFT investigation of styrene-ethylene copolymerization catalyzed by {Me2C(C5H4)(Flu)}Nd(1,3-C3H3(SiMe3)2) (1-Nd).
In order to obtain information on the influence of the SiMe3 substituents of the allyl ligand on styrene-ethylene copolymerization as well as on the nature of the sPSE copolymer obtained, the same study as that for I was carried out for the putative 1-Nd catalyst. The computational results are similar to those highlighted for the non-substituted catalyst I (all reaction profiles and structures are available in the Supporting Information; Figures S19-S33): (i) at the first step, a 2,1-down-re styrene insertion is preferred, followed by an ethylene insertion, and then a slight preference for this latter monomer at the third step; (ii) after insertion of two units of the same monomer, there is no clear kinetic or thermodynamic preference between the two monomers.
Hence, the above calculations indicate that the presence of the bulky substituents in the allyl initiating group does not affect the chemistry and the nature of the obtained copolymer: the 1-Nd catalyst also tends to form random styrene-ethylene copolymers. This is consistent with an initiating group which is progressively rejected at the end of the growing polymer chain. It should be noted, however, that the bulky substituents on the allyl ligand induce an increase in the energy of the first insertion barriers (for example 24.5 vs. 14.5 kcal·mol⁻¹ for the first ethylene insertion), as this has already been observed for styrene homopolymerization. 11 This is again due to charge localization on the "wrong" carbon atom of the allyl ligand, that is, the one that ensures the interaction with the metal center and therefore provides the nucleophilic assistance, rather than the one that is involved in the C-C coupling.
Furthermore, it is noteworthy that at the first ethylene insertion step in 1-Nd, the alkyl product 4-E-relaxed is thermodynamically favorable (by 10.3 kcal·mol⁻¹). This is not consistent with the usual trend that the formation of an alkyl from an allyl compound is thermodynamically disfavored. A charge analysis at the NBO level was then carried out in order to obtain some information about the nature of the allyl ligand in 1-Nd. The charges on the carbon atoms in the (1,3-C3H3(SiMe3)2) allyl ligand are [C1(allyl) (−1.05), C2(allyl) (−0.23), C3(allyl) (−1.11)] in 1-Nd, whereas those obtained in the case of the unsubstituted allyl (C3H5) in I are [C1(allyl) (−0.79), C2(allyl) (−0.26), C3(allyl) (−0.81)]. Thus, the sterically hindered allyl leads to a charge relocalization at the C3(allyl) carbon atom. Therefore, this is not a standard allyl in 1-Nd but rather a masked alkyl, explaining why formation of 4-E-relaxed is thermodynamically favorable (−10.3 vs. +0.2 kcal·mol⁻¹ in the case of the (C3H5) allyl in I).
DFT investigation of styrene-ethylene copolymerization catalyzed by {Me2C(C5H4)(2,7-tBu2Flu)}Nd(1,3-C3H3(SiMe3)2) (2-Nd).
It has been experimentally found that complex 2-Nd, with tBu groups in the 2,7-positions of the fluorenyl ligand, exhibits a high productivity of up to 5,430 kg(sPSE)·mol(Nd)⁻¹·h⁻¹ for styrene-ethylene copolymerization.
The microstructure of the sPSE copolymers shares the same features as those observed for copolymers obtained with I. DFT calculations were performed to rationalize this influence of the 2,7-tBu2 groups on the Flu ligand on the reactivity and on the copolymer obtained.
The first and second styrene insertions were computed (see ESI Figures S34 and S35) and it was found that, unlike for complex I, the migratory styrene insertion is preferred for the 2-Nd catalyst over the stationary insertion found for I. This can be attributed to the presence of bulky substituents on the allyl ligand, which leads to a change in the polymerization mechanism in order to minimize steric repulsion. From a kinetic point of view, there is a clear preference for 3-E, which is more stable by 6.8 kcal·mol⁻¹ than 3d-re. This energy difference is due to a repulsion between the tBu groups and the Ph ring of the incoming styrene, which tends to destabilize 3d-re compared to 3-E. Moreover, the ethylene insertion barrier is intermediate (ΔH# = 20.4 kcal·mol⁻¹) between those calculated for I and 1-Nd (ΔH# = 14.5 and 24.5 kcal·mol⁻¹, respectively). Indeed, the incorporation of tBu substituents counteracts the effect of the SiMe3 on the allyl ligand, which reduces the activation barrier and makes the catalyst more reactive towards ethylene. This is reflected in the NBO charge analysis. The charges on the carbon atoms in the allyl ligand in 3-E are [C1(allyl) (−0.98), C2(allyl) (−0.29), C3(allyl) (−0.83)] for 2-Nd, [C1(allyl) (−0.89), C2(allyl) (−0.23), C3(allyl) (−0.95)] for 1-Nd and [C1(allyl) (−0.66), C2(allyl) (−0.23), C3(allyl) (−0.61)] for I. In complex 2-Nd, the carbon C3(allyl) is repulsed by an interaction between the tBu and the SiMe3 groups and cannot ensure the nucleophilic assistance. The C1(allyl) carries the negative charge in order to induce a reaction with the carbon atom of the ethylene monomer and maintains the interaction with the metal center (nucleophilic assistance). This implies that ethylene moves away from the metal center and thus has a less activated C=C double bond (C=C = 1.40 Å vs. 1.42 Å in 1-Nd and in I). This charge localization effect allows decreasing the activation barrier compared to the case of complex 1-Nd.
In terms of thermodynamics, the alkyl product 4-E-relaxed is favorable (by 12.5 kcal·mol⁻¹) which, as pointed out above for 1-Nd, is related to the charges of the carbon atoms in the allyl ligand [C1(allyl) (−1.03), C2(allyl) (−0.23), C3(allyl) (−1.15)] in 2-Nd. In this case, 4-E-relaxed is more stable by 4.2 kcal·mol⁻¹ compared to 4d-re-relaxed; therefore, ethylene would be preferentially inserted.
ii) Second styrene vs. ethylene insertion. Second insertions were computed after a 2,1-down-re styrene insertion. The corresponding energy profiles were calculated for the stationary (6-E) and migratory (6-E') ethylene insertions and for the 2,1-up-re (6u'-re) migratory styrene insertion (Figure 10). The results for the second insertion with 2-Nd are similar to those obtained for the two previous catalysts. Indeed, after a 2,1-down-re styrene insertion, migratory ethylene insertion is kinetically preferred (6u'-re/6-E' = 10.4 and 6-E/6-E' = 9.5 kcal·mol⁻¹).
These results are quite similar to those obtained for the 1-Nd and I catalysts, suggesting the formation of random copolymers. This conclusion is further strengthened by the results obtained for the third insertion steps (Figures S40 and S41), as no selectivity was found, in line with the formation of random styrene-ethylene copolymers.
In summary, DFT calculations allowed us to rationalize the nature of the copolymer obtained as well as the influence of the substituents of the catalyst. The 1,3-trimethylsilyl substituents on the allyl ligand cause (i) a modification of the distribution of the charges on the allylic carbon atoms, which makes the first ethylene insertion product thermodynamically favorable, and (ii) an increase in the insertion barriers, related to the steric hindrance and the charge distribution. On the other hand, bulky 2,7-tert-butyl groups on the fluorenyl ligand tend to promote ethylene insertion at the second insertion step. This is also related to a charge localization effect.
Finally, for the three catalytic systems studied, no modification in the nature of the obtained copolymer is observed, that is the formation of random styrene-ethylene copolymers with a high syndiotacticity in the PS sequences.
Conclusions
The performance of a series of allyl ansa-lanthanidocenes of the general formula {R2C(C5H4)(R'R'Flu)}Ln(1,3-C3H3(SiMe3)2)(THF)x was assessed in styrene-ethylene copolymerization. By using forcing copolymerization conditions, that is, a low catalyst loading and relatively high temperature, a high productivity of 5,430 kg(sPSE)·mol(Nd)⁻¹·h⁻¹ was achieved with 2-Nd on a half-kilogram scale, which is comparable with the most active scandium half-sandwich complexes. [START_REF] Luo | Scandium Half-Metallocene-Catalyzed Syndiospecific Styrene Polymerization and Styrene-Ethylene Copolymerization: Unprecedented Incorporation of Syndiotactic Styrene-Styrene Sequences in Styrene-Ethylene Copolymers[END_REF] The sPSE copolymers thus obtained feature a random microstructure with single ethylene units distributed in highly syndiotactic PS sequences. The ethylene content and thus the thermal properties of the materials can be tuned by the initial comonomer feed.
Theoretical DFT studies allowed rationalizing the random nature of the obtained styrene-ethylene copolymers catalyzed by complexes I, 1-Nd and 2-Nd. The calculations showed that: (i) SiMe3 substituents on the allyl ligand have an influence on the nature of the first insertion product and notably on the stability of the ethylenic product, and (ii) those on the fluorenyl ligand either make the catalyst more ethylene reactive at the second insertion (2,7 substitution) or block the reactivity (3,6 substitution). This last point is essential to explain the good productivity of catalyst 2-Nd for styrene-ethylene copolymerization.

Typical procedure for bench-scale styrene-ethylene copolymerization. In a typical experiment (Table 1, entry 1), a 300 mL glass high-pressure reactor (TOP-Industrie) was charged with 50 mL of solvent (cyclohexane or n-dodecane) under argon flush and heated at the appropriate temperature by circulating water or oil in a double mantle. Under an ethylene flow, styrene (50 mL), a solution of (nBu)2Mg (0.5 mL of a 1.0 M solution in heptane) and a solution of pre-catalyst in toluene (ca. 43 mg in 2 mL) were introduced. The gas pressure in the reactor was set at 2 atm and kept constant with a back regulator, and the reaction medium was mechanically stirred. At the end of the polymerization, the reaction was cooled, vented, and the copolymer was precipitated in methanol (ca. 500 mL); after filtration, it was washed with methanol and dried under vacuum at 60 °C until constant weight.
Typical procedure for half-kg-scale styrene-ethylene copolymerizations in a closed reactor. In a typical experiment (Table 1, entry 18), a 1 L high-pressure reactor was charged with 500 mL of styrene (degassed under nitrogen, stored in the fridge on 13X molecular sieves and eluted through an alumina column prior to use) under nitrogen flush and heated at the appropriate temperature by circulating oil in a double mantle. An exact amount of ethylene was introduced in one shot in the reactor using an injecting system equipped with a pressure gauge, followed by a solution of (nBu)2Mg (2.5 mL of a 1.0 M solution in heptane)
and the pre-catalyst (ca. 45 mg). The reactor was closed and the reaction mixture was mechanically stirred. At the end of the polymerization, the reaction mixture was cooled, vented, and the copolymer was precipitated in isopropanol (ca. 2 L); after filtration, it was washed with isopropanol. Polymer samples were dried under vacuum in an oven heated at 200 °C.
Computational Details. The calculations were performed at the DFT level of theory using the hybrid functional B3PW91. 16,17 Neodymium was treated with a large-core Stuttgart-Dresden relativistic effective core potential (RECP) where the 4f electrons are included in the core. The RECP was used in combination with its adapted basis set augmented by a set of f polarization functions (α = 1.000). 18 A 6-31+G(d,p) double-ζ quality basis set was used for carbon and hydrogen atoms. The Si atoms were described with a Stuttgart-Dresden relativistic effective core potential in combination with its optimized basis set with the addition of a d polarization function (α = 0.284). 19,20 Toluene was chosen as solvent. The model that was used to take into account solvent effects is the SMD solvation model. The solvation energies are evaluated by a self-consistent reaction field (SCRF) approach based on accurate numerical solutions of the Poisson-Boltzmann equation. 21 All the calculations were carried out with the Gaussian 09 program. 22 Electronic energies and enthalpies were calculated at T = 298 K. Geometry optimizations were computed without any symmetry constraints and analytical frequency calculations were used to assess the nature of the extrema.
The connectivity of the optimized transition states was determined by performing Intrinsic Reaction Coordinate (IRC) calculations. Activation barriers ΔH# are defined depending on the sign of ΔHcoord (see Figure 11). 13 Electronic charges were obtained by using Natural Population Analysis (NPA). [START_REF] Reed | Intermolecular interactions from a natural bond orbital, donor-acceptor viewpoint[END_REF] The NBO analysis [START_REF] Reed | Intermolecular interactions from a natural bond orbital, donor-acceptor viewpoint[END_REF] of the neodymium system was done by applying the method of Clark et al. [START_REF] Clark | DFT study of tris(bis(trimethylsilyl)methyl)lanthanum and samarium[END_REF]
Scheme 1. Allyl {Cp/Flu} ansa-lanthanidocenes used as single-component catalysts for styrene-ethylene copolymerization.
As we have demonstrated in the previous study, 11 substitution on the fluorenyl moiety of the ligand has a strong influence on copolymerization productivities. Complexes 3-Nd and 5-Nd, bearing bulky substituents at the 3,6-positions of the fluorenyl ring, were not or only poorly active. 1-Nd-K-allyl and 4-Nd, which bear no substituents on the fluorenyl ring, exhibited moderate productivities, and 2-Nd, which holds tert-butyl substituents at the remote 2,7-positions, proved to be the most active within the Nd series (non-optimized productivity > 400 kg(sPSE)·mol(Nd)⁻¹·h⁻¹, entry 2).
A range of sPSE materials containing from 1.1 to 10 mol% of ethylene was thus obtained. DSC measurements showed that the melting transition temperature and the glass transition of those materials are closely related to the quantity of ethylene incorporated, decreasing almost linearly with it (Figure 1).
Figure 1. Melting (Tm) and glass (Tg) transition temperatures of sPSE materials prepared in this work.
Figure 2. Aliphatic region of the 13C{1H} NMR spectra (125 MHz, 130 °C, C6H3Cl3/C6D6) of sPSE copolymers: (top) Table 1, entry 2; (bottom) 98 mol% styrene (entry 9).
Computational studies. In the previous study, 11 DFT calculations including a solvent model for the styrene homopolymerization catalyzed by {Me2C(C5H4)(Flu)}Nd(C3H5) (I), the putative {Me2C(C5H4)(Flu)}Nd(1,3-C3H3(SiMe3)2) (1-Nd) and the most effective [{Me2C(C5H4)(2,7-tBu2Flu)}Nd(1,3-C3H3(SiMe3)2)] (2-Nd) allowed identification of the factors which influence styrene insertion according to the 2,1-pathway (which is the most favored mode). By using the method of Castro et al., 13 styrene and ethylene insertions were computed in order to evaluate the effectiveness of catalysts I, 1-Nd and 2-Nd in styrene-ethylene copolymerization and the topology of the obtained sPSE copolymer. At each step, the preference between ethylene and styrene insertions has been examined. Moreover, two chain-end stereocontrol mechanisms were also considered computationally. For the sake of clarity, the following definitions are considered: insertions that occur on the same enantiotopic site of coordination are denoted as the "stationary" mechanism, whereas "migratory" insertions refer to the switch of coordination site at each step (Chart 1).
Chart 1. Nomenclature and orientation modes used for styrene insertion with respect to the ancillary ligand. In this representation, only down-re and up-si styrene coordination modes are depicted, corresponding to the enantiomer of the metal catalyst used for "stationary" insertions. The opposite configurations have been employed for "migratory" insertions, viz. down-si and up-re.
Figure 4. Energetic profiles for the first ethylene (black) and 2,1-down-re styrene (blue) insertions in {Me2C(C5H4)(Flu)}Nd(C3H5) (I).
Figure 5. Energetic profiles for the second ethylene (stationary, black, and migratory, red) insertions in {Me2C(C5H4)(Flu)}Nd(C3H5) (I).
Figure 6. Energetic profiles for the third insertions in {Me2C(C5H4)(Flu)}Nd(C3H5) (I), after a 2,1-down-re styrene first insertion and a migratory ethylene second insertion.
Figure 9. Energetic profiles for the first ethylene (black) and 2,1-down-re styrene (blue) insertions in {Me2C(C5H4)(2,7-tBu2Flu)}Nd(1,3-C3H3(SiMe3)2) (2-Nd).
Figure 10. Energetic profiles for the second ethylene (stationary, black, and migratory, red) insertions in 2-Nd.
Experimental Section
General considerations. All experiments were performed under a dry argon atmosphere, using a glovebox or standard Schlenk techniques. Complexes 1-Nd-K-allyl, 2–7-Nd, 2-Sc, 2-Y, 2-La, 2-Pr and 2-Sm were synthesized as reported before. 11 Cyclohexane and n-dodecane were distilled from CaH2 and stored over 3 Å MS. Styrene (Fisher Chemical, general purpose grade, stabilized with 10–15 ppm of tert-butylcatechol) was eluted through neutral alumina, stirred and heated over CaH2, vacuum-distilled and stored over 3 Å MS at −30 °C under argon. The (nBu)2Mg solution (1.0 M in heptane, Sigma-Aldrich) was used as received. Ethylene (Air Liquide, N35) was used without further purification.
Instruments and measurements. 13C{1H} NMR and GPC analyses of sPSE samples were performed at the research center of Total Raffinage-Chimie in Feluy (Belgium). 13C{1H} NMR analyses were run on a Bruker Avance III 500 MHz equipped with a cryoprobe HTDUL in 10 mm tubes (1,2,4-trichlorobenzene/C6D6, 2:0.5 v/v). GPC analyses were performed in 1,2,4-trichlorobenzene at 135 °C using PS standards for calibration. Differential scanning calorimetry (DSC) analyses were performed on a Setaram DSC 131 apparatus, under continuous flow of helium and using aluminum capsules. Crystallization temperatures were measured during the first cooling cycle (10 °C/min), and glass and melting transition temperatures were measured during the second heating cycle (10 °C/min).
Figure 11. Definition of ΔH# depending on the sign of ΔHcoord. 13
Table 1. Styrene-ethylene copolymerizations catalyzed by 1-Nd-K-allyl, 2–7-Nd and 2-Sc, 2-La, 2-Sm, 2-Pr.a Columns: Entry; Complex; [St]0 [M]; [St]0/[Ln]; [Mg]/[Ln]; Tpolym (Tmax) [°C]; Time [min]; Ethylene (bar or g).
Figure 3. Methylene region of the 13 C{ 1 H} NMR spectrum (125 MHz, 130 °C, C6H3Cl3/C6D6) of a sPSE copolymer (98 mol% styrene; Table 1, entry 9).
(Peak labels from the NMR figures: Sαα (SSSS, SSSE); Tββ; Tβδ; Tδδ (SSSE); Sαγ (SES); Sββ (SES); hexads rrrrr, rrmrr, rrrrm, rrrmr.)
13 Castro, L.; Kirillov, E.; Miserque, O.; Welle, A.; Haspeslagh, L.; Carpentier, J.-F.; Maron, L. Are solvent and dispersion effects crucial in olefin polymerization DFT calculations? Some insights from propylene coordination and insertion reactions with group 3 and 4 metallocenes. ACS Catal. 2015, 5, 416-425.
14 Schultz, N. E.; Zhao, Y.; Truhlar, D. G. Benchmarking approximate density functional theory for s/d excitation energies in 3d transition metal cations. J. Comput. Chem. 2008, 29, 185-189.
15 Zhao, Y.; Truhlar, D. G. Density functionals with broad applicability in chemistry. Acc. Chem. Res. 2008, 41, 157-167.
16 Becke, A. D. Density-functional thermochemistry. III. The role of exact exchange. J. Chem. Phys. 1993, 98, 5648-5652.
17 Burke, K.; Perdew, J. P.; Wang, Y. In Electronic Density Functional Theory: Recent Progress and New Directions; Dobson, J. F., Vignale, G., Das, M. P., Eds.; Plenum: New York, 1998.
18 Dolg, M.; Stoll, H.; Savin, A.; Preuss, H. Energy-adjusted pseudopotentials for the rare earth elements. Theor. Chim. Acta 1989, 75, 173-194.
19 Bergner, A.; Dolg, M.; Küchle, W.; Stoll, H.; Preuss, H. Pseudo-potential basis sets of the main group elements Al-Bi and f-type polarization functions for Zn, Cd, Hg. Chem. Phys. Lett. 1993, 208, 237-240.
20 Ab initio energy-adjusted pseudopotentials for elements of groups 13-17. Mol. Phys. 1993, 80, 1431-1441.
The presence of atactic PS is likely a result of thermally self-initiated polymerization (Mayo's mechanism). Some of us have previously investigated this process in the presence (or absence) of dialkylmagnesium reagents (see: Bogaert, S.; Carpentier, J.-F.; Chenal, T.; Mortreux, A.; Ricart, G. Macromol. Chem. Phys. 2000, 201, 1813-1822) under conditions which are comparable to those reported in the current manuscript, in particular with styrene purified the same way (it is noteworthy that thoroughly purified styrene is much less prone to radical polymerization as there is no "residual" initiator) and reactions performed in bulk styrene. We observed that, at 105 °C, only 18-20% of atactic PS formed after 5 h; with 5 mmol of MgR2 for 175 mmol of styrene, the resulting aPS had a high molecular weight (typically Mn = 150 kg·mol⁻¹, PDI = 2.4). The reactions reported in the current manuscript were conducted at higher temperatures, but over shorter reaction times. We hence did not expect the formation of significant amounts of atactic PS. This is corroborated by GPC analyses, which showed monomodal traces with Mn values in the typical range 12-45 kg/mol; no significant presence of high MW PS was observed (see the Supp. Info.). Yet, the possible presence of a few % of aPS in the essentially sPS(E) materials cannot be discarded. Note that during homopolymerization of styrene with the same neodymocene catalysts, it was noted that if MgR2 is not introduced to scavenge the reaction medium, uncontrolled radical (thermally self-initiated) polymerization can take place (see ref 11; Table 3, entries 12-13). |
01766236 | en | [
"info",
"info.info-ai",
"info.info-ds",
"info.info-lg",
"info.info-lo",
"info.info-ne"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01766236/file/ILP_2017_paper_60.pdf | Yin Jun Phua
email: [email protected]
Tony Ribeiro
email: [email protected]
Sophie Tourret
email: [email protected]
Katsumi Inoue
email: [email protected]
Learning Logic Program Representation for Delayed Systems With Limited Training Data
Keywords: dynamical systems, Boolean networks, attractors, learning from interpretation transition, delayed systems
Understanding the influences between components of dynamical systems such as biological networks, cellular automata or social networks provides insights to their dynamics. Influences of such dynamical systems can be represented by logic programs with delays. Logical methods that learn logic programs from observations have been developed, but their practical use is limited since they cannot handle noisy input and need a huge amount of data to give accurate results. In this paper, we present a method that learns to distinguish different dynamical systems with delays based on Recurrent Neural Network (RNN). This method relies on Long Short-Term Memory (LSTM) to extract and encode features from input sequences of time series data. We show that the produced high dimensional encoding can be used to distinguish different dynamical systems and reproduce their specific behaviors.
Introduction
Being able to learn the dynamics of an environment purely by observing has many applications. For example, in multi-agent systems where learning other agents' behavior without direct access to their internal state can be crucial for decision making [START_REF] Jennings | A roadmap of agent research and development[END_REF]. In system biology, learning the interaction between genes can greatly help in the creation of drugs to treat sicknesses [START_REF] Ribeiro | Learning multi-valued biological models with delayed influence from time-series observations[END_REF].
Problem Statement
Having an understanding of the dynamics of a system allows us to produce predictions of the system's behavior. Being able to produce predictions means that we can weigh different options and evaluate their outcomes from a given state without taking any action. In this way, learning about the dynamics of a system can aid in planning [START_REF] Martínez | Relational reinforcement learning for planning with exogenous effects[END_REF].
In most real world systems, we do not have direct access to the rules that govern the systems. What we do have, however, is the observation of the systems' state at a certain time step, or a series of observations if we look long enough. Therefore, the problem is to learn the dynamics of systems purely from the observations that we are able to obtain.
Several learning algorithms have been proposed that learn rules for a system, provided that the observations given cover every case that can happen within the system. However, in most real world systems, particularly in system biology, obtaining data for even a short amount of time is difficult, time consuming and expensive. Therefore most current learning algorithms, while complete, are not practical in the biology setting. In addition, most real world observations that can be obtained are often full of random noise. Therefore, dealing with noise is also an integral part of solving this problem. The focus of this paper is therefore on being able to learn the rules despite some of the rules not having manifested in the observations. We also consider the setting in which actions from past states are able to have a delayed influence on the current state. In addition, our proposed model can also deal with noise within the data, which no previous approach dealt with, as shown in the experiments section.
Proposed Approach
In this paper, we propose an approach to this problem utilizing Recurrent Neural Networks (RNNs) to learn a logic program representation from a series of Boolean state transitions. Our method is based on a framework called Learning from Interpretation Transition (LFIT) [START_REF] Inoue | Learning from interpretation transition[END_REF]. LFIT is an unsupervised learning algorithm that can learn logic programs fully describing the dynamics of the system, purely by observing state transitions. In our approach, we construct two neural networks: one for encoding the observed state transitions, and another to produce the logic program representation of the system. The idea behind this is that, given a series of state transitions of sufficient length, it should be possible to uniquely identify the system. Therefore we can transform this into a classification problem, in which we attempt to classify which logic program a specific series of state transitions belongs to. Neural networks are known to be good at performing classification, which makes them suitable tools for our proposed approach.
Our proposed approach works well even with a limited amount of data. This is possible because the neural network used in our model is not trained to model the dynamical system itself, but rather to output a classification of different systems. Therefore, it can be trained on artificial data prior to being applied to real data. Thus it is easy to see that the amount of data obtained has no direct relation with the performance of our model.
The rest of the paper is organized as follows. We cover some of the prior research in Section 2, then introduce the required logical and neural network background in Section 3. We present the RNN-LFIT approach in Section 4, followed by an experimental evaluation demonstrating the validity of our approach in Section 5, before concluding the paper in Section 6.
Related Work
Standard LFIT
One way of implementing the LFIT algorithm is by relying on a purely logical method. In [START_REF] Inoue | Learning from interpretation transition[END_REF], such an algorithm is introduced. It constructs an NLP by performing bottom-up generalization on all the positive examples provided in the input state transitions. An improved version of this algorithm, utilizing binary decision diagrams as internal data structures, was introduced in [START_REF] Ribeiro | A BDD-based algorithm for learning from interpretation transition[END_REF]. These methods, while proven to be theoretically correct, generate rules from every positive example. The resulting NLP has been proven to be non-minimal, and thus not very human-friendly. To allow practical use of the resulting NLP, a method for learning minimal NLPs was introduced in [START_REF] Ribeiro | Learning prime implicant conditions from interpretation transition[END_REF]. In [START_REF] Ribeiro | Learning delayed influences of biological systems[END_REF], an algorithm is introduced that learns delayed influences, that is, cause/effect relationships that may depend on the previous k time steps. Another recent development in the prolongation of the logical approach to LFIT is the introduction of an algorithm which deals with continuous values [START_REF] Ribeiro | Inductive learning from state transitions over continuous domains[END_REF].
This class of algorithms, which utilize logical methods, is proven to be complete and sound; however, a huge disadvantage of these methods is that the resulting NLP is only representative of the observations that have been fed to the algorithm thus far. Any observation that did not appear in the input will be predicted as either always true or always false, depending on the algorithm used.
NN-LFIT
To deal with the shortcomings stated in the previous paragraph, an algorithm that utilizes neural networks (NN) was proposed [START_REF] Gentet | Learning from interpretation transition using feed-forward neural network[END_REF]. This method starts by training a feed-forward NN to model the system that is being observed. The NN, when fully trained, should predict the next state of the system when provided with the current state observation. Then, there is a pruning phase where weak connections inside the NN are removed in a manner that doesn't affect the prediction accuracy. After the pruning phase, the algorithm extracts rules from the network based on the remaining connections within the NN. To do so, a truth table is constructed for each variable. The truth table contains variables only based on observing the connections from the outputs to the inputs of the trained and pruned NN. A simplified rule is then constructed from each truth table. In [START_REF] Gentet | Learning from interpretation transition using feed-forward neural network[END_REF], it is shown that despite reducing the amount of training data, the resulting NLP is still surprisingly accurate and representative of the observed system. However, this approach does not deal with systems that have inherent delays.
Other NN-based Approaches
There are also several other approaches attempting to tie NNs with logic programming [START_REF] Garcez | Symbolic knowledge extraction from trained neural networks: A sound approach[END_REF][START_REF] Garcez | The connectionist inductive learning and logic programming system[END_REF]. In [START_REF] Garcez | Symbolic knowledge extraction from trained neural networks: A sound approach[END_REF], the authors propose a method to extract logical rules from trained NNs. The method proposed deals directly with the NN model, and thus imposes some restrictions on the NN architecture. In particular, it was not made to handle delayed influences in the system. In [START_REF] Garcez | The connectionist inductive learning and logic programming system[END_REF], a method for constructing NNs from logic program is proposed, along with a method for constructing RNNs. However this approach requires background knowledge, or a certain level of knowledge about the observed system (such as an initial NLP to improve on) before being applicable.
In [START_REF] Khan | Construction of gene regulatory networks using recurrent neural networks and swarm intelligence[END_REF], the authors proposed a method for constructing models of dynamical systems using RNNs. However, this approach suffers from its important need of training data, which increases exponentially as the number of variables grow. This is a well-known computational problem called the curse of dimensionality [START_REF] Donoho | High-dimensional data analysis: The curses and blessings of dimensionality[END_REF].
In contrast to these methods, the method proposed in this paper does not assume there exists a direct relation between the trained RNN model and the observed system. Our model aims at classifying a series of state transition to the system that generated it, whereas each of the NN based approaches listed above aims to train a NN model that predicts the next state of the observed system.
Background
LFIT
The main goal of LFIT is to learn a normal logic program (NLP) describing the dynamics of the observed system. An NLP is a set of rules of the form
A ← A1 ∧ A2 ∧ ⋯ ∧ Am ∧ ¬Am+1 ∧ ⋯ ∧ ¬An (1)
where A and the Ai are propositional atoms, n ≥ m ≥ 0, and ¬ and ∧ are the symbols for logical negation and conjunction. For any rule R of the form (1), the atom A is called the head of R and is denoted h(R). The conjunction to the right of ← is called the body of R. We represent the set of literals in the body of R as b(R) = {A1, . . . , Am, ¬Am+1, . . . , ¬An}. The set of all propositional atoms that appear in a particular Boolean system is denoted as the Herbrand base B.
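To make this format concrete, the sketch below shows one possible Python encoding of such rules; the class and field names are illustrative choices, not part of the LFIT formalism itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    head: str            # the atom h(R)
    pos_body: frozenset  # atoms occurring positively in b(R)
    neg_body: frozenset  # atoms occurring under negation in b(R)

# Example NLP over B = {p, q}:
#   p <- q        and        q <- p ^ not q
program = {
    Rule("p", frozenset({"q"}), frozenset()),
    Rule("q", frozenset({"p"}), frozenset({"q"})),
}
```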
An Herbrand interpretation I is a subset of B. For a logic program P and an Herbrand interpretation I, the immediate consequence operator (or TP operator) is the mapping TP : 2^B → 2^B:
TP(I) = {h(R) | R ∈ P, b⁺(R) ⊆ I, b⁻(R) ∩ I = ∅}, (2)
where b⁺(R) denotes the atoms occurring positively in b(R) and b⁻(R) the atoms occurring under negation.
Given a set of Herbrand interpretations E and {TP(I) | I ∈ E}, the LFIT algorithm outputs a logic program P which completely represents the dynamics of E.
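A direct reading of Eq. (2) gives the following one-step semantics; this sketch reuses the hypothetical Rule encoding above and represents an interpretation as the set of atoms that are true.

```python
def tp(program, interpretation):
    """Immediate consequence operator T_P: one synchronous update step."""
    return {
        rule.head
        for rule in program
        if rule.pos_body <= interpretation            # b+(R) subset of I
        and rule.neg_body.isdisjoint(interpretation)  # b-(R) disjoint from I
    }

# With the example program above: tp(program, {"q"}) == {"p"}
```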
In the case of Markov(k) systems (i.e. systems with delayed effects of at most k time steps), we can define the timed Herbrand base of a logic program P, denoted by Bk, as follows:
Bk = ⋃_{i=1}^{k} { v_{t−i} | v ∈ B } (3)
where t is a constant term which represents the current time step. Given a Markov(k) system S, if all rules R ∈ S are such that h(R) ∈ B and b(R) ⊆ Bk, then we represent S as a logic program P with Herbrand base Bk. A trace of execution T of S is a finite sequence of states of S. We can define T as T = (x0, . . . , xn), n ≥ 1, xi ∈ 2^B.
Thus a k-step interpretation transition is a pair (I, J) where I ⊆ Bk and J ⊆ B.
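For illustration, a Markov(2) rule and its firing condition can be sketched as follows; the (atom, delay) tuple encoding is an illustrative assumption, not the representation used by the LFIT implementations cited above.

```python
# Markov(2) rule: p(t) <- q(t-1) ^ not p(t-2), encoded with (atom, delay) pairs.
delayed_rule = ("p", {("q", 1)}, {("p", 2)})

def fires(rule, trace, t):
    """Check whether a delayed rule fires at time t, given a trace of states."""
    _, pos, neg = rule
    return (all(atom in trace[t - d] for atom, d in pos)
            and all(atom not in trace[t - d] for atom, d in neg))

trace = [set(), {"q"}, {"p"}]          # states at t = 0, 1, 2
print(fires(delayed_rule, trace, 2))   # True: q held at t-1, p did not hold at t-2
```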
Neural Network
A multi-layer perceptron (MLP) is a type of feed-forward neural network. An MLP usually consists of one input layer, one or more hidden layers and an output layer. Each layer is fully connected, and the output layer is activated by a non-linear function. MLPs can be trained using backpropagation by gradient descent. The other neural network that we use to learn the system's dynamics is the Long Short-Term Memory (LSTM) network [START_REF] Hochreiter | Long short-term memory[END_REF]. LSTM is a form of RNN that, contrary to earlier RNNs, can learn long-term dependencies and does not suffer from the vanishing gradient problem. It has been popular in many sequence-to-sequence mapping applications such as machine translation [START_REF] Sutskever | Sequence to sequence learning with neural networks[END_REF]. An LSTM consists of a memory cell for each time step, and each memory cell has an input gate i_t, an output gate o_t and a forget gate f_t. When a sequence of nX time steps X = {x1, x2, . . . , xnX} is given as input, the LSTM calculates the following for each time step:
[i_t; f_t; o_t; l_t] = [σ; σ; σ; tanh] (W · [h_{t−1}; x_t])
c_t = f_t • c_{t−1} + i_t • l_t
h_t = o_t • c_t
where W is a weight matrix, [h_{t−1}; x_t] denotes the concatenation of the previous output and the current input, σ (the sigmoid function) and tanh are applied to the corresponding blocks of the product, and • denotes the element-wise product. h_t is the output of each memory cell, c_t is the hidden state of each memory cell and l_t is the input to each memory cell. The input gate decides how much of the input influences the hidden state. The forget gate decides how much of the past hidden state influences the current hidden state. The output gate is responsible for deciding how much of the current hidden state influences the output. A visual illustration of a single LSTM memory cell is shown in Figure 1.
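The gate equations can be reproduced in a few lines of NumPy. The following is a plain sketch of a single memory-cell step, faithful to the equations as written above (biases omitted; the stacked layout of W is one common convention), not the implementation used in this work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(W, h_prev, c_prev, x_t):
    """One LSTM step: returns (h_t, c_t) following the gate equations."""
    d = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t])  # W . [h_{t-1}; x_t], shape (4d,)
    i_t = sigmoid(z[0:d])        # input gate
    f_t = sigmoid(z[d:2*d])      # forget gate
    o_t = sigmoid(z[2*d:3*d])    # output gate
    l_t = np.tanh(z[3*d:4*d])    # cell input
    c_t = f_t * c_prev + i_t * l_t   # element-wise products
    h_t = o_t * c_t
    return h_t, c_t
```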
LSTM networks can be trained by performing backpropagation through time (BPTT) [START_REF] Graves | Framewise phoneme classification with bidirectional lstm and other neural network architectures[END_REF]. In BPTT, the LSTM is trained by unfolding it across time steps and then performing gradient descent to update the weights, as illustrated in Figure 2. A direct consequence of BPTT is that the LSTM can only be trained on fixed-length data. One way of overcoming this is to use truncated BPTT [START_REF] Williams | An efficient gradient-based algorithm for on-line training of recurrent network trajectories[END_REF]. In truncated BPTT, the sequence is truncated into subsequences, and backpropagation is performed on the subsequences.
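One common way to realize truncated BPTT is to cut the sequence into fixed-length windows and carry the hidden state across windows while backpropagating only within each window. A schematic PyTorch-style sketch (function and variable names are illustrative):

```python
import torch

def truncated_bptt(model, optimizer, loss_fn, sequence, targets, window=10):
    """Train on one long sequence, backpropagating only within each window."""
    state = None
    for start in range(0, sequence.size(0), window):
        x = sequence[start:start + window]
        y = targets[start:start + window]
        out, state = model(x, state)
        state = tuple(s.detach() for s in state)  # cut gradients between windows
        loss = loss_fn(out, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```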
It can easily be seen that the connections in an LSTM model are complex, and it can be very complicated to attempt to extract or derive relations from the inner architecture of the network. Therefore we forgo the approach of extracting rules from the model, and propose a different method which instead utilizes the LSTM to classify the different inputs depending on the system that generated them.
Model
In this section, we propose an architecture for performing LFIT. It consists of an encoder and decoder for the state transitions, and a neural network for performing LFIT. A visualization of the architecture is shown in Figures 3 and 4. The input for the whole model is the sequence of state transitions obtained from observing the target system. The output of the model is an encoding of an approximation of the logic program representation in matrix form. In this architecture, an encoder LSTM receives a series of state vectors and encodes them into a single vector; the logic program representation matrix is then multiplied with this vector to produce a new vector, which is decoded by an MLP to produce the predicted state. However, we evaluate the performance of the model based on the predicted state.
Given a series of state transitions XT = (x1, x2, . . . , xT), where xt, a vector with components in [0, 1], represents the state of the system at time t, our goal is to predict xT+1. Note that, to be able to deal with noise and continuous values, we do not restrict the domain of xt to Z2. If we obtain a representation of XT in the form of a vector x, we can learn a matrix P with which we can perform the matrix multiplication Px = xT+1. This can be thought of as performing the TP operator in algebraic space.
The training objective function of the model is defined as:

\min_{W} \frac{1}{n} \sum_{i=1}^{n} \left( x_{T+1}^{(i)} - y_{T+1}^{(i)} \right)^2 + \lambda \, \lVert W \rVert_2^2 \qquad (4)
where W is the set of neural network weights, x_{T+1} is the prediction of the model, y_{T+1} is the true state, and ‖W‖²₂ is the weight decay regularization [START_REF] Krogh | A simple weight decay can improve generalization[END_REF] with hyperparameter λ.
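A self-contained numpy sketch of objective (4), interpreting the square as the squared Euclidean norm over the state vector; the function and argument names are illustrative, not the paper's.

```python
import numpy as np

def objective(pred_batch, truth_batch, weights, lam=0.2):
    """Objective (4): mean squared prediction error plus weight-decay regularization."""
    mse = np.mean(np.sum((pred_batch - truth_batch) ** 2, axis=1))
    l2 = sum(np.sum(w ** 2) for w in weights)   # ||W||_2^2 over all weight arrays
    return mse + lam * l2
```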
The input state transitions are fed to both the encoder and the LFIT model, as can be seen in the figure. We describe the responsibilities of the three neural network models in the following sections.
Autoencoder
The autoencoder for the input sequences is responsible for encoding a discrete time series into a feature vector that can later be manipulated by the neural network. The sequence of state vectors is encoded into one feature vector of dimension 2 × k_a × l_a, where k_a denotes the number of memory cell units in the autoencoder LSTM and l_a denotes the number of LSTM layers. This amount is doubled because both c and h, which represent the state of the memory cell, are considered.
LFIT Network
This LSTM network can be thought of as performing LFIT. It takes as input the state transitions and an initial program encoding, and outputs a program encoding that is consistent with the observations, which matches the definition of the LFIT algorithm. In practice, this network is responsible for classifying the series of state transitions into the corresponding logic program representation.
The produced output is the representation of the normal logic program. The observations are the same input sequence as that given to the autoencoder. The dimensions of the matrix output by this network are (2 × l_l × k_l, 2 × l_a × k_a), where k_l denotes the number of memory cell units in this network and l_l denotes the number of layers.
In this work, the initial program is always set to ∅ and the LSTM network is trained to produce the complete normal logic program representation. In future work, it could be easily extended so as to accept background knowledge.
Decoder
The decoder is responsible for mapping the product of the NLP matrix and the state transition vector into a state vector that represents the predicted following state. The decoder can theoretically be any function that maps a continuous vector into a binary vector. We detail the model used in Section 4.4.
The goal of the architecture is to produce an encoding of past states, and an encoding of a normal logic program, that can then be multiplied together to predict the next state transition. This multiplication is a matrix-vector multiplication and produces a vector of ℝ^n, where n is the number of features in the logic program representation. This can be thought of as performing the T_P operator within linear geometric space. An MLP then decodes this vector into the desired boolean state vector.
With the encoding of the state transition and an initial program, the LFIT network learns to produce an encoded program based on the observed state transitions. This encoded program can then be used for prediction, and in future work we plan to decode it into a normal logic program thus making it possible to reason with it.
Model Details
In our experiment, the autoencoder takes a series of 10 state transitions, where each state is a 10-dimensional vector which represents the state of each variable within the system. The autoencoder LSTM model we trained has 2 layers, each with 512 memory cell units. The produced state representation is then multiplied by a (2 × 2 × 512, 128) matrix to produce a 128-dimensional feature vector that represents the series of state transitions.
The LFIT model takes the same input as the encoder model, but its LSTM has 4 layers (4 being the dimension of the resulting feature vector for the predicted state) and 1,024 hidden units, twice the number of hidden units of the autoencoder model. The produced logic program representation is then transformed into a (4, 128) matrix by multiplying it with a (2 × 4 × 1024, 4 × 128) matrix and then reshaping.
The decoder model takes the resulting feature vector for the predicted state, which is a vector of 4 dimensions, and outputs a vector of 10 dimensions with each dimension representing the state of a variable within the system. The decoder model consists of an MLP with 1 hidden layer, and each layer has 8 hidden units. Each hidden layer is activated by ReLU (Rectified Linear Unit), a function that outputs 0 for all inputs less than 0 and is linear when the input is larger than 0. The final output layer is activated by a sigmoid function, defined as σ(x) = 1/(1 + exp(-x)). The sigmoid function has a range of [0, 1], which is suitable for our use where we want the MLP to output a boolean vector, with noise. The decoder model is kept simple to avoid overfitting, which would prevent the LFIT model and the encoder model from learning.
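A self-contained numpy sketch of the decoder just described: one hidden layer of 8 ReLU units mapping the 4-dimensional feature vector to a 10-dimensional sigmoid output. The random weights here are stand-ins; the real weights are learned by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)     # hidden layer (ReLU)
W2, b2 = rng.normal(size=(10, 8)), np.zeros(10)   # output layer (sigmoid)

def decoder(v):
    """Map a 4-dimensional feature vector to a 10-dimensional state in [0, 1]."""
    h = np.maximum(0.0, W1 @ v + b1)              # ReLU(x) = max(0, x)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # sigmoid output in [0, 1]
```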
Evaluation
We applied our model to learn the dynamics of Boolean networks from continuous time series. The Boolean network used in this experiment is adapted from Dubrova and Teslenko [START_REF] Dubrova | A sat-based algorithm for finding attractors in synchronous boolean networks[END_REF] and represents the mammalian cell cycle regulation. The Boolean network is first encoded as a logic program. Each dataset represents a time series generated from an initial state vector of continuous values. The performance of the model is measured by taking the root mean-squared error (RMSE) between the predicted state and the true subsequent state. RMSE is defined as follows:
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2} \qquad (5)
where ŷ_i denotes the predicted value and y_i is the actual value.
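Equation (5) in a one-line numpy sketch, given here only as a reference implementation:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root mean-squared error, as in equation (5)."""
    return np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))
```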
The initial state vector is generated by giving each of the 10 variables a random value between 0 and 1. Generated states are then mapped back to real values: 0 becomes 0.25 + ε and 1 becomes 0.75 + ε, where ε ∈ (−0.25, 0.25), chosen randomly, simulates the measurement noise (a short sketch of this encoding is given after the parameter list below). We used the following training parameters for our experiment:
- Training steps: 10^4
- Batch size: 100
- Gradient descent optimizer: Adam; learning rate and various other parameters are left at the defaults of Tensorflow r1.2
- Dropout: probability of 0.3 per training step
- Regularization hyperparameter λ of 0.2
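As noted above, the noisy encoding maps boolean states 0/1 to 0.25 + ε and 0.75 + ε. A minimal numpy sketch of this mapping (the function name is our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_measurement_noise(bool_states):
    """Map boolean states to real values with uniform noise eps in (-0.25, 0.25)."""
    eps = rng.uniform(-0.25, 0.25, size=np.shape(bool_states))
    return np.where(np.asarray(bool_states) == 1, 0.75 + eps, 0.25 + eps)
```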
The model was implemented in Tensorflow r1.2 [START_REF] Abadi | TensorFlow: Large-scale machine learning on heterogeneous systems[END_REF], and all experiments were run on an Intel Xeon E5-2630 with 64 GiB of RAM and a GTX 1080 Ti.
Training data is generated randomly by first randomly generating logic rules and grouping them together as NLPs. Then the initial state is set to the zero vector, and we continuously apply the T_P operator to generate all the consequent states. Variables referring to delays before the initial state are assumed to be 0. In order to ensure effective training of the model, we only train on data that varies a lot: we calculate the standard deviation of all states generated from a certain NLP and keep only those NLPs with standard deviation greater than or equal to 0.4. We show some of the accepted NLPs in Table 3.
Here, we consider two methods for training the model. One is training the model on data without noise, that is, training data strictly in Z_2. The other is training on data with added noise. Each model is trained with 50 acceptable NLPs, generating 500 data points from each NLP, and training for a total of 4 hours. We evaluate each method in the following section.

Results

Table 1 shows the RMSE of the prediction made by the proposed model trained on non-noisy data. Each dataset represents 50 datapoints from the same NLP, generated from different initial states. The results show that there is little difference in accuracy between the dataset with noise and the one without, which demonstrates the robustness of our model to noise in the input. In Table 2, we show the performance of the model trained on data with noise. Comparing with Table 1, the presence of noise in the training data does not affect the performance: both models are equally robust to noise in the test data. The results may appear somewhat skewed because they were produced from the same system; we plan to test the model on various other datasets when we can get access to them.
Figure 5 shows the graph of the learned representations for 8 different randomly generated NLPs based on principal component analysis (PCA). PCA is a popular technique for visualizing high dimensional embeddings [START_REF] Wold | Principal component analysis[END_REF]. As with the previous experiment, the model is fed with state transitions that were generated from the NLPs. The logic representation obtained from our model is a 4 × 128 matrix. We obtain the graph shown in Figure 5 by applying PCA to this matrix, which extracts the 3 dimensions that separate the data the most. Each dot in the graph is a representation learned separately from various state transitions of the logic program. Note that learned representations from different logic programs are clearly separated. Each dot plotted on the graph is in fact multiple dots, representing different initial states generated from the same NLP. The overlap between NLP 7 and NLP 8 seen in the plot is due to the 2D projection of a 3D graph.
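A sketch of this visualization step with scikit-learn: each learned (4, 128) representation is flattened and projected onto its 3 leading principal components. The `reps` array is placeholder data standing in for the learned representations.

```python
import numpy as np
from sklearn.decomposition import PCA

reps = np.random.default_rng(0).normal(size=(80, 4, 128))  # placeholder representations
flat = reps.reshape(len(reps), -1)                         # (80, 512)
coords = PCA(n_components=3).fit_transform(flat)           # 3D coordinates to plot
```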
In this experiment, we observe that the model is able to identify the dynamics of the system solely based on a sequence of state transitions. We further expect that the accuracy of the predictions can be improved by tweaking the neural network architecture.

Conclusion and Future Work

In this paper we propose a method for learning a matrix representation of dynamical systems with delays. One of the interesting aspects of this approach is that it produces a logic program representation in matrix form, which, when multiplied with a feature vector of the past states, computes a vector that represents the predicted state. This could lead to future works such as reasoning and performing induction purely in the algebraic space.
The main contribution of this work is to devise a method for modeling systems where only limited amounts of data can be collected. Without a sufficient amount of data, purely logical methods cannot provide useful information, and attempts at training neural networks to model the system will result in overfitting. We therefore speculate that generating artificial data in order to train a more generalized neural network may be a more successful approach in such cases. We also showed that the devised method is resilient to noise, which purely logical methods are not able to deal with.
As future work, we are planning to adapt the current method to take as input a partial program as background knowledge to the network, and to decode the NLP representation into logical form to allow humans to reason with it. We also hope to compare the predictions made by this model against other similar models.

Table 3: Example NLPs that are randomly generated and used for training

a_t ← f_{t-5} ∧ ¬d_{t-4} ∧ ¬i_{t-1} ∧ ¬g_{t-1} ∧ ¬g_{t-4} ∧ ¬d_{t-1}
b_t ← ¬d_{t-1} ∧ ¬d_{t-5}
c_t ← ¬b_{t-1}
d_t ← ¬c_{t-1} ∧ ¬i_{t-5} ∧ ¬f_{t-3} ∧ ¬c_{t-2} ∧ ¬i_{t-1} ∧ ¬h_{t-1} ∧ ¬a_{t-1} ∧ ¬d_{t-3} ∧ ¬d_{t-5}
e_t ← e_{t-2} ∧ ¬e_{t-1} ∧ ¬a_{t-3} ∧ ¬f_{t-4} ∧ ¬j_{t-5}
f_t ← b_{t-2} ∧ g_{t-1} ∧ h_{t-5} ∧ ¬i_{t-2} ∧ ¬f_{t-2}
g_t ← ¬d_{t-1} ∧ ¬g_{t-1}
h_t ← ¬i_{t-1}
i_t ← ¬e_{t-4} ∧ ¬j_{t-1} ∧ ¬d_{t-2} ∧ ¬g_{t-5} ∧ ¬c_{t-2} ∧ ¬i_{t-5} ∧ ¬g_{t-3} ∧ ¬j_{t-2} ∧ ¬i_{t-1}
j_t ← b_{t-3} ∧ c_{t-4} ∧ ¬j_{t-2} ∧ ¬c_{t-3}

a_t ← b_{t-3} ∧ g_{t-2} ∧ f_{t-4} ∧ j_{t-4} ∧ ¬c_{t-5} ∧ ¬e_{t-2} ∧ ¬a_{t-4} ∧ ¬h_{t-3} ∧ ¬i_{t-3} ∧ ¬h_{t-1} ∧ ¬e_{t-3} ∧ ¬c_{t-1} ∧ ¬c_{t-2} ∧ ¬a_{t-5}
b_t ← ¬e_{t-4} ∧ ¬c_{t-3} ∧ ¬i_{t-3} ∧ ¬f_{t-3} ∧ ¬b_{t-2} ∧ ¬i_{t-5} ∧ ¬i_{t-3} ∧ ¬a_{t-5} ∧ ¬f_{t-5}
c_t ← ¬c_{t-3} ∧ ¬b_{t-1} ∧ ¬c_{t-5} ∧ ¬j_{t-2} ∧ ¬b_{t-5} ∧ ¬i_{t-2} ∧ ¬a_{t-5} ∧ ¬b_{t-3}
d_t ← ¬e_{t-4} ∧ ¬a_{t-5} ∧ ¬e_{t-4}
e_t ← ¬g_{t-1}
f_t ← h_{t-1} ∧ e_{t-3} ∧ c_{t-3} ∧ ¬a_{t-2} ∧ ¬g_{t-4}
g_t ← ¬f_{t-5}
h_t ← ¬e_{t-5}
i_t ← ¬j_{t-3} ∧ ¬a_{t-5} ∧ ¬i_{t-4}
j_t ← ¬g_{t-5} ∧ ¬e_{t-5} ∧ ¬d_{t-1}

a_t ← ¬e_{t-2}
b_t ← ¬b_{t-4}
c_t ← j_{t-1} ∧ f_{t-1} ∧ f_{t-2} ∧ d_{t-1} ∧ h_{t-5} ∧ ¬g_{t-3} ∧ ¬c_{t-5}
d_t ← ¬g_{t-3} ∧ ¬b_{t-5} ∧ ¬c_{t-3} ∧ ¬b_{t-5} ∧ ¬j_{t-3} ∧ ¬h_{t-2} ∧ ¬f_{t-5} ∧ ¬d_{t-2} ∧ ¬c_{t-5}
e_t ← g_{t-2} ∧ g_{t-4} ∧ f_{t-5} ∧ j_{t-3} ∧ e_{t-1} ∧ ¬j_{t-1} ∧ ¬a_{t-1} ∧ ¬f_{t-1} ∧ ¬e_{t-4}
f_t ← f_{t-5} ∧ b_{t-5} ∧ g_{t-5} ∧ ¬j_{t-2} ∧ ¬c_{t-5} ∧ ¬i_{t-5} ∧ ¬g_{t-4} ∧ ¬g_{t-5} ∧ ¬f_{t-2} ∧ ¬f_{t-3} ∧ ¬h_{t-4}
g_t ← a_{t-2} ∧ d_{t-3} ∧ ¬g_{t-2} ∧ ¬c_{t-3}
h_t ← ¬j_{t-5} ∧ ¬e_{t-4} ∧ ¬g_{t-5} ∧ ¬f_{t-1}
i_t ← ¬e_{t-4}
j_t ← ¬i_{t-5}
Fig. 1: An LSTM memory cell
Fig. 2: Unfolding of an LSTM network for BPTT training
Fig. 3: A visualization of the architecture, where an encoder LSTM receives a series of state vectors and encodes them into a single vector; the logic program representation matrix is multiplied with it to produce a vector, which an MLP decodes into the predicted state
Fig. 4: A visualization of the LSTM model that is responsible for performing LFIT. It receives as input a matrix p_0 and a series of state vectors x_1, . . . , x_t, and outputs a matrix
Fig. 5: PCA plot of the learned representations for NLPs based on input time series
Table 1: Results of the RMSE of the prediction made by the proposed model trained on non-noisy data, on various datasets

Dataset | RMSE (Original) | RMSE (Noisy)
1  | 0.27 | 0.28
2  | 0.27 | 0.28
3  | 0.26 | 0.26
4  | 0.27 | 0.26
5  | 0.27 | 0.28
6  | 0.27 | 0.27
7  | 0.27 | 0.28
8  | 0.27 | 0.28
9  | 0.27 | 0.27
10 | 0.27 | 0.27

Table 2: Results of the RMSE of the prediction made by the proposed model trained on noisy data, on various datasets

Dataset | RMSE (Original) | RMSE (Noisy)
1  | 0.27 | 0.28
2  | 0.27 | 0.27
3  | 0.27 | 0.28
4  | 0.27 | 0.28
5  | 0.28 | 0.28
6  | 0.27 | 0.28
7  | 0.28 | 0.28
8  | 0.27 | 0.27
9  | 0.27 | 0.27
10 | 0.27 | 0.27
Karine Beauchard
email: [email protected]
Philippe Jaming
email: [email protected]
Karel Pravda-Starov
email: [email protected]
SPECTRAL INEQUALITY FOR FINITE COMBINATIONS OF HERMITE FUNCTIONS AND NULL-CONTROLLABILITY OF HYPOELLIPTIC QUADRATIC EQUATIONS

Keywords: Uncertainty principles, Logvinenko-Sereda type estimates, Hermite functions, null-controllability, observability, quadratic equations, hypoellipticity, Gelfand-Shilov regularity
2010 Mathematics Subject Classification: 93B05, 42C05, 35H10
Introduction
The classical uncertainty principle was established by Heisenberg. It points out the fundamental fact in quantum mechanics that the position and the momentum of particles cannot both be determined explicitly, but only in a probabilistic sense with a certain uncertainty. More generally, uncertainty principles are mathematical results that give limitations on the simultaneous concentration of a function and its Fourier transform. When using the following normalization for the Fourier transform

(1.1) \hat{f}(\xi) = \int_{\mathbb{R}^n} f(x)\, e^{-i x \cdot \xi}\, dx, \quad \xi \in \mathbb{R}^n,

the mathematical formulation of Heisenberg's uncertainty principle can be stated in a directional version as follows:

(1.2) \inf_{a \in \mathbb{R}} \int_{\mathbb{R}^n} (x_j - a)^2 |f(x)|^2\, dx \; \inf_{b \in \mathbb{R}} \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} (\xi_j - b)^2 |\hat{f}(\xi)|^2\, d\xi \ \geq\ \frac{1}{4}\, \|f\|^4_{L^2(\mathbb{R}^n)},

for all f \in L^2(\mathbb{R}^n) and 1 \leq j \leq n, and shows that a function and its Fourier transform cannot both be arbitrarily localized. Moreover, the inequality (1.2) is an equality if and only if f is of the form f(x) = g(x_1, ..., x_{j-1}, x_{j+1}, ..., x_n)\, e^{-i b x_j}\, e^{-\alpha (x_j - a)^2}, where g is a function in L^2(\mathbb{R}^{n-1}), \alpha > 0, and a and b are real constants for which the two infima in (1.2) are achieved. There are various uncertainty principles of different nature.
We refer in particular the reader to the survey article by Folland and Sitaram [START_REF] Folland | The uncertainty principle: a mathematical survey[END_REF], and the book of Havin and Jöricke [START_REF] Havin | The uncertainty principle in harmonic analysis[END_REF] for detailed presentations and references for these topics. Another formulation of uncertainty principles is that a non-zero function and its Fourier transform cannot both have small supports. For instance, a non-zero L 2 (R n )-function whose Fourier transform is compactly supported must be an analytic function with a discrete zero set and therefore a full support. This leads to the notion of weak annihilating pairs as well as the corresponding quantitative notion of strong annihilating pairs: Definition 1.1 (Annihilating pairs). Let S, Σ be two measurable subsets of R n .
– The pair (S, Σ) is said to be a weak annihilating pair if the only function f \in L^2(\mathbb{R}^n) with supp f \subset S and supp \hat{f} \subset Σ is f = 0.
– The pair (S, Σ) is said to be a strong annihilating pair if there exists a positive constant C = C(S, Σ) > 0 such that for all f \in L^2(\mathbb{R}^n),
(1.3) \int_{\mathbb{R}^n} |f(x)|^2\, dx \leq C \Big( \int_{\mathbb{R}^n \setminus S} |f(x)|^2\, dx + \int_{\mathbb{R}^n \setminus Σ} |\hat{f}(\xi)|^2\, d\xi \Big).

It can be readily checked that a pair (S, Σ) is a strong annihilating pair if and only if there exists a positive constant D = D(S, Σ) > 0 such that for all f \in L^2(\mathbb{R}^n) with supp \hat{f} \subset Σ,

(1.4) \|f\|_{L^2(\mathbb{R}^n)} \leq D\, \|f\|_{L^2(\mathbb{R}^n \setminus S)}.
As already mentioned above, the pair (S, Σ) is a weak annihilating one if S and Σ are compact sets. More generally, Benedicks has shown in [START_REF] Benedicks | On Fourier transforms of functions supported on sets of finite Lebesgue measure[END_REF] that (S, Σ) is a weak annihilating pair if S and Σ are sets of finite Lebesgue measure |S|, |Σ| < +∞. Under this assumption, the result of Amrein-Berthier [START_REF] Amrein | On support properties of L p -functions and their Fourier transforms[END_REF] actually shows that the pair (S, Σ) is a strong annihilating one. The estimate C(S, Σ) \leq κ e^{κ|S||Σ|} (which is sharp up to the numerical constant κ > 0) has been established by Nazarov [START_REF] Nazarov | Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type[END_REF] in dimension n = 1. This result was extended to the multi-dimensional case by the second author [START_REF] Jaming | Nazarov's uncertainty principles in higher dimension[END_REF], with the quantitative estimate C(S, Σ) \leq κ e^{κ(|S||Σ|)^{1/n}} holding if in addition one of the two subsets of finite Lebesgue measure S or Σ is convex.
An exhaustive description of all strong annihilating pairs seems for now totally out of reach. We refer the reader for instance to the works [START_REF] Amit | On the annihilation of thin sets[END_REF][START_REF] Bourgain | Fourier dimension and spectral gaps for hyperbolic surfaces[END_REF][START_REF] Bourgain | Spectral gaps without the pressure condition, to appear in Ann. Math[END_REF][START_REF] Demange | Uncertainty principles associated to non-degenerate quadratic forms[END_REF][START_REF] Dyatlov | Dolgopyat's method and the fractal uncertainty principle[END_REF][START_REF] Shubin | Some harmonic analysis questions suggested by Anderson-Bernoulli models[END_REF] for a large variety of results and techniques available, as well as for examples of weak annihilating pairs that are not strong annihilating ones. However, there is a complete description of all the support sets S forming a strong annihilating pair with any bounded spectral set Σ. This description is given by the Logvinenko-Sereda theorem [START_REF] Logvinenko | Equivalent norms in spaces of entire functions of exponential type[END_REF]:

Theorem 1.2 (Logvinenko-Sereda). Let S, Σ \subset \mathbb{R}^n be measurable subsets with Σ bounded. Denoting S̃ = \mathbb{R}^n \setminus S, the following assertions are equivalent:
– The pair (S, Σ) is a strong annihilating pair;
– The subset S̃ is thick, that is, there exists a cube K \subset \mathbb{R}^n with sides parallel to the coordinate axes and a positive constant 0 < γ \leq 1 such that

\forall x \in \mathbb{R}^n, \quad |(K + x) \cap S̃| \geq γ |K| > 0,

where |A| denotes the Lebesgue measure of the measurable set A.
It is noticeable to observe that if (S, Σ) is a strong annihilating pair for some bounded subset Σ, then S makes up a strong annihilating pair with every bounded subset Σ, but the above constants C(S, Σ) > 0 and D(S, Σ) > 0 do depend on Σ. In order to be able to use this remarkable result in the control theory of partial differential equations, it is essential to understand how the positive constant D(S, Σ) > 0 depends on the Lebesgue measure of the bounded set Σ. This question was answered by Kovrijkine [30, Theorem 3], who established the following quantitative estimates:

Theorem 1.3 (Kovrijkine). There exists a universal positive constant C_n > 0 depending only on the dimension n \geq 1 such that if S̃ is a γ-thick set at scale L > 0, that is, for all x \in \mathbb{R}^n,

(1.5) |S̃ \cap (x + [0, L]^n)| \geq γ L^n,

with 0 < γ \leq 1, then we have for all R > 0 and f \in L^2(\mathbb{R}^n) with supp \hat{f} \subset \{ξ = (ξ_1, ..., ξ_n) \in \mathbb{R}^n : \forall j = 1, ..., n, \ |ξ_j| \leq R\},

(1.6) \|f\|_{L^2(\mathbb{R}^n)} \leq \Big(\frac{C_n}{γ}\Big)^{C_n(1+LR)} \|f\|_{L^2(S̃)}.
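As a quick numerical illustration of Theorem 1.3 (not part of the original text), the following Python sketch compares the full L^2 norm of a one-dimensional band-limited function with its norm on a periodic γ-thick set; the particular function, the set and all parameter values are arbitrary choices for the experiment.

```python
import numpy as np

R, L, gamma = 2.0, 1.0, 0.3
x = np.linspace(-200, 200, 400001)
f = np.sinc(R * x / np.pi)              # Fourier transform supported in [-R, R]
thick = (x % L) < gamma * L             # gamma-thick set at scale L
full = np.sqrt(np.trapz(f**2, x))
on_set = np.sqrt(np.trapz(np.where(thick, f, 0.0)**2, x))
print(full / on_set)                    # finite ratio, as Theorem 1.3 predicts
```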
Thanks to this explicit dependence of the constant with respect to the parameter R > 0 in the estimate (1.6), Egidi and Veselic [START_REF] Egidi | Sharp geometric condition for null-controllability of the heat equation on R d and consistent estimates on the control cost[END_REF], and Wang, Wang, Zhang and Zhang [START_REF] Wang | Observable set, observability, interpolation inequality and spectral inequality for the heat equation in R n[END_REF] have independently established the striking result that the heat equation
(1.7) \begin{cases} (\partial_t - \Delta_x) f(t, x) = u(t, x)\, \mathbb{1}_ω(x), & x \in \mathbb{R}^n, \ t > 0, \\ f|_{t=0} = f_0 \in L^2(\mathbb{R}^n), \end{cases}
is null-controllable in any positive time T > 0 from a measurable control subset ω ⊂ R n if and only if this subset ω is thick in R n . The notion of null-controllability is defined as follows:
Definition 1.4 (Null-controllability). Let P be a closed operator on L 2 (R n ) which is the infinitesimal generator of a strongly continuous semigroup (e -tP ) t≥0 on L 2 (R n ), T > 0 and ω be a measurable subset of R n . The equation
(1.8) \begin{cases} (\partial_t + P) f(t, x) = u(t, x)\, \mathbb{1}_ω(x), & x \in \mathbb{R}^n, \ t > 0, \\ f|_{t=0} = f_0 \in L^2(\mathbb{R}^n), \end{cases}
is said to be null-controllable from the set ω in time T > 0 if, for any initial datum
f 0 ∈ L 2 (R n ), there exists u ∈ L 2 ((0, T ) × R n ), supported in (0, T ) × ω, such that the mild (or semigroup) solution of (1.8) satisfies f (T, •) = 0.
By the Hilbert Uniqueness Method, see [START_REF] Coron | Control and nonlinearity[END_REF]Theorem 2.44] or [START_REF] Lions | Contrôlabilité exacte, perturbations et stabilisation de systèmes distribués[END_REF], the null-controllability of the equation (1.8) is equivalent to the observability of the adjoint system (1.9)
\begin{cases} (\partial_t + P^*) g(t, x) = 0, & x \in \mathbb{R}^n, \\ g|_{t=0} = g_0 \in L^2(\mathbb{R}^n). \end{cases}
The notion of observability is defined as follows: Definition 1.5 (Observability). Let T > 0 and ω be a measurable subset of R n . Equation (1.9) is said to be observable from the set ω in time T > 0 if there exists a positive constant C T > 0 such that, for any initial datum g 0 ∈ L 2 (R n ), the mild (or semigroup) solution of (1.9) satisfies
(1.10) \int_{\mathbb{R}^n} |g(T, x)|^2\, dx \leq C_T \int_0^T \int_ω |g(t, x)|^2\, dx\, dt.
Following [START_REF] Egidi | Sharp geometric condition for null-controllability of the heat equation on R d and consistent estimates on the control cost[END_REF] or [START_REF] Wang | Observable set, observability, interpolation inequality and spectral inequality for the heat equation in R n[END_REF], the necessity of the thickness property of the control subset for null-controllability in any positive time is a consequence of a quasimodes construction, whereas the sufficiency is derived in [START_REF] Egidi | Sharp geometric condition for null-controllability of the heat equation on R d and consistent estimates on the control cost[END_REF] from an abstract observability result obtained by an adapted Lebeau-Robbiano method and established by the first and third authors with some contributions of Luc Miller:

Theorem 1.6 ([4, Theorem 2.1]). Let Ω be an open subset of \mathbb{R}^n, ω be a measurable subset of Ω, (π_k)_{k \in \mathbb{N}^*} be a family of orthogonal projections defined on L^2(Ω), (e^{-tA})_{t \geq 0} be a strongly continuous contraction semigroup on L^2(Ω), and c_1, c_2, a, b, t_0, m > 0 be positive constants with a < b. If the following spectral inequality
(1.11) \forall g \in L^2(Ω), \forall k \geq 1, \quad \|π_k g\|_{L^2(Ω)} \leq e^{c_1 k^a} \|π_k g\|_{L^2(ω)},

and the following dissipation estimate

(1.12) \forall g \in L^2(Ω), \forall k \geq 1, \forall 0 < t < t_0, \quad \|(1 - π_k)(e^{-tA} g)\|_{L^2(Ω)} \leq \frac{1}{c_2}\, e^{-c_2 t^m k^b} \|g\|_{L^2(Ω)},

hold, then there exists a positive constant C > 1 such that the following observability estimate holds:

(1.13) \forall T > 0, \forall g \in L^2(Ω), \quad \|e^{-TA} g\|^2_{L^2(Ω)} \leq C \exp\Big(\frac{C}{T^{\frac{am}{b-a}}}\Big) \int_0^T \|e^{-tA} g\|^2_{L^2(ω)}\, dt.
In the statement of [4, Theorem 2.1], the subset ω is supposed to be an open subset of Ω. However, the proof given in [START_REF] Beauchard | Null-controllability of hypoelliptic quadratic differential equations[END_REF] works as well when the subset ω is only assumed to be measurable. Notice that the assumptions in the above statement do not require that the orthogonal projections (π k ) k≥1 are spectral projections onto the eigenspaces of the infinitesimal generator A, which is allowed to be non-selfadjoint. According to the above statement, there are two key ingredients to derive a result of null-controllability, or equivalently a result of observability, while using Theorem 1.6: a spectral inequality (1.11) and a dissipation estimate (1.12). For the heat equation, the orthogonal projections used are the frequency cutoff operators given by the orthogonal projections onto the closed vector subspaces
(1.14) E_k = \big\{f \in L^2(\mathbb{R}^n) : \mathrm{supp}\, \hat{f} \subset \{ξ = (ξ_1, ..., ξ_n) \in \mathbb{R}^n : |ξ_j| \leq k, \ 1 \leq j \leq n\}\big\},

for k \geq 1. With this choice, the dissipation estimate readily follows from the explicit formula

(1.15) \widehat{e^{t\Delta_x} g}(ξ) = \hat{g}(ξ)\, e^{-t|ξ|^2}, \quad t \geq 0, \ ξ \in \mathbb{R}^n,
whereas the spectral inequality is given by the sharpened formulation of the Logvinenko-Sereda theorem (1.6). Notice that the power 1 for the parameter R in (1.6) and the power 2 for the term |ξ| in (1.15) account for the fact that Theorem 1.6 can be applied with the parameters a = 1, b = 2 that satisfy the required condition 0 < a < b. It is therefore essential that the power of the parameter R in the exponent of the estimate (1.6) is strictly less than 2. As there is still a gap between the cost of the localization (a = 1) given by the spectral inequality and its compensation by the dissipation estimate (b = 2), it is interesting to notice that we could have expected that the null-controllability of the heat equation could have held under weaker assumptions than the thickness property on the control subset, by allowing some higher costs for localization with some parameters 1 < a < 2, but the Logvinenko-Sereda theorem actually shows that this is not the case. Notice that Theorem 1.6 does not only apply with the use of frequency cutoff projections and a dissipation estimate induced by some Gevrey type regularizing effects. Other regularities than the Gevrey regularity can be taken into account. In the previous work by the first and third authors [START_REF] Beauchard | Null-controllability of hypoelliptic quadratic differential equations[END_REF], Theorem 1.6 is used for a general class of accretive hypoelliptic quadratic operators q^w generating some strongly continuous contraction semigroups (e^{-tq^w})_{t \geq 0} enjoying some Gelfand-Shilov regularizing effects. The definition and standard properties related to Gelfand-Shilov regularity are recalled in Appendix (Section 4.3). As recalled in this appendix, the Gelfand-Shilov regularity is characterized by specific exponential decays of the functions and their Fourier transforms; in the symmetric case, it can be read on the exponential decay of the Hermite coefficients of the functions in their expansions in the L^2(\mathbb{R}^n)-Hermite basis (Φ_α)_{α \in \mathbb{N}^n}. Explicit formulas and some reminders of basic facts about Hermite functions are given in Appendix (Section 4.1). The class of hypoelliptic quadratic operators whose description will be given in Section 2.2 enjoys some Gelfand-Shilov regularizing effects ensuring that the following dissipation estimate holds [4, Proposition 4.1]:
(1.16) \exists C_0 > 1, \exists t_0 > 0, \forall t \geq 0, \forall k \geq 0, \forall f \in L^2(\mathbb{R}^n), \quad \|(1 - π_k)(e^{-t q^w} f)\|_{L^2(\mathbb{R}^n)} \leq C_0\, e^{-δ(t) k} \|f\|_{L^2(\mathbb{R}^n)},

with

(1.17) δ(t) = \frac{\inf(t, t_0)^{2k_0+1}}{C_0} \geq 0, \quad t \geq 0, \quad 0 \leq k_0 \leq 2n-1,

where

(1.18) P_k g = \sum_{α \in \mathbb{N}^n,\, |α| = k} \langle g, Φ_α \rangle_{L^2(\mathbb{R}^n)}\, Φ_α, \quad k \geq 0, \quad \text{with } |α| = α_1 + \cdots + α_n,

denotes the orthogonal projection onto the k-th energy level associated with the harmonic oscillator

H = -\Delta_x + |x|^2 = \sum_{k=0}^{+\infty} (2k + n)\, P_k,

and

(1.19) π_k = \sum_{j=0}^{k} P_j, \quad k \geq 0,
denotes the orthogonal projection onto the (k + 1) th first energy levels. In order to apply Theorem 1.6, we need a spectral inequality for finite combinations of Hermite functions of the type
(1.20) \exists C > 1, \forall k \geq 0, \forall f \in L^2(\mathbb{R}^n), \quad \|π_k f\|_{L^2(\mathbb{R}^n)} \leq C e^{C k^a} \|π_k f\|_{L^2(ω)},

with a < 1, where π_k is the orthogonal projection (1.19). In [4, Proposition 4.2], such a spectral inequality is established with a = \frac{1}{2} when the control subset ω is an open subset of \mathbb{R}^n satisfying the following geometrical condition:

(1.21) \exists δ, r > 0, \forall y \in \mathbb{R}^n, \exists y' \in ω, \quad B(y', r) \subset ω, \quad |y - y'| < δ.

In the present work, we study under which conditions on the control subset ω \subset \mathbb{R}^n the spectral inequality

(1.22) \forall k \geq 0, \exists C_k(ω) > 0, \forall f \in L^2(\mathbb{R}^n), \quad \|π_k f\|_{L^2(\mathbb{R}^n)} \leq C_k(ω)\, \|π_k f\|_{L^2(ω)},
holds, and how the geometrical properties of the set ω relate to the possible growth of the positive constant C_k(ω) > 0 with respect to the energy level when k \to +\infty. The main results contained in this article provide some quantitative upper bounds on the positive constant C_k(ω) > 0 with respect to the energy level for three different classes of measurable subsets:
– non-empty open subsets of \mathbb{R}^n,
– measurable sets in \mathbb{R}^n verifying the condition

(1.23) \liminf_{R \to +\infty} \frac{|ω \cap B(0, R)|}{|B(0, R)|} = \lim_{R \to +\infty} \inf_{r \geq R} \frac{|ω \cap B(0, r)|}{|B(0, r)|} > 0,

where B(0, R) denotes the open Euclidean ball in \mathbb{R}^n centered at 0 with radius R > 0,
– thick measurable sets in \mathbb{R}^n.
We observe that in the first two classes, the measurable control subsets are allowed to have gaps containing balls with radii tending to infinity, whereas in the last class there must be a bound on such radii. We shall see that the quantitative upper bounds obtained for the first two classes (Theorem 2.1, estimates (i) and (ii)) are not sufficient to obtain any result of null-controllability for the class of hypoelliptic quadratic operators studied in Section 2.2. Regarding the third one, the quantitative upper bound (Theorem 2.1, estimate (iii)) is a noticeable analogue of the Logvinenko-Sereda theorem for finite combinations of Hermite functions. As an application of this third result, we extend in Theorem 2.2 the result of null-controllability for parabolic equations associated with accretive quadratic operators with zero singular spaces from any thick set ω \subset \mathbb{R}^n in any positive time T > 0.
Statements of the main results
2.1. Uncertainty principles for finite combinations of Hermite functions. Let (Φ α ) α∈N n be the n-dimensional Hermite functions and
(2.1) E_N = \mathrm{Span}_{\mathbb{C}} \{Φ_α\}_{α \in \mathbb{N}^n,\, |α| \leq N}

be the finite dimensional vector space spanned by all the Hermite functions Φ_α with |α| \leq N, whose definition is recalled in Appendix (Section 4.1).

As the Lebesgue measure of the zero set of a non-zero analytic function on \mathbb{C} is zero, the L^2-norm \|\cdot\|_{L^2(ω)} on any measurable set ω \subset \mathbb{R} of positive measure |ω| > 0 defines a norm on the finite dimensional vector space E_N. As a consequence of the Remez inequality, we check in Appendix (Section 4.4) that this result holds true as well in the multi-dimensional case when ω \subset \mathbb{R}^n, with n \geq 1, is a measurable subset of positive Lebesgue measure |ω| > 0. By equivalence of norms in finite dimension, for any measurable set ω \subset \mathbb{R}^n of positive Lebesgue measure |ω| > 0 and all N \in \mathbb{N}, there therefore exists a positive constant C_N(ω) > 0 depending on ω and N such that the following spectral inequality holds:

(2.2) \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq C_N(ω)\, \|f\|_{L^2(ω)}.
We aim at studying how the geometrical properties of the set ω relate to the possible growth of the positive constant C N (ω) > 0 with respect to the energy level. The main results of the present work are given by the following uncertainty principles for finite combinations of Hermite functions:
Theorem 2.1. With E N the finite dimensional vector space spanned by the Hermite functions (Φ α ) |α|≤N defined in (2.1), the following spectral inequalities hold:
(i) If ω is a non-empty open subset of \mathbb{R}^n, then there exists a positive constant C = C(ω) > 1 such that

\forall N \in \mathbb{N}, \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq C e^{\frac{1}{2} N \ln(N+1) + CN} \|f\|_{L^2(ω)}.

(ii) If the measurable subset ω \subset \mathbb{R}^n satisfies the condition (1.23), then there exists a positive constant C = C(ω) > 1 such that

\forall N \in \mathbb{N}, \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq C e^{CN} \|f\|_{L^2(ω)}.

(iii) If the measurable subset ω \subset \mathbb{R}^n is γ-thick at scale L > 0 in the sense defined in (1.5), then there exist a positive constant C = C(L, γ, n) > 0 depending on the dimension n \geq 1 and the parameters γ, L > 0, and a universal positive constant κ = κ(n) > 0 only depending on the dimension, such that

\forall N \in \mathbb{N}, \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq C \Big(\frac{κ}{γ}\Big)^{κ L \sqrt{N}} \|f\|_{L^2(ω)}.
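As a one-dimensional numerical illustration of estimate (iii), not taken from the text, the following Python sketch estimates the ratio ‖f‖_{L²(ℝ)} / ‖f‖_{L²(ω)} for a random element of E_N over a periodic thick set; the parameter values and the choice of ω are arbitrary.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

N, L, gamma = 20, 1.0, 0.3
x = np.linspace(-30, 30, 120001)
rng = np.random.default_rng(0)
coeffs = rng.normal(size=N + 1)
# Hermite functions: Phi_k(x) = (2^k k! sqrt(pi))^(-1/2) H_k(x) exp(-x^2/2)
norm = np.array([1.0 / sqrt(2.0**k * factorial(k) * sqrt(pi)) for k in range(N + 1)])
f = hermval(x, coeffs * norm) * np.exp(-x**2 / 2)   # random element of E_N
omega = (x % L) < gamma * L                         # gamma-thick set at scale L
ratio = np.sqrt(np.trapz(f**2, x) / np.trapz(np.where(omega, f, 0.0)**2, x))
print(ratio)                                        # bounded as in Theorem 2.1 (iii)
```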
According to the above result, the control on the growth of the positive constant C N (ω) > 0 with respect to the energy level for an arbitrary non-empty open subset ω of R n , or when the measurable subset ω ⊂ R n satisfies the condition (1.23), is not sufficient to satisfy the estimates (1.20) needed to obtain some results of null-controllability and observability for the parabolic equations associated to the class of hypoelliptic quadratic operators studied in Section 2.2. As the one-dimensional harmonic heat equation is known from [13, Proposition 5.1], see also [START_REF] Miller | Unique continuation estimates for sums of semiclassical eigenfunctions and nullcontrollability from cones[END_REF], to not be null-controllable, nor observable, in any time T > 0 from a half-line and as the harmonic oscillator obviously belongs to the class of hypoelliptic quadratic operators studied in Section 2.2, we observe that spectral estimates of the type
\exists 0 < a < 1, \exists C > 1, \forall N \in \mathbb{N}, \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq C e^{C N^a} \|f\|_{L^2(ω)},

cannot hold for an arbitrary non-empty open subset ω of \mathbb{R}^n, nor when the measurable subset ω \subset \mathbb{R}^n satisfies the condition (1.23), since Theorem 1.6 together with (1.16) would then imply the null-controllability and the observability of the one-dimensional harmonic heat equation from a half-line. This would be in contradiction with the results of [START_REF] Duyckaerts | Resolvent conditions the control of parabolic equations[END_REF][START_REF] Miller | Unique continuation estimates for sums of semiclassical eigenfunctions and nullcontrollability from cones[END_REF].

On the other hand, when the measurable subset ω \subset \mathbb{R}^n is γ-thick at scale L > 0, the above spectral inequality (iii) is an analogue for finite combinations of Hermite functions of the sharpened version of the Logvinenko-Sereda theorem proved by Kovrijkine in [30, Theorem 3], with a similar dependence of the constant with respect to the parameters 0 < γ \leq 1 and L > 0 as in (1.6). Notice that the growth in \sqrt{N} is of the order of the square root of the largest eigenvalue of the harmonic oscillator H = -\Delta_x + |x|^2 on the spectral vector subspace E_N, whereas the growth in R in (1.6) is also of the order of the square root of the largest spectral value of the Laplace operator -\Delta_x on the spectral vector subspace

E_R = \big\{f \in L^2(\mathbb{R}^n) : \mathrm{supp}\, \hat{f} \subset \{ξ = (ξ_1, ..., ξ_n) \in \mathbb{R}^n : \forall j = 1, ..., n, \ |ξ_j| \leq R\}\big\}.
This is in agreement with what is usually expected for that type of spectral inequalities, see [START_REF] Rousseau | Applications to unique continuation and control of parabolic equations[END_REF].
The spectral inequality (i) for arbitrary non-empty open subsets is proved in Section 3.1. Its proof uses some estimates on Hermite functions together with the Remez inequality. The spectral inequality (ii) for measurable subsets satisfying the condition (1.23) is proved in Section 3.2 and follows from similar arguments as the ones used in Section 3.1. The spectral inequality (iii) for thick sets is proved in Section 3.3. This proof is an adaptation of the proof of the sharpened version of the Logvinenko-Sereda theorem given by Kovrijkine in [30, Theorem 1]. Since, as in [START_REF] Kovrijkine | Some results related to the Logvinenko-Sereda Theorem[END_REF], that proof is only written with full details in the one-dimensional case, with hints for its extension to the multi-dimensional one following some ideas of Nazarov [START_REF] Nazarov | Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type[END_REF], the proof given in Section 3.3 is more specifically inspired by the proof of the Logvinenko-Sereda theorem in the multi-dimensional setting given by Wang, Wang, Zhang and Zhang in [49, Lemma 2.1].
2.2. Null-controllability of hypoelliptic quadratic equations. This section presents the result of null-controllability for parabolic equations associated with a general class of hypoelliptic non-selfadjoint accretive quadratic operators from any thick set ω of \mathbb{R}^n in any positive time T > 0. We begin by recalling a few facts about quadratic operators.

2.2.1. Miscellaneous facts about quadratic differential operators. Quadratic operators are pseudodifferential operators defined in the Weyl quantization

(2.3) q^w(x, D_x) f(x) = \frac{1}{(2π)^n} \int_{\mathbb{R}^{2n}} e^{i(x-y)\cdot ξ}\, q\Big(\frac{x+y}{2}, ξ\Big) f(y)\, dy\, dξ,

by symbols q(x, ξ), with (x, ξ) \in \mathbb{R}^n \times \mathbb{R}^n, n \geq 1, which are complex-valued quadratic forms

q : \mathbb{R}^n_x \times \mathbb{R}^n_ξ \to \mathbb{C}, \quad (x, ξ) \mapsto q(x, ξ).

These operators are in general non-selfadjoint differential operators, with simple and fully explicit expressions, since the Weyl quantization of the quadratic symbol x^α ξ^β, with (α, β) \in \mathbb{N}^{2n}, |α + β| = 2, is the differential operator

\frac{x^α D_x^β + D_x^β x^α}{2}, \qquad D_x = i^{-1} \partial_x.
Let q^w(x, D_x) be a quadratic operator defined by the Weyl quantization (2.3) of a complex-valued quadratic form q on the phase space \mathbb{R}^{2n}. The maximal closed realization of the quadratic operator q^w(x, D_x) on L^2(\mathbb{R}^n), that is, the operator equipped with the domain

(2.4) D(q^w) = \big\{f \in L^2(\mathbb{R}^n) : q^w(x, D_x) f \in L^2(\mathbb{R}^n)\big\},
where q w (x, D x )f is defined in the distribution sense, is known to coincide with the graph closure of its restriction to the Schwartz space [28, pp. 425-426],
q^w(x, D_x) : \mathscr{S}(\mathbb{R}^n) \to \mathscr{S}(\mathbb{R}^n).
Let q : R n x × R n ξ → C be a quadratic form defined on the phase space and write q(•, •) for its associated polarized form. Classically, one associates to q a matrix F ∈ M 2n (C) called its Hamilton map, or its fundamental matrix. With σ standing for the standard symplectic form
(2.5) σ((x, ξ), (y, η)) = \langle ξ, y \rangle - \langle x, η \rangle = \sum_{j=1}^{n} (ξ_j y_j - x_j η_j), \quad \text{with } x = (x_1, ..., x_n),\ y = (y_1, ..., y_n),\ ξ = (ξ_1, ..., ξ_n),\ η = (η_1, ..., η_n) \in \mathbb{C}^n,
the Hamilton map F is defined as the unique matrix satisfying the identity
(2.6) \forall (x, ξ) \in \mathbb{R}^{2n}, \forall (y, η) \in \mathbb{R}^{2n}, \quad q((x, ξ), (y, η)) = σ((x, ξ), F(y, η)).
We observe from the definition that

F = \frac{1}{2} \begin{pmatrix} \nabla_ξ \nabla_x q & \nabla_ξ^2 q \\ -\nabla_x^2 q & -\nabla_x \nabla_ξ q \end{pmatrix},

where the matrices \nabla_x^2 q = (a_{i,j})_{1 \leq i,j \leq n}, \nabla_ξ^2 q = (b_{i,j})_{1 \leq i,j \leq n}, \nabla_ξ \nabla_x q = (c_{i,j})_{1 \leq i,j \leq n} and \nabla_x \nabla_ξ q = (d_{i,j})_{1 \leq i,j \leq n} are defined by the entries

a_{i,j} = \partial^2_{x_i, x_j} q, \quad b_{i,j} = \partial^2_{ξ_i, ξ_j} q, \quad c_{i,j} = \partial^2_{ξ_i, x_j} q, \quad d_{i,j} = \partial^2_{x_i, ξ_j} q.
The notion of singular space was introduced in [START_REF] Hitrik | Spectra and semigroup smoothing for non-elliptic quadratic operators[END_REF] by Hitrik and the third author by pointing out the existence of a particular vector subspace in the phase space S ⊂ R 2n , which is intrinsically associated with a given quadratic symbol q. This vector subspace is defined as the following finite intersection of kernels
(2.7) S = \Big(\bigcap_{j=0}^{2n-1} \mathrm{Ker}\big[\mathrm{Re}\, F\, (\mathrm{Im}\, F)^j\big]\Big) \cap \mathbb{R}^{2n},

where \mathrm{Re}\, F and \mathrm{Im}\, F stand respectively for the real and imaginary parts of the Hamilton map F associated with the quadratic symbol q,

\mathrm{Re}\, F = \frac{1}{2}(F + \overline{F}), \qquad \mathrm{Im}\, F = \frac{1}{2i}(F - \overline{F}).
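As a sanity check that is not taken from the text, one can compute the Hamilton map and the singular space of the one-dimensional harmonic oscillator symbol q(x, ξ) = x^2 + ξ^2. Its polarized form is q((x, ξ), (y, η)) = xy + ξη, and solving the defining identity (2.6), namely ξY − xH = xy + ξη for F(y, η) = (Y, H), gives:

\[
F(y, \eta) = (\eta, -y), \qquad
F = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad
\operatorname{Re} F = F, \quad \operatorname{Im} F = 0, \qquad
S = \operatorname{Ker}(\operatorname{Re} F) \cap \mathbb{R}^2 = \{0\}.
\]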
As pointed out in [START_REF] Hitrik | Spectra and semigroup smoothing for non-elliptic quadratic operators[END_REF][START_REF] Hitrik | Short-time asymptotics of the regularizing effect for semigroups generated by quadratic operators[END_REF][START_REF] Hitrik | From semigroups to subelliptic estimates for quadratic operators[END_REF][START_REF] Ottobre | Exponential return to equilibrium for hypoelliptic quadratic systems[END_REF][START_REF] Pravda-Starov | Subelliptic estimates for quadratic differential operators[END_REF][START_REF] Pravda-Starov | Propagation of Gabor singularities for Schrödinger equations with quadratic Hamiltonians[END_REF][START_REF] Viola | Spectral projections and resolvent bounds for partially elliptic quadratic differential operators[END_REF], the notion of singular space plays a basic role in the understanding of the spectral and hypoelliptic properties of the (possibly) nonelliptic quadratic operator q w (x, D x ), as well as the spectral and pseudospectral properties of certain classes of degenerate doubly characteristic pseudodifferential operators [START_REF] Hitrik | Semiclassical hypoelliptic estimates for non-selfadjoint operators with double characteristics[END_REF][START_REF] Hitrik | Eigenvalues and subelliptic estimates for non-selfadjoint semiclassical operators with double characteristics[END_REF][START_REF] Viola | Resolvent estimates for non-selfadjoint operators with double characteristics[END_REF][START_REF] Viola | Non-elliptic quadratic forms and semiclassical estimates for non-selfadjoint operators[END_REF]. In particular, the work [23, Theorem 1.2.2] gives a complete description for the spectrum of any non-elliptic quadratic operator q w (x, D x ) whose Weyl symbol q has a non-negative real part Re q ≥ 0, and satisfies a condition of partial ellipticity along its singular space S,
(2.8) (x, ξ) ∈ S, q(x, ξ) = 0 ⇒ (x, ξ) = 0.
Under these assumptions, the spectrum of the quadratic operator q^w(x, D_x) is shown to be composed of a countable number of eigenvalues with finite algebraic multiplicities. The structure of this spectrum is similar to the one known for elliptic quadratic operators [START_REF] Sjöstrand | Parametrices for pseudodifferential operators with multiple characteristics[END_REF]. This condition of partial ellipticity, when S \subsetneq \mathbb{R}^{2n}, is generally weaker than the condition of ellipticity and allows one to deal with more degenerate situations. An important class of quadratic operators satisfying condition (2.8) are those with zero singular spaces S = \{0\}.
In this case, the condition of partial ellipticity trivially holds. More specifically, these quadratic operators have been shown in [39, Theorem 1.2.1] to be hypoelliptic and to enjoy global subelliptic estimates of the type
(2.9) \exists C > 0, \forall f \in \mathscr{S}(\mathbb{R}^n), \quad \|\langle(x, D_x)\rangle^{2(1-δ)} f\|_{L^2(\mathbb{R}^n)} \leq C\big(\|q^w(x, D_x) f\|_{L^2(\mathbb{R}^n)} + \|f\|_{L^2(\mathbb{R}^n)}\big), \qquad \text{where } \langle(x, D_x)\rangle^2 = 1 + |x|^2 + |D_x|^2,

with a sharp loss of derivatives 0 \leq δ < 1 with respect to the elliptic case (case δ = 0), which can be explicitly derived from the structure of the singular space.
When the quadratic symbol q has a non-negative real part Re q ≥ 0, the singular space can be also defined in an equivalent way as the subspace in the phase space where all the Poisson brackets
H^k_{\mathrm{Im}\, q} \mathrm{Re}\, q = \Big(\frac{\partial\, \mathrm{Im}\, q}{\partial ξ} \cdot \frac{\partial}{\partial x} - \frac{\partial\, \mathrm{Im}\, q}{\partial x} \cdot \frac{\partial}{\partial ξ}\Big)^k \mathrm{Re}\, q, \quad k \geq 0,

are vanishing,

S = \big\{X = (x, ξ) \in \mathbb{R}^{2n} : (H^k_{\mathrm{Im}\, q} \mathrm{Re}\, q)(X) = 0, \ k \geq 0\big\}.

This dynamical definition shows that the singular space corresponds exactly to the set of points X \in \mathbb{R}^{2n} where the real part of the symbol \mathrm{Re}\, q under the flow of the Hamilton vector field H_{\mathrm{Im}\, q} associated with its imaginary part,

(2.10) t \mapsto (\mathrm{Re}\, q)(e^{t H_{\mathrm{Im}\, q}} X),

vanishes to infinite order at t = 0. This is also equivalent to the fact that the function (2.10) is identically zero on \mathbb{R}.

In this work, we study the class of quadratic operators whose Weyl symbols have non-negative real parts \mathrm{Re}\, q \geq 0 and zero singular spaces S = \{0\}. According to the above description of the singular space, these quadratic operators are exactly those whose Weyl symbols have a non-negative real part \mathrm{Re}\, q \geq 0, becoming positive definite

(2.11) \forall T > 0, \quad \mathrm{Re}\, q_T(X) = \frac{1}{2T} \int_{-T}^{T} (\mathrm{Re}\, q)(e^{t H_{\mathrm{Im}\, q}} X)\, dt \gg 0,
after averaging by the linear flow of the Hamilton vector field associated with its imaginary part. These quadratic operators are also known [23, Theorem 1.2.1] to generate strongly continuous contraction semigroups (e -tq w ) t≥0 on L 2 (R n ), which are smoothing in the Schwartz space for any positive time
∀t > 0, ∀f ∈ L 2 (R n ), e -tq w f ∈ S (R n ).
In the recent work [27, Theorem 1.2], these regularizing properties were sharpened and these contraction semigroups were shown to be actually smoothing for any positive time in the Gelfand-Shilov space
S^{1/2}_{1/2}(\mathbb{R}^n): there exist C > 0 and t_0 > 0 such that for all f \in L^2(\mathbb{R}^n), all α, β \in \mathbb{N}^n and all 0 < t \leq t_0,

(2.12) \|x^α \partial_x^β (e^{-t q^w} f)\|_{L^\infty(\mathbb{R}^n)} \leq \frac{C^{1+|α|+|β|}}{t^{\frac{2k_0+1}{2}(|α|+|β|+2n+s)}}\, (α!)^{1/2} (β!)^{1/2}\, \|f\|_{L^2(\mathbb{R}^n)},
where s is a fixed integer verifying s > n/2, and where 0 ≤ k 0 ≤ 2n -1 is the smallest integer satisfying (2.13)
(2.13) \Big(\bigcap_{j=0}^{k_0} \mathrm{Ker}\big[\mathrm{Re}\, F\, (\mathrm{Im}\, F)^j\big]\Big) \cap \mathbb{R}^{2n} = \{0\}.
The definition and a few facts about the Gelfand-Shilov regularity are recalled in Appendix (Section 4.3). Thanks to this Gelfand-Shilov smoothing effect (2.12), the first and third authors have established in [4, Proposition 4.1] that, for any quadratic form q : \mathbb{R}^{2n}_{x,ξ} \to \mathbb{C} with a non-negative real part \mathrm{Re}\, q \geq 0 and a zero singular space S = \{0\}, the dissipation estimate (1.16) holds with 0 \leq k_0 \leq 2n-1 being the smallest integer satisfying (2.13). Let ω \subset \mathbb{R}^n be a measurable γ-thick set at scale L > 0. We can then deduce from Theorem 1.6, with the following choices of parameters:
(i) Ω = \mathbb{R}^n,
(ii) A = q^w(x, D_x),
(iii) a = \frac{1}{2}, b = 1,
(iv) t_0 > 0 as in (1.16) and (1.17),
(v) m = 2k_0 + 1, where k_0 is defined in (2.13),
(vi) any constant c_1 > 0 satisfying

\forall k \geq 1, \quad C \Big(\frac{κ}{γ}\Big)^{κ L \sqrt{k}} \leq e^{c_1 \sqrt{k}},

where the positive constants C = C(L, γ, n) > 0 and κ = κ(n) > 0 are defined in Theorem 2.1 (formula (iii)),
(vii) c_2 = \frac{1}{C_0} > 0, where C_0 > 1 is defined in (1.16) and (1.17),
the following observability estimate in any positive time:

\exists C > 1, \forall T > 0, \forall f \in L^2(\mathbb{R}^n), \quad \|e^{-T q^w} f\|^2_{L^2(\mathbb{R}^n)} \leq C \exp\Big(\frac{C}{T^{2k_0+1}}\Big) \int_0^T \|e^{-t q^w} f\|^2_{L^2(ω)}\, dt.
We therefore obtain the following result of null-controllability:
Theorem 2.2. Let q : \mathbb{R}^n_x \times \mathbb{R}^n_ξ \to \mathbb{C} be a complex-valued quadratic form with a non-negative real part \mathrm{Re}\, q \geq 0 and a zero singular space S = \{0\}. If ω is a measurable thick subset of \mathbb{R}^n, then the parabolic equation

\begin{cases} \partial_t f(t, x) + q^w(x, D_x) f(t, x) = u(t, x)\, \mathbb{1}_ω(x), & x \in \mathbb{R}^n, \\ f|_{t=0} = f_0 \in L^2(\mathbb{R}^n), \end{cases}
with q w (x, D x ) being the quadratic differential operator defined by the Weyl quantization of the symbol q, is null-controllable from the set ω in any positive time T > 0.
As in [START_REF] Beauchard | Null-controllability of hypoelliptic quadratic differential equations[END_REF], this new result of null-controllability given by Theorem 2.2 applies in particular for the parabolic equation associated to the Kramers-Fokker-Planck operator
(2.14) K = -\Delta_v + \frac{v^2}{4} + v \partial_x - \nabla_x V(x)\, \partial_v, \quad (x, v) \in \mathbb{R}^2,

with a quadratic potential

V(x) = \frac{1}{2} a x^2, \quad a \in \mathbb{R}^*,
which is an example of accretive quadratic operator with a zero singular space S = {0}. It also applies in the very same way to hypoelliptic Ornstein-Uhlenbeck equations posed in weighted L 2 -spaces with respect to invariant measures, or to hypoelliptic Fokker-Planck equations posed in weighted L 2 -spaces with respect to invariant measures. We refer the reader to the works [START_REF] Beauchard | Null-controllability of hypoelliptic quadratic differential equations[END_REF][START_REF] Ottobre | Exponential return to equilibrium for hypoelliptic quadratic systems[END_REF] for detailed discussions of various physics models whose evolution turns out to be ruled by accretive quadratic operators with zero singular space and to which therefore apply the above result of null-controllability.
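For the record, a direct computation of the Weyl symbol of the Kramers-Fokker-Planck operator (2.14), sketched here as a complement rather than quoted from the text, shows how it fits the framework above (the transport terms contribute a purely imaginary part, so the real part of the symbol is non-negative):

\[
q(x, v, \xi, \eta) = \eta^2 + \frac{v^2}{4} + i\,(v\xi - a\,x\,\eta),
\qquad \operatorname{Re} q = \eta^2 + \frac{v^2}{4} \geq 0.
\]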
Proof of the spectral inequalities
This section is devoted to the proof of Theorem 2.1.

3.1. Case when the control subset is a non-empty open set. Let ω be a non-empty open subset of \mathbb{R}^n. Since ω is open and non-empty, we can fix x_0 \in \mathbb{R}^n and r > 0 such that B(x_0, r) \subset ω, where B(x_0, r) denotes the open Euclidean ball centered at x_0 with radius r. We recall from (2.2) that

(3.2) \forall N \in \mathbb{N}, \exists C_N(ω) > 0, \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq C_N(ω)\, \|f\|_{L^2(ω)}.

On the other hand, it follows from Lemma 4.2 that

(3.3) \forall N \in \mathbb{N}, \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq \frac{2}{\sqrt{3}}\, \|f\|_{L^2(B(0, c_n\sqrt{N+1}))}.

Let N \in \mathbb{N} and f \in E_N. According to (4.1) and (4.6), there exists a complex polynomial function P \in \mathbb{C}[X_1, ..., X_n] of degree at most N such that

(3.4) \forall x \in \mathbb{R}^n, \quad f(x) = P(x)\, e^{-\frac{|x|^2}{2}}.

We observe from (3.3) and (3.4) that

(3.5) \|f\|^2_{L^2(\mathbb{R}^n)} \leq \frac{4}{3} \int_{B(0, c_n\sqrt{N+1})} |P(x)|^2 e^{-|x|^2}\, dx \leq \frac{4}{3}\, \|P\|^2_{L^2(B(0, c_n\sqrt{N+1}))}

and

(3.6) \|P\|^2_{L^2(B(x_0, r))} = \int_{B(x_0, r)} |P(x)|^2 e^{-|x|^2} e^{|x|^2}\, dx \leq e^{(|x_0|+r)^2}\, \|f\|^2_{L^2(B(x_0, r))}.
We aim at deriving an estimate of the term \|P\|_{L^2(B(0, c_n\sqrt{N+1}))} by \|P\|_{L^2(B(x_0, r))} when N \gg 1 is sufficiently large. To that end, we may assume that P is a non-zero polynomial function. Let N be an integer such that c_n\sqrt{N+1} > 2|x_0| + r. It implies the inclusion B(x_0, r) \subset B(0, c_n\sqrt{N+1}). By using polar coordinates centered at x_0, we notice that

B(x_0, r) = \{x_0 + tσ : 0 \leq t < r, \ σ \in \mathbb{S}^{n-1}\}

and

(3.7) \|P\|^2_{L^2(B(x_0, r))} = \int_{\mathbb{S}^{n-1}} \int_0^r |P(x_0 + tσ)|^2\, t^{n-1}\, dt\, dσ.

As c_n\sqrt{N+1} > 2|x_0| + r, we notice that there exists a continuous function ρ_N : \mathbb{S}^{n-1} \to (0, +\infty) such that

(3.8) B(0, c_n\sqrt{N+1}) = \{x_0 + tσ : 0 \leq t < ρ_N(σ), \ σ \in \mathbb{S}^{n-1}\}

and

(3.9) \forall σ \in \mathbb{S}^{n-1}, \quad 0 < |x_0| + r < c_n\sqrt{N+1} - |x_0| < ρ_N(σ) < c_n\sqrt{N+1} + |x_0|.

It follows from (3.8) and (3.9) that

(3.10) \|P\|^2_{L^2(B(0, c_n\sqrt{N+1}) \setminus B(x_0, \frac{r}{2}))} = \int_{\mathbb{S}^{n-1}} \int_{\frac{r}{2}}^{ρ_N(σ)} |P(x_0 + tσ)|^2\, t^{n-1}\, dt\, dσ \leq (c_n\sqrt{N+1} + |x_0|)^{n-1} \int_{\mathbb{S}^{n-1}} \int_{\frac{r}{2}}^{ρ_N(σ)} |P(x_0 + tσ)|^2\, dt\, dσ.
By noticing that

t \mapsto P\Big(x_0 + \Big(\frac{ρ_N(σ)}{2} + \frac{r}{4}\Big)σ + tσ\Big)

is a polynomial function of degree at most N, we deduce from (3.9) and Lemma 4.4, used in the one-dimensional case n = 1, that

(3.11) \int_{\frac{r}{2}}^{ρ_N(σ)} |P(x_0 + tσ)|^2\, dt = \int_{-(\frac{ρ_N(σ)}{2} - \frac{r}{4})}^{\frac{ρ_N(σ)}{2} - \frac{r}{4}} \Big|P\Big(x_0 + \Big(\frac{ρ_N(σ)}{2} + \frac{r}{4}\Big)σ + tσ\Big)\Big|^2\, dt
\leq \frac{2^{4N+2}}{3}\, \frac{4(ρ_N(σ) - \frac{r}{2})}{\frac{r}{2}} \Bigg(\frac{2 - \frac{r/2}{4(ρ_N(σ) - r/2)}}{\frac{r/2}{4(ρ_N(σ) - r/2)}}\Bigg)^{2N} \int_{-(\frac{ρ_N(σ)}{2} - \frac{r}{4})}^{-\frac{ρ_N(σ)}{2} + \frac{3r}{4}} \Big|P\Big(x_0 + \Big(\frac{ρ_N(σ)}{2} + \frac{r}{4}\Big)σ + tσ\Big)\Big|^2\, dt
\leq \frac{2^{4N+2}}{3}\, \frac{4(ρ_N(σ) - \frac{r}{2})}{\frac{r}{2}} \Bigg(\frac{2 - \frac{r/2}{4(ρ_N(σ) - r/2)}}{\frac{r/2}{4(ρ_N(σ) - r/2)}}\Bigg)^{2N} \int_{\frac{r}{2}}^{r} |P(x_0 + tσ)|^2\, dt
\leq \frac{2^{12N+n+4}}{3\, r^{2N+n}} \Big(c_n\sqrt{N+1} + |x_0| - \frac{r}{2}\Big)^{2N+1} \int_{\frac{r}{2}}^{r} |P(x_0 + tσ)|^2\, t^{n-1}\, dt.

It follows from (3.10) and (3.11) that

(3.12) \|P\|^2_{L^2(B(0, c_n\sqrt{N+1}) \setminus B(x_0, \frac{r}{2}))} \leq (c_n\sqrt{N+1} + |x_0|)^{n-1}\, \frac{2^{12N+n+4}}{3\, r^{2N+n}} \Big(c_n\sqrt{N+1} + |x_0| - \frac{r}{2}\Big)^{2N+1} \int_{\mathbb{S}^{n-1}} \int_{\frac{r}{2}}^{r} |P(x_0 + tσ)|^2\, t^{n-1}\, dt\, dσ,

implying that

(3.13) \|P\|^2_{L^2(B(0, c_n\sqrt{N+1}))} \leq \Big(1 + (c_n\sqrt{N+1} + |x_0|)^{n-1}\, \frac{2^{12N+n+4}}{3\, r^{2N+n}} \Big(c_n\sqrt{N+1} + |x_0| - \frac{r}{2}\Big)^{2N+1}\Big)\, \|P\|^2_{L^2(B(x_0, r))},
thanks to (3.7). We deduce from (3.13) that there exists a positive constant C = C(x_0, r, n) > 1, independent of the parameter N, such that

(3.14) \|P\|_{L^2(B(0, c_n\sqrt{N+1}))} \leq C e^{\frac{1}{2} N \ln(N+1) + CN}\, \|P\|_{L^2(B(x_0, r))}.

It then follows from (3.5), (3.6) and (3.14) that

(3.15) \|f\|_{L^2(\mathbb{R}^n)} \leq \frac{2}{\sqrt{3}}\, C e^{\frac{1}{2}(|x_0|+r)^2} e^{\frac{1}{2} N \ln(N+1) + CN}\, \|f\|_{L^2(B(x_0, r))}.

The two estimates (3.2) and (3.15) allow us to prove the assertion (i) in Theorem 2.1, since B(x_0, r) \subset ω.
3.2. Case when the control subset is a measurable set satisfying the condition (1.23). Let ω \subset \mathbb{R}^n be a measurable subset satisfying the condition

(3.16) \liminf_{R \to +\infty} \frac{|ω \cap B(0, R)|}{|B(0, R)|} = \lim_{R \to +\infty} \inf_{r \geq R} \frac{|ω \cap B(0, r)|}{|B(0, r)|} > 0,

where B(0, R) denotes the open Euclidean ball in \mathbb{R}^n centered at 0 with radius R > 0. It follows that there exist some positive constants R_0 > 0 and δ > 0 such that

(3.17) \forall R \geq R_0, \quad \frac{|ω \cap B(0, R)|}{|B(0, R)|} \geq δ > 0.

We recall from (2.2) that

(3.18) \forall N \in \mathbb{N}, \exists C_N(ω) > 0, \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq C_N(ω)\, \|f\|_{L^2(ω)},

and as in the above section, it follows from Lemma 4.2 that

(3.19) \forall N \in \mathbb{N}, \forall f \in E_N, \quad \|f\|_{L^2(\mathbb{R}^n)} \leq \frac{2}{\sqrt{3}}\, \|f\|_{L^2(B(0, c_n\sqrt{N+1}))}.

Let N \in \mathbb{N} be an integer satisfying c_n\sqrt{N+1} \geq R_0 and f \in E_N. It follows from (3.17) that

(3.20) |ω \cap B(0, c_n\sqrt{N+1})| \geq δ\, |B(0, c_n\sqrt{N+1})| > 0.
According to (4.1) and (4.6), there exists a complex polynomial function P \in \mathbb{C}[X_1, ..., X_n] of degree at most N such that

(3.21) \forall x \in \mathbb{R}^n, \quad f(x) = P(x)\, e^{-\frac{|x|^2}{2}}.

We observe from (3.19) and (3.21) that

(3.22) \|f\|^2_{L^2(\mathbb{R}^n)} \leq \frac{4}{3} \int_{B(0, c_n\sqrt{N+1})} |P(x)|^2 e^{-|x|^2}\, dx \leq \frac{4}{3}\, \|P\|^2_{L^2(B(0, c_n\sqrt{N+1}))}

and

(3.23) \|P\|^2_{L^2(ω \cap B(0, c_n\sqrt{N+1}))} = \int_{ω \cap B(0, c_n\sqrt{N+1})} |P(x)|^2 e^{-|x|^2} e^{|x|^2}\, dx \leq e^{c_n^2(N+1)}\, \|f\|^2_{L^2(ω \cap B(0, c_n\sqrt{N+1}))}.

We deduce from Lemma 4.4 and (3.20) that

(3.24) \|P\|^2_{L^2(B(0, c_n\sqrt{N+1}))} \leq \frac{2^{4N+2}}{3}\, \frac{4\, |B(0, c_n\sqrt{N+1})|}{|ω \cap B(0, c_n\sqrt{N+1})|}\, F\Big(\frac{|ω \cap B(0, c_n\sqrt{N+1})|}{4\, |B(0, c_n\sqrt{N+1})|}\Big)^{2N}\, \|P\|^2_{L^2(ω \cap B(0, c_n\sqrt{N+1}))},
with F the decreasing function

\forall 0 < t \leq 1, \quad F(t) = \frac{1 + (1-t)^{\frac{1}{n}}}{1 - (1-t)^{\frac{1}{n}}} \geq 1.

By using that F is a decreasing function, it follows from (3.20) and (3.24) that

(3.25) \|P\|^2_{L^2(B(0, c_n\sqrt{N+1}))} \leq \frac{2^{4N+4}}{3δ}\, F\Big(\frac{δ}{4}\Big)^{2N}\, \|P\|^2_{L^2(ω \cap B(0, c_n\sqrt{N+1}))}.

Putting together (3.22), (3.23) and (3.25), we deduce that there exists a positive constant C = C(δ, n) > 0 such that for all N \in \mathbb{N} with c_n\sqrt{N+1} \geq R_0 and for all f \in E_N,

(3.26) \|f\|^2_{L^2(\mathbb{R}^n)} \leq \frac{2^{4N+6}}{9δ}\, F\Big(\frac{δ}{4}\Big)^{2N} e^{c_n^2(N+1)}\, \|f\|^2_{L^2(ω \cap B(0, c_n\sqrt{N+1}))} \leq C^2 e^{2CN}\, \|f\|^2_{L^2(ω)}.

The two estimates (3.18) and (3.26) allow us to prove the assertion (ii) in Theorem 2.1.
3.3. Case when the control subset is a thick set. Let ω be a measurable subset of \mathbb{R}^n. We assume that ω is γ-thick at scale L > 0, that is,

(3.27) \exists 0 < γ \leq 1, \exists L > 0, \forall x \in \mathbb{R}^n, \quad |ω \cap (x + [0, L]^n)| \geq γ L^n.
The following proof is an adaptation of the proof of the sharpened version of the Logvinenko-Sereda theorem given by Kovrijkine in [30, Theorem 1] in the one-dimensional setting, and the one given by Wang, Wang, Zhang and Zhang in [49, Lemma 2.1] in the multidimensional case.
3.3.1. Step 1. Bad and good cubes. Let N \in \mathbb{N} be a non-negative integer and f \in E_N \setminus \{0\}. For each multi-index α = (α_1, ..., α_n) \in (L\mathbb{Z})^n, let

Q(α) = \Big\{x = (x_1, ..., x_n) \in \mathbb{R}^n : \forall 1 \leq j \leq n, \ |x_j - α_j| < \frac{L}{2}\Big\}.

Notice that

\forall α, β \in (L\mathbb{Z})^n, \ α \neq β, \quad Q(α) \cap Q(β) = \emptyset, \qquad \mathbb{R}^n = \bigcup_{α \in (L\mathbb{Z})^n} \overline{Q(α)},

where \overline{Q(α)} denotes the closure of Q(α). It follows that for all f \in L^2(\mathbb{R}^n),

\|f\|^2_{L^2(\mathbb{R}^n)} = \int_{\mathbb{R}^n} |f(x)|^2\, dx = \sum_{α \in (L\mathbb{Z})^n} \int_{Q(α)} |f(x)|^2\, dx.
Let δ > 0 be a positive constant to be chosen later on. We divide the family of cubes (Q(α))_{α \in (L\mathbb{Z})^n} into families of good and bad cubes. A cube Q(α), with α \in (L\mathbb{Z})^n, is said to be good if it satisfies

(3.28) \forall β \in \mathbb{N}^n, \quad \int_{Q(α)} |\partial_x^β f(x)|^2\, dx \leq e^{e δ^{-2}} \big(8δ^2(2^n+1)\big)^{|β|} (|β|!)^2\, e^{2δ^{-1}\sqrt{N}} \int_{Q(α)} |f(x)|^2\, dx.

On the other hand, a cube Q(α), with α \in (L\mathbb{Z})^n, which is not good is said to be bad, that is,

(3.29) \exists β \in \mathbb{N}^n, \ |β| > 0, \quad \int_{Q(α)} |\partial_x^β f(x)|^2\, dx > e^{e δ^{-2}} \big(8δ^2(2^n+1)\big)^{|β|} (|β|!)^2\, e^{2δ^{-1}\sqrt{N}} \int_{Q(α)} |f(x)|^2\, dx.

If Q(α) is a bad cube, it follows from (3.29) that there exists β_0 \in \mathbb{N}^n, |β_0| > 0, such that

(3.30) \int_{Q(α)} |f(x)|^2\, dx \leq \frac{e^{-e δ^{-2}}}{\big(8δ^2(2^n+1)\big)^{|β_0|} (|β_0|!)^2\, e^{2δ^{-1}\sqrt{N}}} \int_{Q(α)} |\partial_x^{β_0} f(x)|^2\, dx \leq \sum_{β \in \mathbb{N}^n,\, |β|>0} \frac{e^{-e δ^{-2}}}{\big(8δ^2(2^n+1)\big)^{|β|} (|β|!)^2\, e^{2δ^{-1}\sqrt{N}}} \int_{Q(α)} |\partial_x^{β} f(x)|^2\, dx.

By summing over all the bad cubes, we deduce from (3.30) and the Fubini-Tonelli theorem that

(3.31) \sum_{\text{bad cubes}} \int_{Q(α)} |f(x)|^2\, dx \leq \sum_{β \in \mathbb{N}^n,\, |β|>0} \frac{e^{-e δ^{-2}}}{\big(8δ^2(2^n+1)\big)^{|β|} (|β|!)^2\, e^{2δ^{-1}\sqrt{N}}} \sum_{\text{bad cubes}} \int_{Q(α)} |\partial_x^{β} f(x)|^2\, dx \leq \sum_{β \in \mathbb{N}^n,\, |β|>0} \frac{e^{-e δ^{-2}}}{\big(8δ^2(2^n+1)\big)^{|β|} (|β|!)^2\, e^{2δ^{-1}\sqrt{N}}} \int_{\mathbb{R}^n} |\partial_x^{β} f(x)|^2\, dx.
By using that the number of solutions to the equation β_1 + ... + β_n = k, with k \geq 0, n \geq 1 and unknown β = (β_1, ..., β_n) \in \mathbb{N}^n, is given by \binom{k+n-1}{k}, we obtain from the Bernstein type estimates in Proposition 4.3 (formula (i)) and (3.31) that

(3.32) \sum_{\text{bad cubes}} \int_{Q(α)} |f(x)|^2\, dx \leq \sum_{β \in \mathbb{N}^n,\, |β|>0} \frac{1}{\big(2(2^n+1)\big)^{|β|}}\, \|f\|^2_{L^2(\mathbb{R}^n)} = \sum_{k=1}^{+\infty} \binom{k+n-1}{k} \frac{1}{2^k (2^n+1)^k}\, \|f\|^2_{L^2(\mathbb{R}^n)} \leq 2^{n-1} \sum_{k=1}^{+\infty} \frac{1}{(2^n+1)^k}\, \|f\|^2_{L^2(\mathbb{R}^n)} = \frac{1}{2}\, \|f\|^2_{L^2(\mathbb{R}^n)},

since

(3.33) \binom{k+n-1}{k} \leq \sum_{j=0}^{k+n-1} \binom{k+n-1}{j} = 2^{k+n-1}.

By writing

\|f\|^2_{L^2(\mathbb{R}^n)} = \sum_{\text{good cubes}} \int_{Q(α)} |f(x)|^2\, dx + \sum_{\text{bad cubes}} \int_{Q(α)} |f(x)|^2\, dx,

it follows from (3.32) that

(3.34) \|f\|^2_{L^2(\mathbb{R}^n)} \leq 2 \sum_{\text{good cubes}} \int_{Q(α)} |f(x)|^2\, dx.
3.3.2. Step 2. Properties on good cubes. As any cube Q(α) satisfies the cone condition, the Sobolev embedding

W^{n,2}(Q(α)) \hookrightarrow L^\infty(Q(α)),

see e.g. [1, Theorem 4.12], implies that there exists a universal positive constant C_n > 0 depending only on the dimension n \geq 1 such that

(3.35) \forall u \in W^{n,2}(Q(α)), \quad \|u\|_{L^\infty(Q(α))} \leq C_n\, \|u\|_{W^{n,2}(Q(α))}.
By translation invariance of the Lebesgue measure, notice in particular that the constant C_n does not depend on the parameter α \in (L\mathbb{Z})^n. Let Q(α) be a good cube. We deduce from (3.28) and (3.35) that for all β \in \mathbb{N}^n,

(3.36) \|\partial_x^β f\|_{L^\infty(Q(α))} \leq C_n \Big(\sum_{\tilde{β} \in \mathbb{N}^n,\, |\tilde{β}| \leq n} \|\partial_x^{β+\tilde{β}} f\|^2_{L^2(Q(α))}\Big)^{\frac{1}{2}} \leq C_n\, e^{\frac{e δ^{-2}}{2}} e^{δ^{-1}\sqrt{N}} \Big(\sum_{\tilde{β} \in \mathbb{N}^n,\, |\tilde{β}| \leq n} \big(8δ^2(2^n+1)\big)^{|β|+|\tilde{β}|} \big((|β|+|\tilde{β}|)!\big)^2\Big)^{\frac{1}{2}} \|f\|_{L^2(Q(α))} \leq \tilde{C}_n(δ)\, \big(32δ^2(2^n+1)\big)^{\frac{|β|}{2}}\, |β|!\, e^{δ^{-1}\sqrt{N}}\, \|f\|_{L^2(Q(α))},

with

(3.37) \tilde{C}_n(δ) = C_n\, e^{\frac{e δ^{-2}}{2}} \Big(\sum_{\tilde{β} \in \mathbb{N}^n,\, |\tilde{β}| \leq n} \big(32δ^2(2^n+1)\big)^{|\tilde{β}|} (|\tilde{β}|!)^2\Big)^{\frac{1}{2}} > 0,

since (|β| + |\tilde{β}|)! \leq 2^{|β|+|\tilde{β}|}\, |β|!\, |\tilde{β}|!.

Recalling that f is a finite combination of Hermite functions, we deduce from the continuity of the function f and the compactness of \overline{Q(α)} that there exists x_α \in \overline{Q(α)} such that

(3.38) \|f\|_{L^\infty(Q(α))} = |f(x_α)|.

By using spherical coordinates centered at x_α \in \overline{Q(α)} and the fact that the Euclidean diameter of the cube Q(α) is \sqrt{n} L, we observe that

(3.39) |ω \cap Q(α)| = \int_0^{+\infty} \Big(\int_{\mathbb{S}^{n-1}} \mathbb{1}_{ω \cap Q(α)}(x_α + rσ)\, dσ\Big) r^{n-1}\, dr = \int_0^{\sqrt{n}L} \Big(\int_{\mathbb{S}^{n-1}} \mathbb{1}_{ω \cap Q(α)}(x_α + rσ)\, dσ\Big) r^{n-1}\, dr = n^{\frac{n}{2}} L^n \int_0^{1} \Big(\int_{\mathbb{S}^{n-1}} \mathbb{1}_{ω \cap Q(α)}(x_α + \sqrt{n}Lrσ)\, dσ\Big) r^{n-1}\, dr,
where 1l ω∩Q(α) denotes the characteristic function of the measurable set ω ∩ Q(α). By using the Fubini theorem, we deduce from (3.39) that
(3.40) |ω ∩ Q(α)| ≤ n n 2 L n 1 0 S n-1 1l ω∩Q(α) (x α + √ nLrσ)dσ dr = n n 2 L n S n-1 1 0 1l ω∩Q(α) (x α + √ nLrσ)dr dσ = n n 2 L n S n-1 1 0 1l Iσ (r)dr dσ = n n 2 L n S n-1 |I σ |dσ,
where (3.41)
I σ = {r ∈ [0, 1] : x α + √ nLrσ ∈ ω ∩ Q(α)}.
The estimate (3.40) implies that there exists σ 0 ∈ S n-1 such that
(3.42) |ω ∩ Q(α)| ≤ n n 2 L n |S n-1 ||I σ 0 |.
By using the thickness property (3.27), it follows from (3.42) that
(3.43) |I σ 0 | ≥ γ n n 2 |S n-1 | > 0. 3.3.3.
Step 3. Recovery of the L 2 (R)-norm. We first notice that f L 2 (Q(α)) = 0, since f is a non-zero entire function. We consider the entire function
(3.44) ∀z ∈ C, φ(z) = L n 2 f (x α + √ nLzσ 0 ) f L 2 (Q(α))
.
We observe from (3.38) that
|φ(0)| = L n 2 |f (x α )| f L 2 (Q(α)) = L n 2 f L ∞ (Q(α)) f L 2 (Q(α)) ≥ 1.
Instrumental in the proof is the following lemma proved by Kovrijkine in [30, Lemma 1]:
sup x∈[0,1] |f (x α + √ nLxσ 0 )| f L 2 (Q(α)) ≤ C |I σ 0 | ln M ln 2 L n 2 sup x∈Iσ 0 |f (x α + √ nLxσ 0 )| f L 2 (Q(α)) , with (3.46) M = L n 2 sup |z|≤4 |f (x α + √ nLzσ 0 )| f L 2 (Q(α)) .
It follows from (3.43) and (3.45) that
(3.47) sup x∈[0,1] |f (x α + √ nLxσ 0 )| ≤ Cn n 2 |S n-1 | γ ln M ln 2 sup x∈Iσ 0 |f (x α + √ nLxσ 0 )| ≤ M 1 ln 2 ln( Cn n 2 |S n-1 | γ ) sup x∈Iσ 0 |f (x α + √ nLxσ 0 )|.
According to (3.41), we notice that
(3.48) sup x∈Iσ 0 |f (x α + √ nLxσ 0 )| ≤ f L ∞ (ω∩Q(α)) .
On the other hand, we deduce from (3.38) that
(3.49) f L ∞ (Q(α)) = |f (x α )| ≤ sup x∈[0,1] |f (x α + √ nLxσ 0 )|.
It follows from (3.47), (3.48) and (3.49) that
(3.50) f L ∞ (Q(α)) ≤ M 1 ln 2 ln( Cn n 2 |S n-1 | γ ) f L ∞ (ω∩Q(α)) .
By using the analyticity of the function f , we observe that
(3.51) ∀z ∈ C, f (x α + √ nLzσ 0 ) = β∈N n (∂ β x f )(x α ) β! σ β 0 n |β| 2 L |β| z |β| .
By using that Q(α) is a good cube, x α ∈ Q(α) and the continuity of the functions ∂ β x f , we deduce from (3.36) and (3.51) that for all |z| ≤ 4,
(3.52) |f (x α + √ nLzσ 0 )| ≤ β∈N n |(∂ β x f )(x α )| β! (4 √ nL) |β| ≤ Cn (δ)e δ -1 √ N β∈N n |β|! β! δL 2 9 n(2 n + 1) |β| f L 2 (Q(α))
.
By using anew that the number of solutions to the equation
β 1 + ... + β n = k, with k ≥ 0, n ≥ 1 and unknown β = (β 1 , ..., β n ) ∈ N n ,
β∈N n |β|! β! δL 2 9 n(2 n + 1) |β| ≤ β∈N n δL 2 9 n 3 (2 n + 1) |β| = +∞ k=0 k + n -1 k δL 2 9 n 3 (2 n + 1) k ≤ 2 n-1 +∞ k=0 δL 2 11 n 3 (2 n + 1) k .
For now on, the positive parameter δ > 0 is fixed and taken to be equal to The positive constant C > 1 given by Lemma 3.1 may be chosen such that
(3.56) Cn n 2 |S n-1 | > 1.
With this choice, we deduce from (3.50) and (3.55) that
(3.57) f L ∞ (Q(α)) ≤ Cn n 2 |S n-1 | γ ln((4L) n 2 Cn(δ -1 n L -1 )) ln 2 + δn ln 2 L √ N f L ∞ (ω∩Q(α)) .
Recalling from the thickness property (3.27) that |ω ∩ Q(α)| ≥ γL n > 0 and setting
(3.58) ωα = x ∈ ω ∩ Q(α) : |f (x)| ≤ 2 |ω ∩ Q(α)| ω∩Q(α) |f (x)|dx , we observe that (3.59) ω∩Q(α) |f (x)|dx ≥ (ω∩Q(α))\ωα |f (x)|dx ≥ 2|(ω ∩ Q(α)) \ ωα | |ω ∩ Q(α)| ω∩Q(α) |f (x)|dx.
By using that the integral
ω∩Q(α) |f (x)|dx > 0,
is positive, since f is a non-zero entire function and |ω ∩ Q(α)| > 0, we obtain that
|(ω ∩ Q(α)) \ ωα | ≤ 1 2 |ω ∩ Q(α)|, which implies that (3.60) |ω α | = |ω ∩ Q(α)| -|(ω ∩ Q(α)) \ ωα | ≥ 1 2 |ω ∩ Q(α)| ≥ 1 2 γL n > 0,
thanks anew to the thickness property (3.27). By using again spherical coordinates as in (3.39) and (3.40), we observe that
(3.61) |ω α | = |ω α ∩ Q(α)| = n n 2 L n 1 0 S n-1 1l ωα∩Q(α) (x α + √ nLrσ)dσ r n-1 dr ≤ n n 2 L n S n-1 | Ĩσ |dσ, where (3.62) Ĩσ = {r ∈ [0, 1] : x α + √ nLrσ ∈ ωα ∩ Q(α)}.
As in (3.42), the estimate (3.61) implies that there exists σ 0 ∈ S n-1 such that
(3.63) |ω α | ≤ n n 2 L n |S n-1 || Ĩσ 0 |. We deduce from (3.60) and (3.63) that (3.64) | Ĩσ 0 | ≥ γ 2n n 2 |S n-1 | > 0. Applying anew Lemma 3.1 with I = [0, 1], E = Ĩσ 0 ⊂ [0, 1] verifying |E| = | Ĩσ 0 | > 0,
sup x∈[0,1] |f (x α + √ nLxσ 0 )| f L 2 (Q(α)) ≤ C | Ĩσ 0 | ln M ln 2 L n 2 sup x∈ Ĩσ 0 |f (x α + √ nLxσ 0 )| f L 2 (Q(α)) ,
where M denotes the constant defined in (3.46). It follows from (3.64) and (3.65) that (3.66) sup
x∈[0,1] |f (x α + √ nLxσ 0 )| ≤ 2Cn n 2 |S n-1 | γ ln M ln 2 sup x∈ Ĩσ 0 |f (x α + √ nLxσ 0 )| ≤ M 1 ln 2 ln( 2Cn n 2 |S n-1 | γ ) sup x∈ Ĩσ 0 |f (x α + √ nLxσ 0 )|.
According to (3.62), we notice that
(3.67) sup x∈ Ĩσ 0 |f (x α + √ nLxσ 0 )| ≤ f L ∞ (ωα∩Q(α)) .
It follows from (3.49), (3.66) and (3.67) that
(3.68) f L ∞ (Q(α)) ≤ M 1 ln 2 ln( 2Cn n 2 |S n-1 | γ ) f L ∞ (ωα∩Q(α)) .
On the other hand, it follows from (3.58)
(3.69) f L ∞ (ωα∩Q(α)) ≤ 2 |ω ∩ Q(α)| ω∩Q(α) |f (x)|dx.
We deduce from (3.68), (3.69) and the Cauchy-Schwarz inequality that
f L 2 (Q(α)) ≤ L n 2 f L ∞ (Q(α)) (3.70) ≤ 2L n 2 |ω ∩ Q(α)| M 1 ln 2 ln( 2Cn n 2 |S n-1 | γ ) ω∩Q(α) |f (x)|dx ≤ 2L n 2 |ω ∩ Q(α)| 1 2 M 1 ln 2 ln( 2Cn n 2 |S n-1 | γ ) f L 2 (ω∩Q(α)) .
By using the thickness property (3.27), it follows from (3.55), (3.56) and (3.70)
(3.71) f 2 L 2 (Q(α)) ≤ 4 γ M 2 ln 2 ln( 2Cn n 2 |S n-1 | γ ) f 2 L 2 (ω∩Q(α)) ≤ 4 γ (4L) n 2 Cn (δ -1 n L -1 )e δnL √ N 2 ln 2 ln( 2Cn n 2 |S n-1 | γ ) f 2 L 2 (ω∩Q(α)) . With (3.72) κ n (L, γ) = 2 3 2 γ 1 2 2Cn n 2 |S n-1 | γ ln((4L) n 2 Cn(δ -1 n L -1 )) ln 2
> 0, we deduce from (3.71) that there exists a positive universal constant κn > 0 such that for any good cube Q(α),
(3.73) f 2 L 2 (Q(α)) ≤ 1 2 κ n (L, γ) 2 κn γ 2κnL √ N f 2 L 2 (ω∩Q(α)) .
It follows from (3.34) and (3.73) that
(3.74) f 2 L 2 (R n ) ≤ 2 good cubes Q(α) |f (x)| 2 dx = 2 good cubes f 2 L 2 (Q(α)) ≤ κ n (L, γ) 2 κn γ 2κnL √ N good cubes f 2 L 2 (ω∩Q(α)) ≤ κ n (L, γ) 2 κn γ 2κnL √ N ω∩( good cubes Q(α)) |f (x)| 2 dx ≤ κ n (L, γ) 2 κn γ 2κnL √ N f 2 L 2 (ω) .
This ends the proof of assertion (iii) in Theorem 2.1.
Appendix
4.1. Hermite functions. This section is devoted to set some notations and recall basic facts about Hermite functions. The standard Hermite functions (φ k ) k≥0 are defined for x ∈ R,
(4.1) φ k (x) = (-1) k 2 k k! √ π e x 2 2 d k dx k (e -x 2 ) = 1 2 k k! √ π x - d dx k (e -x 2 2 ) = a k + φ 0 √ k! ,
where a + is the creation operator
a + = 1 √ 2 x - d dx .
The Hermite functions satisfy the identity
(4.2) ∀ξ ∈ R, ∀k ≥ 0, φ k (ξ) = (-i) k √ 2πφ k (ξ),
when using the normalization of the Fourier transform (1.1). The L 2 -adjoint of the creation operator is the annihilation operator
a -= a * + = 1 √ 2 x + d dx .
The following identities hold
(4.3) [a -, a + ] = Id, - d 2 dx 2 + x 2 = 2a + a -+ 1, (4.4) ∀k ∈ N, a + φ k = √ k + 1φ k+1 , ∀k ∈ N, a -φ k = √ kφ k-1 (= 0 si k = 0), (4.5) ∀k ∈ N, - d 2 dx 2 + x 2 φ k = (2k + 1)φ k
, where N denotes the set of non-negative integers. The family (φ k ) k∈N is an orthonormal basis of L 2 (R). We set for α
= (α j ) 1≤j≤n ∈ N n , x = (x j ) 1≤j≤n ∈ R n , (4.6) Φ α (x) = n j=1 φ α j (x j ).
The family (Φ α ) α∈N n is an orthonormal basis of L 2 (R n ) composed of the eigenfunctions of the n-dimensional harmonic oscillator
(4.7) H = -∆ x + |x| 2 = k≥0 (2k + n)P k , Id = k≥0 P k , where P k is the orthogonal projection onto Span C {Φ α } α∈N n ,|α|=k , with |α| = α 1 + • • • + α n .
The following estimates on Hermite functions are a key ingredient for the proof of the spectral inequalities (i) and (ii) in Theorem 2.1. This result was established by Bonami, Karoui and the second author in the proof of [6, Theorem 3.2], and is recalled here for the sake of completeness of the present work.
Lemma 4.1. The one-dimensional Hermite functions (φ k ) k∈N defined in (4.1) satisfy the following estimates:
∀k ∈ N, ∀a ≥ √ 2k + 1, |x|≥a |φ k (x)| 2 dx ≤ 2 k+1 k! √ π a 2k-1 e -a 2 .
Proof. For any k ∈ N, the k th Hermite polynomial function
(4.8) H k (x) = (-1) k e x 2 d dx k (e -x 2 ),
has degree k and is an even (respectively odd) function when k is an even (respectively odd) non-negative integer. The first Hermite polynomial functions are given by (4.9)
H 0 (x) = 1, H 1 (x) = 2x, H 2 (x) = 4x 2 -2.
The k th Hermite polynomial function H k admits k distinct real simple roots. More specifically, we recall from [44, Section 6.31] that the k roots of
H k denoted -x [ k 2 ],k , ..., -x 1,k , x 1,k , ..., x [ k 2 ],k , satisfy (4.10) - √ 2k + 1 ≤ -x [ k 2 ],k < ... < -x 1,k < 0 < x 1,k < ... < x [ k 2 ],k ≤ √ 2k + 1, with [ k 2 ]
the integer part of k 2 , when k ≥ 2 is an even positive integer. On the other hand, the k roots of
H k denoted -x [ k 2 ],k , ..., -x 1,k , x 0,k , x 1,k , ..., x [ k 2 ],k , satisfy (4.11) - √ 2k + 1 ≤ -x [ k 2 ],k < ... < -x 1,k < x 0,k = 0 < x 1,k < ... < x [ k 2 ],k ≤ √ 2k + 1,
when k is an odd positive integer. We denote by z k the largest non-negative root of the k th Hermite polynomial function H k , that is, with the above notations
z k = x [ k 2 ],k , when k ≥ 1. Relabeling temporarily (a j ) 1≤j≤k the k roots of H k such that a 1 < a 2 < ... < a k . The classical formula (4.12) ∀k ∈ N * , H ′ k (x) = 2kH k-1 (x),
see e.g. [44, Section 5.5], together with Rolle's Theorem imply that H k-1 admits exactly one root in each of the k -1 intervals (a j , a j+1 ), with 1 ≤ j ≤ k -1, when k ≥ 2. According to (4.9), (4.10) and (4.11), it implies in particular that for all k ≥ 1,
(4.13) 0 = z 1 < z 2 < ... < z k ≤ √ 2k + 1.
Next, we claim that
(4.14) ∀k ≥ 1, ∀|x| ≥ z k , |H k (x)| ≤ 2 k |x| k .
To that end, we first observe that
(4.15) ∀k ≥ 1, ∀x ≥ z k , H k (x) ≥ 0, since the leading coefficient of H k ∈ R[X]
is given by 2 k > 0. As the polynomial function H k is an even or odd function, we notice from (4.15) that it is actually sufficient to establish that
(4.16) ∀k ≥ 1, ∀x ≥ z k , H k (x) ≤ 2 k x k ,
to prove the claim. The estimates (4.16) are proved by recurrence on k ≥ 1. Indeed, we observe from (4.9) that ∀x ≥ z 1 = 0, H 1 (x) = 2x.
Let k ≥ 2 such that the estimate (4. 16) is satisfied at rank k -1. It follows from (4.12) for all x ≥ z k , (4.17)
H k (x) = H k (x) -H k (z k ) = x z k H ′ k (t)dt = 2k x z k H k-1 (t)dt ≤ 2k x z k 2 k-1 t k-1 dt = 2 k (x k -z k k ) ≤ 2 k x k , since 0 ≤ z k-1 < z k .
This ends the proof of the claim (4.14). We deduce from (4.9), (4.13) and (4.14) that
(4.18) ∀k ∈ N, ∀|x| ≥ √ 2k + 1, |H k (x)| ≤ 2 k |x| k .
It follows from (4.1), (4.8) and (4.18) that
(4.19) ∀k ∈ N, ∀|x| ≥ √ 2k + 1, |φ k (x)| ≤ 2 k 2 √ k!π 1 4 |x| k e -x 2 2 .
We observe that
(4.20) ∀a > 0, +∞ a e -t 2 dt ≤ a -1 e -a 2 2 +∞ a te -t 2 2 dt = a -1 e -a 2 and (4.21) ∀α > 1, ∀a > √ α -1, +∞ a t α e -t 2 dt ≤ a α-1 e -a 2 2 +∞ a te -t 2 2 dt = a α-1 e -a 2 ,
as the function (a, +∞) ∋ t → t α-1 e -t 2 2 ∈ (0, +∞) is decreasing on (a, +∞). We deduce from (4.19), (4.20) and (4.21) that
(4.22) ∀k ∈ N, ∀a ≥ √ 2k + 1, |x|≥a |φ k (x)| 2 dx ≤ 2 k k!π 1 2 |x|≥a x 2k e -x 2 dx = 2 k+1 k!π 1 2 x≥a x 2k e -x 2 dx ≤ 2 k+1 k!π 1 2 a 2k-1 e -a 2 .
This ends the proof of Lemma 4.1.
We consider E N = Span C {Φ α } α∈N n ,|α|≤N the finite dimensional vector space spanned by all the Hermite functions Φ α with |α| ≤ N . The following lemma is also instrumental in the proof of Theorem 2.1 : Lemma 4.2. There exists a positive constant c n > 0 depending only on the dimension n ≥ 1 such that
∀N ∈ N, ∀f ∈ E N , |x|≥cn √ N +1 |f (x)| 2 dx ≤ 1 4 f 2 L 2 (R n ) ,
where | • | denotes the Euclidean norm on R n .
Proof. Let N ∈ N. We deduce from Lemma 4.1 and the Cauchy-Schwarz inequality that the one-dimensional Hermite functions (φ k ) k∈N satisfy for all 0 ≤ k, l ≤ N and a ≥ √ 2N + 1, (4.23)
|t|≥a |φ k (t)φ l (t)|dt ≤ |t|≥a |φ k (t)| 2 dt 1 2 |t|≥a |φ l (t)| 2 dt 1 2 ≤ 2 k+l 2 +1 √ π √ k! √ l! a k+l-1 e -a 2 .
In order to extend these estimates in the multi-dimensional setting, we first notice that for all a > 0, α, β
|x j |≥ a √ n |Φ α (x)Φ β (x)|dx = |x j |≥ a √ n |φ α j (x j )φ β j (x j )|dx j 1≤k≤n k =j R |φ α k (x k )φ β k (x k )|dx k ≤ |x j |≥ a √ n |φ α j (x j )φ β j (x j )|dx j 1≤k≤n k =j φ α k L 2 (R) φ β k L 2 (R) ,
implies that for all a ≥ √ n
√ 2N + 1, α, β ∈ N n ,
γ α γ β |x|≥a Φ α (x)Φ β (x)dx ≤ |α|≤N |β|≤N |γ α ||γ β | |x|≥a |Φ α (x)Φ β (x)|dx ≤ 2 n π e -a 2 n a |α|≤N, |β|≤N 1≤j≤n
|γ α ||γ β | α j ! β j ! 2 n a α j +β j .
For any α = (α 1 , ..., α n ) ∈ N n , we denote α ′ = (α 2 , ..., α n ) ∈ N n-1 when n ≥ 2. We observe that (4.27)
|α|≤N |β|≤N |γ α ||γ β | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 = |α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ ||γ β 1 ,β ′ | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 and
(4.28)
0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ ||γ β 1 ,β ′ | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 ≤ 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | ( 2a 2 n ) α 1 +β 1 α 1 !β 1 ! 1 2 ,
thanks to the Cauchy-Schwarz inequality. On the other hand, we notice that (4.29)
0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | ( 2a 2 n ) α 1 +β 1 α 1 !β 1 ! 1 2 ≤ 4 N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | ( a 2 2n ) α 1 +β 1 α 1 !β 1 ! 1 2 ≤
|γ α ||γ β | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 ≤ 4 N e a 2 2n |α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 .
The Cauchy-Schwarz inequality implies that (4.31)
|α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 ≤ |α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 |α ′ |≤N |β ′ |≤N 1 1 2 .
By using that the family (Φ α ) α∈N n is an orthonormal basis of L 2 (R n ) and that the number of solutions to the equation α 2 + ... + α n = k, with k ≥ 0, n ≥ 2 and unknown α ′ = (α 2 , ..., α n ) ∈ N n-1 , is given by k+n-2 n-2 , we deduce from (4.31) that (4.32)
|α ′ |≤N |β ′ |≤N 0≤α 1 ≤N -|α ′ | 0≤β 1 ≤N -|β ′ | |γ α 1 ,α ′ | 2 |γ β 1 ,β ′ | 2 1 2 ≤ |α|≤N |γ α | 2 |α ′ |≤N 1 = N k=0 k + n -2 n -2 f 2 L 2 (R n ) ≤ 2 n-2 N k=0 2 k f 2 L 2 (R n ) ≤ 2 N +n-1 f 2 L 2 (R n ) , since k+n-2 n-2 ≤ k+n-
|γ α ||γ β | √ α 1 ! √ β 1 ! 2 n a α 1 +β 1 ≤ 2 n-1 8 N e a 2 2n f 2 L 2 (R n ) ,
when n ≥ 2. Notice that the very same estimate holds true as well in the one-dimensional case n = 1. We deduce from (4.26) and (4.33) that for all
N ∈ N, f ∈ E N and a ≥ √ n √ 2N + 1, (4.34) |x|≥a |f (x)| 2 dx ≤ 2 n n 3 2 √ π e -a 2 2n a 8 N f 2 L 2 (R n ) .
It follows from (4.34) that there exists a positive constant c n > 0 depending only on the dimension n ≥ 1 such that
∀N ∈ N, ∀f ∈ E N , |x|≥cn √ N +1 |f (x)| 2 dx ≤ 1 4 f 2 L 2 (R n ) .
This ends the proof of Lemma 4.2.
4.2.
Bernstein type and weighted L 2 estimates for Hermite functions. This section is devoted to the proof of the following Bernstein type and weighted L 2 estimates for Hermite functions:
Proposition 4.3. With E N the finite dimensional vector space spanned by the Hermite functions (Φ α ) |α|≤N defined in (2.1), finite combinations of Hermite functions satisfy the following estimates:
(i) ∀N ∈ N, ∀f ∈ E N , ∀0 < δ ≤ 1, ∀β ∈ N n , ∂ β x f L 2 (R n ) ≤ e e 2δ 2 (2δ) |β| |β|!e δ -1 √ N f L 2 (R n ) . (ii) ∀N ∈ N, ∀f ∈ E N , ∀0 < δ < 1 32n , ∀β ∈ N n , e δ|x| 2 ∂ β x f L 2 (R n ) + e δ|Dx| 2 x β f L 2 (R n ) ≤ 2 n 1 -32nδ 2 N 2 2 3 2 |β| |β|! f L 2 (R n ) .
Proof. We notice that (4.35)
x j = 1 √ 2 (a j,+ + a j,-), ∂ x j = 1 √ 2 (a j,--a j,+ ), with a j,+ = 1 √ 2 (x j -∂ x j ), a j,-= 1 √ 2 (x j + ∂ x j ).
By denoting (e j ) 1≤j≤n the canonical basis of R n , we obtain from (4.4) and (4.35) that for all N ∈ N and f ∈ E N ,
a j,+ f 2 L 2 (R n ) = a j,+ |α|≤N f, Φ α L 2 Φ α 2 L 2 (R n ) = |α|≤N α j + 1 f, Φ α L 2 Φ α+e j 2 L 2 (R n ) = |α|≤N (α j + 1)| f, Φ α L 2 | 2 ≤ (N + 1) |α|≤N | f, Φ α L 2 | 2 = (N + 1) f 2 L 2 (R n ) and a j,-f 2 L 2 (R n ) = a j,- |α|≤N f, Φ α L 2 Φ α 2 L 2 (R n ) = |α|≤N √ α j f, Φ α L 2 Φ α-e j 2 L 2 (R n ) = |α|≤N α j | f, Φ α L 2 | 2 ≤ N |α|≤N | f, Φ α L 2 | 2 = N f 2 L 2 (R n ) .
It follows that for all N ∈ N and f ∈ E N , (4.36)
x j f L 2 (R n ) ≤ 1 √ 2 ( a j,+ f L 2 (R n ) + a j,-f L 2 (R n ) ) ≤ √ 2N + 2 f L 2 (R n ) and (4.37) ∂ x j f L 2 (R n ) ≤ 1 √ 2 ( a j,+ f L 2 (R n ) + a j,-f L 2 (R n ) ) ≤ √ 2N + 2 f L 2 (R n ) .
We notice from (4.4) and (4.35) that
∀N ∈ N, ∀f ∈ E N , ∀α, β ∈ N n , x α ∂ β x f ∈ E N +|α|+|β| , with x α = x α 1 1 ...x αn n and ∂ β x = ∂ β 1 x 1 ...∂ βn xn .
We deduce from (4.36) that for all N ∈ N, f ∈ E N , and α, β ∈ N n , with α 1 ≥ 1,
x α ∂ β x f L 2 (R n ) = x 1 ( x α-e 1 ∂ β x f ∈E N+|α|+|β|-1 ) L 2 (R n ) ≤ √ 2 N + |α| + |β| x α-e 1 ∂ β x f L 2 (R n ) .
By iterating the previous estimates, we readily obtain from (4.36) and (4.37) that for all
N ∈ N, f ∈ E N and α, β ∈ N n , (4.38) x α ∂ β x f L 2 (R n ) ≤ 2 |α|+|β| 2 (N + |α| + |β|)! N ! f L 2 (R n ) .
We recall the following basic estimates,
(4.39) ∀k ∈ N * , k k ≤ e k k!, ∀t, A > 0, t A ≤ A A e t-A , ∀t > 0, ∀k ∈ N, t k ≤ e t k
≤ (2δ) |α|+|β| (δ -1 √ N ) |α|+|β| ≤ (2δ) |α|+|β| (|α| + |β|) |α|+|β| e δ -1 √ N -|α|-|β| ≤ (2δ) |α|+|β| (|α| + |β|)!e δ -1 √ N .
It follows from (4.38), (4.40) and (4.41) that for all N ∈ N, f ∈ E N and α, β ∈ N n , (4.42)
x α ∂ β x f L 2 (R n ) ≤ e e 2δ 2 (2δ) |α|+|β| (|α| + |β|)!e δ -1 √ N f L 2 (R n ) .
It provides in particular the following Bernstein type estimates
(4.43) ∀N ∈ N, ∀f ∈ E N , ∀0 < δ ≤ 1, ∀β ∈ N n , ∂ β x f L 2 (R n ) ≤ e e 2δ 2 (2δ) |β| |β|!e δ -1 √ N f L 2 (R n ) .
On the other hand, we deduce from (4.38) that for all N ∈ N, f ∈ E N and α, β ∈ N n , (4.44)
x α ∂ β x f L 2 (R n ) ≤ 2 |α|+|β| 2 (N + |α| + |β|)! N ! f L 2 (R n ) ≤ 2 N 2 2 |α|+|β| (|α| + |β|)! f L 2 (R n ) , since (k 1 + k 2 )! k 1 !k 2 ! = k 1 + k 2 k 1 ≤ k 1 +k 2 j=0 k 1 + k 2 j = 2 k 1 +k 2 .
We observe from (4.44) that for all N ∈ N, f ∈ E N , δ > 0 and α, β ∈ N n , (4.45)
δ |α| x 2α α! ∂ β x f L 2 (R n ) ≤ 2 N 2 δ |α| 2 2|α|+|β| α! (2|α| + |β|)! f L 2 (R n ) ≤ 2 N 2 δ |α| 2 4|α|+ 3 2 |β| |α|! α! |β|! f L 2 (R n ) ≤ 2 N 2 (16nδ) |α| 2 3 2 |β| |β|! f L 2 (R n ) , since (2|α| + |β|)! ≤ 2 2|α|+|β| (2|α|)!|β|! ≤ 2 4|α|+|β| (|α|!) 2 |β|! and (4.46) |α|! ≤ n |α| α!.
The last estimate is a direct consequence of the generalized Newton formula
∀x = (x 1 , ..., x n ) ∈ R n , ∀N ∈ N, n j=1 x j N = α∈N n ,|α|=N N ! α! x α .
By using that the number of solutions to the equation α 1 + ... + α n = k, with k ≥ 0, n ≥ 1 and unknown α = (α 1 , ..., α n ) ∈ N n , is given by k+n-1 n-1 , it follows from (4.45) that for all
N ∈ N, f ∈ E N , 0 < δ < 1 32n and β ∈ N n , e δ|x| 2 ∂ β x f L 2 (R n ) ≤ α∈N n δ |α| x 2α α! ∂ β x f L 2 (R n ) (4.47) ≤ 2 N 2 α∈N n (16nδ) |α| 2 3 2 |β| |β|! f L 2 (R n ) = 2 N 2 +∞ k=0 k + n -1 n -1 (16nδ) k 2 3 2 |β| |β|! f L 2 (R n ) ≤ 2 n-1 1 -32nδ 2 N 2 2 3 2 |β| |β|! f L 2 (R n ) , since k+n-1 n-1 ≤ k+n-1 j=0 k+n-1 j = 2 k+n-1
N ∈ N, f ∈ E N , 0 < δ < 1 32n and β ∈ N n , (4.48) e δ|Dx| 2 x β f L 2 (R n ) = 1 (2π) n 2 e δ|ξ| 2 ∂ β ξ f L 2 (R n ) ≤ 1 (2π) n 2 2 n-1 1 -32nδ 2 N 2 2 3 2 |β| |β|! f L 2 (R n ) = 2 n-1 1 -32nδ 2 N 2 2 3 2 |β| |β|! f L 2 (R n ) .
This ends the proof of Proposition 4.3.
4.3.
Gelfand-Shilov regularity. We refer the reader to the works [START_REF] Gelfand | Generalized Functions II[END_REF][START_REF] Gramchev | Classes of degenerate elliptic operators in Gelfand-Shilov spaces[END_REF][START_REF] Nicola | Global pseudo-differential calculus on Euclidean spaces, Pseudo-Differential Operators[END_REF][START_REF] Toft | Decompositions of Gelfand-Shilov kernels into kernels of similar class[END_REF] and the references herein for extensive expositions of the Gelfand-Shilov regularity theory. The Gelfand-Shilov spaces S µ ν (R n ), with µ, ν > 0, µ+ν ≥ 1, are defined as the spaces of smooth functions f ∈ C ∞ (R n ) satisfying the estimates
∃A, C > 0, |∂ α x f (x)| ≤ CA |α| (α!) µ e -1 A |x| 1/ν , x ∈ R n , α ∈ N n , or, equivalently ∃A, C > 0, sup x∈R n |x β ∂ α x f (x)| ≤ CA |α|+|β| (α!) µ (β!) ν , α, β ∈ N n .
These Gelfand-Shilov spaces S µ ν (R n ) may also be characterized as the spaces of Schwartz functions f ∈ S (R n ) satisfying the estimates
∃C > 0, ε > 0, |f (x)| ≤ Ce -ε|x| 1/ν , x ∈ R n , | f (ξ)| ≤ Ce -ε|ξ| 1/µ , ξ ∈ R n .
In particular, we notice that Hermite functions belong to the symmetric Gelfand-Shilov space S
1/2 1/2 (R n ). More generally, the symmetric Gelfand-Shilov spaces S µ µ (R n ), with µ ≥ 1/2, can be nicely characterized through the decomposition into the Hermite basis (Φ α ) α∈N n , see e.g. [45, Proposition 1.2],
f ∈ S µ µ (R n ) ⇔ f ∈ L 2 (R n ), ∃t 0 > 0, f, Φ α L 2 exp(t 0 |α| 1 2µ ) α∈N n l 2 (N n ) < +∞ ⇔ f ∈ L 2 (R n ), ∃t 0 > 0, e t 0 H 1 2µ f L 2 (R n ) < +∞,
where H = -∆ x + |x| 2 stands for the harmonic oscillator.
4.4. Remez inequality. The classical Remez inequality [START_REF] Remez | Sur une propriété des polynômes de Tchebycheff[END_REF], see also [START_REF] Erdélyi | The Remez inequality on the size of polynomials[END_REF][START_REF] Erdélyi | Remez-type inequalities and their applications[END_REF], is the following estimate providing a bound on the maximum of the absolute value of an arbitrary real polynomial function P ∈ R The Remez inequality was extended in the multi-dimensional case in [START_REF] Brudnyi | A certain extremal problem for polynomials in n variables, (Russian)[END_REF], see also [START_REF] Ganzburg | Polynomial inequalities on measurable sets and their applications[END_REF]Formula (4.1)] and [START_REF] Kroó | Some extremal problems for multivariate polynomials on convex bodies[END_REF], as follows: for all convex bodies2 K ⊂ R n , measurable subsets E ⊂ K of positive Lebesgue measure 0 < |E| < |K| and real polynomial functions P ∈ R[X 1 , ..., X n ] of degree d, the following estimate holds Thanks to this estimate, we can prove that the L 2 -norm • L 2 (ω) on any measurable subset ω ⊂ R n , with n ≥ 1, of positive Lebesgue measure |ω| > 0 defines a norm on the finite dimensional vector space E N defined in (2.1). Indeed, let f be a function in E N verifying f L 2 (ω) = 0, with ω ⊂ R n a measurable subset of positive Lebesgue measure |ω| > 0. According to (4.1) and (4.6), there exists a complex polynomial function P ∈ C[X 1 , ..., X n ] such that ∀(x 1 , ..., x n ) ∈ R n , f (x 1 , ..., x n ) = P (x 1 , ..., x n )e -x 2 1 +...+x 2 n 2 . The condition f L 2 (ω) = 0 first implies that f = 0 almost everywhere in ω, and therefore that P = 0 almost everywhere in ω. We deduce from (4.55) that the polynomial function P has to be zero on any convex body K verifying |K ∩ ω| > 0, and therefore is zero everywhere. We conclude that the L 2 -norm • L 2 (ω) actually defines a norm on the finite dimensional vector space E N .
On the other hand, the Remez inequality is a key ingredient in the proof of the following instrumental lemma needed for the proof of Theorem 2.1: Lemma 4.4. Let R > 0 and ω ⊂ R n be a measurable subset verifying |ω ∩ B(0, R)| > 0. Then, the following estimate holds for all complex polynomial functions P ∈ C[X 1 , ..., X n ] of degree d,
P L 2 (B(0,R)) ≤ 2 2d+1 √ 3 4|B(0, R)| |ω ∩ B(0, R)| 1 + (1 -|ω∩B(0,R)| 4|B(0,R)| ) 1 n 1 -(1 -|ω∩B(0,R)| 4|B(0,R)| ) 1 n d P L 2 (ω∩B(0,R)) ,
where B(0, R) denotes the open Euclidean ball in R n centered in 0 with radius R > 0.
Proof. Let P ∈ C[X 1 , ..., X n ] be a non-zero complex polynomial function of degree d and R > 0. We consider the following subset This ends the proof of Lemma 4.4.
where B(y ′ , r) denotes the open Euclidean ball centered in y ′ with radius r > 0. It allows to derive the null-controllability of parabolic equations associated with accretive quadratic operators with zero singular spaces in any positive time T > 0 from any open subset ω of R n satisfying (1.21).
3. 1 .
1 Case when the control subset is a non-empty open set. Let ω ⊂ R n be a nonempty open set. There exist x 0 ∈ R n and r > 0 such that the control subset ω contains the open Euclidean ball B(x 0 , r) centered at x 0 with radius r > 0, (3.1) B(x 0 , r) ⊂ ω.
1 2 N
2 ln(N +1)+CN P L 2 (B(x 0 ,r)) .It follows from (3.5), (3.6) and (3.14) that for all N ∈ N such that c n √ N + 1 > 2|x 0 | + r and for all f ∈ E N ,(3.15)
Lemma 3 . 1 .
31 ([30, Lemma 1]). Let I ⊂ R be an interval of length 1 such that 0 ∈ I and E ⊂ I be a subset of positive measure |E| > 0. There exists a positive constant C > 1 such that for all analytic function Φ on the open ball B(0, 5) centered in zero with radius 5 such that |Φ(0)| ≥ 1, then sup x∈I |Φ(x)| ≤ C |E| ln M ln 2 sup x∈E |Φ(x)|, with M = sup |z|≤4 |Φ(z)| ≥ 1. Applying Lemma 3.1 with I = [0, 1], E = I σ 0 ⊂ [0, 1] verifying |E| = |I σ 0 | > 0 according to (3.43), and the analytic function Φ = φ defined in (3.44) satisfying |φ(0)| ≥ 1, we obtain that (3.45) L n 2
δ n = 2 2
2 11 n 3 (2 n + 1) > 0 With this choice, it follows from (3.46), (3.52), (3.53) and (3
and the analytic function Φ = φ defined in (3.44) satisfying |φ(0)| ≥ 1, we obtain that (3.65) L n 2
2 - 1 )
21 [X] of degree d on [-1, 1] by the maximum of its absolute value on any measurable subsetE ⊂ [-1, 1] of positive Lebesgue measure 0 < |E| < 2, k (dk -1)! k!(d -2k)! 2 d-2k X d-2k = k X d-2k , see e.g. [7, Chapter 2], where [x] stands the integer part of x, denotes the d th Chebyshev polynomial function of first kind. We also recall from [7, Chapter 2] the definition of Chebyshev polynomial functions of second kind (4.51)∀d ∈ N, U d (X) = [ d 2 ] k=0 (-1) k dk k 2 d-2k X d-2kand (4.52) ∀d ∈ N * , U d-1 (
By recalling that all the zeros of the Chebyshev polynomial functions of first and second kind are simple and contained in the set ] -1, 1[, we observe from (4.50) and (4.52) that the function T d is increasing on [1, +∞) and that(4.54) ∀d ∈ N, ∀x ≥ 1, 1 = T d (1) ≤ T d (x) = 1) k (x + 1) k x d-2kwe deduce from (4.53) and (4.54) that for all convex bodies K ⊂ R n , measurable subsets E ⊂ K of positive Lebesgue measure 0 < |E| < |K|, and complex polynomial functions P ∈ C[X 1 , ..., X n ] of degree d,
( 4 .( 4 1l≥ 2 - 2 ≤ 2
44222 56)E ε = x ∈ B(0, R) : |P (x)| ≤ 2 -2d-1for all 0 < ε ≤ B(0, R), and F the decreasing function (4.57)∀0 < t ≤ 1, F (t) = 1 + (1t) that |E ε | < |B(0, R)|.We first check that the Lebesgue measure of this subset satisfies|E ε | ≤ ε. If |E ε | > 0, it follows from (4We obtain from (4.58) that (4.59)F ε |B(0, R)| ≤ F |E ε | |B(0, R)| .As F is a decreasing function, we deduce from (4.59) that(4.60) ∀0 < ε ≤ B(0, R), |E ε | ≤ ε.Let ω ⊂ R n be a measurable subset verifying |ω ∩ B(0, R)| > 0. We consider the positive parameterG ε 0 = x ∈ B(0, R) : |P (x)| > 2 -2d-Gε 0 (x)|P (x)| 2 dx ε 0 |.We deduce from (4.56), (4.60) and (4.62) that|ω ∩ G ε 0 | = |G ε 0 |x ∈ B(0, R) \ ω : |P (x)| > 2 -2d-1 F ε 0 |B(0, R)| -d sup B(0,R) |P | ≥ (|B(0, R)|-|E ε 0 |)-|B(0, R)\ω| ≥ |B(0, R)|-1 4 |ω ∩B(0, R)|-(|B(0, R)|-|ω ∩B(0, R)|), that is (4.64) |ω ∩ G ε 0 | ≥ 3 4 |ω ∩ B(0, R)| > 0.It follows from (4.61), (4.63) and (4.64) that (4.65)P 2 L 2 (B(0,R)) ≤ |B(0, R)| sup B(0,R) |P | 4d+2 4|B(0, R)| 3|ω ∩ B(0, R)| F |ω ∩ B(0, R)| 4|B(0, R)| 2d ω∩B(0,R) |P (x)| 2 dx.We deduce from (4.65) that (4.66) P L 2 (B(0,R)) ≤ 2 2d+1 √ 3 4|B(0, R)| |ω ∩ B(0, R)| F |ω ∩ B(0, R)| 4|B(0, R)| d P L 2 (ω∩B(0,R)) .
∈ N n , |α|, |β| ≤ N , denotes the Euclidean norm on R n and (Φ α ) α∈N n stand for the n-dimensional Hermite functions defined in (4.6). On the other hand, we notice from (4.23) and (4.24) that
n
(4.24) |x|≥a |Φ α (x)Φ β (x)|dx ≤ j=1 |x j |≥ a √ n |Φ α (x)Φ β (x)|dx,
where | • |
) k∈N is an orthonormal basis of L 2 (R). For any f = |α|≤N γ α Φ α ∈ E N and
since (φ k a ≥ √ n √ 2N + 1, we deduce from (4.25) that
(4.26) |x|≥a |f (x)| 2 dx = |α|≤N |β|≤N
|α|, |β| ≤ N ,
n
(4.25) j=1 |x j |≥ a √ n |φ α j (x j )φ β j (x j )|dx j
≤ 2 n π e -a 2 n a n j=1 1 α j ! β j ! 2 n a α j +β j ,
|x|≥a |Φ α (x)Φ β (x)|dx ≤
4 N e
a 2 2n .
It follows from (4.27), (4.28) and (4.29) that
(4.30)
|α|≤N |β|≤N
On the other hand, when N ≥ |α| + |β|, we deduce from (4.39) that
!, see e.g. [37] (formulas (0.3.12) and (0.3.14)) Let 0 < δ ≤ 1 be a positive constant. When N ≤ |α| + |β|, we deduce from (4.39) that (4.40) 2 |α|+|β| 2 (N + |α| + |β|)! N ! ≤ 2 |α|+|β| 2 |α|+|β| 2 ≤ (2 √ e) 2 |α|+|β| 2 (N + |α| + |β|)! N ! ≤ 2 |α|+|β| 2 (N + |α| + |β|) |α|+|β| 2 (4.41)
(N + |α| + |β|) |α|+|β| 2 ≤ 2 |α|+|β| (|α| + |β|) |α|+|β| (|α| + |β|)! = (2 √ e) |α|+|β| (|α| + |β|)! (|α| + |β|)! ≤ e e 2δ 2 (2δ) |α|+|β| (|α| + |β|)!.
. By noticing from (4.2) that f ∈ E N if and only if f ∈ E N , we deduce from the Parseval formula and (4.47) that for all
A compact convex subset of R n with non-empty interior. |
01766354 | en | [
"sdu.stu.gp"
] | 2024/03/05 22:32:13 | 2012 | https://hal.science/hal-01766354/file/doc00028875.pdf | Daniel R H O'connell
Jon P Ake
Fabian Bonilla
Pengcheng Liu
Roland Laforge
Dean Ostenaa
Strong Ground Motion Estimation
Introduction
At the time of its founding, only a few months after the great 1906 M 7.7 San Francisco Earthquake, the Seismological Society of America noted in their timeless statement of purpose "that earthquakes are dangerous chiefly because we do not take adequate precautions against their effects, whereas it is possible to insure ourselves against damage by proper studies of their geographic distribution, historical sequence, activities, and effects on buildings." Seismic source characterization, strong ground motion recordings of past earthquakes, and physical understanding of the radiation and propagation of seismic waves from earthquakes provide the basis to estimate strong ground motions to support engineering analyses and design to reduce risks to life, property, and economic health associated with earthquakes. When a building is subjected to ground shaking from an earthquake, elastic waves travel through the structure and the building begins to vibrate at various frequencies characteristic of the stiffness and shape of the building. Earthquakes generate ground motions over a wide range of frequencies, from static displacements to tens of cycles per second [Hertz (Hz)]. Most structures have resonant vibration frequencies in the 0.1 Hz to 10 Hz range. A structure is most sensitive to ground motions with frequencies near its natural resonant frequency. Damage to a building thus depends on its properties and the character of the earthquake ground motions, such as peak acceleration and velocity, duration, frequency content, kinetic energy, phasing, and spatial coherence. Strong ground motion estimation must provide estimates of all these ground motion parameters as well as realistic ground motion time histories needed for nonlinear dynamic analysis of structures to engineer earthquake-resistant buildings and critical structures, such as dams, bridges, and lifelines. Strong ground motion estimation is a relatively new science. Virtually every M > 6 earthquake in the past 35 years that provided new strong ground motion recordings produced a paradigm shift in strong motion seismology. The 1979 M 6.9 Imperial Valley, California, earthquake showed that rupture velocities could exceed shear-wave velocities over a significant portion of a fault, and produced a peak vertical acceleration > 1.5 g [START_REF] Spudich | Direct observation of rupture propagation during the 1979 Imperial Valley earthquake using a short baseline accelerometer array[END_REF]Archuleta;[START_REF] Archuleta | A faulting model for the 1979 Imperial Valley earthquake[END_REF]. The 1983 M 6.5 Coalinga, California, earthquake revealed a new class of seismic sources, blind thrust faults [START_REF] Stein | Seismicity and geometry of a 110-km-long blind thrust fault: 2. Synthesis of the 1982-1985 California earthquake sequence[END_REF]. The 1985 M 6.9 Nahanni earthquake produced horizontal accelerations of 1.2 g and a peak vertical acceleration > 2 g (Weichert et al., 1986). 
The 1989 M 7.0 Loma Prieta, California, earthquake occurred on an unidentified steeply-dipping fault adjacent to the San Andreas fault, with reverse-slip on half of the fault [START_REF] Hanks | The 1989 Loma Prieta, California, earthquake and its effects: Introduction to the Special Issue[END_REF], and produced significant damage > 100 km away related to critical reflections of shear-waves off the Moho [START_REF] Somerville | The influence of critical Moho reflections on strong ground motions recorded in San Francisco and Oakland during the 1989 Loma Prieta earthquake[END_REF][START_REF] Catchings | Reflected seismic waves and their effect on strong shaking during the 1989 Loma Prieta, California, earthquake[END_REF]. The 1992 M 7.0 Petrolia, California, earthquake produced peak horizontal accelerations > 1.4 g [START_REF] Oglesby | A faulting model for the 1992 Petrolia earthquake: Can extreme ground acceleration be a source effect?[END_REF]. The 1992 M 7.4 Landers, California, earthquake demonstrated that multisegment fault rupture could occur on fault segments with substantially different orientations that are separated by several km [START_REF] Li | Fine structure of the Landers fault zone; segmentation and the rupture process[END_REF]. The 1994 M 6.7 Northridge, California, earthquake produced a then world-record peak horizontal velocity (> 1.8 m/s) associated with rupture directivity (O'Connell, 1999a), widespread nonlinear soil responses [START_REF] Field | Nonlinear ground-motion amplification by sediments during the 1994 Northridge earthquake[END_REF][START_REF] Cultera | Nonlinear soil response in the vicinity of the Van Norman Complex following the 1994 Northridge, California, earthquake[END_REF], and resulted in substantial revision of existing ground motion-attenuation relationships [START_REF] Abrahamson | Overview[END_REF]. The 1995 M 6.9 Hyogoken Nanbu (Kobe) earthquake revealed that basin-edge generated waves can strongly amplify strong ground motions [START_REF] Kawase | The cause of the damage belt in Kobe: "The basin-edge effect," constructive interference of the direct S-wave with the basin-induced diffracted/Rayleigh waves[END_REF][START_REF] Pitarka | Three-dimensional simulation of the near-fault ground motions for the 1995 Hyogo-Nanbu (Kobe), Japan, earthquake[END_REF] and provided ground motion recordings demonstrating time-dependent nonlinear soil responses that amplified and extended the durations of strong ground motions [START_REF] Archuleta | Nonlinearity in observed and computed accelerograms[END_REF]. The 1999 M > 7.5 Izmit, Turkey, earthquakes produced asymmetric rupture velocities, including rupture velocities ~40% faster than shear-wave velocities, which may be associated with a strong velocity contrast across the faults [START_REF] Bouchon | How Fast is Rupture during an Earthquake? New Insights from the 1999 Turkey Earthquakes[END_REF]. The 1999 M 7.6 Chi-Chi, Taiwan, earthquake produced a world-record peak velocity > 3 m/s with unusually low peak accelerations [START_REF] Shin | A preliminary report of the 1999 Chi-Chi (Taiwan) earthquake[END_REF]. The 2001 M 7.7 Bhuj India demonstrated that M > 7.5 blind thrust earthquakes can occur in intraplate regions. The M 6.9 2008 Iwate-Miyagi, Japan, earthquake produced a current world-record peak vector acceleration > 4 g, with a vertical acceleration > 3.8 g (Aoi et al., 2008). 
The 2011 M 9.1 Tohoku, Japan, earthquake had a world-record peak slip on the order of 60 m (Shao et al., 2011) and produced a world-record peak horizontal acceleration of 2.7 g at > 60 km from the fault [START_REF] Nied | Off the Pacific Coast of Tohoku Earthquake, Strong Ground Motion[END_REF]. This progressive sequence of ground motion surprises suggests that the current state of knowledge in strong motion seismology is probably not adequate to make unequivocal strong ground motion predictions. However, with these caveats in mind, strong ground motion estimation provides substantial value by reducing risks associated with earthquakes and engineered structures. We present the current state of earthquake ground motion estimation. We start with seismic source characterization, because this is the most important and challenging part of the problem. To better understand the challenges of developing ground motion prediction equations (GMPE) using strong motion data, we present the physical factors that influence strong ground shaking. New calculations are presented to illustrate potential pitfalls and identify key issues relevant to ground motion estimation and future ground motion research and applications. Particular attention is devoted to probabilistic implications of all aspects of ground motion estimation.
Seismic source characterization
The strongest ground shaking generally occurs close to an earthquake fault rupture because geometric spreading reduces ground shaking amplitudes as distance from the fault increases. Robust ground motion estimation at a specific site or over a broad region is predicated on the availability of detailed geological and geophysical information about locations, geometries, and rupture characteristics of earthquake faults. These characteristics are not random, but are dictated by the physical properties of the upper crust including rock types, pre-existing faults and fractures, and strain rates and orientations. Because such information is often not readily available or complete, the resultant uncertainties of source characterization can be the dominant contributions to uncertainty in ground motion estimation. [START_REF] Lettis | Empirical observations regarding reverse earthquakes, blind thrust faults, and Quaternary deformation: Are blind thrust faults truly blind[END_REF] showed that intraplate blind thrust earthquakes with moment magnitudes up to 7 have occurred in intraplate regions where often there was no previously known direct surface evidence to suggest the existence of the buried faults. This observation has been repeatedly confirmed, even in plate boundary settings, by numerous large earthquakes of the past 30 years including several which have provided rich sets of ground motion data from faults for which neither the locations, geometries, or other seismic source characterization properties were known prior to the earthquake. Regional seismicity and geodetic measurements may provide some indication of the likely rate of earthquake occurrence in a region, but generally do not demonstrate where that deformation localizes fault displacement. Thus, an integral and necessary step in reducing ground motion estimation uncertainties in most regions remains the identification and characterization of earthquake source faults at a sufficiently detailed scale to fully exploit the full range of ground motion modelling capabilities. In the absence of detailed source characterizations, ground motion uncertainties remain large, with the likely consequence of overestimation of hazard at most locations, and potentially severe underestimation of hazard in those few locations where a future earthquake ultimately reveals the source characteristics of a nearby, currently unknown fault. The latter case is amply demonstrated by the effects of the 1983 M 6.5 Coalinga, 1986M 6.0 Whittier Narrows, 1989M 6.6 Sierra Madre, 1989M 7.0 Loma Prieta, 1992M 7.4 Landers, 1994M 6.7 Northridge, 1999 M 7.6 Chi-Chi Taiwan, 2001 M 7.7 Bhuj, India, 2010 M 7.0 Canterbury, New Zealand, and 2011 M 6.1 Christchurch, New Zealand, earthquakes. The devastating 2011 M 9.1 Tohoku, Japan, earthquake and tsunami were the result of unusually large fault displacement over a relatively small fault area (Shao et al., 2011), a source characteristic that was not forseen, but profoundly influenced strong ground shaking [START_REF] Nied | Off the Pacific Coast of Tohoku Earthquake, Strong Ground Motion[END_REF] and tsunami responses (SIAM News, 2011). All these earthquakes occurred in regions where the source faults were either unknown or major source characteristics were not recognized prior to the occurrence of these earthquakes.
Physical basis for ground motion prediction
In this section we present the physical factors that influence ground shaking in response to earthquakes. A discrete representation is used to emphasize the discrete building blocks or factors that interact to produce strong ground motions. For simplicity, we start with linear stress-strain. Nonlinear stress-strain is most commonly observed in soils and evaluated in terms of site response. This is the approach we use here; nonlinear site response is discussed in Section 4. The ground motions produced at any site by an earthquake are the result of seismic radiation associated with the dynamic faulting process and the manner in which seismic energy propagates from positions on the fault to a site of interest. We assume that fault rupture initiates at some point on the fault (the hypocenter) and proceeds outward along the fault surface. Using the representation theorem [START_REF] Spudich | Techniques for earthquake ground-motion calculation with applications to source parameterization to finite faults[END_REF]
u t s t g t k i j k i j ij nm (1)
where k is the component of ground motion, ij are the indices of the discrete fault elements, n is the number of fault elements in the strike direction and m is the number of elements in dip direction (Figure 3.1). We use the notation F() to indicate the modulus of the Fourier transform of f(t). It is instructive to take the Fourier transform of (1) and pursue a discussion similar to [START_REF] Hutchings | Empirical Green's functions from small earthquakes -A waveform study of locally recorded aftershocks of the San Fernando earthquakes[END_REF] and [START_REF] Hutchings | Kinematic earthquake models and synthesized ground motions using empirical Green's functions[END_REF]
using, U S e G e k i j i kij ij nm i ij kij
(2)
where at each element ij,
S ij is the source slip-velocity amplitude spectrum, ij is the source phase spectrum,
G kij is the Green's function amplitude spectrum, and kij is the Green's function phase spectrum. The maximum peak ground motions are produced by a combination of factors that produce constant or linear phase variations with frequency over a large frequency band. While the relations in (1) and ( 2) are useful for synthesizing ground motions, they don't provide particularly intuitive physical insights into the factors that contribute to produce specific ground motion characteristics, particularly large peak accelerations, velocities, and displacements. We introduce isochrones as a fundamental forensic tool for understanding the genesis of ground motions. Isochrones are then used to provide simple geometric illustrations of how directivity varies between dipping dip-slip and vertical strike-slip faults. Bernard and Madariaga (1984) and [START_REF] Spudich | Use of ray theory to calculate high-frequency radiation from earthquake sources having spatially variable rupture velocity and stress drop[END_REF]1987) developed the isochrone integration method to compute near-source ground motions for finite-fault rupture models. Isochrones are all the positions on a fault that contribute seismic energy that arrives at a specific receiver at the same time. By plotting isochrones projected on a fault, times of large amplitudes in a ground motion time history can be associated with specific regions and characteristics of fault rupture and healing. A simple and reasonable way to employ the isochrone method for sites located near faults is to assume that all significant seismic radiation from the fault consists of first shear-wave arrivals. A further simplification is to use a simple trapezoidal slip-velocity pulse. Let f(t) be the slip function, For simplicity we assume where t r is rupture time, and t h is healing time. Then, all seismic radiation from a fault can be described with rupture and healing isochrones. Ground velocities (v) and accelerations (a) produced by rupture or healing of each point on a fault can be calculated from [START_REF] Spudich | Use of ray theory to calculate high-frequency radiation from earthquake sources having spatially variable rupture velocity and stress drop[END_REF]Zeng et al., 1991;Smedes and Archuleta, 2008)
Isochrones analysis of rupture directivity
v t f t dl y t x sGc x , , (3) 2 , , y t d d d a t f t dl dq dq dq 2 2 x s G c x c G c s s G sGc ( 4
)
where c is isochrone velocity, s is slip velocity (either rupture or healing), G is a ray theory Green function, x are position vectors, y(t,x) are isochrones, is the curvature of the isochrone, dl denotes isochrone line integral integration increment, and dq denotes a spatial derivative. Since isochrones are central to understanding ground motions, we provide explicit expressions for rupture and healing isochrones to illustrate how source and propagation factors can combine to affect ground motions. The arrival times of rupture at a specific receiver are
T t t r r x x , (5)
where x is the receiver position, are all fault positions, t are shear-wave propagation times between the receiver and all fault positions, and t r are rupture times at all fault positions. The arrival times of healing at a specific receiver are
T T R h r x x , (6)
where R are the rise times (the durations of slip) at all fault positions. [START_REF] Archuleta | A faulting model for the 1979 Imperial Valley earthquake[END_REF] showed that variations in rupture velocity had pronounced effects on calculated ground motions, whereas variations in rise times and slip-rate amplitudes cause small or predictable changes on calculated ground motions. The effect of changing slipvelocity amplitudes on ground motions is strongly governed by the geometrical attenuation (1/r for far-field terms). Any change in the slip-velocity amplitudes affects most the ground motions for sites closest to the region on the fault where large slip-velocities occurred [START_REF] Spudich | Techniques for earthquake ground-motion calculation with applications to source parameterization to finite faults[END_REF]. This is not the case with rupture velocity or rise time; these quantities influence ground motions at all sites. However, as [START_REF] Anderson | Comparison of strong ground motion from several dislocation models[END_REF] showed, it takes a 300% change in rise time to compensate for a 17% change in rupture time. [START_REF] Spudich | Dense seismograph array observations of earthquake rupture dynamics[END_REF] show why this is so. Spatial variability of rupture velocity causes the integrand in (3) to become quite rough, thereby adding considerable highfrequency energy to ground motions. The roughness of the integrand in ( 3) is caused by variations of isochrone velocity c, where
c T r s 1 (7)
where T r are the isochrones from (5) and s is the surface gradient operator. Variations of T r on the fault surface associated with supershear rupture velocities, or regions on the fault where rupture jumps discontinuously can cause large or singular values of c, called critical points by [START_REF] Farra | Fast near source evaluation of strong ground motion for complex source models[END_REF]. [START_REF] Spudich | Use of ray theory to calculate high-frequency radiation from earthquake sources having spatially variable rupture velocity and stress drop[END_REF] showed that the reciprocal of c, isochrone slowness is equivalent to the seismic directivity function in the twodimensional case. Thus, by definition, critical points produce rupture directivity, and as is shown with simulations later, need not be associated strictly with forward rupture directivity, but can occur for any site located normal to a portion of a fault plane where rupture velocities are supershear. It is useful to interpret (3) and (4) in the context of the discrete point-source summations in (1) and (2). When isochrone velocities become large on a substantial area of a fault it simply means that all the seismic energy from that portion of the fault arrives at nearly the same time at the receiver; the summation of a finite, but large number of band-limited Green's functions means that peak velocities remain finite, but potentially large. Large isochrone velocities or small isochrone slownesses over significant region of a fault are diagnostic of ground motion amplification associated with rupture directivity; the focusing of a significant fraction of the seismic energy radiated from a fault at a particular site in a short time interval. In this way isochrones are a powerful tool to dissect ground motions in relation to various characteristics of fault rupture. Times of large ground motion amplitudes can be directly associated with the regions of the fault that have corresponding large isochrone velocities or unusually large slip velocities. From ( 5) and ( 6) it is clear that both fault rupture variations, and shear-wave propagation time variations, combine to determine isochrones and isochrone velocities.
3.1.1
The fundamental difference between strike-slip and dip-slip directivity [START_REF] Boore | The effect of directivity on the stress parameter determined from ground motion observations[END_REF] and [START_REF] Joyner | Directivity for nonuniform ruptures[END_REF] discussed directivity using a simple line source model. A similar approach is used here to illustrate how directivity differs between vertical strike-slip faults and dipping dip-slip faults. To focus on source effects, we consider unilateral, one-dimensional ruptures in a homogenous half-space (Figure 3.2). The influence of the free surface on amplitudes is ignored. The rupture velocity is set equal to the shearwave velocity to minimize time delays and to maximize rupture directivity. To eliminate geometric spreading, stress drops increase linearly with distance from the site in a manner that produces uniform ground motion velocity contribution to the surface site for all points on the faults. Healing is ignored; only the rupture pulse is considered. Thrust dip-slip faulting is used to produce coincident rake and rupture directions. Seismic radiation is simplified to triangular slip-velocity pulses with widths of one second. For the strike-slip fault, the fault orientation and rupture directional are coincident. But, as fault rupture approaches the site, takeoff angles increase, so the radiation pattern reduces amplitudes, and total propagation distances (rupture length plus propagation distance) increase to disperse shear-wave arrivals in time (Figures 3.2a and 3.2b). The surface site located along the projection of the thrust fault to the surface receives all seismic energy from the fault at the same time, and c is infinity because the fault orientation, rupture, and shearwave propagation directions are all coincident for the entire length of the fault (Figures 3.2c and 2d). Consequently, although the strike-slip fault is 50% longer than the thrust fault, the thrust fault produces a peak amplitude 58% larger than the strike-slip fault. The thrust fault site receives maximum amplitudes over the entire radiated frequency band. High-frequency amplitudes are reduced for the strike-slip site relative to the thrust fault site because shearwaves along the strike-slip fault become increasingly delayed as rupture approaches the site, producing a broadened ground motion velocity pulse. The geometric interaction between dip-slip faults and propagation paths to surface sites located above those faults produces a kinematic recipe for maximizing both isochrone velocities and radiation patterns for surface sites that is unique to dip-slip faults. In contrast, [START_REF] Schmedes | Near-source ground motion along strike-slip faults: Insights into magnitude saturation of PGV and PGA[END_REF] use kinematic rupture simulations and isochrone analyses to show why directivity becomes bounded during strike-slip fault along long faults. 
[START_REF] Schmedes | Near-source ground motion along strike-slip faults: Insights into magnitude saturation of PGV and PGA[END_REF] consider the case of subshear rupture velocities and use critical point analyses with (3) and (4) to show that for long strike-slip ruptures there is a saturation effect for peak velocities and accelerations at sites close to the fault located at increasing distances along strike relative to the epicenter, consistent with empirical observations (Cua, 2004;Abrahamson and Silva, 2008;[START_REF] Boore | Ground-Motion Prediction Equations for the Average Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods Between 0.01s and 10.0 s[END_REF][START_REF] Campbell | NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s[END_REF][START_REF] Chiou | An NGA Model for the Average Horizontal Component of Peak Ground Motion and Response Spectra[END_REF]. Dynamic fault rupture processes during dip-slip rupture complicate dip-slip directivity by switching the region of maximum fault-normal horizontal motion from the hangingwall to the footwall as fault dips increase from 50 to 60 [START_REF] O'connell | Influence of dip and velocity heterogeneity on reverse-and normal-faulting rupture dynamics and near-fault ground motions[END_REF]. Typically, seismic velocities increase with depth, which changes positions of maximum rupture directivity compared to Figure 3.2. For dip-slip faults, the region of maximum directivity is moved away from the projection of the fault to the surface, toward the hanging wall. This bias is dependent on velocity gradients, and the dip and depth of the fault. For strike-slip faults, a refracting velocity geometry can increase directivity by reducing takeoff angle deviations relative to the rupture direction for depth intervals that depend on the velocity structure and position of the surface site (Smedes and [START_REF] Schmedes | Near-source ground motion along strike-slip faults: Insights into magnitude saturation of PGV and PGA[END_REF]. When the two-dimensional nature of finite-fault rupture is considered, rupture directivity is not as strong as suggested by this one-dimensional analysis [START_REF] Bernard | Modeling directivity of heterogeneous earthquake ruptures[END_REF], but the distinct amplitude and frequency differences between ground motions produced by strike-slip and dip-slip faulting directivity remain. Full two-dimensional analyses are presented in a subsequent section. A more complete discussion of source and propagation factors influencing ground motions is presented next to provide a foundation for discussion of amplification associated with rupture directivity. The approach here is to discuss ground motions separately in terms of source and propagation factors and then to discuss how source and propagation factors can jointly interact to strongly influence ground motion behavior.
Seismic source amplitude and phase factors
ij .
The flat portion of an amplitude spectrum is composed of the frequencies less than a corner frequency, c , which is defined as the intersection of low-and high-frequency asymptotes following [START_REF] Brune | Tectonic stress and the spectra of seismic shear waves from earthquakes[END_REF]. The stress drop, , defined as the difference between an initial stress, 0 , minus the dynamic frictional stress, f , is the stress available to drive fault slip [START_REF] Aki | Strong-motion seismology[END_REF]. Rise time, R, is the duration of slip at any particular point on the fault. Rise times are heterogeneous over a fault rupture surface. Because the radiation pattern for seismic phases such as body waves and surface waves are imposed by specification of rake (slip direction) at the source and are a function of focal mechanism, radiation pattern is included in the source discussion. Regressions between moment and fault area [START_REF] Wells | New empirical relationships amoung magnitude, rupture length, rupture width, rupture area, and surface displacement[END_REF][START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF]Leonard, 2010) show that uncertainties in moment magnitude and fault area are sufficient to produce moment uncertainties of 50% or more for any particular fault area. Consequently, the absolute scaling of synthesized ground motions for any faulting scenario have about factor of two uncertainties related to seismic moment (equivalently, average stress drop) uncertainties. Thus, moment-fault area uncertainties introduce a significant source of uncertainty in ground motion estimation. [START_REF] Andrews | A stochastic fault model, 2, Time-dependent case[END_REF] and [START_REF] Frankel | High-frequency spectral falloff of earthquakes, fractal dimension of complex rupture, b value, and the scaling strength on faults[END_REF] showed that correlated-random variations of stress drop over fault surfaces that produce self-similar spatial distributions of fault slip are required to explain observed ground motion frequency amplitude responses. [START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF] showed that a self-similar slip model can explain inferred slip distributions for many large earthquakes and they derive relations between many fault rupture parameters and seismic moment. Their results provide support for specifying fault rupture models using a stochastic spatially varying stress drop where stress drop amplitude decays as the inverse of wavenumber to produce self-similar slip distributions. They assume that mean stress drop is independent of seismic moment. Based on their analysis and assumptions, [START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF] provide recipes for specifying fault rupture parameters such as slip, rise times, and asperity dimensions as a function of moment. [START_REF] Mai | Source scaling properties from finite-fault-rupture models[END_REF] showed that 5.3 < M < 8.1 magnitude range dip-slip earthquakes follow self-similar scaling as suggest by [START_REF] Somerville | Characterizing crustal earthquake slip models for the prediction of strong ground motion[END_REF]. However, for strike-slip earthquakes, as moment increases in this magnitude range, they showed that seismic moments scale as the cube of fault length, but fault width saturates. 
Thus, for large strike-slip earthquakes average slip increases with fault rupture length, stress drop increases with magnitude, and self-similar slip scaling does not hold. The large stress drops observed for the M 7.7 1999 Chi-Chi, Taiwan thrust-faulting earthquake [START_REF] Oglesby | The three-dimensional dynamics of dipping faults[END_REF] suggest that self-similar slip scaling relations may also break down at larger moments for dip-slip events.
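To make these source-scaling relations concrete, here is a minimal sketch assuming an Eshelby circular-crack stress drop and a Brune-style corner frequency; the magnitude, fault area, and shear velocity are hypothetical values chosen only for illustration:

    import numpy as np

    def moment_from_mw(mw):
        # Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m
        return 10.0 ** (1.5 * mw + 9.1)

    def stress_drop(m0, area_km2):
        # Eshelby circular crack: delta_sigma = 7*M0 / (16*a^3), a = crack radius
        a = np.sqrt(area_km2 * 1e6 / np.pi)      # equivalent radius [m]
        return 7.0 * m0 / (16.0 * a ** 3)        # [Pa]

    beta = 3500.0                                # shear-wave velocity [m/s]
    area = 1000.0                                # fault area [km^2]
    a = np.sqrt(area * 1e6 / np.pi)
    fc = 0.37 * beta / a                         # Brune (1970) corner frequency [Hz]

    # A factor-of-two moment uncertainty maps directly into stress-drop uncertainty
    for scale in (0.5, 1.0, 2.0):
        m0 = scale * moment_from_mw(7.0)
        print(f"M0 x {scale}: stress drop = {stress_drop(m0, area) / 1e6:.1f} MPa, fc = {fc:.2f} Hz")

Running the loop shows the stress drop ranging over a factor of four for a fixed fault area, which is the moment-scaling ambiguity described above.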
Factor and influence:

Moment rate, S_ij^M0: Moment rate scales peak velocities and accelerations. Moment determines the average slip for a fixed fault area and known shear moduli.

Stress drop, S_ij^Δσ: Since Δσ scales slip velocity, stress drop scales peak velocities and accelerations; for a fixed moment, a larger stress drop increases the corner frequency and the high-frequency spectral amplitudes.

Crack diffraction, S_ij^C: Diffraction at the crack tip introduces a frequency dependent amplitude to the radiation pattern [START_REF] Madariaga | High-frequency radiation from crack (stress-drop) models of earthquake faulting[END_REF][START_REF] Boatwright | A dynamic model for far-field acceleration[END_REF][START_REF] Fukuyama | Integral equation method for plane crack with arbitary shape in 3D elastic medium[END_REF].

Dynamics, S_ij^D: Fault rupture in heterogeneous velocity structure can produce anisotropic slip velocities relative to rupture direction [START_REF] Harris | Effects of a low-velocity zone on dynamic rupture[END_REF], and slip velocities and directivity are a function of rake and dip for dip-slip faults ([START_REF] Oglesby | Earthquakes on dipping faults:The effects of broken symmetry[END_REF]; 2000; [START_REF] O'connell | Influence of dip and velocity heterogeneity on reverse-and normal-faulting rupture dynamics and near-fault ground motions[END_REF]). Frictional heating, fault zone fluids, and melting may also influence radiated energy (Kanamori and Brodsky, 2001; Andrews, 200X).

Table 1. Seismic Source Amplitude Factors (S_ij)
Factor and influence:

Rupture velocity, V_r^ij: High rupture velocities increase directivity. Rupture velocities interact with stress drops and rise times to modify the amplitude spectrum. Supershear rupture velocities can increase directivity far from the fault (Andrews, 2010).

Healing velocity, V_h^ij: High healing velocities increase amplification associated with directivity. Healing velocities interact with stress drop and rise time variations to modify the amplitude spectrum, although to a smaller degree than rupture velocities, since rupture slip velocities are typically several times larger than healing slip velocities.

Rake, A^ij: Rake and spatial and temporal rake variations scale amplitudes as a function of azimuth and take-off angle. Rake spatial and temporal variations over a fault increase the spatial complexity of radiation pattern amplitude variations and produce frequency-dependent amplitude variability.

Rise time, R^ij: Since fc ∝ 1/R, rise time variations produce frequency-dependent amplitude and phase variations. Diffraction at the crack tip introduces a frequency dependent amplitude to the radiation pattern [START_REF] Madariaga | High-frequency radiation from crack (stress-drop) models of earthquake faulting[END_REF][START_REF] Boatwright | A dynamic model for far-field acceleration[END_REF][START_REF] Fukuyama | Integral equation method for plane crack with arbitary shape in 3D elastic medium[END_REF].

Dynamics, D^ij: The same dynamic processes identified in Table 1 produce corresponding phase variability.

Table 2. Seismic Source Phase Factors (φ_ij)

Oglesby et al. (1998; 2000) showed that stress drop behaviors are fundamentally different between dipping reverse and normal faults. These results suggest that stress drop may be focal mechanism and magnitude dependent. There are still significant uncertainties as to the appropriate specification of fault rupture parameters to simulate strong ground motions, particularly for larger magnitude earthquakes. [START_REF] O'connell | Influence of dip and velocity heterogeneity on reverse-and normal-faulting rupture dynamics and near-fault ground motions[END_REF] used dynamic rupture simulations to show that in homogeneous and weakly heterogeneous half-spaces with faults dipping ≲50°, maximum fault-normal peak velocities occur on the hanging wall. However, for fault dips ≳50°, maximum fault-normal peak velocities occur on the footwall. Their results indicate that simple amplitude parameterizations based on the hanging wall and/or footwall and the fault-normal and/or fault-parallel components currently used in ground motion prediction relations may not be appropriate for some faults with dips > 50°. Thus, the details of appropriate spatial specification of stress drops and/or slip velocities as a function of focal mechanism, magnitude, and fault dip are yet to be fully resolved. [START_REF] Day | Three-dimensional simulation of spontaneous rupture: The effect of nonuniform prestress[END_REF] showed that intersonic rupture velocities (β < Vr < α, with β and α the shear- and compressional-wave velocities) can occur during earthquakes, particularly in regions of high prestress (asperities), and that peak slip velocity is strongly coupled to rupture velocity for non-uniform prestresses. While average rupture velocities typically remain subshear, high-stress asperities can produce local regions of supershear rupture combined with high slip velocities. Supershear rupture velocities have been observed or inferred to have occurred during several earthquakes, including the M 6.9
1979 Imperial Valley strike-slip earthquake (Olson and Apsel, 1982; [START_REF] Spudich | Direct observation of rupture propagation during the 1979 Imperial Valley earthquake using a short baseline accelerometer array[END_REF]; [START_REF] Archuleta | A faulting model for the 1979 Imperial Valley earthquake[END_REF]), the M 6.9 1980 Irpinia normal-faulting earthquake [START_REF] Belardinelli | Redistribution of dynamic stress during coseismic ruptures: Evidence for fault interaction and earthquake triggering[END_REF], the M 7.0 1992 Petrolia thrust-faulting earthquake [START_REF] Oglesby | A faulting model for the 1992 Petrolia earthquake: Can extreme ground acceleration be a source effect?[END_REF], the M 7.3 Landers strike-slip earthquake ([START_REF] Olsen | Three-dimensional dynamic simulation of the 1992 Landers earthquake[END_REF]; [START_REF] Bouchon | Stress field associated with the rupture of the 1992 Landers, California, earthquake and its implications concerning the fault strenght at the onset of the earthquake[END_REF]; [START_REF] Hernandez | Contribution of radar interfermetry to a two-step inversion of the kinematic process of the 1992 Landers earthquake[END_REF]), the M 6.7 1994 Northridge thrust-faulting earthquake [START_REF] O'connell | Possible super-shear rupture velocities during the 1994 Northridge earthquake[END_REF], and the 1999 M 7.5 Izmit and M 7.3 Duzce, Turkey, strike-slip earthquakes [START_REF] Bouchon | How Fast is Rupture during an Earthquake? New Insights from the 1999 Turkey Earthquakes[END_REF]. Bouchon et al. (2010) find that the surface traces of the portions of strike-slip faults with inferred supershear rupture velocities are remarkably linear, continuous, and narrow, that segmentation features along these segments are small or absent, and that the deformation is highly localized. [START_REF] O'connell | Possible super-shear rupture velocities during the 1994 Northridge earthquake[END_REF] postulates that subshear rupture on the faster footwall in the deeper portion of the Northridge fault relative to the hanging wall produced supershear rupture relative to hanging-wall velocities and contributed to the large peak velocities observed on the hanging wall. [START_REF] Harris | Effects of a low-velocity zone on dynamic rupture[END_REF] showed that rupture velocities and slip-velocity functions are significantly modified when a fault is bounded on one side by a low-velocity zone. The low-velocity zone can produce asymmetry of rupture velocity and slip velocity. This type of velocity heterogeneity produces an asymmetry in the seismic radiation pattern and abrupt and/or systematic spatial variations in rupture velocity. These differences are most significant in regions subject to rupture directivity, and may lead to substantially different peak ground motions occurring at either end of a strike-slip fault [START_REF] Bouchon | How Fast is Rupture during an Earthquake? New Insights from the 1999 Turkey Earthquakes[END_REF]. Thus, the position of a site relative to the fast and slow sides of a fault and the rupture direction may be significant in terms of the dynamic stress drops and rupture velocities that are attainable in the direction of the site. Observations and numerical modeling show that the details of stress distribution on the fault can produce complex rupture velocity distributions and even discontinuous rupture, factors not typically accounted for in kinematic rupture models used to predict ground motions (e.g.
[START_REF] Somerville | Simulations of strong ground motions recorded during the Michoacan, Mexico and Valparaiso, Chile, earthquakes[END_REF][START_REF] Schneider | Ground motion model for the 1989 M 6.9 Loma Prieta earthquake including effects of source, path, and site[END_REF][START_REF] Hutchings | Kinematic earthquake models and synthesized ground motions using empirical Green's functions[END_REF][START_REF] Tumarkin | Scaling relations for composite earthquake models[END_REF][START_REF] Zeng | A composite source model for computing realistic strong ground motions[END_REF][START_REF] Beresnev | Modeling finite-fault radiation from the n spectrum[END_REF]O'Connell, 1999c). Even if only smooth variations of subshear rupture velocities are considered (0.6β < Vr < 1.0β), rupture velocity variability introduces ground motion estimation uncertainties of at least a factor of two [START_REF] Beresnev | Modeling finite-fault radiation from the n spectrum[END_REF], and larger uncertainties for sites subject to directivity. Rupture direction may change due to strength or stress heterogeneities on a fault. [START_REF] Beroza | Linearized inversion for fault rupture behavior: Application to the 1984 Morgan Hill, California, earthquake[END_REF] inferred that rupture was delayed and then progressed back toward the hypocenter during the M 6.2 1984 Morgan Hill earthquake. [START_REF] Oglesby | A faulting model for the 1992 Petrolia earthquake: Can extreme ground acceleration be a source effect?[END_REF] inferred that arcuate rupture of an asperity may have produced accelerations > 1.40 g at Cape Mendocino during the M 7.0 1992 Petrolia earthquake. These results are compatible with numerical simulations of fault rupture on a heterogeneous fault plane. [START_REF] Das | A numerical study of two-dimensional spontaneous rupture propagation[END_REF] modeled rupture for a fault plane with high-strength barriers and found that rupture could occur discontinuously beyond strong regions, which may subsequently rupture or remain unbroken. [START_REF] Day | Three-dimensional simulation of spontaneous rupture: The effect of nonuniform prestress[END_REF] found that rupture was very complex for the case of nonuniform prestress and that rupture jumped beyond some points on the fault, leaving unbroken areas behind the rupture which subsequently ruptured. In the case of a slip-resistant asperity, [START_REF] Das | Breaking of a single asperity: Rupture process and seismic radiation[END_REF] found that when rupture began at the edge of the asperity, it proceeded first around the perimeter and then failed inward in a "double pincer movement". Thus, even the details of rupture propagation direction are not truly specified once a hypocenter position is selected. [START_REF] Guatteri | Coseismic temporal changes of slip direction: The effect of absolute stress on dynamic rupture[END_REF] showed that time-dependent dynamic rake rotations on a fault become more likely as stress states approach low stresses on a fault, when combined with heterogeneous distributions of stress and nearly complete stress drops. [START_REF] Pitarka | Simulation of near-fault strong-ground motion using hybrid Green's functions[END_REF] found that eliminating radiation pattern coherence between 1 Hz and 3 Hz reproduced observed ground motions for the 1995 M 6.9 Hyogo-ken Nanbu (Kobe) earthquake.
[START_REF] Spudich | Use of fault striations and dislocation models to infer tectonic shear stress during the 1995 Hyogo-ken Nanbu (Kobe) earthquake[END_REF] used fault striations to infer that the Nojima fault slipped at low stress levels with substantial rake rotations occurring during the 1995 Hyogo-ken Nanbu earthquake. This dynamic rake rotation can reduce radiation-pattern coherence at increasing frequencies by increasingly randomizing rake directions for decreasing time intervals near the initiation of slip at each point on a fault, for increasingly complex initial stress distributions on faults. [START_REF] Vidale | Influence of focal mechanism on peak accelerations of strong motions of the Whittier Narrows, California, earthquake and an aftershock[END_REF] showed that the standard double-couple radiation pattern is observable to 6 Hz based on analysis of the mainshock and an aftershock from the Whittier Narrows, California, thrust-faulting earthquake sequence. In contrast, [START_REF] Liu | The 23:19 aftershock of the 15 October 1979 Imperial Valley earthquake: More evidence for an asperity[END_REF] found that a double-couple radiation pattern was only discernible for frequencies extending to 1 Hz based on analysis of the 1979 Imperial Valley earthquake and an aftershock. [START_REF] Bent | Source complexity of the October 1, 1987, Whittier Narrows earthquake[END_REF] estimate a Δσ of 75 MPa for the 1987 Whittier Narrows M 6.1 thrust-faulting earthquake, but allow for a Δσ as low as 15.5 MPa. The case of high initial, nearly homogeneous stresses that minimize rake rotations may produce high-frequency radiation pattern coherence as observed by [START_REF] Vidale | Influence of focal mechanism on peak accelerations of strong motions of the Whittier Narrows, California, earthquake and an aftershock[END_REF]. These results suggest that there may be a correlation between the maximum frequency of radiation pattern coherence, initial stress state on a fault, focal mechanism, and stress drop. Three-dimensional simulations and observations demonstrate that basin structure strongly influences the amplitudes and durations of strong ground motions (e.g. [START_REF] Frankel | A three-dimensional simulation of seimic waves in the Santa Clara Valley, California, from a Loma Prieta aftershock[END_REF][START_REF] Frankel | Three-dimensional simulations of ground motions in the San Bernardino Valley, California, for hypothetical earthquakes on the San Andreas fault[END_REF][START_REF] Olsen | Three-dimensional simulation of earthquakes on the Los Angeles fault system[END_REF][START_REF] Wald | The seismic response of the Los Angeles Basin, California[END_REF][START_REF] Archuleta | Direct observation of nonlinear soil response in acceleration time histories[END_REF][START_REF] Frankel | Three-dimensional simulations of ground motins in the Seattle region for earthquakes in the Seattle fault zone[END_REF][START_REF] Koketsu | Propagation of seismic ground motion in the Kanto Basin, Japan[END_REF][START_REF] Frankel | Observations of basin ground motions from a dense seismic array in San Jose, California[END_REF]).
Basin-edge waves can substantially amplify strong ground motions in basins [START_REF] Liu | Array analysis of the ground velocities and accelerations from the 1971 San Fernando, California, earthquake[END_REF][START_REF] Frankel | High-frequency spectral falloff of earthquakes, fractal dimension of complex rupture, b value, and the scaling strength on faults[END_REF][START_REF] Phillips | Basin-induced Love waves observed using the strong motion array at Fuchu, Japan[END_REF][START_REF] Spudich | The seismic coda, site effects, and scattering in alluvial basins studied using aftershocks of the 1986 North Palm Springs, California, earthquakes as source arrays[END_REF][START_REF] Kawase | The cause of the damage belt in Kobe: "The basin-edge effect," constructive interference of the direct S-wave with the basin-induced diffracted/Rayleigh waves[END_REF][START_REF] Pitarka | Three-dimensional simulation of the near-fault ground motions for the 1995 Hyogo-Nanbu (Kobe), Japan, earthquake[END_REF][START_REF] Frankel | Observations of basin ground motions from a dense seismic array in San Jose, California[END_REF]. This is a particular concern for fault-bounded basins where rupture directivity can constructively interact with basin-edge waves to produce extended zones of extreme ground motions [START_REF] Kawase | The cause of the damage belt in Kobe: "The basin-edge effect," constructive interference of the direct S-wave with the basin-induced diffracted/Rayleigh waves[END_REF][START_REF] Pitarka | Three-dimensional simulation of the near-fault ground motions for the 1995 Hyogo-Nanbu (Kobe), Japan, earthquake[END_REF], a topic revisited later in the paper. Even smaller scale basin or lens structures on the order of several kilometers in diameter can produce substantial amplification of strong ground motions [START_REF] Alex | Lens-effect in Santa Monica?[END_REF][START_REF] Graves | Ground motion amplification in the Santa Monica area: Effects of shallow basin structure[END_REF][START_REF] Davis | Northridge earthquake damage caused by geologic focusing of seismic waves[END_REF]. Basin-edge waves can be composed of both body and surface waves [START_REF] Spudich | The seismic coda, site effects, and scattering in alluvial basins studied using aftershocks of the 1986 North Palm Springs, California, earthquakes as source arrays[END_REF][START_REF] Meremonte | Urban seismology: Northridge aftershocks recorded by multiscale arrays of portable digital seismographs[END_REF][START_REF] Frankel | Observations of basin ground motions from a dense seismic array in San Jose, California[END_REF]) which provides a rich wavefield for constructive interference phenomena over a broad frequency range. Critical reflections off the Moho can produce amplification at distances > ~75-100 km [START_REF] Somerville | The influence of critical Moho reflections on strong ground motions recorded in San Francisco and Oakland during the 1989 Loma Prieta earthquake[END_REF][START_REF] Catchings | Reflected seismic waves and their effect on strong shaking during the 1989 Loma Prieta, California, earthquake[END_REF]. The depth to the Moho, hypocentral depth, direction of rupture (updip versus downdip), and focal mechanism determine the amplification and distance range that Moho reflections may be important. 
For instance, [START_REF] Catchings | New Madrid and central California apparent Q values as determined from seismic refraction data[END_REF] showed that Moho reflections amplify ground motions in the > 100 km distance range in the vicinity of the New Madrid seismic zone in the central United States.
Seismic wave propagation amplitude and phase factors
Factor and influence:

Geometric spreading, G_kij^r: Amplitudes decrease with distance as 1/r, 1/r^2, and 1/r^4 for body waves, and as 1/√r for surface waves. The 1/r term has the strongest influence on high-frequency ground motions. The 1/√r term can be significant for locally generated surface waves.

Large-scale velocity structure, G_kij^V3D: Horizontal and vertical velocity gradients and velocity discontinuities can increase or decrease amplitudes and durations. Low-velocity basins can amplify and extend ground motion durations. Abrupt changes in lateral velocity structure can induce basin-edge waves in the lower-velocity material that amplify ground motions.

Near-surface resonant responses, G_kij^L: Low-velocity near-surface layers can strongly amplify ground motions at resonant frequencies controlled by layer thicknesses and velocities.

Frequency-independent attenuation, G_kij^Q: Linear hysteretic behavior that reduces amplitudes of the form exp(−π f r / (Q β)).

High-frequency attenuation, G_kij^κ: Strong attenuation of high frequencies in the shallow crust of the form exp(−π κ f).

Scattering, G_kij^S: Scattering tends to reduce amplitudes on average, but introduces high-amplitude caustics and low-amplitude shadow zones and produces nearly log-normal distributions of amplitudes (O'Connell, 1999a).

Anisotropy, G_kij^A: Complicates shear-wave amplitudes, modifies radiation pattern amplitudes, and can introduce frequency-dependent amplification based on the direction of polarization.

Topography, G_kij^T: Can produce amplification near topographic highs and introduces an additional source of scattering.

Table 3. Seismic Wave Propagation Amplitude Factors (G_kij)
Numerous studies have demonstrated that the seismic velocities in the upper 30 to 60 m can greatly influence the amplitudes of earthquake ground motions at the surface (e.g. [START_REF] Borcherdt | Progress on ground motion predictions for the San Francisco Bay region, California, in Progress on Seismic Zonation in the San Francisco Bay Region[END_REF][START_REF] Joyner | The effect of Quaternary alluvium on strong ground motion in the Coyote Lake, California earthquake of 1979[END_REF][START_REF] Seed | The Mexico earthquake of September 19, 1985 -relationships between soil conditions and earthquake ground motions[END_REF]). [START_REF] Williams | Surface seismic measurements of near-surface P-and S-wave seismic velocities at earthquake recording stations[END_REF] showed that significant resonances can occur for impedance boundaries as shallow as 7 m depth. Boore and Joyner (1997) compared the amplification of generic rock sites with that of very hard rock sites using velocities averaged over the upper 30 m. They defined very hard rock sites as sites that have shear-wave velocities at the surface > 2.7 km/s, and generic rock sites as sites where shear-wave velocities at the surface are ~0.6 km/s and increase to > 1 km/s at 30 m depth. Boore and Joyner (1997) found that amplifications at generic rock sites can be in excess of 3.5 at high frequencies, in contrast to amplifications of less than 1.2 at very hard rock sites. Considering the combined effect of attenuation and amplification, the amplification for generic rock sites peaks between 2 and 5 Hz at a maximum value of less than 1.8 (Boore and Joyner, 1997).
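A minimal sketch of a quarter-wavelength (square-root impedance ratio) amplification estimate of the kind underlying such comparisons, assuming a hypothetical near-surface profile; the values are illustrative and this is not the Boore and Joyner computation itself:

    import numpy as np

    # Hypothetical near-surface profile (illustrative values only)
    h   = np.array([5.0, 10.0, 15.0, 30.0])           # layer thicknesses [m]
    vs  = np.array([200.0, 400.0, 700.0, 1200.0])     # shear velocities [m/s]
    rho = np.array([1800.0, 1900.0, 2100.0, 2300.0])  # densities [kg/m^3]
    vs_src, rho_src = 3500.0, 2800.0                  # source-region crust

    def qwl_amp(f):
        # Find depth z(f) where the vertical S travel time equals 1/(4f), then
        # take the square root of the impedance ratio between the source region
        # and the time-averaged velocity and density above z(f).
        t_needed = 1.0 / (4.0 * f)
        t = z = m = 0.0
        for hi, vi, ri in zip(h, vs, rho):
            dt = hi / vi
            frac = min(1.0, (t_needed - t) / dt) if t < t_needed else 0.0
            z += frac * hi
            m += frac * hi * ri
            t += frac * dt
            if frac < 1.0:
                break
        if t < t_needed:                              # profile exhausted: extend last layer
            z += (t_needed - t) * vs[-1]
            m += (t_needed - t) * vs[-1] * rho[-1]
        vbar = z * 4.0 * f                            # average velocity above z(f)
        return np.sqrt(vs_src * rho_src / (vbar * (m / z)))

    for f in (0.5, 1.0, 2.0, 5.0):
        print(f"f = {f:4.1f} Hz  amplification ~ {qwl_amp(f):.2f}")

The frequency dependence of the result illustrates why soft profiles amplify high frequencies most strongly before attenuation is taken into account.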
Factor and influence:

Geometric spreading, φ_kij^r: Introduces frequency-dependent propagation delays.

Large-scale velocity structure, φ_kij^V3D: Horizontal and vertical velocity and density gradients and velocity and density discontinuities produce frequency-dependent phase shifts.

Near-surface resonant responses, φ_kij^L: Interactions of shear-wave arrivals with varying angles of incidence and directions produce frequency-dependent phase shifts.

Nonlinear soil responses, φ_kij^N(u) (equivalent linear) and φ_kij^N(u,t) (fully nonlinear): Depending on the dynamic soil properties and pore pressure responses, nonlinear responses can increase or reduce phase dispersion. In the case of pore pressure coupled with dilatant materials, nonlinear response can collapse phase, producing intermittent amplification caustics.

Frequency-independent attenuation, φ_kij^Q: Linear hysteretic behavior produces frequency-dependent velocity dispersion that produces frequency-dependent phase variations.

Scattering, φ_kij^S: The scattering strength and scattering characteristics determine the propagation distances required to randomize the phase of shear waves as a function of frequency.

Anisotropy, φ_kij^A: Complicates shear-wave polarizations and modifies radiation pattern polarizations.

Topography, φ_kij^T: Complicates phase as a function of topographic length scale and near-surface velocities.

Table 4. Seismic Wave Propagation Phase Factors (φ_kij)
A common site-response estimation method is to use the horizontal-to-vertical (H/V) spectral ratio method with shear waves [START_REF] Lermo | Site effect evaluation using spectral ratios with only one station[END_REF] to test for site resonances. The H/V method is similar to the receiver-function method of [START_REF] Langston | Structure under Mount Ranier, Washington, inferred from teleseismic body waves[END_REF].
Several investigations have shown that the H/V approach provides robust estimates of resonant frequencies (e.g., [START_REF] Field | A comparison and test of various site response estimation techniques including threee that are not reference site dependent[END_REF][START_REF] Castro | S-wave site-response estimates using horizontal-to-vertical spectra ratios[END_REF][START_REF] Tsubio | Verification of horizontal-to-vertical spectral-ratio technique for estimate of site response using borehole seismographs[END_REF]), although absolute amplification factors are less well resolved ([START_REF] Castro | S-wave site-response estimates using horizontal-to-vertical spectra ratios[END_REF]; Bonilla et al., 1997).
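A minimal sketch of the H/V computation for a single three-component S-wave window; the synthetic test signal, taper, and smoothing choices are illustrative assumptions rather than prescriptions from the studies cited above:

    import numpy as np

    def hv_ratio(ns, ew, ud, dt, smooth=11):
        # Horizontal-to-vertical spectral ratio for one S-wave window.
        taper = np.hanning(len(ns))
        spec = lambda x: np.abs(np.fft.rfft((x - x.mean()) * taper))
        h = np.sqrt(spec(ns) ** 2 + spec(ew) ** 2)   # quadratic mean of horizontals
        v = spec(ud)
        k = np.ones(smooth) / smooth                 # simple boxcar smoothing
        h, v = np.convolve(h, k, "same"), np.convolve(v, k, "same")
        f = np.fft.rfftfreq(len(ns), dt)
        return f, h / np.maximum(v, 1e-20)

    # Synthetic demonstration: a 1 Hz resonance present on the horizontals only
    dt = 0.01
    t = np.arange(0.0, 40.96, dt)
    rng = np.random.default_rng(0)
    ns = rng.standard_normal(t.size) + 5.0 * np.sin(2 * np.pi * 1.0 * t)
    ew = rng.standard_normal(t.size) + 5.0 * np.cos(2 * np.pi * 1.0 * t)
    ud = rng.standard_normal(t.size)
    f, hv = hv_ratio(ns, ew, ud, dt)
    print("peak H/V at %.2f Hz" % f[np.argmax(hv[1:]) + 1])

The peak of the smoothed ratio recovers the imposed 1 Hz resonance, which is the sense in which H/V identifies resonant frequencies even where absolute amplification remains uncertain.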
One-dimensional site-response approaches may fail to quantify site amplification in cases where upper-crustal three-dimensional velocity structure is complex. In southern California, [START_REF] Field | A modified ground-motion attenaution relationship for southern California that accounts for detailed site classification and a basin-depth effect[END_REF] found that the basin effect had a stronger influence on peak acceleration than the detailed geology used to classify site responses. [START_REF] Hartzell | Variability of site response in Seattle, Washington[END_REF] found that site amplification characteristics at some sites in the Seattle region cannot be explained using 1D or 2D velocity models, and that 3D velocity structure must be considered to fully explain local site responses. [START_REF] Chavez-Garcia | Lateral propagation effects observed at Parkway, New Zealand. A case history to compare 1D versus 2D site effects[END_REF] showed that laterally propagating basin-generated surface waves cannot be differentiated from 1D site effects using frequency-domain techniques such as H/V ratios or reference site ratios. The ability to conduct site-specific ground motion investigations is predicated on the existence of geological, geophysical, and geotechnical engineering data to realistically characterize earthquake sources, crustal velocity structure, and local site structure and conditions, and to estimate the resultant seismic responses at a site. Lack of information about 3D variations in local and crustal velocity structure is a serious impediment to ground motion estimation.
It is now recognized that correlated-random 3D velocity heterogeneity is an intrinsic property of Earth's crust (see [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF] for a discussion). Correlated-random means that random velocity fluctuations are dependent on surrounding velocities, with the dependence being inversely proportional to distance. Weak (standard deviation, σ, of ~5%), random fractal crustal velocity variations are required to explain observed short-period (T < 1 s) body-wave travel time variations, coda amplitudes, and coda durations for ground motions recorded over length scales of tens of kilometers to tens of meters [START_REF] Frankel | Finite difference simulations of seismic scattering: implications for the propagation of short-period seismic waves in the crust and models of crustal heterogeneity[END_REF], most well-log data [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF], the frequency dependence of shear-wave attenuation [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF], and envelope broadening of shear waves with distance [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF]. As a natural consequence of energy conservation, the excitation of coda waves in the crust means that direct waves (particularly direct shear waves, which dominate peak ground motions) that propagate along the minimum travel-time path from the source to the receiver lose energy with increasing propagation distance as a result of the dispersion of energy in time and space. Following [START_REF] Frankel | Finite difference simulations of seismic scattering: implications for the propagation of short-period seismic waves in the crust and models of crustal heterogeneity[END_REF], fractal, self-similar velocity fluctuations are described with an autocorrelation function, P, of the form,
P(kr) = a^n / (1 + kr^n a^n) (8)
where a is the correlation distance, kr is radial wavenumber, n = 2 in 2D, and n = 3 in 3D. When n = 4 an exponential power law results [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF]. Smoothness increases with distance as a increases in (8), and overall smoothness is proportional to n in (8). This is a more realistic model of spatial geologic material variations than completely uncorrelated, spatially independent, random velocity variations. "Correlated-random" is shortened here to "random" for brevity. Let λ denote wavelength. Forward scattering dominates when λ << a [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF]. The situation is complicated in self-similar fractal media when considering a broad frequency range relevant to strong motion seismology (0.1 to 10 Hz) because λ spans the range λ >> a to λ << a, and both forward scattering and backscattering become important, particularly as n decreases in (8). Thus, it is difficult to develop simple rigorous expressions to quantify amplitude and phase terms associated with wave propagation through the heterogeneous crust (see [START_REF] Sato | Seismic Wave Propagation and Scattering in the Heterogenous Earth[END_REF]). O'Connell (1999a) showed that direct shear-wave scattering produced by P-SV-wave coupling associated with vertical velocity gradients typical of southern California, combined with 3D velocity variations with n = 2 and a standard deviation of velocity variations of 5% in (8), reduces high-frequency peak ground motions for sediment sites close to earthquake faults. O'Connell (1999a) showed that crustal scattering could substantially influence the amplification of near-fault ground motions in areas subjected to significant directivity. Scattering also determines the propagation distances required to randomize phase, as discussed later in this paper. Dynamic reduction of soil moduli and increases in damping with increasing shear strain can substantially modify ground motion amplitudes as a function of frequency [START_REF] Ishihara | Soil Behavior in Earthquake Geotechnics[END_REF]. While there has been evidence of nonlinear soil response in surface strong motion recordings ([START_REF] Field | Nonlinear ground-motion amplification by sediments during the 1994 Northridge earthquake[END_REF][START_REF] Cultera | Nonlinear soil response in the vicinity of the Van Norman Complex following the 1994 Northridge, California, earthquake[END_REF]), interpretation of these surface records solely in terms of soil nonlinearity is intrinsically non-unique (O'Connell, 1999a). In contrast, downhole strong motion arrays have provided definitive evidence of soil nonlinearity consistent with laboratory testing of soils ([START_REF] Chang | Development of shear modulus reduction curves based on Lotung downhole ground motion data[END_REF]; Wen et al., 1995; [START_REF] Ghayamghamain | On the characteristics of non-linear soil response and dynamic soil properties using vertical array data in Japan[END_REF][START_REF] Satoh | Nonlinear behavior of soil sediments identified by using borehole records observed at the Ashigara Valley, Japan[END_REF][START_REF] Satoh | Nonlinear behavior of scoria evaluated from borehole records in eastern Shizuoka prefecture, Japan[END_REF][START_REF] Satoh | Inversion of strain-dependent nonlinear characteristics of soils using weak and strong motions observed by borehole sites in Japan[END_REF]).
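To illustrate the correlated-random media described by (8), the following sketch synthesizes a 2D velocity field by filtering white noise with the square root of P(kr); the grid dimensions, correlation length, mean velocity, and the 5% standard deviation are illustrative assumptions:

    import numpy as np

    # Grid: 512 x 512 cells at 100 m spacing; correlation length a = 2 km; n = 2 (2D)
    nx, dx, a, n, sigma = 512, 100.0, 2000.0, 2, 0.05

    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    kr = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)

    # Filter white noise with the square root of P(kr) = a^n / (1 + kr^n a^n)
    P = a ** n / (1.0 + (kr * a) ** n)
    rng = np.random.default_rng(42)
    field = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((nx, nx))) * np.sqrt(P)))

    # Scale to fractional velocity perturbations with a 5% standard deviation
    field -= field.mean()
    field *= sigma / field.std()
    v = 3500.0 * (1.0 + field)                # perturbed shear velocities [m/s]
    print(f"velocity range: {v.min():.0f}-{v.max():.0f} m/s, std = {field.std():.3f}")

Such synthetic media are the inputs used in the scattering simulations discussed above.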
Idriss and Seed (1968a, b) introduced the "equivalent linear method" to calculate nonlinear soil response, which is an iterative method based on the assumption that the response of soil can be approximated by the response of a linear model whose properties are selected in relation to the average strain that occurs at each depth interval in the model during excitation. [START_REF] Joyner | Calculation of nonlinear ground response in earthquakes[END_REF] used a direct nonlinear stress-strain relationship method to demonstrate that the equivalent linear method may significantly underestimate short-period motions for thick soil columns and large input motions. [START_REF] Archuleta | Nonlinearity in observed and computed accelerograms[END_REF] and [START_REF] Bonilla | Computation of linear and nonlinear response for near field ground motion[END_REF] demonstrated that dynamic pore-pressure responses can substantially modify nonlinear soil response and actually amplify and extend the durations of strong ground motions for some soil conditions. When a site is situated on soil it is critical to determine whether soil response will decrease or increase ground motion amplitudes and durations, and to compare the expected frequency dependence of the seismic soil responses with the resonant frequencies of the engineered structure(s). When soils are not saturated, the equivalent linear method is usually adequate with consideration of the caveats of [START_REF] Joyner | Calculation of nonlinear ground response in earthquakes[END_REF]. When soils are saturated and interbedding of sands and/or gravels between clay layers is prevalent, a fully nonlinear evaluation of the site that accounts for dynamic pore pressure responses may be necessary [START_REF] Archuleta | Nonlinearity in observed and computed accelerograms[END_REF]. [START_REF] Lomnitz | Seismic coupling of interface modes in sedimentary bains: A recipe for distaster[END_REF] showed that for the condition 0.91 β1 < α0, where β1 is the shear-wave velocity of the low-velocity material beneath saturated soils and α0 is the acoustic (compressional-wave) velocity in the near-surface material, a coupled mode between Rayleigh waves propagating along the interface and compressional waves in the near-surface material propagates with phase velocity α0. This mode can propagate over large distances with little attenuation. [START_REF] Lomnitz | Seismic coupling of interface modes in sedimentary bains: A recipe for distaster[END_REF] note that this set of velocity conditions provides a "recipe" for severe earthquake damage on soft ground when combined with a large contrast in Poisson's ratio between the two layers, and when the resonant frequencies of the mode and engineered structures coincide. Linear 2D viscoelastic finite-difference calculations demonstrate the existence of this wave mode at small strains, but nonlinear 2D finite-difference calculations indicate that long-distance propagation of this mode is strongly attenuated [START_REF] O'connell | Influence of 2D Soil Nonlinearity on Basin and Site Responses[END_REF]. Anisotropy complicates polarizations of shear waves. [START_REF] Coutant | Observations of shallow anisotropy on local earthquake records at the Garner Valley, Southern California, downhole array[END_REF] showed that shallow (< 200 m) shear-wave anisotropy strongly influences surface polarizations of shear waves for frequencies < 30 Hz.
[START_REF] Chapman | Ray tracing in azimuthally anisotropic media-II. Quasi-shear wave coupling[END_REF] show that quasi-shear (qS) wave polarizations typically twist along ray paths through gradient regions in anisotropic media, causing frequency-dependent coupling between the qS waves. They show that this coupling is much stronger than the analogous coupling between P and SV waves in isotropic gradients because of the small difference between the qS-wave velocities. [START_REF] Chapman | Ray tracing in azimuthally anisotropic media-II. Quasi-shear wave coupling[END_REF] show that in some cases, far-field excitation of both quasi-shear waves and shear-wave splitting will result from an incident wave composed of only one of the quasi-shear waves. The potential for stronger coupling of quasi-shear waves suggests that the influence of anisotropy on shear-wave polarizations and peak ground motion may be significant in some cases. While the influence of anisotropy on strong ground motions is unknown, it is prudent to avoid suggesting that only a limited class of shear-wave polarizations is likely for a particular site based on isotropic ground motion simulations of ground motion observations at other sites. Velocity anisotropy in the crust can substantially distort the radiation pattern of body waves, with shear-wave polarization angles diverging from those in an isotropic medium by as much as 90 degrees or more near directions where group velocities of quasi-SH and SV waves deviate from corresponding phase velocities [START_REF] Kawasaki | Radiation pattern of body waves due to the seismic dislocation occurring in an anisotropic source medium[END_REF].
Thus, anisotropy has the potential to influence radiation pattern coherence as well as ground motion polarization. A common approach is to assume the double-couple radiation pattern disappears over a transition frequency band extending from 1 Hz to 3 Hz [START_REF] Pitarka | Simulation of near-fault strong-ground motion using hybrid Green's functions[END_REF] or up to 10 Hz [START_REF] Zeng | Evaluation of numerical procedures for simulating nearfault long-period ground motions using the Zeng method[END_REF]. The choice of frequency cutoff for the radiation pattern significantly influences estimates of peak response in regions prone to directivity for frequencies close to and greater than the cutoff frequency. This is a very important parameter for stiff (high-frequency) structures such as buildings that tend to have natural frequencies in the 0.5 to 5 Hz frequency band (see discussion in [START_REF] Frankel | How does the ground shake[END_REF]). Topography can substantially influence peak ground motions [START_REF] Boore | A note of the effect of simple topography on seismic SH waves[END_REF][START_REF] Boore | The effect of simple topography on seismic waves: Implications for the acceleration recorded at Pacoima Dam, San Fernando Valley, California[END_REF]. [START_REF] Schultz | Enhanced backscattering of seismic waves from irregular interfaces[END_REF] showed that an amplification factor of 2 can easily be achieved near the flanks of hills relative to the flatter portions of a basin, and that substantial amplification and deamplification of shear-wave energy in the 1 to 10 Hz frequency range can occur over short distances. [START_REF] Bouchon | Effect of three-dimensional topography on seismic motion[END_REF] showed that shear-wave amplifications of 50% to 100% can occur in the 1.5 Hz to 20 Hz frequency band near the tops of hills, consistent with observations from the 1994 Northridge earthquake [START_REF] Spudich | Directional topographic site response at Tarzana observed in aftershocks of the 1994 Northrige, Calfornia, earthquake: Implications for mainshock motions[END_REF]. Topography may also contribute to amplification in adjacent basins, as well as contributing to differential ground motions with dilatational strains on the order of 0.003 [START_REF] Hutchings | Ground-motion variability at the Highway 14 and I-5 interchange in the northern San Fernando Valley[END_REF]. Topography has a significant influence on longer-period amplification and ground-shaking durations. [START_REF] O'connell | Influence of dip and velocity heterogeneity on reverse-and normal-faulting rupture dynamics and near-fault ground motions[END_REF] showed that topography of the San Gabriel Mountains scatters the surface waves generated by rupture on the San Andreas fault, leading to less-efficient excitation of basin-edge generated waves and natural resonances within the Los Angeles Basin and reducing peak ground velocity in portions of the basin by up to 50% for frequencies of 0.5 Hz or less. These discussions of source and propagation influences on amplitudes and phase are necessarily abbreviated and are not complete, but they do provide an indication of the challenges of ground motion estimation and of developing relatively simple, but sufficient, ground motion prediction equations based on empirical strong ground motion data.
Systematically evaluating all the source and wave propagation factors influencing site-specific ground motions is a daunting task, particularly since it's unlikely that one can know all the relevant source and propagation factors. Often, insufficient information exists to quantitatively evaluate many ground motion factors. Thus, it is useful to develop a susceptibility checklist for ground motion estimation at a particular site. The list would indicate available information for each factor on a scale ranging from ignorance to strong quantitative information and indicate how this state of information could influence ground motions at the site. The result of such a checklist would be a susceptibility rating for potential biases and errors for peak motion and duration estimates of site-specific ground motions.
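One possible shape for such a checklist, sketched as a small data structure; the factor names, rating scale, and scoring rule are illustrative assumptions rather than an established procedure:

    from dataclasses import dataclass

    # Rating scale: 0 = ignorance ... 3 = strong quantitative information
    @dataclass
    class FactorEntry:
        factor: str        # a source or propagation factor from Tables 1-4
        information: int   # 0-3 state of information
        influence: str     # expected effect on site-specific motions

    checklist = [
        FactorEntry("stress drop", 1, "factor ~2 uncertainty in high-frequency amplitudes"),
        FactorEntry("rupture directivity", 2, "possible amplification for along-strike sites"),
        FactorEntry("basin-edge waves", 0, "unknown; no 3D velocity model available"),
        FactorEntry("nonlinear soil response", 3, "borehole data constrain modulus reduction"),
    ]

    susceptibility = sum(3 - e.information for e in checklist)
    print(f"susceptibility score: {susceptibility} of {3 * len(checklist)}")

A higher score flags a site whose peak motion and duration estimates are more susceptible to unquantified biases.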
Nonlinear site response
Introduction
The near-surface geological site conditions in the upper tens of meters are one of the dominant factors controlling the amplitude and variation of strong ground motion, and the damage patterns that result from large earthquakes. It has long been known that soft sediments amplify earthquake ground motion. Superficial deposits, especially of alluvial type, are responsible for a remarkable modification of the seismic waves. The amplification of the seismic ground motion basically originates from the strong contrast between the rock and soil physical properties (e.g. [START_REF] Kramer | Geothechnical Earthquake Engineering[END_REF]). At small deformations, the soil response is linear: strain and stress are related linearly by the rigidity modulus, independently of the strain level (Hooke's law). Mainly because most of the first strong motion observations seemed to be consistent with linear elasticity, seismologists generally accepted a linear model of ground motion response to seismic excitation, even at the strong motion level. However, according to laboratory studies (e.g. [START_REF] Seed | Influence of soil conditions on ground motions during earthquakes[END_REF]), Hooke's law breaks down at larger strains, and the nonlinear relation between strain and stress may significantly affect the strong ground motion at soil sites near the source of large earthquakes. Since laboratory conditions are not the same as those in the field, several authors have tried to find field data to understand nonlinear soil behavior. In order to isolate the local site effects, the transfer function of seismic waves in soil layers has to be estimated by calculating the spectral ratio between the motion at the surface and that in the underlying soil layers. Variations of these spectral ratios between strong and weak motion have actively been searched for in order to detect nonlinearity. For example, [START_REF] Darragh | The site response of two rock and soil station pairs to strong and weak ground motion[END_REF] observed an amplification reduction at the Treasure Island soft soil site in San Francisco. [START_REF] Beresnev | Nonlinear site response -a reality?[END_REF] also reported a decrease of amplification factors for the array data in the Lotung valley (Taiwan). Such a decrease has also been observed at different Japanese sites, including the Port Island site (e.g. Satoh et al., 1997; [START_REF] Aguirre | Nonlinearity, Liquefaction, and Velocity Variation of Soft Soil Layers in Port Island, Kobe, during the Hyogo-ken Nanbu Earthquake[END_REF]). On the other hand, [START_REF] Darragh | The site response of two rock and soil station pairs to strong and weak ground motion[END_REF] also reported quasi-linear behavior for a stiff soil site over the whole range from 0.006 g to 0.43 g. According to these results, there is a need to specify the thresholds corresponding to the onset of nonlinearity and the maximum strong-motion amplification factors according to the nature and thickness of soil deposits [START_REF] Field | Nonlinear sediment response during the 1994 Northridge earthquake: observations and finite-source simulations[END_REF]. Nevertheless, the use of surface ground motion alone does not allow direct calculation of the transfer function and these variations. Rock outcrop motion is then usually used to estimate the motion at the bedrock and to calculate sediment amplification for both weak and strong motion (e.g.
Celebi et al., 1987; Singh et al., 1988; [START_REF] Darragh | The site response of two rock and soil station pairs to strong and weak ground motion[END_REF][START_REF] Field | Nonlinear ground-motion amplification by sediments during the 1994 Northridge earthquake[END_REF][START_REF] Beresnev | Nonlinearity at California generic soil sites from modeling recent strongmotion data[END_REF]). The accuracy of this approximation strongly depends on near-surface rock weathering or topographic complexity [START_REF] Steidl | What is A Reference Site? Bull[END_REF]. Moreover, the estimate of site response can be biased by any systematic difference in path effects between stations located on soil and on rock. One additional complication is due to finite-source effects such as directivity. In the case of large earthquakes, waves arriving from different locations may interfere, causing source effects to vary with site location [START_REF] Oglesby | A faulting model for the 1992 Petrolia earthquake: Can extreme ground acceleration be a source effect?[END_REF]. Since these finite-source effects strongly depend on the source size, they could mimic the observations cited as evidence for soil nonlinearity. Finally, O'Connell (1999) and Hartzell et al. (2005) showed that in the near-fault region of M > 6 earthquakes, linear wave propagation in weakly heterogeneous, random three-dimensional crustal velocity structure can mimic observed, apparently nonlinear sediment response in regions with large vertical velocity gradients that persist from near the surface to several km depth, making it difficult to separate nonlinear soil responses from other larger-scale linear wave propagation effects solely using surface ground motion recordings. Because of these difficulties, the most effective means for quantifying the modification in ground motion induced by soil sediments is to record the motion directly in boreholes that penetrate these layers. Using records from vertical arrays, it is possible to separate the site effects from source and path effects and therefore clearly identify the nonlinear behavior and changes of the soil physical properties during shaking (e.g. [START_REF] Zeghal | Analysis of Site Liquefaction Using Earthquake Records[END_REF][START_REF] Aguirre | Nonlinearity, Liquefaction, and Velocity Variation of Soft Soil Layers in Port Island, Kobe, during the Hyogo-ken Nanbu Earthquake[END_REF][START_REF] Satoh | Inversion of strain-dependent nonlinear characteristics of soils using weak and strong motions observed by borehole sites in Japan[END_REF][START_REF] Assimaki | Inverse analysis of weak and strong motion borehole array data from the Mw7.0 Sanriku-Minami earthquake[END_REF][START_REF] Assimaki | A Wavelet-based Seismogram Inversion Algorithm for the In Situ Characterization of Nonlinear Soil Behavior[END_REF][START_REF] Bonilla | Nonlinear site response evidence of K-NET and KiK-net records from the 2011 off the Pacific coast of Tohoku Earthquake[END_REF]).
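A minimal sketch of the surface-to-borehole spectral ratio that such vertical arrays provide; the synthetic records, taper, and smoothing are illustrative assumptions:

    import numpy as np

    def transfer_function(surface, borehole, dt, smooth=21):
        # Surface/borehole Fourier amplitude ratio for one event window.
        w = np.hanning(len(surface))
        S = np.abs(np.fft.rfft(surface * w))
        B = np.abs(np.fft.rfft(borehole * w))
        k = np.ones(smooth) / smooth
        S, B = np.convolve(S, k, "same"), np.convolve(B, k, "same")
        f = np.fft.rfftfreq(len(surface), dt)
        return f, S / np.maximum(B, 1e-20)

    # Synthetic demonstration: the "soil column" adds a damped 2 Hz resonance
    dt = 0.01
    rng = np.random.default_rng(1)
    bore = rng.standard_normal(4096)
    t = np.arange(0.0, 2.0, dt)
    irf = np.exp(-3.0 * t) * np.sin(2 * np.pi * 2.0 * t)
    surf = bore + 40.0 * dt * np.convolve(bore, irf, "same")
    f, tf = transfer_function(surf, bore, dt)
    print("peak amplification at %.2f Hz" % f[np.argmax(tf[1:]) + 1])

Comparing such a ratio computed from weak events with one computed from a strong event at the same station reveals the resonance shift and deamplification discussed in the next section.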
Nonlinear soil behavior
For years, it has been established in geotechnical engineering that soils behave nonlinearly. This fact comes from numerous experiments with cyclic loading of soil samples. The stress-strain curve has a hysteretic behavior, which produces a reduction of the shear modulus as well as an increase in the damping factor. Figure 4.1 shows a typical stress-strain curve with a loading phase and the consequent hysteretic behavior for the later loading process. There have been several attempts to describe mathematically the shape of this curve, and among those models the hyperbolic one is among the easiest to use because of its mathematical formulation as well as the small number of parameters necessary to describe it [START_REF] Ishihara | Soil Behavior in Earthquake Geotechnics[END_REF][START_REF] Kramer | Geothechnical Earthquake Engineering[END_REF][START_REF] Beresnev | Nonlinear site response -a reality?[END_REF].
τ(γ) = G0 γ / (1 + |G0 γ / τmax|)
where G0 is the undisturbed shear modulus, and τmax is the maximum stress that the material can support in the initial state.
G0 is also known as Gmax because it has the highest value of the shear modulus, at low strains. In order to obtain the hysteretic behavior, the model follows the so-called Masing rule, which in its basic form translates the origin and expands the horizontal and vertical axes by a factor of 2. Thus,
(τ − τr) / 2 = Fbb((γ − γr) / 2)
where (γr, τr) is the reversal point for the unloading and reloading curves, and Fbb denotes the backbone curve τ(γ) defined above. This behavior produces two changes in the elastic parameters of the soil. First, the larger the maximum strain, the lower the secant shear modulus, obtained as the slope of the line between the origin and the reversal point of the hysteresis loop. Second, hysteresis shows a loss of energy in each cycle, and as mentioned above, the energy is proportional to the area of the loop. Thus, the larger the maximum strain, the larger the damping factor. How can the changes in the elastic parameters be detected when looking at transfer functions? We know that the resonance frequencies of a soil layer are proportional to (2n + 1) β / (4H) (the fundamental frequency corresponds to n = 0), where β is the shear velocity and H is the soil thickness. Thus, if the shear modulus is reduced then the resonance frequencies are also reduced, because β = sqrt(G/ρ), where ρ is the material density. In other words, in the presence of nonlinearity the transfer function shifts the resonance frequencies toward lower frequencies. In addition, increased dissipation reduces soil amplification.
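Before turning to field examples, here is a minimal numerical sketch of the hyperbolic backbone with the factor-of-2 Masing rule, including the implied resonance-frequency shift; the soil parameters are illustrative assumptions:

    import numpy as np

    G0, tau_max = 60.0e6, 0.12e6      # initial shear modulus [Pa], strength [Pa]
    rho, H = 1900.0, 30.0             # density [kg/m^3], layer thickness [m]

    def backbone(g):                  # hyperbolic backbone curve
        return G0 * g / (1.0 + np.abs(G0 * g / tau_max))

    # Load to g_r, then unload following the factor-of-2 Masing rule
    g_r = 2.0e-3
    tau_r = backbone(g_r)
    g_unload = np.linspace(g_r, -g_r, 201)
    tau_unload = tau_r + 2.0 * backbone((g_unload - g_r) / 2.0)

    # Secant modulus at the reversal point and the implied resonance shift
    G_sec = tau_r / g_r
    f0_small = np.sqrt(G0 / rho) / (4.0 * H)     # f0 = beta / (4 H), small strain
    f0_large = np.sqrt(G_sec / rho) / (4.0 * H)  # f0 with the degraded secant modulus
    print(f"G_sec/G0 = {G_sec / G0:.2f}, f0: {f0_small:.2f} Hz -> {f0_large:.2f} Hz")

With these values the secant modulus halves at the reversal strain, shifting the fundamental frequency from about 1.5 Hz to about 1.0 Hz, the same qualitative signature seen in the data discussed next.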
Figure 4.2 shows an example of nonlinear soil behavior at station TTRH02 (VS30 = 340 m/s), a KiK-net station that recorded the MJMA 7.3 October 2000 Tottori earthquake in Japan. The orange shaded region represents the 95% confidence limits of the borehole transfer function computed using events having a PGA less than 10 cm/s2. Conversely, the solid line is the borehole transfer function obtained using the data from the Tottori mainshock. One can clearly see the difference between these two estimates of the transfer function, namely a broadband deamplification and a shift of resonance frequencies to lower values. The fact that the linear estimate is computed at the 95% confidence limits means that we are confident that this site underwent nonlinear site response at a 95% probability level. However, nonlinear effects can also be seen directly on acceleration time histories. Figure 4.3 shows acceleration records, at the surface and downhole, of the 1995 Kobe earthquake at Port Island (left) and the 1993 Kushiro-Oki earthquake at Kushiro Port (right). Both sites have relatively similar shear-wave velocity profiles, except in the first 30 meters depth. Yet, their responses are completely different. Port Island is a man-made site composed of loose sands that liquefied during the Kobe event [START_REF] Aguirre | Nonlinearity, Liquefaction, and Velocity Variation of Soft Soil Layers in Port Island, Kobe, during the Hyogo-ken Nanbu Earthquake[END_REF]. Practically no energy arrives after the S-wave train in the record at the surface. Conversely, Kushiro Port is composed of dense sands and shows, in the accelerometer located at ground level, large acceleration spikes that are even higher than their counterparts at depth. [START_REF] Iai | Response of a dense sand deposit during 1993 Kushiro-Oki Earthquake[END_REF], [START_REF] Archuleta | Direct observation of nonlinear soil response in acceleration time histories[END_REF] and [START_REF] Bonilla | Hysteretic and Dilatant Behavior of Cohesionless Soils and Their Effects on Nonlinear Site Response: Field Data Observations and Modeling[END_REF] showed that the appearance of large acceleration peaks riding on a low-frequency carrier is an indicator of soil nonlinearity known as cyclic mobility. Laboratory studies show that the physical mechanism producing this phenomenon is the dilatant nature of cohesionless soils, which introduces the partial recovery of the shear strength under cyclic loads. This recovery translates into the ability to produce large deformations followed by large and spiky shear stresses. The spikes observed in the acceleration records are directly related to these periods of dilatancy and generation of pore pressure. These examples indicate that nonlinear soil phenomena are complex. We cannot see the effects of nonlinear soil behavior on the transfer function only, but also on the acceleration time histories. This involves solving the wave equation by integrating nonlinear soil rheologies in the time domain, the subject treated in the next section.
The strain space multishear mechanism model
The multishear mechanism model [START_REF] Towhata | Modeling Soil Behavior Under Principal Axes Rotation[END_REF] is a plane-strain formulation to simulate pore pressure generation in sands under cyclic loading and undrained conditions. Iai et al. (1990a, 1990b) modified the model to account for the cyclic mobility and dilatancy of sands. This method has the following strong points: it is relatively easy to implement, and it has few parameters, which can be obtained from simple laboratory tests that include pore pressure generation.
This model represents the effect of rotation of principal stresses during cyclic behavior of anisotropically consolidated sands.
Since the theory assumes a plane strain condition, it can be used to study problems in two dimensions, e.g. embankments, quay walls, among others. In two-dimensional Cartesian coordinates and using vectorial notation, the effective stress {σ′} and strain {ϵ} can be written as
{σ′} = {σ′x σ′y τxy}^T, {ϵ} = {ϵx ϵy γxy}^T
where the superscript T represents the vector transpose operation; σ′x, σ′y, ϵx, and ϵy represent the effective normal stresses and strains in the horizontal and vertical directions; τxy and γxy are the shear stress and shear strain, respectively. The multiple mechanism model relates the stress and strain through the following incremental equation (Iai et al., 1990a, 1990b):
{dσ′} = [D] ({dϵ} − {dϵp})
where the curly brackets represent vector notation; {dϵp} is the volumetric strain increment produced by the pore pressure, and [D] is the tangential stiffness matrix given by
[D] = K {n(0)}{n(0)}^T + Σ(i = 1..I) R(i) {n(i)}{n(i)}^T
The first term is the volumetric mechanism, represented by the bulk modulus K. The second part is the shear mechanism, represented by the tangential shear moduli R(i), idealized as a collection of I springs (Figure 4.4). Each spring follows the hyperbolic stress-strain model [START_REF] Konder | A hyperbolic stress-strain formulation for sands[END_REF] during the loading and unloading hysteresis process. The shear mechanism may also be considered as a combination of pure shear and shear by differential compression.
In addition,
{n(0)} = {1 1 0}^T, {n(i)} = {cos θi −cos θi sin θi}^T, θi = (i − 1) Δθ
where Δθ = π/I is the angle between each spring, as shown in Figure 4.4. [START_REF] Towhata | Modeling Soil Behavior Under Principal Axes Rotation[END_REF] found, using laboratory data, that the pore pressure excess is correlated with the cumulative shear work produced during cyclic loading. Iai et al. (1990a, 1990b) developed a mathematical model that needs five parameters, called hereafter dilatancy parameters, to take this correlation into account. These parameters represent the initial and final phases of dilatancy, p1 and p2; the overall dilatancy, w1; and the threshold limit and ultimate limit of dilatancy, c1 and S1. These parameters are obtained by fitting laboratory data, from either undrained stress-controlled cyclic shear tests or from cyclic stress ratio curves. Details of this constitutive model can be found in Iai et al. (1990a, 1990b).
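A minimal sketch of the stiffness assembly in the equations above; the spring count, moduli, and strain increment are illustrative assumptions, and in a full implementation the tangential moduli R(i) would evolve with each spring's hyperbolic stress-strain state and the dilatancy model:

    import numpy as np

    I_springs = 12                              # number of shear springs
    K = 2.0e8                                   # bulk modulus [Pa]
    dtheta = np.pi / I_springs

    n0 = np.array([1.0, 1.0, 0.0])              # volumetric mechanism direction
    D = K * np.outer(n0, n0)

    for i in range(1, I_springs + 1):
        th = (i - 1) * dtheta
        ni = np.array([np.cos(th), -np.cos(th), np.sin(th)])
        R_i = 5.0e7 / I_springs                 # placeholder tangential modulus [Pa]
        D += R_i * np.outer(ni, ni)

    deps = np.array([1.0e-5, -1.0e-5, 2.0e-5])  # trial strain increment {deps}
    deps_p = np.zeros(3)                        # pore-pressure volumetric term (zero here)
    dsig = D @ (deps - deps_p)                  # incremental effective stress {dsig'}
    print(dsig)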
At this point, this formulation provides only the backbone curve. The hysteresis is now taken into account by using the generalized Masing rules. In fact, they are not simple rules but a state equation that describes hysteresis given a backbone curve [START_REF] Bonilla | Computation of linear and nonlinear response for near field ground motion[END_REF]. They are called generalized Masing rules because their formulation contains the Cundall-Pyke hypothesis [START_REF] Pyke | Nonlinear soil model for irregular cyclic loadings[END_REF] and the original Masing model as special cases. Furthermore, this formulation allows, by controlling the hysteresis scale factor, the reshaping of the backbone curve as suggested by [START_REF] Ishihara | Modelling of stress-strain relations of soils in cyclic loading[END_REF] so that the hysteresis path follows a prescribed damping ratio.
The generalized Masing rules
In previous sections we used the hyperbolic model to describe the stress-strain space of soil materials subjected to cyclic loads. In the hyperbolic model, the nonlinear relation can be written as
G(γ) / G0 = 1 / (1 + |γ / γref|)
where γref = τmax/G0 is the reference strain. Introducing the equation above into τ = G(γ) γ, where τ is the shear stress and γ is the shear strain, and adding the hysteresis operator, we have
τ = H(F_bb(γ))        F_bb(γ) = G_max γ / (1 + |γ/γ_r|)
where F_bb is the backbone curve, and H(·) is the hysteresis operator (application of the generalized Masing rules). Hysteresis behavior can be implemented in a phenomenological manner with the help of the Masing and extended Masing rules (Vucetic, 1990; Kramer, 1996). However, these rules are not enough to constrain the shear stress τ to values not exceeding the strength τ_max. This happens when the time behavior of the shear strain departs from simple cyclic behavior, and of course, noncyclic time behavior is common in seismic signals. The inadequacy of the Masing rules to describe the hysteretic behavior of complicated signals has already been pointed out and some remedies have been proposed (e.g. Pyke, 1979; Li et al.). The Masing rules consist of a translation and dilatation of the original law governing the strain-stress relationship. While the initial loading of the material is given by the backbone curve F_bb(γ), for the subsequent loading and unloading the strain-stress relationship is given by:
(τ − τ_j)/κ = F_bb((γ − γ_j)/κ)
where the coordinates (γ_j, τ_j) correspond to the reversal points in the strain-stress space, and κ is the so-called hysteresis scale factor (Archuleta et al., 2000). In Masing's original formulation, the hysteresis scale factor is equal to 2. A first extension of the Masing rules can be obtained by releasing the constraint κ = 2. This parameter controls the shape of the loop in the stress-strain space (Bonilla et al., 1998). However, numerical simulations suggest spurious behavior of κ for irregular loading and unloading processes even when extended Masing rules are used. A further generalization of the Masing rules is obtained by choosing the value of κ in such a way as to assure that the stress path, at a given unloading or reloading, will cross the backbone curve in the strain-stress space and remain bounded by the maximum strength of the material τ_max. This can be achieved by imposing the following condition,
lim_{γ→γ*} [τ_j + κ_j F_bb((γ − γ_j)/κ_j)] = sign(γ̇) τ_max        γ_m ≤ |γ*| ≤ ∞
where γ* is the specified finite or infinite strain condition; γ_j and κ_j correspond to the turning point and the hysteresis scale factor at the jth unloading or reloading; and sign(γ̇) is the sign of the strain rate. Thus,
κ_j = lim_{γ→γ*} [sign(γ̇) τ_max − τ_j] / F_bb((γ − γ_j)/κ_j)        where γ_m = F_bb^{-1}(|τ_j|)
and (γ_j, τ_j) is the turning point pair at the jth reversal. Replacing the functional form of the backbone (the hyperbolic model) and after some algebra we have,
κ_j = (sign(γ̇) τ_max − τ_j) |γ* − γ_j| / { γ_r [G_max |γ* − γ_j| − (sign(γ̇) τ_max − τ_j)] }        γ_m ≤ |γ*| ≤ ∞
The equation above represents a general constraint on the hysteresis scale factor, so that the computed stress does not exceed τ_max for the chosen maximum deformation γ* that the material is thought to resist. The limit γ* → ∞ corresponds to the Cundall-Pyke hypothesis (Pyke, 1979), while γ* → γ_m is similar to some extent to a method discussed in Li et al. In the following section, we will see an example of application of this soil constitutive model (Towhata and Ishihara, 1985; Iai et al., 1990a, 1990b) together with the generalized Masing hysteresis operator (Bonilla et al., 1998).
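To illustrate how the hysteresis scale factor operates in practice, the sketch below propagates an irregular strain history through the hyperbolic backbone using the Cundall-Pyke limit (γ* → ∞) of the constraint above, for which κ_j = |sign(γ̇) τ_max − τ_j| / τ_max. This is a minimal sketch under that single limiting assumption; the naming and the simple reversal detection are our own choices.

```python
import numpy as np

def backbone(g, G_max, gamma_r):
    """Hyperbolic backbone curve F_bb(gamma)."""
    return G_max * g / (1.0 + np.abs(g / gamma_r))

def masing_stress(strain, G_max, tau_max):
    """Shear stress history under generalized Masing rules with the
    Cundall-Pyke scale factor (the gamma* -> infinity limit)."""
    gamma_r = tau_max / G_max                 # reference strain
    tau = np.empty_like(strain)
    tau[0] = backbone(strain[0], G_max, gamma_r)
    g_j, t_j = 0.0, 0.0                       # turning point (gamma_j, tau_j)
    kappa = 1.0                               # initial loading: backbone itself
    for k in range(1, len(strain)):
        rate = strain[k] - strain[k - 1]
        prev = strain[k - 1] - strain[k - 2] if k > 1 else rate
        if rate * prev < 0.0:                 # strain rate changed sign: reversal
            g_j, t_j = strain[k - 1], tau[k - 1]
            kappa = abs(np.sign(rate) * tau_max - t_j) / tau_max
        tau[k] = t_j + kappa * backbone((strain[k] - g_j) / kappa,
                                        G_max, gamma_r)
    return tau

# Irregular (decaying) cyclic loading: the stress stays bounded by tau_max
t = np.linspace(0.0, 10.0, 2001)
strain = 1e-3 * np.sin(np.pi * t) * np.exp(-0.1 * t)
tau = masing_stress(strain, G_max=60e6, tau_max=60e3)
assert np.all(np.abs(tau) < 60e3)
```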
Analysis of the 1987 Superstition Hills Earthquake
On 24 November 1987, the M_L 6.6 Superstition Hills earthquake was recorded at the Wildlife Refuge station. This site is located in southern California in the seismically active Imperial Valley. In 1982 it was instrumented by the U.S. Geological Survey with downhole and surface accelerometers and piezometers to record ground motions and pore water pressures during earthquakes (Holzer et al., 1989). The Wildlife site is located in the flood plain of the Alamo River, about 20 m from the river's western bank. In situ investigations have shown that the site stratigraphy consists of a shallow silt layer approximately 2.5 m thick underlain by a 4.3 m thick layer of loose silty sand, which is in turn underlain by a stiff to very stiff clay. The water table fluctuates around 2 m depth (Matasovic and Vucetic). This site provides one of the few direct in situ observations of nonlinear soil response in borehole data. The Wildlife Refuge liquefaction array recorded acceleration at the surface and at 7.5 m depth, and pore pressure on six piezometers at various depths (Holzer et al., 1989). The acceleration time histories at GL-0 m and GL-7.5 m for the Superstition Hills event are shown in Figure 4.5 (left). Note how the acceleration changes abruptly in the record at GL-0 m after the S wave. Several sharp peaks are observed; they are very close to the peak acceleration for the whole record. In addition, these peaks have lower frequency content than the earlier part of the record (the beginning of the S wave, for instance). Zeghal and Elgamal (1994) used the Superstition Hills earthquakes to estimate the stress and strain from borehole acceleration recordings. They approximated the shear stress τ(h, t) at depth h, and the mean shear strain γ̄ between the two sensors, as follows,
τ(h, t) = ½ ρ h [a(0, t) + a(h, t)]
a(h, t) = a(0, t) + (h/H) [a(H, t) − a(0, t)]
γ̄(t) = [u(h, t) − u(0, t)] / h
where a(0, t) is the horizontal acceleration at the ground surface; a(h, t) is the acceleration at depth h (evaluated through linear interpolation); a(H, t) is the acceleration at the bottom of the layer; u(h, t) and u(0, t) are the displacement histories obtained by integrating twice the corresponding acceleration histories; H is the thickness of the layer; and ρ is the density. Using this method, the stress and strain at GL-2.9 m were computed (Figure 4.5). This figure clearly shows the large nonlinearity developed during the Superstition Hills event. The stress-strain loops form an S-shape and the strains are as large as 1.5%. At this depth there is a piezometer (P5 according to Holzer et al., 1989). With this information it is also possible to reconstruct the stress path (bottom right of Figure 4.5). Note that some of the pore pressure pulses are correlated with episodes of high shear stress development. The stress path shows a strong contractive phase followed by dilatancy when the effective mean stress is close to 15 kPa.
Fig. 4.5. Acceleration and pore pressure time histories recorded at the Wildlife Refuge station during the 1987 Superstition Hills earthquake (left). Stress and strain time histories computed according to Zeghal and Elgamal (1994), stress-strain loops, and reconstructed stress path history (right).
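A minimal numerical sketch of this stress-strain reconstruction is given below, assuming evenly sampled surface and downhole accelerograms. The double integration uses simple mean removal, whereas real processing would also band-pass filter the records; the synthetic input records are placeholders.

```python
import numpy as np

def displacement(acc, dt):
    """Double integration of acceleration with simple mean removal
    (real processing would also band-pass filter the records)."""
    acc = acc - acc.mean()
    vel = np.cumsum(acc) * dt
    vel -= vel.mean()
    return np.cumsum(vel) * dt

def stress_strain(a_surf, a_bot, dt, h, H, rho):
    """First-order shear stress at depth h and mean shear strain between
    the surface and depth h, following Zeghal and Elgamal (1994)."""
    a_h = a_surf + (h / H) * (a_bot - a_surf)     # linear interpolation
    tau = 0.5 * rho * h * (a_surf + a_h)          # shear stress at depth h
    u_surf = displacement(a_surf, dt)
    u_h = displacement(a_h, dt)
    gamma = (u_h - u_surf) / h                    # mean shear strain
    return tau, gamma

# Example with synthetic records: rho in kg/m^3, depths in m, acc in m/s^2
dt, rho, h, H = 0.005, 1600.0, 2.9, 7.5
t = np.arange(0.0, 20.0, dt)
a_surf = 1.5 * np.sin(2 * np.pi * 1.0 * t)
a_bot = 1.0 * np.sin(2 * np.pi * 1.0 * t - 0.3)
tau, gamma = stress_strain(a_surf, a_bot, dt, h, H, rho)
```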
Using the stress and strain time histories at GL-2.9 m computed earlier, Bonilla et al. (2005) performed a trial-and-error procedure in order to obtain the dilatancy parameters that best reproduce these observations. Figure 4.6 compares the computed shear stress time history with the one derived from observations at GL-2.9 m. The stress-strain hysteresis loops are also shown. We observe that the shear stress is well simulated; the stress-strain space also shows the same dilatant behavior (S-shaped hysteresis loops) as the observed data.
Once the model parameters were determined, they proceeded to compute the acceleration time history at GL-0 m using the north-south record at GL-7.5 m as input motion.
NOAH2D 2D P-SV analyses of the maximum observed peak acceleration
Current nonlinear formulations generally reproduce all first-order aspects of nonlinear soil response. To illustrate this point, we present a nonlinear analysis of the largest peak ground acceleration recorded to date, > 4 g, with a peak vertical acceleration of 3.8 g (Aoi et al., 2008). Aoi et al. (2008) analyzed ground motions recorded by the Kiban-Kyoshin Network (KiK-net) during the M 6.9 2008 Iwate-Miyagi earthquake, which included one soil-surface site that recorded a vertical acceleration of 3.8 g (station IWTH25). The horizontal borehole and surface motions reported in Aoi et al. (2008) for station IWTH25 are generally consistent with the soil reducing surface horizontal accelerations at high frequencies, as is widely observed at soil sites (Field et al., 1997; Archuleta, 1998; Seed and Idriss, 1970; Beresnev et al., 1995, 2002). The nonlinear code NOAH2D (Bonilla et al., 2006) uses a plane-strain model (Iai et al., 1990a, 1990b). In this section we show that this model can explain the first-order soil responses observed at station IWTH25 using a fairly generic approximation of the site's nonlinear soil properties. The P-SV nonlinear rheology developed by Iai et al. (1990a, 1990b) was used in the Bonilla et al. (2006) implementation of 2D nonlinear wave propagation. The constitutive equation implemented corresponds to the strain space multishear mechanism model developed by Towhata and Ishihara (1985) and Iai et al. (1990a, 1990b), with its backbone characterized by the hyperbolic equation (Hardin and Drnevich, 1972). The multishear mechanism model is a plane strain formulation to simulate cyclic mobility of sands under undrained conditions. In the calculations of this study, a total stress rheology (pore pressure was ignored) was used in the second-order staggered-grid P-SV plane-strain finite difference code. Perfectly matched layer (PML) absorbing boundary conditions were used to approximate elastic (transmitting) boundary conditions at the bottom and side edges, using an implementation adapted for finite differences from Ma and Liu (2006). Linear hysteretic damping (Q) was implemented using the method of Liu and Archuleta (2006). The horizontal- and vertical-component plane waves are inserted in the linear viscoelastic portion of the 2D grid with a user-selectable range of incident angles. KiK-net, in Japan (Fujiwara et al., 2005), has recorded numerous earthquakes with ground motion data recorded at the surface and at depth in underlying rock and soil. We use the recording at KiK-net station IWTH25, where a 3.8 g peak vertical acceleration was recorded (Aoi et al., 2008). Analyses of the combined downhole and surface ground motions from IWTH25 provide an opportunity to evaluate several strategies to estimate vertical ground motions, since a P- and S-wave velocity profile is available to the bottom of the borehole at 260 m (Aoi et al., 2008).
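As a much-reduced illustration of the staggered-grid velocity-stress scheme underlying such finite difference codes, the sketch below advances a 1D SH (shear) column by explicit time steps. It is a didactic analogue only, with no PML, damping, or nonlinearity; all names and parameter values are our own choices.

```python
import numpy as np

def sh_1d_step(v, tau, rho, mu, dz, dt):
    """One explicit time step of a 1D staggered-grid velocity-stress
    SH scheme (stress tau lives midway between velocity nodes v)."""
    tau += dt * mu * np.diff(v) / dz           # stress update from dv/dz
    v[1:-1] += dt * np.diff(tau) / (rho * dz)  # interior velocity update
    v[0] += dt * tau[0] / (0.5 * rho * dz)     # free surface (zero traction above)
    return v, tau                              # bottom node is driven externally

# 100 m column, Vs = 200 m/s; dt respects the CFL limit dz/Vs = 5 ms
dz, dt, rho, vs = 1.0, 1.0e-3, 1800.0, 200.0
mu = rho * vs ** 2
v, tau = np.zeros(101), np.zeros(100)
for n in range(500):
    v[-1] = 0.01 * np.exp(-((n * dt - 0.2) / 0.05) ** 2)  # input velocity pulse
    v, tau = sh_1d_step(v, tau, rho, mu, dz, dt)
```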
Station IWTH25 is located in a region of rugged topography adjacent to a high-gradient stream channel on a fluvial terrace. The rugged topography reflects the station's hanging-wall location relative to the reverse fault. Station IWTH25 is located near a region of large slip, along strike and updip of the hypocenter. Consequently, IWTH25 is subjected to significant rupture directivity and near-fault radiation associated with strong gradients of slip and rupture velocity on the portions of the fault close to the station (Miyazaki et al., 2009). The IWTH25 ground motion has been of particular interest because of the extreme peak vertical acceleration (3.8 g) and the peculiar asymmetric amplitude distribution of the vertical accelerations (Aoi et al., 2008; O'Connell, 2008; Hada et al., 2009; Miyazaki et al., 2009; Yamada et al., 2009a); the upward vertical acceleration is much larger than the downward one, although in the borehole record at a depth of 260 m at the same site the upward and downward accelerations have symmetric amplitudes (Aoi et al., 2008). The geologic environment at station IWTH25 will clearly produce lateral changes in shallow velocity structure. In particular, the hanging-wall uplift associated with repeated faulting similar to the 2008 earthquake will produce a series of uplifted terraces adjacent to the stream next to station IWTH25, with the lowest shallow velocities found on the lowest terrace adjacent to the stream, where station IWTH25 is located. The width of the stream and lowest terrace is about 100 m near station IWTH25. We constructed a 2D velocity model by including a region 100 m wide with a surface Vs=300 m/s layer 2 m deep, and then extended Vs=500 m/s to the free surface in the region surrounding the 100-m-wide low-velocity surface layer. Station IWTH25 is assumed to be located relatively close (4-5 m) to the lateral velocity change within the lowest-velocity portion of the 2D velocity model, because the geologic log from station IWTH25 indicates only 1-2 m of young terrace deposits (Aoi et al., 2008), but the youngest terrace probably extends across and encompasses the stream channels and their margins. The dominant large-amplitude arrivals in the borehole motions are associated with large slip regions below and just south of station IWTH25. Consequently, a plane wave incident at 80 degrees from the south was used to propagate the borehole motion to the surface in the 2D model. It is important to mention some factors that are not explicitly accounted for in the approach of Bonilla et al. (2006). Goldberg (1960) was among the first to theoretically show the interaction between P and S waves in an elastic medium for large-amplitude seismic waves. His solution yielded the following results: (1) P- and S-waves couple; (2) S waves induce P waves; (3) the induced waves have a dominant frequency twice the S-wave frequency; (4) the induced P waves propagate ahead with the P-wave velocity.
Ground motion prediction equations based on empirical data
Ground motion observations are the result of a long history of instrument development and deployment, instigated primarily by earthquake engineers, to acquire data to develop an empirical foundation to understand and predict earthquake ground motions for use in the design of engineered structures. Strong motion instruments usually record time histories of ground acceleration that can be post-processed to estimate ground velocities and displacements. A particularly useful derived quantity for engineering analyses is the response spectrum, the maximum amplitude of the modestly damped resonant response of a single-degree-of-freedom oscillator (an idealization of simple building response) to a particular ground motion time history, as a function of natural period or natural frequency. While peak accelerations are always of concern for engineering analyses, peak ground velocity is now recognized as a better indicator of damage potential for large structures than peak ground acceleration (EERI, 1994). Engineering analyses often consist of linear approaches to determine whether structures reach their linear strength limits. Ground motion estimation quantities required for linear analyses are peak accelerations and velocities and associated response spectra. Nonlinear engineering analyses require estimates of future acceleration time histories. The discussion presented in this section focuses on empirical ground motion parameter estimation methods. Ground motion estimation methods required for nonlinear engineering analyses are presented in subsequent sections.
Historically, the estimation of ground motion parameters such as peak acceleration, velocity, and displacement, response spectral ordinates, and duration has been based on regression relationships developed using strong motion observations. These ground motion prediction equations (GMPEs) strive to interpolate and extrapolate existing ground motion measurements to serve the needs of design for seismic loads.
Functional form of GMPEs for regression
In their simplest form, these empirical GMPEs predict peak ground motions based on a limited parametric description of earthquake and site characteristics. Peak ground motion amplitudes generally increase with increasing magnitude up to a threshold magnitude range where peak accelerations saturate, i.e., only slightly increase or stay nearly constant above the threshold magnitude range (Campbell, 1981; Boore et al., 1993). Similarly, observed peak ground motion amplitudes decrease with increasing distance from the earthquake fault, but saturate at close distances to faults such that the decrease in amplitudes with increasing distance is small within several km of faults. These GMPEs relate specific ground motion parameters to earthquake magnitude, reduction (attenuation) of ground motion amplitudes with increasing distance from the fault (geometric spreading), and local site characteristics, using either site classification schemes or a range of quantitative measures of shallow to deeper velocity averages or thresholds. The 30-m-average shear-wave velocity (Vs30) is most commonly used to account for first-order influences of shallow site conditions. Depths to shear-wave velocities of 1.0, 1.5, and 2.5 km/s (Z1.0 in Abrahamson and Silva (2008) and Chiou and Youngs (2008), Z1.5 in Choi et al. (2005) and Day et al. (2008), and Z2.5 in Campbell and Bozorgnia (2008), respectively) are sometimes used to account for influences of larger scale crustal velocity structure on ground motions. The "Next Generation Attenuation" (NGA) Project was a collaborative research program with the objective of developing updated GMPEs (attenuation relationships) for the western U.S. and other worldwide active shallow tectonic regions. These relationships have been widely reviewed and applied in a number of settings (Stafford et al., 2008; Shoja-Taheri et al.). Five sets of updated GMPEs were developed by teams working independently but interacting throughout the NGA development process. The individual teams all had previous experience in the development of GMPEs, and all had access to a comprehensive, updated ground motion database that had been consistently processed (Chiou et al., 2008). Each team was free to identify portions of the database to either include or exclude from the development process. A total of 3551 recordings were included in the PEER-NGA database. The number of records actually used by the developers varied from 942 to 2754.
The individual GMPEs are described in Abrahamson and Silva (2008), Boore and Atkinson (2008), Campbell and Bozorgnia (2008), Chiou and Youngs (2008), and Idriss (2008). These models are referred to as AS08, BA08, CB08, CY08, and I08, respectively, below. The NGA GMPEs developed equations for the orientation-independent average horizontal component of ground motions (Boore et al., 2006).
The NGA GMPEs account for these ground motion factors using the general form,
ln Y = A_1 + A_2 M + A_3 (M − M_REF)^N + A_4 ln(R + C_SOURCE) + A_5 R + A_6 F_source + A_7 F_site + A_8 F_HW + A_9 F_main        (9)

and,

σ_lnY = A_10(M, Vs30)        (10)
where Y is the ground motion parameter of interest (peak acceleration, velocity, displacement, response spectral ordinate, etc.); M is magnitude; R is a distance measure; M_REF and C_SOURCE are magnitude and distance terms that define changes in amplitude scaling; and F_source, F_site, F_HW, and F_main are indicator variables of source type, site type, hanging-wall geometry, and mainshock discriminator. The A_i are coefficients to be determined by the regression. Not all of the five NGA GMPEs utilize all of these indicator variables. The σ_lnY term represents the estimate of the period-dependent standard deviation of ln Y at the magnitude and distance of interest. The NGA models use different source parameters and distance measures. Some of the models include the depth to top of rupture (TOR) as a source parameter. This choice was partially motivated by research (Somerville and Pitarka, 2006) suggesting a systematic difference in ground motion, with buried ruptures producing larger short-period ground motions than earthquakes with surface rupture. Large reverse-slip earthquakes tend to be buried ruptures more often than large strike-slip earthquakes, so the effect of buried ruptures may be partially incorporated in the style-of-faulting factor. Not all the NGA developers found the inclusion of TOR to be a statistically significant factor. All of the models except I08 use the time-averaged S-wave velocity in the top 30 m of a site, Vs30, as the primary site response parameter. I08 is defined only for a reference rock outcrop with Vs30 = 450-900 m/s. Approximately two thirds of the recordings in the PEER-NGA database were obtained at sites without measured values of shear-wave velocity. Empirical correlations between the surface geology and Vs30 were developed (Chiou et al., 2008) and used with assessments of the surface geology to estimate values of Vs30 at the sites without measured velocities. The implications of the use of estimated Vs30 on the standard deviation (σ_T) were evaluated and included by AS08.
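A schematic evaluation of the general form in equations (9)-(10) is sketched below. The coefficients and indicator values are placeholders chosen only to produce plausible magnitude and distance scaling; they do not reproduce any published NGA model.

```python
import math

def ln_median(M, R, F, A, M_ref=6.0, C_source=5.0, N=2):
    """Median ln(Y) from the generic NGA-style form of equation (9).
    A is a dict of placeholder coefficients A1..A9; F holds the
    indicator variables (source, site, hanging wall, mainshock)."""
    return (A[1] + A[2] * M + A[3] * (M - M_ref) ** N
            + A[4] * math.log(R + C_source) + A[5] * R
            + A[6] * F["source"] + A[7] * F["site"]
            + A[8] * F["HW"] + A[9] * F["main"])

# Placeholder coefficients: magnitude scaling with saturation, geometric
# spreading, and weak anelastic attenuation; purely illustrative values.
A = {1: -1.0, 2: 0.5, 3: -0.1, 4: -1.0, 5: -0.003,
     6: 0.1, 7: 0.2, 8: 0.1, 9: 0.0}
F = {"source": 1, "site": 0, "HW": 0, "main": 1}
pga_g = math.exp(ln_median(M=7.0, R=10.0, F=F, A=A))
```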
All of the relationships that model site response incorporate nonlinear site effects. Two different metrics for the strength of shaking are used to quantify nonlinear site response effects. AS08, BA08, and CB08 use the median estimate of PGA on a reference rock outcrop in the nonlinear site response term. CY08 uses the median estimate of spectral acceleration on a reference rock outcrop at the period of interest. The definition of "reference rock" varies from Vs30 = 535 m/s (I08) to Vs30 = 1130 m/s (CY08). A very small fraction of the strong-motion data in the PEER-NGA data set was obtained at sites with Vs30 > 900 m/s. Depths to shear-wave velocities of 1.0, 1.5, and 2.5 km/s (Z1.0 in Abrahamson and Silva (2008) and Chiou and Youngs (2008), Z1.5 in Choi et al. (2005) and Day et al. (2008), and Z2.5 in Campbell and Bozorgnia (2008), respectively) are sometimes used to account for influences of larger scale crustal velocity structure on ground motions. The implications of the methodology chosen to represent larger scale crustal velocity structure on ground motions are discussed in more detail below.
The standard deviation or aleatory variability, often denoted as sigma (σ_T), exerts a very strong influence on the results of probabilistic seismic hazard analysis (PSHA) (Bommer and Abrahamson, 2006). For this reason it is important to note that the total aleatory uncertainties, as well as the intra- and inter-event uncertainties, are systematically larger for the new NGA equations relative to previous relationships (Boore et al., 1997; Sadigh et al., 1997; Campbell, 1997). Three of the NGA models incorporate a magnitude dependence in the standard deviation. For magnitudes near 7, the five NGA models have similar standard deviations. However, for M < 5.5 there is a large difference in the standard deviations, with the three magnitude-dependent models exhibiting much larger standard deviations (σ_T > 0.7) than the magnitude-independent models (σ_T ~ 0.54). The three models that include a magnitude-dependent standard deviation (AS08, CY08, and I08) all included aftershocks, whereas the two models that used a magnitude-independent standard deviation (BA08 and CB08) excluded them. Including aftershocks greatly increases the number of small-magnitude earthquakes. However, there is a resulting trade-off of significantly larger variability in predicted ground motions than if only large-magnitude mainshocks are used. Significant differences in the standard deviations are also noted for soil sites at short distances; this is most likely due to inclusion or exclusion of nonlinear site effects in the standard deviation.
In general, the NGA models predict similar median values (within a factor of ~1.5) for vertical strike-slip earthquakes with 5.5 < M < 7.5. The largest differences are for small magnitudes (M < 5.5), for very large magnitudes (M = 8), and for sites located over the hanging wall of dipping faults (Abrahamson et al., 2008). As more data have become available to the GMPE developers, the number of coefficients in the relationships has increased significantly (> 20 in some cases). However, the aleatory variability values (σ_T) have not decreased through time (J. Bommer, pers. comm.). Since empirical GMPEs, including the NGA GMPEs, are by necessity somewhat generic compared to the wide range of seismic source, crustal velocity structure, and site conditions encountered in engineering applications, there are cases where application of empirical GMPEs is difficult and, most importantly, more uncertain. In the context of PSHA, these additional epistemic (knowledge) uncertainties, when quantified, are naturally incorporated into the probabilistic estimation of ground motion parameters. We present two situations of engineering interest where the application of empirical GMPEs is challenging, to illustrate the difficulties and suggest a path forward in the ongoing process to update and improve empirical GMPEs.
Application of NGA GMPEs for near-fault Vs30 > 900 m/s sites
Independent analyses of the performance of the NGA GMPEs with post-NGA earthquake ground motion recordings demonstrate that use of measured site Vs30 characteristics leads to greatly improved ground motion predictions, with lower performance for sites where Vs30 is inferred instead of directly measured (Kaklamanos and Baise, 2011). Thus, the use of Vs30 represents a significant improvement over previous generations of GMPEs that use a simple qualitative site classification scheme. Kaklamanos and Baise (2011) suggest that development of better site characteristics than Vs30 may also improve the prediction accuracy of GMPEs. In this section we illustrate the challenges presented by the use of Vs30 in the NGA GMPE regressions and the application of the NGA GMPEs to "rock" sites.
It is becoming more common to need ground motion estimates for "rock" site conditions to specify inputs for engineering analyses that include both structures and shallow lower-velocity materials within the analysis model. In this section we consider the challenges in estimating ground motions for site conditions of Vs30 > 900 m/s close to strike-slip faults. The problem is challenging for empirical GMPEs because most of the available recordings of near-fault strike-slip ground motions are from sites with Vs30 on the order of 300 m/s. The NGA GMPEs that implement Vs30 used empirical and/or synthetic amplification functions that involve modifying the observed ground motions prior to regression. In this section we discuss some of the challenges of this approach as it applies to estimating ground motions at rock (Vs30 > 900 m/s) sites that are typical of foundation conditions for many large and/or deeply embedded structures. The four NGA GMPEs that implement Vs30 used deterministic ("constrained") amplification coefficients to remap the observed near-fault strike-slip strong motion data, which have an average Vs30 = 299 m/s (Table 5), prior to regression. In contrast, Boore et al. (1997) applied nonlinear multi-stage regression using the observed data directly; the observed ground motion values were employed in their regression with no remapping of values due to site characteristics. Boore and Atkinson (2008) used the Choi and Stewart (2005) linear amplification coefficients to remap observed response spectra to a reference Vs30 = 760 m/s. Campbell and Bozorgnia (2008) used 1D nonlinear soil amplification simulation results of Walling et al. (2008) to deterministically fix nonlinear amplification and remap all response spectra with Vs30 < 400-1086 m/s, depending on period, to create the response spectral "data" input into the nonlinear multi-stage regression. Abrahamson and Silva (2008) use an approach similar to Campbell and Bozorgnia (2008). Chiou and Youngs (2008) do not explicitly specify how the coefficients for linear and nonlinear amplification were constrained or obtained.
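The remapping step common to these approaches can be sketched as follows: observed spectra from a soft site are divided by a deterministic amplification function to synthesize reference-rock "data" prior to regression. The power-law amplification shape used here is a generic stand-in, not the Choi and Stewart (2005) or Walling et al. (2008) coefficients.

```python
import numpy as np

def remap_psa(periods, psa_obs, vs30_site, vs30_ref, b):
    """Remap observed response spectra to a reference Vs30 with a
    deterministic linear amplification model of the form
    amp(T) = (vs30_site / vs30_ref) ** b(T), so that
    PSA_ref = PSA_obs / amp(T); b(T) is a period-dependent slope."""
    amp = (vs30_site / vs30_ref) ** b(periods)
    return psa_obs / amp

# Illustrative slope: strongest amplification at intermediate periods,
# so the remapping shifts the apparent spectral peak to shorter periods.
b = lambda T: -0.4 * np.exp(-0.5 * np.log(T / 1.0) ** 2)
T = np.logspace(-2, 1, 50)                           # 0.01 s to 10 s
psa_site = np.exp(-0.5 * np.log(T / 0.65) ** 2)      # peak near 0.65 s
psa_ref = remap_psa(T, psa_site, vs30_site=300.0, vs30_ref=915.0, b=b)
```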
Thus, Boore and Atkinson (2008) remap observed response spectra prior to regression using the linear coefficients from Choi and Stewart (2005); Campbell and Bozorgnia (2008) and Abrahamson and Silva (2008) remap observed response spectra prior to regression using the nonlinear coefficients from Walling et al. (2008); and it is not clear what Chiou and Youngs (2008) did. We use Boore and Atkinson (2008) and Campbell and Bozorgnia (2008) to illustrate how the observed response spectral data for sites with Vs30 = 300 m/s are changed to create the actual "data" used in the regression to estimate Vs30 = 915 m/s ground motions. It is instructive to compare the approaches and the resulting near-fault ground motion predictions. For Boore and Atkinson (2008), amplification normalized by the GMPE's longest-period amplification (10 s) is used in Fig. 5.1 to clearly illustrate the scale of the a priori deterministic linear amplification as a function of period. The a priori deterministic linear-amplification normalization (Fig. 5.1a) takes the original median near-fault response spectra, which have a peak amplitude at about 0.65 s (Fig. 5.1), and creates response spectra with peak amplitude at 0.2 s that are used as the "observed data" (red curve in Fig. 5.1b) in the nonlinear multi-stage GMPE regression. For Campbell and Bozorgnia (2008), the nonlinear Vs30 amplification coefficients are fixed and create the deterministic nonlinear amplification function (Fig. 5.2a) that is always applied to Vs30 < 400 m/s PSA at all periods to create the "data" (red curve in Fig. 5.2b) used in the nonlinear multi-stage GMPE regression. In the case of nonlinear deterministic amplification it is necessary to specify a reference PGA. We use 0.45 g for the reference PGA for illustration, since this is close to the median ground motion case for sites about 2 km from strike-slip faults with M > 6; use of a higher reference PGA would increase the nonlinear amplification in Fig. 5.2a. The use of a single deterministic amplification function for Vs30, whether linear or nonlinear, assumes that there is a one-to-one deterministic mapping of period-dependent amplification to Vs30, which Idriss (2008) suggests is not likely; a single Vs30 can be associated with a wide variety of amplification functions.
Further, in the case of nonlinear amplification (Campbell and Bozorgnia, 2008, and Abrahamson and Silva, 2008), a single deterministic nonlinear amplification function is used to account for modulus reduction and damping that vary widely as a function of soil materials, as discussed in Section 4 and in Bonilla et al. (2005). The net effect is that the observed response spectra were remapped (Chiou et al., 2008) to have a peak acceleration response at about 0.2 s prior to regression. Thus, in hindsight it may not be a surprise that the NGA response spectra maintain a strong bias to peak at 0.2 s period, which in large part is the result of the deterministic amplification modifications to the observed data prior to nonlinear multi-stage regression. What is remarkable is that all four NGA GMPEs that implement Vs30, and Idriss (2008), predict that spectral accelerations normalized by peak ground acceleration always peak at about 0.2 s, virtually independent of magnitude for M > 6; the overall shape of the Boore and Atkinson (2008) response spectra normalized by peak ground acceleration in Fig. 5.3a is representative of all NGA GMPE response spectral shapes in terms of overall spectral shape and the 0.2 s period of maximum response. Boore et al. (1997) obtained a quite different result, with the period of peak spectral amplitude shifting to longer periods as magnitude increases above M 6.6 (Fig. 5.3b). The few near-fault data from sites with Vs30 > 900 m/s (Table 6 and Fig. 5.3c) support a shift of the spectral peak to longer periods with increasing magnitude. Ground motion acceleration at high frequency scales in proportion to dynamic stress drop (Boore, 1983). Average slip is proportional to the product of average dynamic stress drop and average rise time. Dynamic stress drop averaged over the entire fault plane is generally found to remain relatively constant with magnitude (Aki, 1988; Shaw, 2009). Thus, as average slip increases with magnitude (Somerville et al., 1999; Mai and Beroza, 2000; Mai et al., 2006), average rise time must also increase with increasing magnitude. Somerville (2003) notes that the period of the dominant-amplitude near-fault motions is related to source parameters such as the rise time and the fault dimensions, which generally increase with magnitude.
Mai et al. (2006) present an analysis of the scaling of stress drop with seismic moment and find a strong increase of maximum stress drop on the fault plane as a function of increasing moment. In contrast, average stress drop over the entire fault plane at most only slightly increases with increasing moment; the substantial scatter of average stress drop values in Figure 1 of Mai et al. (2006) is consistent with an average stress drop that is constant with moment. The Mai et al. (2006) results for maximum stress drop are consistent with first-order constraints on stochastic aspects of seismic source properties (Andrews, 1981; Boore, 1983; Frankel, 1991). As fault area increases, the probability of observing a larger stress drop somewhere on the fault plane increases, since stress drop must exhibit correlated-random variability over the fault to explain the first-order observations of seismic source properties inferred from ground motion recordings, such as the ω² spectral shape (Andrews, 1981; Frankel, 1991). However, for the moment range (6.5 < M < 7.5) that dominates the hazard at many sites, the stress drop averaged over the entire fault plane is generally found to remain relatively constant with magnitude (Aki, 1988; Shaw, 2009), thus requiring average rise time to increase with increasing magnitude. These fundamental seismological constraints, derived from analyses of many earthquakes, require that the period that experiences peak response spectral amplitudes should increase with magnitude above some threshold magnitude. The results of the Boore et al. (1997) GMPEs suggest the threshold magnitude is about M 6.6 (Fig. 5.3b). That all five NGA GMPEs predict invariance of the period of peak spectral response amplitude from M > 6.6 to M 8.0 (examples from M 6.6 to M 7.4 are shown in Fig. 5.3a) implies that stress drop increases strongly with increasing magnitude, which is inconsistent with current knowledge of seismic source properties. In contrast, the Boore et al. (1997) response-spectral magnitude-period-dependent results are more consistent with available seismological constraints. It is important to understand why.
Boore et al. (1997) implement Vs30 site factors in a quite different manner than the four NGA GMPEs that use Vs30: they applied nonlinear multi-stage regression using the observed data directly, with no deterministic remapping of data by Vs30 prior to regression. Except for their deterministic treatment of Vs30, Boore and Atkinson (2008) use a regression approach similar to Boore et al. (1997). Since Boore and Atkinson (2008) regress period-by-period, the linear site-response remapping (Fig. 5.1a) effectively swamps any signal associated with a period shift with increasing magnitude observed by Boore et al. (1997); a nonlinear regression will operate on the largest signals. The deterministic linear amplification function in Boore and Atkinson (2008) becomes a very large signal (Fig. 5.1a) when operating on data from Vs30 = 300 m/s sites. The other NGA GMPE regressions normalize response spectra by peak ground acceleration prior to regression, which Boore et al. (1997) suggest tends to reduce resolution of the period-amplitude response-spectra variations in multi-stage regression. Figs. 5.1 and 5.2 illustrate why the NGA GMPEs predict PSA shapes that barely change with magnitude (Fig. 5.3a) and why the NGA GMPEs do not match the first-order characteristics of M > 6.6 near-fault PSA (Fig. 5.3c). It might simply be true that once nonlinear amplification occurs it is impossible to resolve differences between period shifts associated with source processes and site responses. Yet, implicitly, the NGA GMPE nonlinear regressions assume resolution of all possible response-spectral shape changes as a function of magnitude using deterministic site response amplification functions, an assumption Idriss (2008) does not find credible. In contrast, Boore et al. (1997) used the actual unmodified response spectral data in their multi-stage regression and obtained results compatible with existing seismological constraints. Unfortunately, this leaves us in a bit of a conundrum based on GMPE grading criteria suggested by Bommer et al. (2010) and Kaklamanos and Baise (2011), which clearly establish that NGA is a significant improvement over previous generation GMPEs, including Boore et al. (1997), for a wide range of applications. A primary contributor to this conundrum about appropriate spectral behaviour for near-fault Vs30 > 900 m/s sites is the lack of near-fault ground motion data for Vs30 > 900 m/s (Table 6 and Fig. 5.3c), providing a vivid real-world example of epistemic uncertainty.
The site amplification approach used in NGA is discussed by Boore and Atkinson (2008): "The rationale for pre-specifying the site amplifications is that the NGA database may be insufficient to determine simultaneously all coefficients for the nonlinear soil equations and the magnitude-distance scaling, due to trade-offs that occur between parameters, particularly when soil nonlinearity is introduced. It was therefore deemed preferable to "hard-wire" the soil response based on the best-available empirical analysis in the literature, and allow the regression to determine the remaining magnitude and distance scaling factors. It is recognized that there are implicit trade-offs involved, and that a change in the prescribed soil response equations would lead to a change in the derived magnitude and distance scaling. Note, however, that our prescribed soil response terms are similar to those adopted by other NGA developers who used different approaches; thus there appears to be consensus as to the appropriate level for the soil response factors." This consensus is both a strength and a weakness of the NGA results. The weakness is that if there is a flaw in the deterministic site response approach, then all the NGA GMPEs that use Vs30 are adversely impacted. Ultimately, three data points (Table 6 and Fig. 5.3c) are insufficient for the data to significantly speak for themselves in this particular case. Consequently, one can argue for one interpretation (invariant spectral shape) or the other (spectral peaks shift to longer periods at M > 6.6), and while a Bayesian evidence analysis shows that the limited available data support a spectral shift with increasing magnitude, without data from more earthquakes an honest conclusion is that large epistemic uncertainty remains a real issue for Vs30 > 900 m/s near-fault sites. Epistemic uncertainties can be rigorously accounted for in probabilistic ground motion analyses. However, it is necessary to develop a quantitative description of the epistemic uncertainties to accomplish this. Uncertainty in spectral shape as a function of magnitude, particularly the period band of maximum acceleration response, is an important issue because many structures have fundamental modes of vibration at periods significantly longer than 0.2 s, the period at which the NGA GMPEs suggest maximum acceleration responses will occur for M > 6.6 earthquakes at Vs30 > 900 m/s near-fault sites. We can reduce these site uncertainties and improve ground motion prediction with the ground motion data that currently exist by collecting more quantitative information about site characteristics that more directly and robustly determine site amplification, like Vs-depth profiles. Kaklamanos and Baise (2011) showed through empirical statistical analyses that actual Vs30 measurements produced better performance than occurred at sites where Vs30 is postulated based on geology or other proxy data. Boore and Joyner (1997) suggested that the quarter-wavelength approximation of Joyner et al. (1981) would likely be a better predictor of site responses than Vs30.
For a particular frequency, the quarter-wavelength approximation for amplification is given by the square root of the ratio between the seismic impedance (velocity times density) averaged over a depth corresponding to a quarter wavelength and the seismic impedance at the depth of the source. The analyses of this section suggest that the combination of Vs30 and its deterministic implementation in NGA is not the best approach. Thompson et al. (2011) show that the quarter-wavelength approximation estimates amplification more accurately than amplification estimated using Vs30. Given the rapid growth in low-cost, verified passive measurement methods to quickly estimate robust Vs-depth profiles to 50-100 m or more (Stephenson et al., 2005; Boore and Asten, 2008; Miller et al., 2010; O'Connell and Turner, 2011), acquiring Vs-depth data for as much of the empirical ground motion database as possible would greatly improve the prospects for substantial improvements in the resolution of site amplification in future GMPEs. These results illustrate how difficult it is to formulate a GMPE functional form and regression strategy a priori, even for a "single" parameter like Vs30. This analysis does not show that the NGA GMPEs are incorrect. Instead, it demonstrates some of the trade-offs, dependencies, and uncertainties that occur in the NGA GMPEs between Vs30 and spectral shape. This near-fault high-Vs30 example illustrates that it is important to conduct independent analyses to determine which GMPEs are best suited for a particular application, and to use multiple GMPEs, preferably with some measure of independence in their development, to account for realistic epistemic GMPE uncertainties.
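A minimal sketch of the quarter-wavelength amplification for a layered profile is given below: for each frequency, the profile is averaged down to the depth where the S-wave travel time equals a quarter period, and the amplification is the square root of the impedance ratio. The discretization, sampling step, and example profile are our own illustrative choices.

```python
import numpy as np

def qwl_amplification(freqs, thickness, vs, rho, vs_ref, rho_ref):
    """Quarter-wavelength amplification for a layered profile.
    For each frequency, find the depth z where the S-wave travel time
    equals 1/(4f); amp = sqrt(reference impedance / mean impedance above z).
    thickness, vs, rho : layer thicknesses (m), S velocities (m/s),
    densities (kg/m^3); vs_ref, rho_ref : impedance at the source depth."""
    dz = 0.1                                   # fine depth sampling (m)
    z = np.arange(dz, thickness.sum(), dz)
    layer = np.searchsorted(np.cumsum(thickness), z)
    vs_z, rho_z = vs[layer], rho[layer]
    tt = np.cumsum(dz / vs_z)                  # travel time to each depth
    amps = []
    for f in freqs:
        i = np.searchsorted(tt, 1.0 / (4.0 * f))
        i = min(max(i, 1), len(z) - 1)         # clip to the profile depth
        vbar = z[i] / tt[i]                    # time-averaged velocity
        rbar = rho_z[:i].mean()                # mean density above z
        amps.append(np.sqrt(vs_ref * rho_ref / (vbar * rbar)))
    return np.array(amps)

# Example: 30 m of softer soil over stiffer material
thickness = np.array([10.0, 20.0, 200.0])
vs = np.array([200.0, 400.0, 900.0])
rho = np.array([1700.0, 1900.0, 2200.0])
amp = qwl_amplification(np.array([0.5, 1.0, 5.0]), thickness, vs, rho,
                        vs_ref=900.0, rho_ref=2200.0)
```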
5.3 Near-fault application of NGA GMPEs and site-specific 3D ground motion simulations: source and site within the basin

In tectonically active regions near plate boundaries, active faults are often located within or along the margins of sedimentary basins. Basins are defined by spatially persistent strong lateral and vertical velocity contrasts that trap seismic waves within the basin. Trapped seismic waves interact to amplify ground shaking and sometimes substantially increase the duration of strong shaking. The basin amplification effect results from the combination of lateral and vertical variations in velocity that make the basin problem truly three-dimensional in nature and difficult to quantify empirically with currently available strong motion data. The basin problem is particularly challenging for estimating amplifications for periods longer than 1 s and sedimentary basin thicknesses exceeding about 3 km (Campbell and Bozorgnia, 2008).
Unfortunately, some of the largest urban populations in the world are located within basins containing active faults, including many parts of Japan (Kawase, 1996; Pitarka et al., 1998; NIED, 2011), the Los Angeles and other basins in southern California (Day et al., 2008), and Seattle, Washington (Frankel et al., 2009). Consequently, estimation of long-period ground motions in sedimentary basins associated with near-fault faulting is an important practical need. Choi et al. (2005) used empirical and synthetic analyses to consider the effects of two types of basin situations. They denoted sites located in a basin overlying the source as having coincident source and site basin locations (CBL) and differentiated them from distinct source and site basin locations (DBL). They used pre-NGA GMPEs for "stiff-soil/rock", modified to account for Vs30 using Choi and Stewart (2005), to regress for additional basin amplification factors as a function of a scalar measure of basin depth, Z1.5, the depth to a shear-wave velocity of 1.5 km/s. Using ground motion data from southern and northern California basins, Choi et al. (2005) found strong empirical evidence that ground-motion amplification in coincident source and site basin locations (CBL) is significantly depth-dependent at medium to long periods (T > 0.3 s). In contrast, they found that when the seismic source lies outside the basin margin (DBL), there is much lower to negligible empirical evidence for depth-dependent basin amplification.
In support of NGA GMPE development, Day et al. (2008) proposed a model for the effect of sedimentary basin depth on long-period response spectra. The model was based on the analysis of 3D numerical simulations (finite element and finite difference) of long-period (2-10 s) ground motions for a suite of sixty scenario earthquakes (M 6.3 to M 7.1) within the Los Angeles basin region. Day et al. used a deterministic 3D velocity model for southern California (Magistrale et al., 2000) to calculate the wave responses on a grid and determine the amplification of basin sites as a function of Z1.5 in the 3D model. For a purely synthetic model primarily concerned with amplification ratios, it is relatively unimportant to consider the correlated-random effects on wave amplitude (Table 3) and phase (Table 4) when calculating first-order amplification effects for shallow (< 2 km) and/or relatively fast basins.
In shallow and/or fast basins, the additional stochastic basin path length difference between the shallow basin and bedrock paths is less than a couple of wavelengths at periods > 1 s, so the effects of differential correlated-random path lengths on S-wave amplification are negligible (O'Connell, 1999a). For typical southern California lower-velocity basins deeper than 3 km, both the 3D viscoelastic finite-difference simulations of O'Connell (1999a) and the phase-screen calculations of Hartzell et al. (2005) show that correlated-random velocity variations will significantly reduce estimated basin amplification relative to deterministic 3D models. The primary purpose of O'Connell's (1999a) investigations was to determine the likely amplification of higher-velocity rock sites, where few empirical data exist (see Section 5.2), relative to the abundant ground motion recordings obtained from stiff soil sites. O'Connell (1999a) showed that basin amplification in > 3 km deep basins is reduced relative to rock as the standard deviation of correlated-random velocity variations increases, because the mean-free-path scattering in the basins significantly increases relative to rock at periods of 1-4 s. Consequently, because Day et al. (2008) use a deterministic 3D velocity model, we expect that their estimated basin amplifications will generally correspond to upper bounds of possible mean amplifications for southern California basins deeper than 3 km, but provide accurate first-order estimates of basin amplification for shallower (< 2 km) basins.
Several NGA GMPEs worked to empirically evaluate and incorporate "basin effects" in some way, but it is important to note that none of the empirical NGA GMPEs explicitly consider 3D basin effects by separately analyzing data from coincident source and site basin locations (CBL), as Choi et al. (2005) showed is necessary to empirically estimate 3D basin effects for such locations. The NGA GMPEs lack sufficient parameterization to make this distinction, thus lumping all sites (CBL, DBL, and sites not located in basins) into common Z1.0 or Z2.5 velocity-depth bins. All these sites, no matter what their actual location relative to basins and sources, are apportioned some "basin effect" through their Vs30 site velocity and their Z1.0 and Z2.5 "basin-depth" terms (Day et al., 2008). It is important to understand that Z1.0 and Z2.5 are not empirically "basin-depth" terms, but "velocity-depth" terms. We use "velocity-depth" to refer to Z1.0 and Z2.5 instead of "basin-depth" because the empirical NGA GMPEs do not make the necessary distinctions in their formulations for these terms to actually apply to the problem of estimating 3D CBL amplification effects, the only basin case where a statistically significant empirical basin signal has been detected (Choi et al., 2005). Campbell and Bozorgnia (2008) found empirical support for a significant "velocity-depth" Z2.5 term after application of their Vs30 term, but only for sites where Z2.5 < 3 km, which roughly corresponds to Z1.5 < 1.5 km. For Z2.5 > 3 km, Campbell and Bozorgnia (2008) used the parametric 3D synthetic basin-depth model from Day et al. (2008). Day et al. (2008) note that the correlation between Vs30 and "basin" depth is sufficiently strong to complicate the identification of a basin effect in the residuals after having fit a regression model to Vs30. Chiou and Youngs (2008) found that implementing a velocity-depth term using Z1.5 would require removing the Vs30 site term from their GMPE because of the Z1.5-Vs30 correlation. Instead, Chiou and Youngs (2008) retained Vs30 at all periods and included a "velocity-depth" Z1.0 term to empirically capture the portion of velocity-depth amplification not fully accounted for by the correlation between Vs30 and Z1.0. Abrahamson and Silva (2008) used a similar Vs30 and Z1.0 parameterization approach for their GMPE.
Since none of the NGA implementations of Z1.0 make the distinction of whether a site is actually within a CBL or is not even in a basin, it is useful to evaluate the predictions of the four NGA GMPEs that implement Vs30, including the three NGA GMPEs that incorporate Z1.0 and Z2.5 velocity-depth terms, for four CBL sites along a portion of the North Anatolia Fault, where the fault is embedded below a series of connected 3D basins (Fig. 5.4). Three hypocenter positions are used to evaluate forward, bilateral, and reverse rupture directivity (Fig. 5.4). These simulated ground motions are compared to response spectra predictions from the four NGA GMPEs with Vs30, including the three with velocity-depth terms (Abrahamson and Silva, 2008; Campbell and Bozorgnia, 2008; Chiou and Youngs, 2008) and Boore and Atkinson (2008), for periods of 1 s and longer. The NGA results are modified to account for rupture directivity using Rodriguez-Marek et al., Spudich and Chiou (2008), and Rowshandel (2010) to isolate residual 3D basin and directivity effects relative to the NGA-based empirical predictions. A 3D velocity model encompassing the eastern Marmara Sea and Izmit Bay regions was constructed to span a region including the fault segments of interest, the ground motion estimation sites, and local earthquakes and recording stations (Fig. 5.5). Synthetic waveform modeling of local earthquake ground motions was used to iteratively improve and update the 3D model. The initial 3D velocity model was constructed using published 1-D velocity model data (Bécel et al., 2009; Bayrakci et al., 2013), tomographically assessed top-of-basement contours (Bayrakci et al., 2013), seismic reflection profiles (Carton et al., 2007; Kurt and Yucesoy, 2009), Bouguer gravity profiles (Ates et al., 2003), geologic mapping (Okyar et al.), and fault mapping (Armijo et al., 2002).
Additional understanding of the basin-basement contact was gained by assessment of seismic reflection data collected by the SEISMARMARA cruise and made available at http://www.ipgp.fr/~singh/DATA-SEISMARMARA/.
The empirical wavespeed and density relations from Brocher (2005) were used to construct 3D shear-wave and density models based on the initial 3D acoustic-wave model. Shear-wave velocities were clipped so that they were not less than 600 m/s to ensure that simulated ground motions would be accurate for periods > 0.7 s for the 3D variable grid spacing used in the finite-difference calculations. This initial 3D velocity model was used to generate synthetic seismograms to compare with recordings of local M 3.2-4.3 earthquakes recorded on the margins of the Sea of Marmara, Izmit Bay, and inland locations north of Izmit Bay to assess the ground motion predictive performance of the initial 3D model. Several iterations of forward modeling were used to modify the 3D velocity model to obtain models that produce synthetic ground motions more consistent with locally recorded earthquake ground motions. The resulting shear-wave surface velocities mimic the pattern of acoustic-wave velocities that are consistent to first order with the 3D acoustic-wave tomography results for the eastern Marmara Sea from Bayrakci et al. (2013). Following O'Connell (1999a) and Hartzell et al. (2005), the final 3D model incorporates correlated-random velocity variations with a 5% standard deviation to produce more realistic peak ground motion amplitudes than a purely deterministic model. Since there are three distinct geologic volumes in the 3D model, three independent correlated randomizations were used: one for the basin materials with a correlation length of 2.5 km, and one each for the basement north and south of the NAF, both with a correlation length of 5 km. Similar to Hartzell et al. (2010), we use a von Karman randomization with a Hurst coefficient close to zero and a 5% standard deviation. Velocity variations are clipped so that shear-wave velocities are never smaller than 600 m/s, to ensure a consistent dispersion limit for all calculations, and randomized acoustic velocities are never larger than the maximum deterministic acoustic velocity, to keep the same time step for all simulations. Realistic ground motion simulations require accounting for first-order anelastic attenuation, even at long periods (Olsen et al., 2003). The fourth-order finite-difference code employs the efficient and accurate viscoelastic formulation of Liu and Archuleta (2006), which accurately approximates frequency-independent Q over the bandwidth of the simulations. A kinematic representation of finite fault rupture is used, in which fault slip (displacement), rupture time, and rise time are specified at each finite-difference grid node intersected by the fault. The 3D viscoelastic fourth-order finite-difference method of Liu and Archuleta (2006) was used to calculate ground motion responses from the kinematic finite fault rupture simulations. The kinematic rupture model mimics the spontaneous dynamic rupture behavior of the self-similar stress distribution model studied by Andrews. The kinematic rupture model is also similar to the rupture model of Herrero and Bernard (1994).
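The correlated-random perturbation step described above can be illustrated compactly. The sketch below (a 2D analogue of the 3D procedure; grid size, spacing, and the background velocity are assumed values, not the actual model parameters) builds a von Karman field by wavenumber-domain filtering of white noise, scales it to a 5% standard deviation, and applies the 600 m/s clipping:

```python
import numpy as np

def von_karman_field(nx, nz, dx, a=2500.0, hurst=0.05, std=0.05, seed=0):
    """Correlated-random perturbation field with an isotropic von Karman
    power spectrum, built by filtering white noise in the wavenumber domain."""
    rng = np.random.default_rng(seed)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    kz = 2.0 * np.pi * np.fft.fftfreq(nz, d=dx)
    kr2 = kx[None, :]**2 + kz[:, None]**2
    # von Karman PSD ~ (1 + k^2 a^2)^-(hurst + d/2), with dimension d = 2 here
    psd = (1.0 + kr2 * a**2) ** (-(hurst + 1.0))
    field = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((nz, nx)))
                                 * np.sqrt(psd)))
    return field * (std / field.std())   # scale to the target standard deviation

# Apply multiplicatively and clip, as in the text: Vs >= 600 m/s everywhere.
vs = 1500.0 * np.ones((200, 400))        # placeholder deterministic model (m/s)
vs_rand = np.maximum(vs * (1.0 + von_karman_field(400, 200, 50.0)), 600.0)
```

In the actual model, three such randomizations with independent seeds would be used: one with a 2.5 km correlation length for the basin volume and two with 5 km correlation lengths for the basement volumes north and south of the NAF.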
Self-similar displacements are generated over the fault with rise times that are inversely proportional to effective stress. Peak rupture slip velocities evolve from ratios of 1:1 relative to the sliding (or healing peak) slip velocity at the hypocenter to a maximum ratio of 4:1. This form of slip velocity evolution is consistent with the dynamic rupture results of Andrews, which show a subdued Kostrov-like growth of peak slip velocities as rupture grows over a fault. The kinematic model used here produces slip models with 1/k^2 (k is wavenumber) distributions, consistent with estimates of earthquake slip distributions (Somerville et al., 1999), and omega^-2 (omega is angular frequency) displacement spectra in the far field. Oglesby and Day (2002) and Schmedes et al. (2010) used numerical simulations of dynamic fault rupture to show that rupture velocity, rise time, and slip are correlated with fault strength and stress drop, as well as with each other. The kinematic rupture model used here enforces correlations between these parameters by using a common fractal seed to specify relationships among all of the fault rupture parameters. Oglesby and Day (2002), Guatteri et al. (2003), and Schmedes et al. (2010) used dynamic rupture simulations to demonstrate that rupture parameter correlation, as implemented in the stochastic kinematic rupture model outlined here, is necessary to produce realistic source parameters for ground motion estimation. The fault slip variability incorporates the natural-log standard deviation of strike-slip displacement observed by Petersen et al. (2011) in their analyses of global measurements of strike-slip fault displacements. Consequently, although mean displacements are on the order of 1.5 m for the M 7.1 three-segment scenario earthquake, asperities within the overall rupture have displacements of up to 3-4 m. The Liu et al. (2006) slip velocity function is used with the specified fault slips and rise times to calculate slip-velocity time functions at each grid point. Three hypocenters were used to simulate forward, reverse, and bilateral ruptures relative to Izmit Bay sites (Fig. 5.4). To find an appropriate "median" randomization of the 3D velocity model, ten correlated-random 3D velocity models were created and a single, three-segment randomized kinematic rupture model was used to simulate ten sets of ground motions. The randomized 3D model that most consistently produced nearly median motions across the five sites over the 1-10 s period band was used to calculate all the ground motion simulations for all two-segment and three-segment rupture scenarios. Ten kinematic randomizations were used for each case, resulting in 60 rupture-scenario ground motion simulations.
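A minimal sketch of the 1/k^2 stochastic slip generation is given below. It reproduces only the wavenumber-spectrum property; the correlations between slip, rise time, and rupture velocity enforced through a common fractal seed, and the Petersen et al. (2011) displacement variability, are omitted, and the fault dimensions and corner-wavenumber choice are illustrative assumptions:

```python
import numpy as np

def self_similar_slip(nl, nw, dl, fault_length, mean_slip=1.5, seed=1):
    """Random slip with a ~1/k^2 amplitude spectrum beyond a corner
    wavenumber tied to the fault dimension (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nl, d=dl)
    kz = np.fft.fftfreq(nw, d=dl)
    k = np.sqrt(kx[None, :]**2 + kz[:, None]**2)
    kc = 1.0 / fault_length                  # corner ~ inverse fault length
    amp = 1.0 / (1.0 + (k / kc)**2)          # flat below kc, 1/k^2 above
    phase = np.exp(2j * np.pi * rng.random((nw, nl)))
    slip = np.real(np.fft.ifft2(amp * phase))
    slip -= slip.min()                       # keep slip nonnegative
    return slip * (mean_slip / slip.mean())  # scale to target mean slip (m)

slip = self_similar_slip(nl=256, nw=64, dl=0.5, fault_length=120.0)  # km units
```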
The simulated ground motions were post-processed to calculate acceleration response spectra for 5% damping. The orientation-independent geometric mean of Boore et al. (2006) (GMRotI50) was calculated from the two horizontal components to obtain GMRotI50 response spectra (SA).
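This post-processing can be sketched as follows: a frequency-domain single-degree-of-freedom oscillator gives 5%-damped pseudo-spectral acceleration, and rotating the horizontal pair through 0-89 degrees yields GMRotD50; GMRotI50 then selects the single rotation angle that minimizes the spread across periods (that final selection step is omitted from this sketch):

```python
import numpy as np

def psa(acc, dt, period, damping=0.05):
    """Peak pseudo-spectral acceleration of a damped SDOF oscillator,
    computed in the frequency domain (zero-padded against wrap-around)."""
    n = 2 * len(acc)
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
    wn = 2.0 * np.pi / period
    spec = -np.fft.rfft(acc, n) / (wn**2 - w**2 + 2j * damping * wn * w)
    u = np.fft.irfft(spec, n)[:len(acc)]       # relative displacement
    return wn**2 * np.max(np.abs(u))

def gmrotd50(acc1, acc2, dt, periods):
    """GMRotD50 (Boore et al., 2006): median over rotation angles of the
    geometric-mean response spectrum of the two horizontal components."""
    out = []
    for T in periods:
        gm = []
        for theta in np.radians(np.arange(0, 90)):
            r1 = acc1 * np.cos(theta) + acc2 * np.sin(theta)
            r2 = -acc1 * np.sin(theta) + acc2 * np.cos(theta)
            gm.append(np.sqrt(psa(r1, dt, T) * psa(r2, dt, T)))
        out.append(np.median(gm))
    return np.array(out)
```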
Response spectral results are interpreted for periods longer than 1 s, consistent with the fourth-order finite-difference accuracy for the variable grid spacing, the minimum shear-wave velocity of 600 m/s, and the broad period influence of oscillator response (Day et al., 2008). The four NGA ground motion prediction equations (GMPEs) that implement Vs30 were used to calculate ground motion estimates at all four sites using the Z1.0 and Z2.5 below each site in the 3D synthetic velocity model, Vs30 = 600 m/s, the directivity corrections of Rodriguez-Marek et al., Spudich and Chiou (2008), and Rowshandel (2010) equally weighted, and the three rupture hypocenters (forward, bilateral, and reverse directivity in Fig. 5.4) equally weighted. Site 4 was located away from basin-edge effects and in the shallow portion of the basin, with the same Vs30 = 600 m/s as sites 1-3, consistent with a relatively linear site response, making direct comparison of linear 3D simulated motions with the empirical GMPEs feasible. Site 4 horizontal spectra were estimated as the log-mean average of the set of six earthquake rupture scenarios (two-segment and three-segment ruptures, each with forward, bilateral, and reverse directivity) used in the 3D ground motion simulations. To obtain robust estimates of mean synthetic spectra, we omitted the two largest and smallest amplitudes at each period when estimating log-mean spectra for comparison. The site 4 horizontal responses are comparable in amplitude to NGA-predicted response spectra for periods > 1 s (Fig. 5.7a). The reduced synthetic responses between 1-2 s in Fig. 5.7 are an artifact of finite-difference grid dispersion, similar to that noted by Day et al. (2008). The site 4 3D simulated responses are generally slightly smaller than the empirical GMPE median estimates over the 1-8 s period range, except for a small amplification of < 10% at 3 s (Fig. 5.7b). This confirms that the 3D ground motion simulations and the empirical NGA GMPEs predict comparable spectral hazard at site 4, and it establishes site 4 as an appropriate reference point for comparing the responses at sites 1-3, which are closer to the fault and within deeper portions of the basin. We use the empirical NGA GMPEs to estimate the amplitude effects of differential source-site distances relative to site 4 and of changes in Z1.0 and Z2.5 between sites 1-3 and reference site 4. The three empirical directivity relations are used with equal weight to remove the differential directivity effects for two separate sets of rupture cases designed to determine whether 3D basin amplification depends on rupture directivity. In the first case, we consider the two rupture scenarios that propagate away from the sites, to determine 3D basin amplification in the absence of forward rupture directivity. In the second case, we average all six rupture scenarios, four of which have strong forward rupture directivity, to see whether any of the sites shows significantly different 3D basin amplification relative to the case of solely reverse-direction rupture ground motions.
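The trimming used above to stabilize the scenario-average spectra amounts to a one-line operation on the stack of simulated spectra:

```python
import numpy as np

def trimmed_log_mean(sa, n_trim=2):
    """Log-mean spectrum over simulations, omitting the n_trim largest and
    smallest amplitudes at each period; sa has shape (n_sims, n_periods)."""
    s = np.sort(np.log(sa), axis=0)[n_trim:-n_trim, :]
    return np.exp(s.mean(axis=0))
```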
The empirical NGA distance, directivity, and Z1.0-Z2.5 amplifications of sites 1-3 relative to reference site 4 are the lowest curves at the bottom of Fig. 5.8 and represent the sum total of the effects of all NGA GMPE terms related to differential distance, directivity, and Z1.0 and Z2.5 velocity-depth. Although sites 1-3 are much closer to the fault than site 4, the relative changes in amplitudes are much smaller than the proportional differences in site-source distances, as a result of saturation, the condition enforced in the NGA GMPEs that ground motion amplitudes cease to increase as distance to the fault approaches zero. The directivity amplitude reduction from Rowshandel (2010) for reverse rupture accounts for the dip in the longer-period NGA differential responses at periods of about 5 s in Fig. 5.8. The most striking aspect of the NGA transfer functions is that, although three of the four GMPEs include Z1.0 or Z2.5 "basin-depth" terms, there is no hint of an empirical resonant 3D basin response, just slight steady increases of "amplification" with increasing period. These non-3D-basin-like NGA differential amplification results are not surprising, because the NGA basin-depth formulation pools ground motion observations from all scales of basins and non-basins in each Z1.0 and Z2.5 bin. Consequently, the NGA Vs30 and velocity-depth Z1.0 and Z2.5 basin terms do not capture any of the strongly period-dependent amplification associated with the site-specific basin of < 2 km total depth near the sites.
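Operationally, the residual 3D basin amplification plotted in Fig. 5.8 is simply the synthetic site-to-reference spectral ratio divided by the NGA-predicted differential effects; a sketch follows (array names are placeholders, with all spectra sampled at common periods):

```python
def residual_basin_amplification(sa_site, sa_ref, nga_site, nga_ref):
    """Residual 3D basin amplification at a site relative to the reference
    site, after removing the empirical NGA differential distance,
    directivity, and Z1.0/Z2.5 velocity-depth effects."""
    return (sa_site / sa_ref) / (nga_site / nga_ref)
```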
The residual site-specific synthetic 3D amplifications at sites 1-3 relative to reference site 4 are essentially independent of rupture direction (Fig. 5.8). Site 1, closest to the fault, shows the largest amplification for case two, with 2/3 forward rupture directivity, but the difference at site 1 between the 2/3-forward-rupture-directivity basin amplification and the reverse-rupture basin amplification is < 10%. For sites 2 and 3, located slightly further from the fault, differences between case-one and case-two directivity 3D basin amplifications deviate < 4% from their mean peak amplifications. The remarkable result is that, even in this case of a strike-slip fault embedded below the center of a basin and rupturing within basins continually along the entire rupture length, to first order 3D basin amplification is independent of rupture directivity/rupture direction. These 3D synthetic calculations show that the three empirical directivity corrections applied with the NGA GMPEs effectively accounted for first-order directivity in this rather severe case of strike-slip fault rupture within a basin. The Izmit Bay basins are quite similar in width, depth, and velocity characteristics to the San Fernando Basin, one of the basins included in the Day et al. (2008) 3D synthetic calculations to represent basin amplification in younger, shallower basins. Thus, it is interesting to compare the Day et al. (2008) synthetic amplification predictions, calculated across a spectrum of shallow and deeper basins using a deterministic 3D velocity model, with these simulations using a site-specific, weakly randomized 3D velocity model. We calculate the 3D simulation and Day et al. (2008) response ratios of sites 1-3 to site 4 using the Z1.5 values from the 3D simulation model in the Day et al. (2008) Z1.5-amplification relationships (Fig. 5.9). Both 3D synthetic approaches predict comparable peak amplifications at comparable periods (Fig. 5.9), with the site-specific 3D model predicting a more rapid decrease with increasing period that reflects the details of the site-specific 3D model; Day et al. (2008) have a wider period range of larger amplification because they pooled basin amplifications from a wider range of basin configurations than is representative of the site-specific 3D velocity structure. When soils are significantly less linear than clays with a plasticity index of 20, the fully nonlinear shear and P-SV 2D investigations of O'Connell et al. suggest that combining the outputs of linear 3D simulations that omit the very-low-velocity basin with 1D nonlinear analyses to account for the very-low-velocity basins will produce amplifications within the basin comparable to full nonlinear 2D or 3D analyses. Linear 1D P-SV vertical analyses in the central portions of basins will typically provide appropriate vertical amplifications throughout most of the basin.
Thus, it appears that it may be feasible in most of these cases to omit the shallow, soft, low-velocity regions at the top of basins from 3D linear or nonlinear analyses and to use the outputs from linear 3D analyses with simplified 1D nonlinear SH and P-SV amplification calculations to estimate realistic horizontal and vertical peak velocities and accelerations in the upper low-velocity soft soils. These results illustrate that, at present, the NGA GMPEs do not effectively estimate site-specific 3D basin amplification for the most extreme case of a strike-slip source and sites located within a closed basin. In such situations it is necessary to use site-specific 3D basin amplification calculations, or compiled synthetic 3D generic basin amplification relations like Day et al. (2008), to estimate realistic site-specific 3D basin amplification effects. However, the NGA GMPEs and associated empirical directivity relations are shown to effectively account for geometric spreading and directivity in the demanding application of a source and sites located within a closed basin, and they provide a robust means to extract residual 3D basin amplification relative to NGA GMPE predictions. This approach requires a suitable reference site in shallow portions of the basin that is not strongly influenced by basin effects, or a site outside the basin.
In future GMPE development, the basin analyses of Choi et al. (2005), Day et al. (2008), and this analysis suggest that separate consideration and analysis of data from sites within closed basins, with faults beneath or adjacent to the basin, is warranted to evaluate empirical evidence for systematic basin responses. Such analyses need to be done separately from ground motion observations outside of this specific basin configuration to discern the relative effects of velocity-depth versus basin-depth on parameters like Vs30, Z1.0, and Z1.5. We suggest that it is more appropriate and prudent to refer to Z1.0 and Z1.5 as velocity-depth terms, not basin-depth terms, since they fail to account for significant systematic period-dependent 3D basin amplification in the cases of sources and sites located within low-velocity basins.
Conclusion and recommendations
Geologic seismic source characterization is the fundamental first step in strong ground motion estimation. Many of the largest peak ground motion amplitudes observed over the past 30 years have occurred in regions where the source faults were either unknown or where major source characteristics were not recognized prior to the occurrence of earthquakes on them. The continued development of geologic tools to discern and quantify fundamental characteristics of active faulting remains a key research need for strong ground motion estimation.
As Jennings (1983) noted, by the early 1980s efforts to develop empirical ground motion prediction equations were hampered not only by insufficient recordings of ground motions to constrain the relationships between magnitude, distance, and site conditions, but also by insufficient physical understanding of how to effectively formulate the problem. Strong ground motion estimation requires both strong motion observations and understanding of the physics of earthquake occurrence, earthquake rupture, seismic radiation, and linear and nonlinear wave propagation. In Sections 2-4 we provided an overview of the physics of strong ground motions and forensic tools to understand their genesis. The physics are complex, requiring understanding of processes operating on scales of mm to thousands of km; most of the physical system is inaccessible, and the strong motion observations are sparse. As O'Connell (1999a) and Hartzell et al. (2005) showed, surface ground motion observations alone are insufficient to constrain linear and nonlinear amplification and seismic source properties. The observational requirements to understand the earthquake system and how ground motions are generated are immense, and they require concurrent recording of ground motions at the surface and at depth. These observations have only recently been undertaken at a comprehensive large scale. In Japan, the National Research Institute for Earth Science and Disaster Prevention (NIED) operates KiK-net (Kiban Kyoshin Network), with approximately 660 strong-motion stations, each recording triaxial accelerations both at the surface and at sufficient depth in rock to understand the physics of earthquake fault rupture and to directly observe linear and nonlinear seismic wave propagation in the shallow crust. These borehole-surface data have provided fundamental new constraints on peak ground motions (Aoi et al., 2008), direct observation of nonlinear wave propagation, and new constraints on ground motion variability (Rodriguez-Marek et al., 2011). It will be necessary to expand the deployment of KiK-net-scale networks to other tectonically active regions, like the western United States, to make real long-term progress in understanding and significantly improving our ability to predict strong ground shaking. The synergy between earthquake physics research and strong ground motion estimation is based on ground motion observations and geologic knowledge.
The need for new recordings of strong ground motions in new locations is clear, but there is immensely valuable information yet to be extracted from existing strong ground motion data. One of the single biggest impediments to understanding strong ground motions is the lack of site velocity measurements for most of the current strong ground motion database (Chiou et al., 2008; Kaklamanos and Baise, 2011). The last 10 years have seen an explosion in the development and successful application of rapid, inexpensive, and non-invasive methods to measure site shear-wave velocities over depths of 50-1000 m that can provide site amplification estimates accurate to on the order of 10-20% (Stephenson et al., 2005; Boore and Asten, 2008). Using the large borehole-surface station network in Japan, Rodriguez-Marek et al. (2011) showed that the difference in the single-station standard deviation of surface and borehole data is consistently lower than the difference in ergodic standard deviations of surface and borehole data. This implies that the large difference in ergodic standard deviations can be attributed to a poor parameterization of site response. Vs30 does not constrain frequency-dependent site amplification because, literally, an infinite number of different site velocity-depth profiles can have the same Vs30. Even given geologic constraints on near-surface material variability, the scope of distinct velocity profiles and amplification characteristics that share a common Vs30 is vast. Ironically, the implementation of Vs30 in four of the NGA GMPEs produced significant uncertainties in spectral shape as a function of magnitude, as illustrated in Section 5.1. Vs30 also trades off with other velocity-depth factors (Section 5.2). We propose that one of the most valuable new strong ground motion datasets that can be obtained now is the measurement of site shear-wave velocity profiles at the sites of existing strong ground motion recordings. These measurements would provide a sound quantitative basis to constrain frequency-dependent linear site amplification prior to regression and to reduce uncertainties in ground motion estimation, particularly spectral shape as a function of site conditions. As Rodriguez-Marek et al. (2011) note, reduction of exaggerated ground motion variability results in more realistic ground motion estimates across widely differing sites in probabilistic analyses.
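The non-uniqueness of Vs30 is easy to demonstrate. The sketch below computes the linear SH transfer function (surface motion relative to halfspace-outcrop motion, via the standard propagator-matrix recursion) for two three-layer profiles that share the same Vs30 of about 257 m/s but have reversed layer order, and therefore very different frequency-dependent amplification; all layer properties are illustrative assumptions:

```python
import numpy as np

def vs30(h, vs):
    """Time-averaged shear-wave velocity over the top 30 m."""
    return 30.0 / np.sum(np.asarray(h) / np.asarray(vs))

def sh_transfer(freqs, h, vs, rho, vs_half=2000.0, rho_half=2400.0):
    """Elastic SH transfer function for vertical incidence: surface
    amplitude over halfspace-outcrop amplitude (2 x incident wave)."""
    tf = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        u, tau = 1.0 + 0j, 0.0 + 0j                # stress-free surface
        for hj, vj, rj in zip(h, vs, rho):
            k, z = w / vj, rj * vj * w             # wavenumber; rho*Vs*omega
            c, s = np.cos(k * hj), np.sin(k * hj)
            u, tau = u * c + tau * s / z, -u * z * s + tau * c
        e_up = 0.5 * (u + tau / (1j * rho_half * vs_half * w))
        tf[i] = 1.0 / (2.0 * np.abs(e_up))
    return tf

f = np.linspace(0.2, 10.0, 200)
h = [10.0, 10.0, 10.0]
tf_a = sh_transfer(f, h, [150.0, 300.0, 600.0], [1800.0, 1900.0, 2000.0])
tf_b = sh_transfer(f, h, [600.0, 300.0, 150.0], [2000.0, 1900.0, 1800.0])
# vs30() returns ~257 m/s for both profiles, yet tf_a and tf_b differ strongly.
```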
The analyses of Choi et al. (2005) and Section 5.2 suggest that accounting for the positions of ground motion recordings and earthquakes inside or outside of closed basins may provide a path forward to improve the ability of future empirical GMPEs to accurately estimate responses within basins.
Ground motions from finite-fault rupture are computed as the convolution of the time evolution of the slip-time functions on the fault with the Green's function responses between the fault and the site (Figure 3.1).
Fig. 3.1. Schematic diagram of finite-fault rupture ground motion calculations. Three discrete subfault elements in the summation are shown. Rings and arrows emanating from the hypocenter represent the time evolution of the rupture. The Green functions actually consist of eight components of ground motion and three components of site ground velocities. Large arrows denote fault slip orientation, which is shown as predominantly reverse slip with a small component of right-lateral strike slip. Hatched circles schematically represent regions of high stress drop.
Fig. 3.2. Schematics of line source orientations for strike-slip (a) and thrust faults (c) and (e) relative to ground motion sites (triangles). Black arrows show the orientation of the faults, red arrows show fault rupture directions, and blue arrows show shear-wave propagation directions (dashed lines) to the sites. Discrete velocity contributions for seven evenly-spaced positions along the fault are shown to the right of each rupture model (b, d, f) as triangles with amplitudes (heights) scaled by the radiation pattern. The output ground motions for each fault rupture are shown in (g). Isochrone velocity, c, is infinity in (d), is large, but finite, in (f), and decreases as the fault nears the ground motion site in (b).
Fig. 4.1. Hyperbolic model of the stress-strain space for a soil under cyclic loading. The initial loading curve has a hyperbolic form, and the loading and unloading phases of the hysteresis path are formed following Masing's criterion.
Figure 4.1 shows a typical stress-strain curve with an initial loading phase and the consequent hysteretic behavior for the later loading process. There have been several attempts to describe mathematically the shape of this curve; among these models, the hyperbolic model is one of the easiest to use because of its mathematical formulation as well as the small number of parameters necessary to describe it (Ishihara, 1996; Kramer, 1996; Beresnev and Wen, 1996).
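A minimal strain-driven implementation of the hyperbolic backbone with Masing unloading/reloading is sketched below. It is a total-stress sketch that ignores modulus degradation, pore pressure, and the extended Masing rules for loop crossings, and the stiffness and reference-strain values are illustrative assumptions:

```python
import numpy as np

def backbone(gamma, g0=60.0e6, gamma_ref=1.0e-3):
    """Hyperbolic backbone: tau = G0 * gamma / (1 + |gamma| / gamma_ref)."""
    return g0 * gamma / (1.0 + np.abs(gamma) / gamma_ref)

def masing_stress(strain, g0=60.0e6, gamma_ref=1.0e-3):
    """Stress history for a strain-driven hyperbolic model: initial loading
    follows the backbone; after each reversal the branch is the backbone
    scaled by a factor of two from the reversal point (Masing's criterion)."""
    tau = np.zeros_like(strain)
    virgin, direction, g_rev, t_rev = True, 0.0, 0.0, 0.0
    for i in range(1, len(strain)):
        d = np.sign(strain[i] - strain[i - 1])
        if d != 0 and direction != 0 and d != direction:   # strain reversal
            g_rev, t_rev, virgin = strain[i - 1], tau[i - 1], False
        if d != 0:
            direction = d
        if virgin:
            tau[i] = backbone(strain[i], g0, gamma_ref)
        else:
            tau[i] = t_rev + 2.0 * backbone(0.5 * (strain[i] - g_rev),
                                            g0, gamma_ref)
    return tau

t = np.linspace(0.0, 4.0, 2001)
gamma = 2.0e-3 * (t / 4.0) * np.sin(2.0 * np.pi * t)   # growing cyclic strain
tau = masing_stress(gamma)   # (gamma, tau) traces hysteresis loops as in Fig. 4.1
```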
Fig. 4.2. Borehole transfer functions computed at KiK-net station TTRH02 in Japan. The orange shaded area represents the 95% confidence limits of the transfer function using weak-motion events (PGA < 10 cm/s^2). The solid line is the transfer function computed using the October 2000 Tottori mainshock data.
Fig. 4.3. Surface and borehole records of the 1995 Kobe earthquake at Port Island (left), and the 1993 Kushiro-Oki earthquake at Kushiro Port (right). The middle panel shows the shear-wave velocity distribution at both sites.
Fig. 4.4. Schematic figure for the multishear mechanism. The plane strain is the combination of pure shear (vertical axis) and shear by compression (horizontal axis) (after Towhata and Ishihara, 1985).
Figure 4.7 shows the accelerograms (left) and the corresponding response spectra (right). The observed data are shown with no filtering, whereas the computed data are low-pass filtered at 10 Hz.The computed accelerogram shows the transition from high-frequency content between 0 and 15 sec to the intermittent spiky behavior after 15 sec. The response spectra show that the computed accelerogram accurately represents the long periods; yet, the short periods are still difficult to model accurately. This is the challenge of nonlinear simulations; the fit should be as broadband as possible.
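The 10 Hz band-limiting applied to the computed accelerogram corresponds, for example, to a zero-phase low-pass filter; the Butterworth design below is an assumption for illustration, not necessarily the filter used in the original study (it requires a sampling rate above 20 Hz):

```python
from scipy.signal import butter, filtfilt

def lowpass(acc, dt, fc=10.0, order=4):
    """Zero-phase Butterworth low-pass at corner frequency fc (Hz)."""
    b, a = butter(order, 2.0 * fc * dt)   # normalized cutoff = fc / (fs / 2)
    return filtfilt(b, a, acc)
```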
Fig. 4.6. The top panel shows the computed strain time history at the middle of the borehole. The middle panels show the computed stress obtained by trial-and-error using the multispring model in order to find the best dilatancy parameters. The bottom panels indicate the computed stress time history from acceleration records (after Bonilla et al., 2005).
The nonlinear properties were simplified to a depth-independent plasticity index (PI) of 20% for the NOAH2D calculations. Overall, the 2D synthetic nonlinear horizontal motions provide a good fit to the acceleration response spectra (Figs. 4.8a and 4.8d) and acceleration seismograms (Figs. 4.8b and 4.8e). The 2D synthetic horizontal velocities match the observed velocity seismograms well, except in the early portion of the record, where the translation ("fling") associated with permanent displacement dominates the observed seismograms (Figs. 4.8c and 4.8f). Synthetic vertical responses were calculated for each horizontal-vertical component pair, which is a crude approximation to the total 3D wavefield. The east component is nearly fault-normal and has the largest peak accelerations and velocities of the two horizontal components, so the east-vertical combination probably best corresponds to the dominant P-SV responses. Except for the obvious asymmetry in both the acceleration and velocity vertical seismograms, both the north-vertical and east-vertical 2D nonlinear synthetic vertical surface motions provide a good fit to the observed acceleration response spectra (Figs. 4.9a and 4.9d), acceleration seismograms (Figs. 4.9b and 4.9e), and velocity seismograms (Figs. 4.9c and 4.9f). Since station IWTH25 is located in the deformed hanging wall of a reverse fault in rugged topography, it is clear that even these 2D nonlinear calculations are a crude approximation to the field conditions and to the complex incident wavefield associated with the finite fault rupture. However, the 2D nonlinear calculations summarized in Figs. 4.8 and 4.9 for station IWTH25 clearly show that the 2D P-SV nonlinear approach of Bonilla et al. provides a sound basis to evaluate first-order nonlinear horizontal and vertical responses, even for cases of extremely large incident accelerations and velocities.
Fig. 4.8. Observed and simulated surface IWTH25 horizontal response spectra (a,d), and acceleration (b,e) and velocity (c,f) time histories for the north (a-c) and east (d-f) components.
Fig. 4.9. Observed and simulated surface IWTH25 vertical response spectra (a,d), and acceleration (b,e) and velocity (c,f) time histories using the north-vertical (a-c) and east-vertical (d-f) components.
Fig. 5.1. Boore and Atkinson (2008) amplification functions (a) and original Vs30 = 300 m/s (black) and "Vs30 = 915 m/s remapped observed" response spectra (red) (b) for M = 7.0, a distance of 2 km, and PGA = 0.45 g.
Fig. 5.2. Campbell and Bozorgnia (2008) amplification functions (a) and original Vs30 = 300 m/s (black) and "Vs30 = 915 m/s remapped observed" response spectra (red) (b) for M = 7.0, a distance of 2 km, and PGA = 0.45 g.
Fig. 5.3. Boore and Atkinson (2008) (a) and Boore et al. (1997) (b) response spectra normalized by peak ground acceleration for Vs30 = 900 m/s. The geometric mean spectral accelerations from the three observed Vs30 > 900 m/s ground motions in Table 6 are compared to the mean Boore and Atkinson (2008) and Boore et al. (1997) estimates in (c).
Fig. 5.4. North Anatolia Fault segments and sites for 3D ground motion modeling.
Fig. 5.5. Shear-wave (Vs) cross sections through the 3D velocity model along profiles shown in map view in Fig. 5.4.
Fig. 5.6. Near-median bilateral three-segment rupture synthetic velocity seismograms for the five sites shown in Figs. 5.4 and 5.5a.
Fault-normal peak velocities decrease from sites 1-3, close to the fault and near the deeper portion of the basin (Fig. 5.5a), toward the shallow basin (site 4 in Figs. 5.4a and 5.6) and bedrock outside the basin (site 5 in Figs. 5.4a and 5.6).
Fig. 5.7. Site 4 3D synthetic and NGA GMPE mean response spectra (a) and the 3D/NGA GMPE ratio (b).
Fig. 5.8. Mean and reverse-rupture-only residual 3D basin amplifications for sites 1-3 relative to reference site 4, with NGA differential site amplitude correction functions.
Fig. 5.9. Mean 3D site-specific simulation amplification and Day et al. (2008) 3D amplifications for sites 1-3 relative to reference site 4.
Table 1 lists factors influencing source amplitudes, A_ij(ω); Table 2 lists factors influencing source phase, S_ij(ω). Table 3 lists factors influencing propagation amplitudes, G_kij(ω); Table 4 lists factors influencing propagation phase, φ_kij(ω).

Table 2. Seismic Source Phase Factors (S_ij)

Table 3. Seismic Wave Propagation Amplitude Factors (G_kij(ω)); surviving entries:
- Large-scale basin structure can substantially amplify and extend durations of strong ground motions.
- Low-velocity materials near the surface amplify ground motions for frequencies f > Vs/(4h), where h is the thickness of the near-surface low-velocity materials and Vs is their shear-wave velocity. Coupled interface modes can amplify and extend durations of ground motions.
- Nonlinear soil responses, G_kij^N(u,ω) (equivalent linear) and G_kij^N(u,t) (fully nonlinear): depending on the dynamic soil properties and pore-pressure responses, nonlinear soil responses can decrease intermediate- and high-frequency amplitudes, amplify low- and high-frequency amplitudes, and extend or reduce the duration of large amplitudes. The equivalent-linear approximation is G_kij^N(u,ω); the fully nonlinear form, G_kij^N(u,t), can incorporate any time-dependent behavior such as pore-pressure responses.
- Frequency-independent attenuation.

Table 4. Seismic Wave Propagation Phase Factors (φ_kij)
The recorded horizontal motions at station IWTH25 were reproducible using 1D nonlinear site response modeling (O'Connell, Assessing Ground Shaking). However, the surface vertical peak acceleration exceeded 3.8g, exceeding the maximum expected amplification based on the site velocity profile between the borehole and the surface accelerometers and on current 1D linear or nonlinear theories of soil behavior (O'Connell, Assessing Ground Shaking). In particular, application of the nonlinear approach of shear-modulus reduction advocated and tested by Beresnev et al. (2002) to predict nonlinear vertical responses failed to predict peak vertical accelerations in excess of 2g (O'Connell, Assessing Ground Shaking). Further, Aoi et al. (2008) observed largest upward accelerations at the surface that were 2.27 times larger than the largest downward accelerations, a result not reproduced using 1D approaches to approximate soil nonlinearity. Aoi et al. (2008) propose a conceptual model for this asymmetry. Their model uses a loose soil with nearly zero confining pressure near the surface. The soil particles separate under large downward acceleration, and in this quasi-free-fall state, the downward accelerations at the surface only modestly exceed gravity. Conversely, large upward accelerations compact the soil and produce much larger upward accelerations. Aoi et al. (2008) report three cases of these anomalous large vertical acceleration amplifications in a search of 200,000 strong motion recordings. Hada et al. (2009) successfully reproduced the strong vertical asymmetric accelerations at IWTH25 with a simple 1D discrete-element model, a model that is not a rigorous model of wave propagation. Yamada et al. (2009a) interpret the large upward spikes in acceleration as slapdown phases, which are also typically observed in near-field recordings of nuclear explosion tests. Our focus here is not the asymmetry of the IWTH25 vertical accelerations recorded at the surface, but showing that the simple total-stress plane-strain model of soil nonlinearity in Bonilla et al. reproduces both the first-order peak horizontal and vertical velocities and accelerations and the acceleration response spectra at station IWTH25, using the borehole motions at 260 m depth as inputs. Yamada et al. (2009b) conducted geophysical investigations at the site and found lower velocities in the top several meters than reported in Aoi et al. (2008). Trial-and-error modeling was used to obtain the final refined velocity model consistent with the results of Yamada et al. (2009b); a lowest-velocity first layer of about 2 m thickness and shear-wave velocity on the order of 200 m/s was required to produce the maximum horizontal spectral responses observed near 10 Hz.
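The asymmetry statistic reported by Aoi et al. (2008) is straightforward to evaluate from a surface vertical accelerogram; the helper below (array convention assumed: m/s^2, upward positive) also expresses the peaks in units of g, since the loose-soil model implies downward peaks near the ~1 g quasi-free-fall limit:

```python
def vertical_asymmetry(acc_up, g=9.81):
    """Up/down peak asymmetry of a vertical accelerogram (upward positive)."""
    up, down = acc_up.max(), -acc_up.min()
    return {"peak_up_g": up / g,          # ~3.8 g at IWTH25
            "peak_down_g": down / g,      # near the ~1 g free-fall limit
            "up_down_ratio": up / down}   # ~2.27 reported at IWTH25
```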
Table 5. NGA Near-Fault Strike-Slip Ground Motions

Earthquake            Date           M    Station              Vs30 (m/s)   JB Fault Distance (km)
Parkfield             28 Jun. 1966   6.1  Cholame 2WA          185          3.5
Imperial Valley       15 Oct. 1979   6.5  El Centro Array #7   212          3.1
Superstition Hills    24 Nov. 1987   6.6  Parachute            349          1.0
Erzincan              13 Mar. 1992   6.9  95 Erzincan          275          2.0
Landers               28 Jun. 1992   7.3  Lucerne              665          1.1
Kobe, Japan           16 Jan. 1995   6.9  KJMA                 312          0.6
Kocaeli, Turkey       17 Aug. 1999   7.4  Yarimca              297          2.6
Kocaeli, Turkey       17 Aug. 1999   7.4  Sakarya              297          3.1
Duzce, Turkey         12 Nov. 1999   7.1  Duzce                276          8.1
Geometric Mean                       6.9                       299          2.1
Acknowledgments
This paper is dedicated to the memory of William Joyner, who generously participated in discussions of directivity, wave propagation, site response, and nonlinear soil response, and who encouraged us to pursue many of the investigations presented here. David Boore kindly read the initial draft and provided suggestions that improved it. The authors benefited from helpful discussions with David Boore, Joe Andrews, Paul Spudich, Art Frankel, Dave Perkins, Chris Wood, Ralph Archuleta, David Oglesby, Steve Day, Bruce Bolt, Rob Graves, Roger Denlinger, Bill Foxall, Larry Hutchings, Ned Field, Hiro Kanamori, Dave Wald, and Walt Silva. Shawn Larson provided the e3d software, and Paul Spudich provided isochrone software. Supported by U.S. Bureau of Reclamation Dam Safety Research projects SPVGM and SEIGM and USGS award no. 08HQGR0068.
Elyès Jouini
email: [email protected]
Marie Chazal
email: [email protected]
Equilibrium Pricing Bounds on Option Prices
Keywords: Option bounds, equilibrium prices, conic duality, semi-infinite programming
OR Subjects: Finance: Asset pricing. Programming: Infinite dimensional. Utility/preference: Applications
Area of Review: Financial engineering
Introduction
A central question in finance consists of finding the price of an option, given information on the underlying asset. We investigate this problem in the case where the information is imperfect. More precisely, we are interested in determining the price of an option without making any distributional assumption on the price process of the underlying asset. It is well known that, in a complete financial market, by the no-arbitrage condition, the price of an option is given by the expectation of its discounted payoff under the risk-neutral probability, i.e. the unique probability measure that is equivalent to the historical one and under which the discounted price processes of the primitive assets are martingales. The identification of this pricing probability requires perfect knowledge of the primitive assets dynamics. Hence, in our restricted information context, one cannot use the exact pricing rule. But one can always search for a bounding principle for the price of an option.
One question is how to compensate for part of the lack of information on the underlying asset dynamics. Assuming limited knowledge of investors' preferences, i.e. risk aversion, and using equilibrium arguments, one obtains qualitative information on the risk-neutral probability density, on which our bounding rule is based. This has a great advantage from an empirical point of view since it requires no market data. Our rule also uses quantitative information on the underlying asset, but only on its price at maturity, as is done in the pioneering work of Lo (1987).
Lo initiated a literature on semi-parametric bounds on option prices. He derived upper bounds on the prices of call and put options depending only on the mean and variance of the stock terminal value under the risk-neutral probability: he obtained a closed-form formula for the bound as a function of these mean and variance. This work was extended to the case of conditions on the first and the nth moments, for a given n, by Grundy (1991). Bertsimas and Popescu (2002) generalized these results to the case of n ≥ 2 moment restrictions. When the payoff is a piecewise polynomial, the bounding problem can be rewritten, by considering a dual problem, as a semi-definite programming problem and thus can be solved from both theoretical and numerical points of view. Gotoh and Konno (2002) proposed an efficient cutting-plane algorithm which solves the semi-definite programming problem associated with the bound depending on the first n moments. According to their numerical results, the upper bound of Lo is significantly tightened by imposing more than 4 moment conditions. Since the mean of the terminal stock discounted price under the martingale measure is given by the current stock price, the first moment condition is fully justified. However, the knowledge of the n ≥ 2 moments under the risk-neutral probability is somewhat illusory. We restrict ourselves to constraints on the two first risk-neutral moments and use some qualitative information on the risk-neutral measure in order to improve the bound of Lo. In Black-Scholes-like models the volatility of the stock price is the same under the true and the risk-neutral probabilities. This then provides a justification for the knowledge of the second moment under the risk-neutral probability.
The restriction that we put on the martingale measure comes from equilibrium, and hence preferences, considerations: in an arbitrage-free and complete market with finite horizon T, the equilibrium can be supported by a representative agent, endowed with one unit of the market portfolio, who maximises the expected utility U of his terminal wealth X_T under his budget constraint. The first-order condition implies that the Radon-Nikodym density of the martingale measure with respect to the true probability measure, dQ/dP, is positively proportional to U′(X_T). Under the usual assumption that agents are risk-averse, the utility function U is concave. It is therefore necessary that the density dQ/dP is a nonincreasing function of the terminal total wealth X_T. When the derivative asset under consideration is written on the total wealth, or on some index seen as a proxy of the total wealth, one can restrict attention to pricing probability measures that have a nonincreasing Radon-Nikodym density with respect to the actual probability measure (remark that in the Black-Scholes model, the risk-neutral density satisfies this monotonicity condition if and only if the underlying drift is greater than the risk-free rate, which is a necessary and sufficient condition for the stock to be positively held). This ordering principle on the martingale probability measure with respect to the underlying asset price was introduced by Perrakis and Ryan (1984). Together with Ritchken (1985), they launched an important part of the literature on bounding option prices by taking into account properties of preferences such as risk aversion. This ordering principle was also obtained, in different settings, as a necessary condition for option prices to be compatible with an equilibrium (Bizid, "Pricing of nonredundant assets in a complete market"; Jouini, "Continuous-time equilibrium pricing of nonredundant assets").
Following their terminology, we call an "equilibrium pricing upper bound" on the price of an option maturing at the terminal date a bound obtained under the restriction that the Radon-Nikodym density of the pricing probability measure is in reverse order with the underlying terminal value (see also Jouini, "Convergence of the equilibrium prices in a family of financial models", for the definitions of equilibrium prices and equilibrium pricing intervals in incomplete markets and their convergence properties).
As an example,
$$B_{P\&R} := \sup\big\{\, \mathbb{E}_Q[\psi(S_T)] \;:\; Q \text{ such that } \mathbb{E}_Q[S_T] = S_0 \text{ and } dQ/dP \text{ nonincreasing in } S_T \,\big\}$$
is an equilibrium pricing upper bound on the price of an option with payoff ψ(S_T), when we only know the distribution of the terminal stock price S_T under the true probability measure P. We obtain that, for the call option,
$$B_{P\&R} = \frac{S_0}{\mathbb{E}_P[S_T]}\; \mathbb{E}_P[\psi(S_T)].$$
This expression has already been obtained as a bound on the price of a call option, starting from different considerations, by Levy (1985), Perrakis and Ryan (1984) and Ritchken (1985). Levy (1985) obtained it as the minimum price for the call above which there exists a portfolio, made up of the stock and the riskless asset, whose terminal value dominates, in the sense of second-order stochastic dominance, the terminal value of some portfolio with the same initial wealth but made of call units. Perrakis and Ryan (1984) derived it as the upper bound on a call option arbitrage price for stock price distributions such that the normalized conditional expected utility for consumption is nonincreasing in the stock price. Ritchken (1985) derived the same upper bound, with a finite number of states of the world, by restricting the state-contingent discount factors to be in reverse order with the aggregate wealth, which is itself assumed to be nondecreasing with the underlying security price. When interpreting the state-j discount factor as the discounted marginal utility of wealth of the representative agent in state j, this restriction corresponds to the concavity of the representative utility function. The concavity assumption accounts for risk aversion and means that agents have preferences that respect the second-order stochastic dominance principle. By extension, in an expected-utility model, preferences are said to respect the nth-order stochastic dominance rule if the utility function is such that its derivatives are successively nonnegative and nonpositive up to the nth order. Ritchken and Kuo (1989) and Basso and Pianca proposed the application of such rules to put additional restrictions on the state discount factors and thus improve Ritchken's bounds.
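The P&R bound above requires only the P-distribution of S_T and is easy to evaluate by Monte Carlo. The sketch below compares it with Lo's two-moment bound under an illustrative lognormal model; all parameter values, and the assumed risk-neutral standard deviation fed into Lo's bound, are hypothetical (the piecewise closed form is the one given by Lo, 1987):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative lognormal model for S_T under P (all parameters assumed)
s0, mu, sigma, tmat, strike = 100.0, 0.08, 0.25, 1.0, 100.0
st = s0 * np.exp((mu - 0.5 * sigma**2) * tmat
                 + sigma * np.sqrt(tmat) * rng.standard_normal(500_000))

# Perrakis-Ryan / Levy / Ritchken equilibrium pricing bound (zero rate):
b_pr = s0 * np.maximum(st - strike, 0.0).mean() / st.mean()

# Lo's (1987) bound from the first two risk-neutral moments: mean m = s0
# (no-arbitrage) and an assumed standard deviation s of S_T under Q.
m, s = s0, 25.0
q = m**2 + s**2
b_lo = (m - strike * m**2 / q if strike <= q / (2.0 * m)
        else 0.5 * (m - strike + np.sqrt((m - strike)**2 + s**2)))

print(f"P&R bound: {b_pr:.2f}   Lo bound: {b_lo:.2f}")
```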
These works are also related to more recent results in a continuous-state-of-the-world framework, e.g. by Constantinides and Perrakis (2002), who derived stochastic dominance upper (lower) bounds on the reservation write (purchase) prices of call and put options in a multiperiod economy in the presence of proportional transaction costs.
Our main contribution is to provide an equilibrium pricing upper bound for the price of a European call option, given a consensus on the actual distribution of the underlying terminal value and given its second risk-neutral moment. The novelty lies in combining moment constraints with the monotonicity condition on the Radon-Nikodym density of the risk-neutral probability with respect to the true probability.
We adopt a conic duality approach to solve the constrained optimization problem corresponding to our bounding problem. Using a classical result in moments theory, given in Shapiro (2001), we obtain a sufficient condition for strong duality and existence in the dual problem to hold, for derivative assets defined by general payoff functions. Explicit bounds are derived for the call option by solving the dual problem, which is a linear programming problem with an infinite number of constraints. This also allows us to solve the primal problem. We observe on a numerical example that Lo's bound is at least as tightened by the qualitative restriction on the risk-neutral probability measure as by the quantitative information on the third and fourth risk-neutral moments of the underlying asset.
The paper is organized as follows. Section 1 is devoted to the formulation of the equilibrium pricing upper bound. The duality results are provided in Section 2, and the equilibrium pricing upper bound for the call option is derived in Section 3. We provide a numerical example in Section 4 and finally make concluding remarks. All proofs are given in a mathematical appendix.
The model formulation
We consider a financial market with a finite horizon T, with asset prices defined on a given probability space (Ω, F, P). One of these assets is risk-free. We assume, without loss of generality and for the sake of simplicity, that the risk-free rate is 0. The market is assumed to be arbitrage-free, complete and at equilibrium. Hence there exists a probability measure Q̃, equivalent to P, under which the asset price processes are martingales. Since the market is at equilibrium, the Radon-Nikodym density dQ̃/dP is a nonincreasing function of the terminal total wealth or, equivalently, of the terminal value of the market portfolio. We want to put an upper bound on the price of an option written on the market portfolio or on some index which can be seen as a proxy of the market portfolio.
We denote by m the price of the underlying asset at time 0 and by S T its price at the terminal time. We assume that m ∈ R + . The price S T is assumed to be a nonnegative random value on (Ω, F, P) which is square integrable under P and Q. We suppose that its distribution under P has a density with respect to the Lebesgue measure, which is known.
This density is denoted by f and it is assumed to be positive on [0, ∞). We denote by
p 1 ∞ 0 xf (x)dx and p 2 ∞ 0 x 2 f (x)dx (1)
the first and second moments of S T under P.
We have m = E Q[S T ] and we set δ
:= E Q[S 2 T ].
We further assume that S T is an increasing function of the terminal value of the market portfolio. Hence, there exists a function ḡ which is positive and nonincreasing on (0, ∞)
such that d Q dP = ḡ(S T
) and such that the functions f ḡ, xf ḡ and x 2 f ḡ are in L 1 (0, ∞) and satisfy
∞ 0 f (x)ḡ(x)dx = 1 , ∞ 0 xf (x)ḡ(x)dx = m and ∞ 0 x 2 f (x)ḡ(x)dx = δ .
(2)
Given a payoff function ψ such that the functions ψf and ψf ḡ are in L 1 (0, ∞), we denote by X the vector space generated by the nonnegative measures µ on ([0, ∞), B([0, ∞))), such that the functions ψf , f , xf and x 2 f are µ-integrable. We assume that 0 is a Lebesgue point of both ψf and f , i.e.
lim r→0 1 r (0,r) |ψ(x)f (x) -ψ(0)f (0)|dx = lim r→0 1 r (0,r) |f (x) -f (0)|dx = 0.
The space X therefore contains the Dirac measure at 0, δ 0 . Let C be the convex cone of X generated by δ 0 and by the elements µ of X that have nonnegative and nonincreasing densities on (0, ∞).
We put the following upper bound on the equilibrium price of an option with payoff
ψ(S T ) (P ) sup µ∈C m,δ ∞ 0 ψ(x)f (x)dµ(x)
where C m,δ is the set of µ ∈ C which satisfy
∞ 0 f (x)dµ(x) = 1 , ∞ 0 xf (x)dµ(x) = m and ∞ 0 x 2 f (x)dµ(x) = δ .
We denote by val(P ) the value of problem (P ).
Remark 1.1 Let G be the set of nonnegative, nonincreasing functions g on (0, ∞) such that ψf g, f g, xf g and x 2 f g are in L 1 (0, ∞). Any element µ of C can be decomposed as follows: dµ = αdδ 0 + gdx where α ∈ R + and g ∈ G.
Remark 1.2 One can always assume that ψ(0) = 0. Indeed, if ( P ) is the problem associated to ψψ(0) then, it is clear that val(P ) = val( P ) + ψ(0). Therefore, in the sequel, we work under the assumption that ψ(0) = 0 .
The dual problem formulation
In this section, we formulate the dual problem of (P ). Let X ′ be the vector space generated by ψf , f , xf and x 2 f . The spaces X and X ′ are paired by the following bilinear form
(h, µ) ∈ X ′ × X -→ ∞ 0 h(x)dµ(x) .
Let us introduce the polar cone of C:
C * = {h ∈ X ′ | ∞ 0 h(x)dµ(x) ≥ 0 , ∀µ ∈ C} .
In all the sequel, when considering v ∈ R 3 , we will denote v (v 0 , v 1 , v 2 ).
It is clear that for all λ ∈ R 3 such that λ 0 f + λ 1 xf + λ 2 x 2 fψf ∈ C * , and for all measure µ ∈ C m,δ we have
∞ 0 ψ(x)f (x)dµ(x) ≤ λ 0 + λ 1 m + λ 2 δ .
It is therefore natural to consider the following problem (D) inf
λ∈R 3 λ 0 + λ 1 m + λ 2 δ subject to λ 0 f + λ 1 xf + λ 2 x 2 f -ψf ∈ C * .
We denote by val(D) the value of problem (D) and by Sol(D) the set of solutions to (D),
i.e.
Sol(D) {λ ∈ R 3 | λ 0 f + λ 1 xf + λ 2 x 2 f -ψf ∈ C * and λ 0 + λ 1 m + λ 2 δ = val(D)} .
From Proposition 3.4 in [START_REF] Shapiro | On duality theory of conic linear problems[END_REF], we have some strong duality between the two problems under the condition given in the following proposition.
Let In Proposition 2.2 below, we determine F , we check that (1, m, δ) is in F and we provide some sufficient condition for (1, m, δ) to be in Int(F ). For this purpose, we first introduce a function ξ, by means of which we express F .
F v ∈ R 3 | ∃µ ∈ C : v = ∞ 0 f (x)dµ(x), ∞ 0 xf (x)dµ(x), ∞ 0 x 2 f (x)dµ(x) .
We will prove (see Lemma A.3) that, for all r ∈ (0, p 2 /p 1 ], there exists a unique
ξ(r) ∈ (0, ∞] such that ξ(r) 0 x 2 f (x)dx = r ξ(r) 0 xf (x)dx . (3)
Moreover, we have ξ(r) < ∞ ⇐⇒ r < p 2 /p 1 and
x 0 u 2 f (u)du > r x 0 uf (u)du ⇐⇒ x ∈ (ξ(r), ∞].
We define
W v ∈ (0, ∞) 3 | v 1 /v 2 ≥ p 1 /p 2 , v 1 /v 0 ≤ ξ(v 2 /v 1 ) 0 xf (x)dx ξ(v 2 /v 1 ) 0 f (x)dx . ( 4
) Proposition 2.2 (i) F = (R + × {0} × {0}) ∪ W . (ii) (1, m, δ) ∈ W . (iii) If m/δ > p 1 /p 2 then (1, m, δ) ∈ Int(W ).
The proof is given in the mathematical appendix, Section A.
λ 0 + λ 1 m + λ 2 δ (5) subject to x 0 [λ 0 + λ 1 u + λ 2 u 2 -ψ(u)]f (u)du ≥ 0 , for all x ≥ 0 .
The proof is given in the mathematical appendix, Section A.
3 The upper bound determination for the call option
In this section, we calculate val(P ) in the case of a European call option with strike K > 0:
in this section we put
ψ(x) = (x -K) + , for all x ≥ 0 ,
where we use the notation (x -K) + max{x -K, 0}.
Remark 3.1 Since for all x ≥ 0, we have 0 ≤ ψ(x) ≤ x and since for all measure
µ ∈ C m,δ , ∞ 0 xf (x)dµ(x) = m, we have val(P) ≤ m .
The value of problem (P ) is therefore finite. In this framework, Proposition 2.1, means that the proposition "val(P ) = val(D) and Sol(D) is non-empty and bounded" is equivalent to the condition (1, m, δ) ∈ Int(F ).
We start with considering the case where m/δ = p 1 /p 2 .
Theorem 3.1 If m/δ = p 1 /p 2 then the set Sol(D) is non-empty, we have
val(P ) = val(D) = (m/p 1 ) ∞ 0 ψ(u)f (u)du
and the measure µ defined by
dµ (1 -(m/p 1 ))dδ 0 + (m/p 1 )1 (0,∞) dx is in Sol(P ).
The proof is given in the mathematical appendix, Section B.
From Remark 2.1, we see that it remains to consider the case where m/δ > p 1 /p 2 . In that case, the value of (D) depends on several parameters that we now present. When
m/δ > p 1 /p 2 , we can consider x ξ (δ/m) ( 6
)
where ξ is defined by (3): it is the unique positive real number satisfying
x 0 x 2 f (x)dx = (δ/m) x 0 xf (x)dx .
We introduce another parameter x m which also depends on the risk-neutral moments m and δ and on the true density f . We will prove (see Lemma B.1) that when m/δ > p 1 /p 2 there exists a unique x m ∈ (0, ∞) such that
xm 0 xf (x)dx = m xm 0 f (x)dx .
Moreover, we have
x 0 uf (u)du > m x 0 f (u)du ⇐⇒ x ∈ (x m
, ∞] and x > x m . We are now in position to provide the result for the case where m/δ > p 1 /p 2 . Since, from Remark 3.1, the value of (P ) is finite, we know by Remark 2.1, that (P ) and (D) are in strong duality and existence holds for the dual problem. For sake of simplicity, we use the following notation
I(x) x 0 f (u)du , M (x) x 0 uf (u)du , ∆(x) x 0 u 2 f (u)du , x ≥ 0 . ( 7
)
Let us also write d(x)
x2 x 0 ψ(u)f (u)du -ψ(x) x 0 u 2 f (u)du. Theorem 3.2 Let us assume that m/δ > p 1 /p 2 . (i) If d(x) > 0 or if d(x) = 0 and x > K then val(P ) = val(D) = m x 0 uf (u)du x 0 ψ(u)f (u)du
and the measure µ defined by
dµ 1 - m x 0 uf (u)du dδ 0 + m x 0 uf (u)du 1 (0,x) dx is in Sol(P ). (ii) If d(x) < 0 or if d(x) = 0 and x ≤ K then there exists (x 0 , x 1 ) ∈ R + × R + such that x 0 ∈ (0, min{x m , K}) and x 1 ∈ (max{x, K}, ∞) , (8)
M (x 0 )∆(x 1 ) -M (x 1 )∆(x 0 ) = δ [I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] + m [I(x 0 )∆(x 1 ) -I(x 1 )∆(x 0 )] , (9)
(x 2 0 -δ)[I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] + (x 0 -m)[I(x 0 )∆(x 1 ) -I(x 1 )∆(x 0 )] = x 1 0 ψ(u)f (u)du x 1 -x 0 ψ(x 1 ) [∆(x 0 ) -(x 0 + x 1 )M (x 0 ) + x 0 x 1 I(x 0 )] . (10)
We have
val(P ) = val(D) = M (x 0 ) -mI(x 0 ) M (x 0 )I(x 1 ) -I(x 0 )M (x 1 ) x 1 0 ψ(u)f (u)du
and the measure µ defined by
dµ M (x 1 ) -mI(x 1 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 ) 1 (0,x 0 ) + M (x 0 ) -mI(x 0 ) M (x 0 )I(x 1 ) -I(x 0 )M (x 1 ) 1 (0,x 1 ) dx
is in Sol(P ), for any couple (x 0 , x 1 ) ∈ R + × R + which satisfies conditions ( 8), ( 9) and ( 10).
The proof is given in the mathematical appendix, Section B.
Notice that, in light of the proof of Theorem 3.2, it can be seen that the alternative between "d(x) > 0 or d(x) = 0 and x > K" and "d(x) < 0 or d(x) = 0 and x ≤ K" corresponds to an alternative concerning the properties of the solutions to problem (D),
i.e. according to Proposition 2.3 concerning the solutions to problem (5). Under the first condition, all solutions to problem ( 5) are such that exactly on constraint is binding.
Under the second condition, all solutions are such that exactly two constraints are binding.
It can be seen that the first condition amounts to say that x is smaller than the smallest positive point for which there exists λ satisfying the constraints of problem ( 5) and such that one exactly of these constraints is binding at this point.
To put an end to this section, we recall the bound on the call option price derived by [START_REF] Levy | Upper and lower bounds of put and call option value: stochastic dominance approach[END_REF], [START_REF] Perrakis | Option pricing bounds in discrete time[END_REF] and Ritchken (1984). In our framework it is given by
B P &R sup µ∈Cm ∞ 0 ψ(x)f (x)dµ(x) .
where C m is the set of measures µ in C satisfying
∞ 0 f (x)dµ(x) = 1 and ∞ 0 xf (x)dµ(x) = m .
Proposition 3.1 We have
B P &R = (m/p 1 ) ∞ 0 ψ(x)f (x)dx .
The proof is given in the mathematical appendix, Section B.
Numerical Example
In this section we observe on some numerical example how the bound of Lo on the call option, i.e.
B Lo sup {Q | E Q [S T ]=m , E Q [S 2 T ]=δ} E Q [(S T -K) + ] ,
can be improved by imposing the equilibrium pricing rule, i.e by considering probability measures that have Radon-Nikodym densities with respect to the true one which decrease with the stock terminal value.
Following some example of [START_REF] Gotoh | Bounding option prices by semidefinite programming: a cutting plane algorithm[END_REF], we can report the bound that they obtained by imposing up to fourth moments conditions :
B 4 sup {Q | E Q [S T ]=m , E Q [S 2 T ]=δ , E Q [S 3 T ]=m 3 , E Q [S 4 T ]=m 4 } E Q [(S T -K) + ]
and thus compare the improvement of Lo's bound entailed by the additional moments conditions to the one entailed by the qualitative restriction on the pricing probability measure.
The example uses the framework of the Black-Scholes model. The market contains one riskfree asset with rate of return r ≥ 0 and one stock following a log-normal diffusion with drift µ ∈ R and volatility σ ∈ R * . The discounted stock price process (S t ) t∈[0,T ] satisfies, for all t ∈ [0, T ], S t = S 0 exp{(µrσ 2 /2)t + σW t }, and there exists a probability measure Q equivalent to the true one under which (S t ) t∈[0,T ] is a martingale. Its Radon-Nikodym density with respect to the historical probability measure is given by
L T = exp -((µ -r)/2σ) 2 T -((µ -r)/σ)W T . It is easy to see that L T = S T S 0 -µ-r σ 2 exp - µ -r 2 + (µ -r) 2 2σ 2 T .
The density L T is therefore a nonincreasing function of the stock terminal value if and only the drift µ is greater than the riskfree rate r.
To follow the example presented in Gotoh and Konno, we set the horizon time T to 24/52, the riskfree rate to 6% and the drift µ to 16%. The stock price at time 0 is fixed to 400, i.e. m = S 0 = 400. We provide the bounds B Lo , B 4 , B P &R and val(P ) as well as the Black-Scholes price BS, for a call option with strike K, for several values of the strike K. We also let variate the volatility σ and hence δ, i.e. the corresponding moment of order 2 under Q of S T . We also provide the relative deviation of each bound B from the Black-Scholes price: e = (B -BS)/BS.
We Here again, the equilibrium pricing rule permits to tighten the bound on the call option price (which is given by the current stock price) more significantly than the risk-neutral moment of order 2 restriction.
Here should be inserted Table 1.
Concluding remarks
We observe on the numerical example that adding the equilibrium pricing constraints provides, in general, a better bound than the one obtained by adding information on the risk-neutral moments. This encourages us to carry on this work for options with more general payoffs. As it is done by [START_REF] Basso | Option pricing bounds with standard risk aversion preferences[END_REF] in the case of a finite probability space and without restriction on moments, it would also be of interest to take into account stronger restrictions on preferences such as decreasing absolute risk-aversion, decreasing absolute prudence and so on, with or without putting restrictions on moments and in the context of a general probability space.
Also notice that the equilibrium pricing rule can also be valid for a European option expiring at date t lower than the terminal time T . Typically, consider an arbitrage-free and complete financial market, with one risky asset S, which distributes some dividend D. The price at time 0 of a European option with maturity t and payoff ψ(S t ) is given by
E Q [ψ(S t )] = E P [ψ(S t )M t ],
where
M t := E P [ dQ dP | F t ]
is the martingale probability measure density with respect to P, conditionally on the information at time t. Since the economy is supported by a representative agent, endowed with one unit of the market portfolio, which maximizes some utility of its consumption c and terminal wealth, a necessary condition for equilibrium is that the agent's optimal consumption rate c t is a nonincreasing function of the state price density M t (see e.g. [START_REF] Karatzas | Optimization problems in theory of continuous trading[END_REF]). Since at the equilibrium, the consumption process c t must equal the cumulative dividend process D t , if we assume that the stock price is an increasing function of this dividend, we obtain that the stock price is a nonincreasing function of the state price density. This last assumption is justified by [START_REF] Jouini | A class of models satisfying a dynamical version of the CAPM[END_REF]. They show that for a large class of utility functions, there always exist equilibria satisfying this monotonicity condition.
It is possible to derive option prices bounds given other option prices. For example D.
Bertsimas and I. [START_REF] Popescu | A semidefinite programming approach to optimal moment bounds for distributions with convex properties[END_REF] derived closed form bounds on the price of a European call option, given prices of other options with the same exercise date but different strikes on the same stock. It seems reasonable to assume that, for liquidity reasons, the prices of 1 to 3 near-the-money call options, e.g. with strikes between 70% and 130% of the current stock price, are known. Given this information, one can seek for bounds on the equilibrium prices of the call options for other strikes values. This permits to put bounds on the smile, which constitutes a way to separate unrealistic from realistic stochastic volatility models that are used in practice.
Finally, we have set our bounding option prices principle in the case of complete markets in order to use properly the equilibrium condition that provides the decreasing feature of the Radon-Nikodym density of the risk-neutral probability measure with respect to the terminal value of the market portfolio. But, under some circumstances, one can argue that in an incomplete market, this latter necessary condition for the pricing probabilities to be compatible with an equilibrium still holds. Of course, in the incomplete market case, the equivalent martingale measure is not unique and there is no reason for the second moment of the underlying asset to be the same under all martingale probability measures. However, one can assume that an upper bound on this second moment under any martingale measure is known. Our bounding principle could then be extended to the incomplete market case, by establishing, for example, that our bound increases with the second moment constraint. This should be the case for the call option and more generally, for derivatives with convex payoffs.
Mathematical Appendix
A Proofs of the results stated in Section 2
In order to shorten and make clear the proofs of Propositions 2.2 and 2.3, we state the five following lemmas. But the reader can directly read the proofs of Propositions 2.2 and 2.3 in Sections A.2 and A.3.
A.1 Technical Lemmas
The following lemma permits, in particular, to obtain the simple formulation of problem (D) given in Proposition 2.3.
Lemma A.1 Let h ∈ L 1 (0, ∞).
The following statements are equivalent.
(i) For any function g which is nonnegative and nonincreasing on (0, ∞) and such that hg ∈ L 1 (0, ∞), we have
x 0 h(u)g(u)du ≥ 0, for all x ≥ 0. (ii) x 0 h(u)du ≥ 0, for all x ≥ 0.
Proof Let h ∈ L 1 (0, ∞). It is clear that (i) implies (ii). Conversely, let us assume that x 0 h(u)du ≥ 0 , for all x ≥ 0 .
(A.1)
Let g be a function satisfying the requirements of (i) and let x ∈ (0, ∞). For any n ∈ N * , consider {x 0 , • • • , x n } the regular subdivision of [0, x], with x 0 = 0 and
x n = x. Let us set, for all u ∈ [0, x], g n (u) n i=1 g(x i )1 (x i-1 ,x i ] (u).
It is easy to see that, if g is continuous at some u ∈ (0, x) then the sequence (g n (u)) n converges towards g(u). Since g is nonincreasing, it has a countable number of discontinuities and hence the sequence (g n ) n∈N * converges to g a.e. on [0, x]. One can further check that 0 ≤ g n ≤ g on [0, x], for all n. Consequently, the sequence (hg n ) n∈N * converges to hg a.e. on [0, x] and satisfies:
|hg n | ≤ |hg| on [0, x], for all n. Since hg ∈ L 1 (0, ∞), it
follows from the dominated convergence theorem that
x 0 h(u)g(u)du = lim n→∞ x 0 h(u)g n (u)du . (A.2)
By rewriting g n in the following form g n = g(x n )1 (0,xn] + n i=1 (g(x i-1 )g(x i ))1 (0,x i-1 ] we obtain:
x 0 h(u)g n (u)du = g(x n ) x 0 h(u)du + n i=1 (g(x i-1 ) -g(x i )) x i-1 0 h(u)du. Since
g is nonnegative and nonincreasing on (0, ∞), it then follows from (A.1) that, for all n,
x 0 h(u)g n (u)du ≥ 0. Finally, by (A.2), we have
x 0 h(u)g(u)du ≥ 0 , for all x ≥ 0. This completes the proof of Lemma A.1.
The following properties of the functions M/I, ∆/I and ∆/M , where I, M and ∆ are defined in (7), will be used in the sequel. They are easy to obtain by derivation.
Lemma A.2 The functions x -→ M (x)/I(x), x -→ ∆(x)/I(x) and x -→ ∆(x)/M (x)
are derivable and increasing on (0, ∞). Now, we prove the existence of the function ξ presented in (3).
Lemma A.3 For all r ∈ (0, p 2 /p 1 ], there exists a unique ξ(r) ∈ (0, ∞] such that ξ(r)
0 x 2 f (x)dx = r ξ(r) 0 xf (x)dx . Moreover x 0 u 2 f (u)du > r x 0 uf (u)du ⇐⇒ x ∈ (ξ(r), ∞],
and the function r -→ ξ(r) is continuous on (0, p 2 /p 1 ).
Proof Let r ∈ (0, p 2 /p 1 ] and let φ be the function defined on R + by φ(x) = x 0 (u 2ru)f (u)du. Since f is positive, φ is decreasing on (0, r) and increasing on (r, ∞). As φ is continuous and satisfies φ(0) = 0, lim x→∞ φ(x) = p 2rp 1 > 0 when r < p 2 /p 1 or lim x→∞ φ(x) = 0 when r = p 2 /p 1 , it follows that there exists a unique ξ ∈ (0, ∞] such that φ < 0 on (0, ξ), φ(ξ) = 0 and φ > 0 on (ξ, ∞]. We clearly have ξ(r) < ∞ ⇐⇒ r < p 2 /p 1 .
Noticing that r = ∆(ξ(r))/M (ξ(r)) for all r ∈ (0, p 2 /p 1 ) and that, by Lemma A.2, the function ∆/M is continuous and increasing on (0, ∞), we obtain, from the inverse function theorem, that ξ is continuous on (0, p 2 /p 1 ). This ends the proof of Lemma A.3.
The following technical result is used in the proof of Proposition 2.2. x 0 (a + bu + cu 2 )f (u)du. By construction, P (y) = 0. Let us check that P (x) ≥ 0 for all x ≥ 0. Since P (0) = P (y) = 0 and f > 0, there exists z ∈ (0, y), such that a + bz + cz 2 = 0. Since a > 0 and c > 0, we have a + bx + cx 2 > 0 on [0, z) ∪ (y, ∞) and a + bx + cx 2 < 0 on (z, y) . It follows that P is increasing on [0, z] and on [y, ∞) and decreasing on (z, y). Since it satisfies P (0) = P (y) = 0, this proves that P (x) ≥ 0, for all x ≥ 0. This ends the proof of Lemma A.4.
A.2 Proof of Proposition 2.2
Proof of Proposition 2.2 (i) We prove that F = (R + × {0} × {0}) ∪ W .
Step I. Let us prove that (R
+ × {0} × {0}) ∪ W ⊂ F . Let v ∈ (R + × {0} × {0}) ∪ W and
consider the measure µ defined by:
dµ (v 0 /f (0))dδ 0 if v ∈ R + × {0} × {0}, dµ v 0 -v 1 R ξ(v 2 /v 1 ) 0 f (x)dx R ξ(v 2 /v 1 ) 0 xf (x)dx 1 f (0) dδ 0 + v 1 R ξ(v 2 /v 1 ) 0 xf (x)dx 1 (0,ξ(v 2 /v 1 )) dx, if v ∈ W . One can check that µ ∈ C and (v 0 , v 1 , v 2 ) = ∞ 0 f dµ, ∞ 0 xf dµ, ∞ 0 x 2 f dµ and hence v ∈ F . Step II. Let us prove that F ⊂ (R + × {0} × {0}) ∪ W . Let v ∈ F and µ ∈ C be such that v = ∞ 0 f dµ, ∞ 0 xf dµ, ∞ 0
x 2 f dµ . By Remark 1.1 there exists α ∈ R + and g ∈ G such that dµ = αdδ 0 + gdx. We have:
(v 0 , v 1 , v 2 ) = αf (0) + ∞ 0 f (x)g(x)dx, ∞ 0 xf (x)g(x)dx, ∞ 0 x 2 f (x)g(x)dx . (A.4)
Let us denote by |{g > 0}| the Lebesgue measure of {g > 0}. If |{g > 0}| = 0 then g = 0 a.e. and hence, v = (αf (0), 0, 0) ∈ R + × {0} × {0}.
Let us now consider the case where |{g > 0}| > 0. In that case, it is clear that
v ∈ (0, ∞) 3 . Let us prove that v 1 /v 2 ≥ p 1 /p 2 . (A.5)
Consider the function h defined on (0, ∞) by h(x) x (p 2 /p 1x) f (x). By construction, ∞ 0 h(x)dx = 0 and since f is positive, the function x -→
x 0 h(u)du is increasing on (0, p 2 /p 1 ) and decreasing on (p 2 /p 1 , ∞). It follows that x 0 h(u)du ≥ 0, for all x ≥ 0. Then, by Lemma A.1, we have x 0 h(u)g(u)du ≥ 0, for all x ≥ 0 and hence, by letting x tend to ∞, (p 2 /p 1 ) v 1v 2 ≥ 0. We have proved (A.5).
Let us prove that
v 1 /v 0 ≤ R ξ(v 2 /v 1 ) 0 xf (x)dx R ξ(v 2 /v 1 ) 0 f (x)dx . When v 2 /v 1 = p 2 /p 1 , since ξ(p 2 /p 1 ) = ∞, this amounts to prove that v 1 /v 0 ≤ p 1 . (A.6)
As above, we can apply Lemma A.1 to the function h 1 defined on (0, ∞) by h 1 (x) = (p 1x) f (x) and to the function g in order to obtain that x 0 h 1 (u)g(u)du ≥ 0 for all x ≥ 0 and hence, by passing to the limit when x tend to ∞, p 1 (v 0αf (0))v 1 ≥ 0.
Since αf (0) ≥ 0, that proves (A.6).
From (A.5) we know that, when |{g > 0}| > 0 we always have v 1 /v 2 ≥ p 1 /p 2 . That proves that, when v 1 /v 2 = p 1 /p 2 , we have v ∈ W . It remains to prove that it is also true when v 1 /v 2 > p 1 /p 2 . So, we assume that v 1 /v 2 > p 1 /p 2 and prove that
v 1 /v 0 ≤ ξ(v 2 /v 1 ) 0 xf (x)dx ξ(v 2 /v 1 ) 0 f (x)dx . (A.7)
For sake of readability, we write ξ = ξ(v 2 /v 1 ). Since ξ ∈ (0, ∞), we can consider the real numbers, a > 0, b ∈ R and c > 0, given by Lemma A.4, which are such that x 0 (a + bu + cu 2 )f (u)du ≥ 0, for all x ≥ 0 and ξ 0 (a + bu + cu 2 )f (u)du = 0. Recall that, by Lemma A.3, we have
ξ 0 x 2 f (x)dx = (v 2 /v 1 ) ξ 0 xf (x)dx. Therefore ξ 0 (a + bu + cu 2 )f (u)du = ξ 0 xf (x)dx v 1 a v 1 ξ 0 f (x)dx ξ 0 xf (x)dx + bv 1 + cv 2 ,
and hence
a v 1 ξ 0 f (x)dx ξ 0 xf (x)dx + bv 1 + cv 2 = 0 . (A.8)
We now show that av 0 + bv 1 + cv 2 ≥ 0. With (A.8), this will prove (A.7).
We have
x 0 (a + bu + cu 2 )f (u)du ≥ 0, for all x ≥ 0. Therefore, by Lemma A.1, we have x 0 (a + bu + cu 2 )f (u)g(u)du ≥ 0 for all x ≥ 0 and hence, by letting x tend to ∞, a(v 0 -αf (0))+bv 1 +cv 2 ≥ 0. Since a > 0 and αf (0) ≥ 0, it follows that av 0 +bv 1 +cv 2 ≥ 0.
We have obtained that if v 1 /v 2 > p 1 /p 2 then v ∈ W .
Finally we proved that F ⊂ (R + × {0} × {0}) ∪ W . This completes Step II and hence proves Proposition 2.2 (i).
Proof of Proposition 2.2 (ii) By definition of ḡ (see 2), we have
(1, m, δ) = ∞ 0 f (x)ḡ(x)dx, ∞ 0 xf (x)ḡ(x)dx, ∞ 0 x 2 f (x)ḡ(x)dx
and ḡ is positive and nonincreasing on (0, ∞).
Hence (1, m, δ) ∈ F \ {R + × {0} × {0}}, i.e. (1, m, δ) ∈ W . Proof of Proposition 2.2 (iii) Let us prove that when m/δ > p 1 /p 2 , we have (1, m, δ) ∈ Int(W ). We show that m < R ξ(δ/m) 0 xf (x)dx R ξ(δ/m) 0 f (x)dx . Since ξ(δ/m) ∈ (0, ∞), from Lemma A.4, there exists a ′ > 0, b ′ ∈ R, c ′ > 0 such that we have x 0 (a ′ + b ′ u + c ′ u 2 )f (u)du ≥ 0 , x ≥ 0 and ξ(δ/m) 0 (a ′ + b ′ u + c ′ u 2 )f (u)du = 0 . (A.9)
Then by Lemma A.1, we have ) for some large M . Hence, from the above inequalities, we deduce that:
x 0 (a ′ + b ′ u + c ′ u 2 )f (u)ḡ(u)du ≥ 0 , for all x ≥ 0 . Since f > 0 and ḡ > 0 on (0, ∞) and c ′ > 0, the function x -→ x 0 (a ′ + b ′ u + c ′ u 2 )f (u)ḡ(u)du is increasing on [M, ∞
∞ 0 (a ′ + b ′ u + c ′ u 2 )f (u)ḡ(u)du > 0 and hence a ′ + b ′ m + c ′ δ > 0. Now, using the fact that ξ(δ/m) 0 x 2 f (x)dx = (δ/m) ξ(δ/m) 0 xf (x)dx, we deduce from (A.9) that a ′ m ξ(δ/m) 0 f (x)dx ξ(δ/m) 0 xf (x)dx + b ′ m + c ′ δ = 0 . Then, since a ′ > 0, it follows that m < R ξ(δ/m) 0 xf (x)dx R ξ(δ/m) 0 f (x)dx . Thus (1, m, δ) is in the following subset of W : O v ∈ (0, ∞) 3 | v 1 /v 2 > p 1 /p 2 , v 1 /v 0 < ξ(v 2 /v 1 ) 0 xf (x)dx ξ(v 2 /v 1 ) 0 f (x)dx .
From Lemma A.3, the function ξ is continuous on (0, p 2 /p 1 ) and takes values in (0, ∞).
Therefore O is an open set and (1, m, δ) ∈ Int(W ). The proof of Proposition 2.2 is completed.
A.3 Proof of Proposition 2.3
Let us prove that the value and the set of solutions to problem (D) coincide respectively with the value and the set of solutions to the following problem:
min λ∈R 3 λ 0 + λ 1 m + λ 2 δ subject to x 0 [λ 0 + λ 1 u + λ 2 u 2 -ψ(u)]f (u)du ≥ 0 , for all x ≥ 0 .
It suffices to check that, for all λ ∈ R 3 , the following statements are equivalent.
λ 0 f + λ 1 xf + λ 2 x 2 f -ψf ∈ C * . (A.10) x 0 [λ 0 + λ 1 u + λ 2 u 2 -ψ(u)]f (u)du ≥ 0 , for all x ≥ 0 . (A.11) Let λ ∈ R 3 . (A.10) holds if and only if ∞ 0 [λ 0 + λ 1 x + λ 2 x 2 -ψ(x)]f (x)dµ(x)
≥ 0 for all µ ∈ C. By Remark 1.1, this amounts to the condition
α(λ 0 -ψ(0))f (0) + ∞ 0 [λ 0 + λ 1 x + λ 2 x 2 -ψ(x)]f (x)g(x)dx ≥ 0 , for all α ∈ R + , g ∈ G .
But, f (0) > 0 and ψ(0) = 0. It follows that (A.10) holds if and only a)
λ 0 ≥ 0 , b) ∞ 0 [λ 0 + λ 1 x + λ 2 x 2 -ψ(x)]f (x)g(x)dx ≥ 0 . (A.12)
Since by assumption the functions ψf , f , xf and x 2 f are in L 1 (0, ∞), it is clear that G contains the set {1 (0,x) , x > 0}. Hence, in (A.12), b) implies a). It follows that (A.12) implies (A.11). Conversely, let us assume that (A.11) holds. Let g ∈ G. Then, from Lemma A.1, we have (A.12). We have therefore obtained that the conditions (A.10) and (A.11) are equivalent. This ends the proof of Proposition 2.3.
B Proofs of the results stated in Section 3
In this section, we solve problem (P ) in the case of the call option. For this purpose, we use problem (D). For sake of simplicity we introduce the following notation. For λ ∈ R 3 , we denote by G λ the function defined on R + by
G λ (x) x 0 [λ 0 + λ 1 u + λ 2 u 2 -ψ(u)]f (u)du , for all x ≥ 0 (B.1)
and we set
A {λ ∈ R 3 | G λ (x) ≥ 0 , ∀ x ≥ 0 } . (B.2)
With this notation and Proposition 2.3, we know that problem (D) can be formulated as follows min λ∈A λ 0 + λ 1 m + λ 2 δ .
In the sequel, we will work only with this formulation of problem (D).
The proof of Theorem 3.2 relies on the study of the binding constraints of problem (D). So, we introduce a notation for the set of positive real numbers where some of the
constraints {G λ (x) ≥ 0, x > 0} are binding. For λ ∈ R 3 , we set bind(λ) { x ∈ (0, ∞) | G λ (x) = 0 } .
As in the previous section, we begin with stating some lemmas that allow us to shorten the proofs of the main results (Theorems 3.1 and 3.2). But the reader can go directly to the proofs of the theorems in Sections B.2 and B.3.
B.1 Technical Lemmas
We first show that the parameter x m introduced before the statement of Theorem 3.1 is well defined.
Lemma B.1 Let us assume that m/δ > p 1 /p 2 . Then, there exists a unique x m ∈ (0, ∞)
such that xm 0 xf (x)dx = m xm 0 f (x)dx
and we have
x 0 uf (u)du > m x 0 f (u)du ⇐⇒ x ∈ (x m , ∞]. Moreover x > x m
where we recall that x ξ(δ/m).
Proof
We begin with proving that m < p 1 . Since m/δ > p 1 /p 2 , from Proposition 2.2 (iii) we know that (1, m, δ) ∈ Int(W ) and hence that m <
R x 0 xf (x)dx R x 0 f (x)dx , i.e. m < M (x)/I(x). From Lemma A.2, the function M/I is increasing on (0, ∞). Hence we have m < R ∞ 0 xf (x)dx R ∞ 0 f (x)dx , i.e. m < p 1 .
Let us consider the function φ defined on R + by φ(x)
x 0 (um)f (u)du. The function φ is continuous on R + . It is decreasing on (0, m), increasing on (m, ∞) and satisfies: φ(0) = 0 and lim x→∞ φ(x) = p 1m > 0. It follows that there exists a unique
x m ∈ (0, ∞) such that φ < 0 on (0, x m ), φ(x m ) = 0 and φ > 0 on (x m , ∞]. Finally, since m < R x 0 xf (x)dx R x 0 f (x)dx we have x > x m .
This completes the proof of Lemma B.1. We now state some basic properties of the sets A and bind(λ) for λ ∈ A.
Lemma B.2 (o) A ⊂ R + × R 2 . (i) Let λ ∈ A.
The set bind(λ) has at most two elements.
(ii) Let λ ∈ A. If λ 2 ≤ 0 then bind(λ) = ∅. (iii) Let λ ∈ A. If λ 2 > 0 then lim x→∞ G λ (x) > 0. (iv) Let λ ∈ A. If bind(λ) = {x 0 , x 1 } with x 0 < x 1 then λ 0 > 0, λ 1 < 0, λ 2 > 0 and x 0 < K < x 1 . Conversely, let λ ∈ R 3 . If λ 0 > 0, λ 1 < 0, λ 2 > 0 and bind(λ) = {x 0 , x 1 } with 0 < x 0 < x 1 and G λ ′ (x 0 ) = G λ ′ (x 1 ) = 0 then λ ∈ A.
The proof of the lemma is essentially based on the fact that, for λ ∈ A, the set bind(λ) is included in the set of G λ 's minima and hence, since f is positive, in the set of the points where the parabola x -→ λ 0 + λ 1 x + λ 2 x 2 intersects the graph of x -→ ψ(x) = (x -K) + .
Since it is quite long but basic, the proof is omitted. One can have a good intuition on these results and their proofs with a graphical study of the possible intersections of the parabola and the call payoff.
Lemma B.3 Let us assume that
m/δ > p 1 /p 2 . If λ is a solution to problem (D) then the set bind(λ) is non-empty.
Proof Let λ be a solution to problem (D). We assume that bind(λ) = ∅ and obtain a contradiction with the optimal feature of λ. By assumption, we have G λ (x) > 0, for all x > 0. Since m/δ = p 1 /p 2 , there exist a, b ∈ R such that 1 + am + bδ < 0 and 1 + ap 1 + bp 2 > 0 .
(B.3) For all ε > 0, by setting λ ε 0 λ 0 + ε, λ ε 1 λ 1 + εa and λ ε 2 λ 2 + εb, we have:
λ ε 0 + λ ε 1 m + λ ε 2 δ < λ 0 + λ 1 m + λ 2 δ.
Let us prove that there exists ε > 0 such that λ ε (λ ε 0 , λ ε 1 , λ ε 2 ) ∈ A. We write
G ε G λ ε . By construction we have G ε (x) = G λ (x) + εH(x)
where
H(x) x 0 (1 + au + bu 2 )f (u)du .
Since f is positive and since, from the second row of system (B.3), lim x→∞ H(x) =
(1 + ap 1 + bp 2 ) > 0, there exists η > 0 and
X ≥ η such that H ≥ 0 on [0, η] ∪ [X, ∞).
Since G λ is nonnegative, this implies that for all ε > 0,
G ε ≥ 0 on [0, η] ∪ [X, ∞) . (B.4)
Since G λ is continuous and positive on (0, ∞), it is bounded from below by some constant
M > 0 on [η, X].
Since the function H is continuous, and thus bounded on [η, X], it follows that there exists ε > 0 such that, for all
x ∈ [η, X], G ε (x) = G λ (x) + εH(x) ≥ M + εH(x).
This last inequality together with (B.4) prove that λ ε is in A and achieve the proof of Lemma B.3.
From Lemmas B.2 (i) and B.3, we know that, at the optimum for problem (D), there exists at least one and at most two positive real numbers where some constraints are binding. In the following lemma, we provide a necessary condition on the value of problem (D) under which a solution λ is such that exactly one constraint is binding at some positive real number.
Lemma B.4
Let us assume that m/δ > p 1 /p 2 . Let λ be a solution to problem (D) such that bind(λ) = {y}. Then
λ 0 = 0 , y = x and val(D) = λ 1 m + λ 2 δ = m x 0 uf (u)du x 0 ψ(u)f (u)du .
Besides we have, for all x ≥ 0, G ε (x) = G λ (x) + εH(x) where H is defined by
H(x) a x 0 f (u)du + b x 0 uf (u)du + c x 0 u 2 f (u)du .
From (B.6), there exists a neighborhood (α, β) of y where H > 0. It follows that, for all ε > 0 and for all x ∈ (α, β),
G ε (x) ≥ G λ (x) ≥ 0 . (B.8)
Since bind(λ) = ∅, from Lemma B.2 (ii), we have λ 2 > 0 and then by Lemma B.2 (iii),
lim x→∞ G λ (x) > 0. Hence G λ > 0 on (0, ∞] \ {y}. As it is continuous, it is therefore bounded from below by some positive constant on [η, α] ∪ [β, ∞]. Since the functions f , xf and x 2 f are in L 1 (0, ∞), the function H is bounded. Thus there exists ε ∈ (0, ε 0 ) such that for all x ∈ [η, α] ∪ [β, ∞), G ε (x) = G λ (x) + εH(x) ≥ 0 . (B.9)
It follows from (B.7), (B.8) and (B.9) that G ε ≥ 0 on R + , i.e. λ + ε(a, b, c) ∈ A. This ends the proof of (B.5).
Let us now prove that y = x and that
val(D) = λ 1 m + λ 2 δ = m x 0 uf (u)du x 0 ψ(u)f (u)du . (B.10)
Using the same kind of arguments as above, one can deduce from the optimal feature of λ that, for all (a, b, c) ∈ (0, ∞) × R 2 , we have
a y 0 f (u)du + b y 0 uf (u)du + c y 0 u 2 f (u)du > 0 =⇒ a + bm + cδ ≥ 0 .
This implies that y satisfies m y 0 u 2 f (u)du = δ y 0 uf (u)du and hence, by definition of x, that y = x. Since λ 0 = 0, we have
G λ (x) = 0 ⇔ λ 1 x 0 uf (u)du + λ 2 x 0 u 2 f (u)du = x 0 ψ(u)f (u)du
and then it is easy to see that (B.10) holds. This concludes the proof of Lemma B.4.
We now provide a lower bound for the value of problem (D) in the case where m/δ > p 1 /p 2 . Lemma B.5 If m/δ > p 1 /p 2 then for all λ ∈ A we have
λ 0 + λ 1 m + λ 2 δ ≥ m x 0 uf (u)du x 0 ψ(u)f (u)du ,
with strict inequality when λ 0 > 0.
Proof Let λ ∈ A. Recall that x satisfies x 0 x 2 f (x)dx = (δ/m) x 0 xf (x)dx. We therefore have
λ 0 + λ 1 m + λ 2 δ = m x 0 uf (u)du λ 0 x 0 uf (u)du m + λ 1 x 0 uf (u)du + λ 2 x 0 u 2 f (u)du . But, from Lemma B.1, R x 0 uf (u)du m > x 0 f (u)du. Since, from Lemma B.2 (o), λ 0 ≥ 0, it follows that λ 0 + λ 1 m + λ 2 δ ≥ m R x 0 uf (u)du λ 0 x 0 f (u)du + λ 1 x 0 uf (u)du + λ 2 x 0 u 2 f (u)du ≥ m R x 0 uf (u)du x 0 ψ(u)f (u)du
where the first inequality is strict when λ 0 > 0 and the second one holds because λ ∈ A.
This ends the proof of Lemma B.5.
In the following lemma we give a necessary and sufficient condition for the lower bound, given in Lemma B.5, to be attained in problem (D).
Recall that d(x) = x2 x 0 ψ(u)f (u)du -ψ(x) x 0 u 2 f (u)du. Lemma B.6 Assume that m/δ > p 1 /p 2 . Then, there exists (λ 1 , λ 2 ) ∈ R 2 which satisfies (0, λ 1 , λ 2 ) ∈ A and λ 1 m + λ 2 δ = m x 0 uf (u)du x 0 ψ(u)f (u)du
if and only if d(x) > 0 or d(x) = 0 and x > K.
Proof Let (λ 1 , λ 2 ) ∈ R 2 and set λ (0, λ 1 , λ 2 ). Using the fact that x 0 x 2 f (x)dx = (δ/m) x 0 xf (x)dx we obtain the following equivalences
λ 1 m + λ 2 δ = m x 0 uf (u)du x 0 ψ(u)f (u)du ⇔ x 0 (λ 1 u + λ 2 u 2 -ψ(u))f (u)du = 0 ⇔ G λ (x) = 0 . Since λ ∈ A ⇔ G λ ≥ 0 it follows that: λ ∈ A and λ 1 m + λ 2 δ = m R x 0 uf (u)du x 0 ψ(u)f (u)du, if and only if, λ ∈ A and x is minimum of G λ with G λ (x) = 0, which is equivalent to, λ ∈ A, G λ (x) = 0 and G λ ′ (x) = 0.
Consequently, since f is positive, we have the equivalence between the existence of
(λ 1 , λ 2 ) ∈ R 2 such that we have (0, λ 1 , λ 2 ) ∈ A and λ 1 m + λ 2 δ = m x 0 uf (u)du x 0 ψ(u)f (u)du
and the existence of a solution (λ 1 , λ 2 ) ∈ R 2 to the system
λ 1 x 0 uf (u)du + λ 2 x 0 u 2 f (u)du = x 0 ψ(u)f (u)du λ 1 x + λ 2 x2 = ψ(x) (B.11)
which satisfies (0, λ 1 , λ 2 ) ∈ A.
Since x > 0, the determinant of the system (B.11) is positive and hence the system has a unique solution. Let (λ 1 , λ 2 ) be this solution. In order to conclude it remains to prove that (0, λ 1 , λ 2 ) ∈ A ⇐⇒ d(x) > 0 or d(x) = 0 and x > K .
From (B.11), (λ 1 , λ 2 ) satisfies
λ 1 x2 x 0 uf (u)du -x x 0 u 2 f (u)du = x2 x 0 ψ(u)f (u)du -ψ(x) x 0 u 2 f (u)du (B.12) λ 2 x x 0 u 2 f (u)du -x2 x 0 uf (u)du = x x 0 ψ(u)f (u)du -ψ(x) x 0 uf (u)du . (B.13)
Let us check that when d(x) < 0 or d(x) = 0 and x ≤ K, we have (0, λ 1 , λ 2 ) / ∈ A. We have
x 2 x 0 uf (u)du -x x 0 u 2 f (u)du > 0 , for all x > 0 . (B.14)
Therefore when d(x) < 0, by (B.12) we have λ 1 < 0 and hence (0, λ 1 , λ 2 ) / ∈ A. Indeed, for small enough x we would have
G (0,λ 1 ,λ 2 ) (x) = x 0 (λ 1 u + λ 2 u 2 )f (u)du < 0.
In the case where d(x) = 0 and x ≤ K, we have λ 1 = 0 from (B.12) and λ 2 = 0 from (B.13) and (B.14), hence (0, λ 1 , λ 2 ) = (0, 0, 0) / ∈ A.
Now we assume that d(x) > 0 or d(x) = 0 and x > K and prove that (0, λ 1 , λ 2 ) ∈ A.
We first prove that λ 1 ≥ 0 and λ 2 > 0. Since, in that case, d(x) ≥ 0, from (B.12) we have λ 1 ≥ 0. Let us prove that λ 2 > 0. From (B.14), it suffices to prove that the right-hand term in (B.13) is negative. By construction, if x ≤ K then d(x) = 0. Since here d(x) > 0 or d(x) = 0 and x > K, we have in any case x > K and thus, r(x)
x x 0 ψ(u)f (u)du -ψ(x) x 0 uf (u)du = x K (x(u -K) -u(x -K))f (u)du -(x -K) K 0 uf (u)du = -K x K (x -u)f (u)du -(x -K) K 0 uf (u)du < 0 (B.15)
This proves that λ 2 > 0.
We are now in position to prove that (0, λ 1 , λ 2 ) ∈ A. Let us write λ = (0, λ 1 , λ 2 ).
Since
λ 1 ≥ 0, λ 2 > 0 and ψ = 0 on [0, K], it is clear that G λ ≥ 0 on [0, K]. On (K, ∞),
the function G λ is piecewise monotone, it is nondecreasing (resp. nonincreasing) on the intervals where the polynomial p(x) = λ 1 x + λ 2 x 2 -(x -K) is nonnegative (resp nonpositive). Since λ 1 ≥ 0 and λ 2 > 0, we have p(K) = λ 1 K + λ 2 K 2 > 0 and lim x→∞ p(x) = ∞.
Besides, from the second row of system (B.11), we have p(x) = 0. Let us prove that there exists y ∈ (K, x) such that p(y) = 0. Assume to the contrary that p = 0 on (K, x). Since p(K) > 0, we then have p > 0 on (K, x) and hence G λ is increasing on (K, x). Since G λ is continuous, this contradicts the fact that G λ (K) > 0, G λ (x) = 0. So, there exists y ∈ (K, x) such that p(y) = 0, p > 0 on [K, y) ∪ (x, ∞) and p < 0 on (y, x). The function G λ is therefore increasing on [K, y), decreasing on (y, x) and increasing on (x, ∞). Since G λ (K) > 0 and G λ (x) = 0, it follows that G λ (x) ≥ 0, for all x ≥ K. It ensues that G λ ≥ 0 on R + and hence λ ∈ A. This completes the proof of Lemma B.6.
We now provide a necessary condition for a solution λ to problem (D) to be such that exactly two constraints are binding at some positive real numbers.
Lemma B.7 Let us assume that m/δ > p 1 /p 2 . Let λ be a solution to problem (D) such that bind(λ) = {x 0 , x 1 } with x 0 < x 1 . Then there exists (α, β) ∈ (0, ∞) 2 such that
α x 0 0 f (u)du + β x 1 0 f (u)du = 1 α x 0 0 uf (u)du + β x 1 0 uf (u)du = m α x 0 0 u 2 f (u)du + β x 1 0 u 2 f (u)du = δ and we have val(D) = λ 0 + λ 1 m + λ 2 δ = β x 1 0 ψ(u)f (u)du.
Proof Let λ be a solution to problem (D) such that bind(λ) = {x 0 , x 1 } with x 0 < x 1 .
From Lemma B.2 (iv), we have x 0 < K < x 1 , λ 0 > 0, λ 1 < 0 and λ 2 > 0. Since λ 0 > 0 and λ 2 > 0, we can use the same kind of arguments as in the proof of Lemma B.4 in order to deduce from the optimal feature of λ that, for all (a, b, c) ∈ R 3 , if
a x 0 0 f (u)du + b x 0 0 uf (u)du + c x 0 0 u 2 f (u)du > 0 and a x 1 0 f (u)du + b x 1 0 uf (u)du + c x 1 0 u 2 f (u)du > 0 then a + bm + cδ ≥ 0.
From Farkas Lemma, this implies that there exists (α, β) ∈ R + 2 such that
α x 0 0 f (u)du + β x 1 0 f (u)du = 1 α x 0 0 uf (u)du + β x 1 0 uf (u)du = m α x 0 0 u 2 f (u)du + β x 1 0 u 2 f (u)du = δ . (B.16)
We have already remarked, in the proof of Lemma B.4 that for fixed i, the vectors
x i 0 f (u)du , x i 0 uf (u)du,
x i 0 u 2 f (u)du and (1, m, δ) can not be linearly dependent. We therefore have α > 0 and β > 0.
Let us check that val(D
) = λ 0 + λ 1 m + λ 2 δ = β x 1 0 ψ(u)f (u)du. From (B.16), the fact that G λ (x 0 ) = G λ (x 1 ) = 0 and x 0 < K we obtain val(D) = λ 0 + λ 1 m + λ 2 δ = α x 0 0 ψ(u)f (u)du + β x 1 0 ψ(u)f (u)du = β x 1 0 ψ(u)f (u)du .
This ends the proof of Lemma B.7.
Lemma B.8
Let us assume that m/δ > p 1 /p 2 . Let (x 0 , x 1 ) ∈ R 2 be such that 0 < x 0 < x 1 . The system
α x 0 0 f (u)du + β x 1 0 f (u)du = 1 α x 0 0 uf (u)du + β x 1 0 uf (u)du = m α x 0 0 u 2 f (u)du + β x 1 0 u 2 f (u)du = δ (B.17)
has a solution (α, β) ∈ (0, ∞) × (0, ∞) if and only if x 0 and x 1 satisfy the following conditions
x 1 ∈ (x, ∞)x 0 ∈ (0, x m ) and x 1 ∈ (x, ∞) , (B.18) M (x 0 )∆(x 1 ) -M (x 1 )∆(x 0 ) = δ [I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] +m [I(x 0 )∆(x 1 ) -I(x 1 )∆(x 0 )] . (B.19)
Under these conditions, we have
β = M (x 0 ) -mI(x 0 ) I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )
.
Proof Let (x 0 , x 1 ) ∈ R 2 be such that 0 < x 0 < x 1 . We first prove that the system (B.17) has a solution (α, β) ∈ R 2 if and only if x 0 and x 1 satisfy (B.19). For sake of simplicity, we set I i = I(x i ), M i = M (x i ) and ∆ i = ∆(x i ), for i = 0, 1. Since 0 < x 0 < x 1 and the functions M/I and ∆/M are increasing on (0, ∞) (see Lemma A.2), we have
I 0 M 1 -I 1 M 0 > 0 and M 0 ∆ 1 -M 1 ∆ 0 > 0 . (B.20)
It follows that the system made of the first (resp. last) two rows of (B.17) has a unique solution (ᾱ, β) ∈ R 2 (resp. (α, β) ∈ R 2 ). Thus, the system (B.17) has a solution (α, β)
∈ R 2 if and only if (ᾱ, β) = (α, β). We have (ᾱ, β) =
M 1 -mI 1 I 0 M 1 -I 1 M 0 , M 0 -mI 0 I 1 M 0 -I 0 M 1 and (α, β) = m∆ 1 -δM 1 M 0 ∆ 1 -M 1 ∆ 0 , m∆ 0 -δM 0 M 1 ∆ 0 -M 0 ∆ 1 .
One can check that these couples coincide if and only if x 0 and x 1 satisfy (B.19). Under this condition, we have
(α, β) = m∆ 1 -δM 1 M 0 ∆ 1 -M 1 ∆ 0 , M 0 -mI 0 I 1 M 0 -I 0 M 1 .
From (B.20), it then follows that, (α, β) is in (0, ∞) 2 if and only if m∆ 1 -δM 1 > 0 and M 0 -mI 0 < 0. But, from Lemmas A.3 and B.1, we have m∆(x) -δM (x) > 0 ⇔ x > x and M (x) -mI(x) < 0 ⇔ x < x m .
Finally, we have obtained that, for (x 0 , x 1 ) ∈ R 2 such that 0 < x 0 < x 1 , the system (B.17) has a solution (α, β) ∈ (0, ∞) 2 if and only if x 0 and x 1 satisfy (B.19) and
x 0 ∈ (0, x m ) and x 1 ∈ (x, ∞). This ends the proof of Lemma B.8.
Lemma B.9 Let (x 0 , x 1 ) ∈ R 2 be such that 0 < x 0 < K < x 1 . There exists λ ∈ A such that bind(λ) = {x 0 , x 1 } if and only if
x 1 0 ψ(u)f (u)du ((x 1 -x 0 )/ψ(x 1 )) [∆(x 0 ) -(x 0 + x 1 )M (x 0 ) + x 0 x 1 I(x 0 )] = x 0 [I(x 0 )∆(x 1 ) -∆(x 0 )I(x 1 )] + x 2 0 [I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] +M (x 1 )∆(x 0 ) -M (x 0 )∆(x 1 ) . (B.21)
Proof Let (x 0 , x 1 ) R 2 be such that 0 < x 0 < K < x 1 . We first prove that the system below has a solution λ ∈ R 3 if and only if (x 0 , x 1 ) satisfy condition (B.21).
λ 0 + λ 1 x 0 + λ 2 x 2 0 = 0 λ 0 + λ 1 x 1 + λ 2 x 2 1 = ψ(x 1 ) λ 0 I(x 0 ) + λ 1 M (x 0 ) + λ 2 ∆(x 0 ) = 0 λ 0 I(x 1 ) + λ 1 M (x 1 ) + λ 2 ∆(x 1 ) = x 1 0 ψ(u)f (u)du . (B.22)
Here again, for sake of simplicity, we set I(x I ) = I i , M (x i ) = M i and ∆ i = ∆(x i ), for i = 0, 1. Let us prove that the system made of the first three rows of (B.22) has a unique solution. Let d be its determinant. We prove that d > 0. After a few calculations we obtain
d := 1 x 0 x 2 0 1 x 1 x 2 1 I 0 M 0 ∆ 0 = (x 1 -x 0 )I 0 ∆ 0 I 0 -(x 0 + x 1 ) M 0 I 0 + x 1 x 0 .
By Jensen's inequality, we have
∆ 0 I 0 = R x 0 0 u 2 f (u)du R x 0 0 f (u)du ≥ R x 0 0 uf (u)du R x 0 0 f (u)du 2 = M 0 I 0 2 . Hence ∆ 0 I 0 -(x 0 + x 1 ) M 0 I 0 + x 1 x 0 ≥ M 0 I 0 2 -(x 0 + x 1 ) M 0 I 0 + x 1 x 0 = x 0 -M 0 I 0
x 1 -M 0 I 0 . Since x 0 < x 1 and M 0 I 0 < x 0 , it follows that d > 0. Therefore the system (B.22) has a solution if and only if the solution to the system made of the first 3 equations, that we denote by λ, is a solution to the fourth. One can obtain λ in function of x 0 and x 1 as follows
λ 0 = x 0 (x 0 M 0 -∆ 0 ) ψ(x 1 ) (x 1 -x 0 )[∆ 0 -(x 0 + x 1 )M 0 + x 1 x 0 I 0 ] , (B.23) λ 1 = ∆ 0 -x 2 0 I 0 ψ(x 1 ) (x 1 -x 0 )[∆ 0 -(x 0 + x 1 )M 0 + x 1 x 0 I 0 ] , (B.24) λ 2 = (x 0 I 0 -M 0 ) ψ(x 1 ) (x 1 -x 0 )[∆ 0 -(x 0 + x 1 )M 0 + x 1 x 0 I 0 ] . (B.25)
One can check that λ satisfies
λ 0 I 1 + λ 1 M 1 + λ 2 ∆ 1 = x 1 0 ψ(u)f (u)du
if and only if x 0 and x 1 satisfy (B.21). We therefore have obtained that the system (B.22) has a solution λ ∈ R 3 if and only if x 0 and x 1 satisfy condition (B.21).
We have remarked that (
x 1 -x 0 )[∆ 0 -(x 0 + x 1 )M 0 + x 1 x 0 I 0 ] > 0. Since x 1 > K,
we have ψ(x 1 ) > 0 and since x 0 > 0, we have x 0 M 0 -∆ 0 > 0, ∆ 0x 2 0 I 0 < 0 and
x 0 I 0 -M 0 > 0. Thus, from (B.23), (B.24) and (B.25), when the system (B.22) has a solution λ, this solution satisfies λ 0 > 0, λ 1 < 0 and λ 2 > 0.
We are now in position to prove the equivalence stated in the lemma. First notice that, using the fact that x 0 < K, it is easy to see that λ ∈ R 3 satisfies (B.22) if and only
if G λ (x 0 ) = G λ (x 1 ) = G λ ′ (x 0 ) = G λ ′ (x 1 ) = 0.
Let us assume that there exists λ ∈ A such that bind(λ) = {x 0 , x 1 }. Then G λ ′ (x 0 ) = G λ ′ (x 1 ) = 0 and thus, from what precedes, x 0 and x 1 satisfy (B.21). Conversely, if x 0 and x 1 satisfy (B.21) then there exists some λ ∈ R 3 which is solution to system (B.22) and such that λ 0 > 0, λ 1 < 0 and λ 2 > 0. From Lemma B.2 (iv) , it follows that λ ∈ A.
This ends the proof of Lemma B.9.
B.2 Proof of Theorem 3.1
Let us assume that m/δ = p 1 /p 2 . We first prove that val
(D) ≥ (m/p 1 ) ∞ 0 ψ(u)f (u)du (B.26) Let λ ∈ A. Since (1, m, δ) is in F and m/δ = p 1 /p 2 , by Proposition 2.2 we have 0 < m ≤ p 1 . By Lemma B.2 (o) we know that λ 0 ≥ 0. Hence we have λ 0 + λ 1 m + λ 2 δ ≥ (m/p 1 ) (λ 0 + λ 1 p 1 + λ 2 p 2 ).
We then obtain (B.26) by using the equality λ
0 + λ 1 p 1 + λ 2 p 2 = ∞ 0 (λ 0 +λ 1 x+λ 2 x 2 )f (x)dx and the fact that λ ∈ A and hence ∞ 0 (λ 0 +λ 1 u+λ 2 u 2 )f (u)du- ∞ 0 ψ(u)f (u)du ≥ 0.
It remains to prove that the lower bound in (B.26) is attained. Admit for the moment that the function Ψ defined on R + by Ψ(0) = 0 and Ψ
(x) = x 0 ψ(u)f (u)du x 0 uf (u)du for x > 0 is nondecreasing. Then, for λ 0, ∞ 0 ψ(u)f (u)du p 1 , 0 , we have λ 0 + λ 1 m + λ 2 δ = (m/p 1 ) ∞ 0 ψ(u)f (u)du and for all x ≥ 0, G λ (x) = (m/p 1 ) ∞ 0 ψ(u)f (u)du x 0 uf (u)du - x 0 ψ(u)f (u)du ≥ 0 , so that λ ∈ A.
(D) = (m/p 1 ) ∞ 0 ψ(u)f (u)du .
In order to check that val(P ) = val(D), one first notice that by construction val(P ) ≤ val(D) and hence val(P ) ≤ (m/p 1 ) ∞ 0 ψ(x)f (x)dx. 0ne second show that the measure µ defined by dµ (1m/p 1 ) dδ 0 + (m/p 1 ) 1 (0,∞) dx is in C m,δ and satisfies
∞ 0 ψf dµ = (m/p 1 ) ∞ 0 ψ(u)f (u)du.
It remains to prove what we have admitted above, i.e. that the function Ψ is nondecreasing on R + . Using the fact that f is positive, it is easy to check that sign[Ψ
′ (x)] = sign[-r(x)] with r(x) = x x 0 ψ(u)f (u)du -ψ(x)
x 0 uf (u)du, for all x ∈ R + . We have r ≡ 0 on [0, K] and we already saw that r < 0 on (K, ∞), see (B.15). This proves that Ψ is nondecreasing on R + . The proof of Theorem 3.1 is completed.
B.3 Proof of Theorem 3.2
We assume that m/δ > p 1 /p 2 . We know from Remark 3.1 that the value of problem (P ) is finite. We then deduce from Remark 2.1 that strong duality holds between the primal and dual problems: Proof of Theorem 3.2 (ii) We now assume that d(x) < 0 or d(x) = 0 and x ≤ K.
Let λ be a solution to problem (D). From Lemmas B.2 (i) and B.3, we know that the set bind(λ) is not empty and has at most two elements. We prove that it contains exactly two elements. Assume to the contrary that bind(λ) = {y} for some y ∈ (0, ∞). Then by Let us write bind(λ) = {x 0 , x 1 } with 0 < x 0 < x 1 . By Lemma B.2 (iv) we have 0 < x 0 < K < x 1 . Then, from Lemmas B.7 and B.8 we deduce that x 0 and x 1 satisfy x 0 ∈ (0, min{x m , K}), x 1 ∈ (max{x, K}, ∞) and M (x 0 )∆(x 1 ) -M (x 1 )∆(x 0 ) = δ [I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] +m [I(x 0 )∆(x 1 ) -I(x 1 )∆(x 0 )] (B.28) and that val(D) = λ 0 + λ 1 m + λ 2 δ = M (x 0 ) -mI(x 0 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 )
x 1 0 ψ(u)f (u)du .
Finally, by Lemma B.9, x 0 and x 1 satisfy We just proved that, when d(x) < 0 or d(x) = 0 and x ≤ K, there exists (x 0 , x 1 ) ∈ R 2 which satisfies conditions (8), ( 9) and (10). It remains to prove that we have val(D) = M (x 0 ) -mI(x 0 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 )
x 1 0 ψ(u)f (u)du , and that the measure µ defined by dµ M (x 1 ) -mI(x 1 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 ) 1 (0,x 0 ) + M (x 0 ) -mI(x 0 ) M (x 0 )I(x 1 ) -I(x 0 )M (x 1 ) 1 (0,x 1 ) dx is in Sol(P ), for any couple (x 0 , x 1 ) ∈ R 2 which satisfies the conditions (8), ( 9) and (10).
Let (x 0 , x 1 ) be such a couple. Then, on the one hand, by ( 8) and ( 9) and from Lemma B.8, there exists (α, β) ∈ (0, ∞) 2 such that α It follows that, for all v ∈ A,
v 0 + v 1 m + v 2 δ = α x 0 0 (v 0 + v 1 u + v 2 u 2 )f (u)du + β x 1 0 (v 0 + v 1 u + v 2 u 2 )f (u)du ≥ α x 0 0 ψ(u)f (u)du + β x 1 0 ψ(u)f (u)du . (B.30)
On the other hand, by ( 9) and ( 10), and from Lemma B.9, there exists λ ∈ A such that bind(λ) = {x 0 , x 1 }. The equality therefore holds for λ, i.e. Finally, it is easy to check that the measure µ defined by dµ M (x 1 ) -mI(x 1 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 ) 1 (0,x 0 ) + M (x 0 ) -mI(x 0 ) M (x 0 )I(x 1 ) -I(x 0 )M (x 1 ) 1 (0,x 1 ) dx is in C m,δ and that we have
Proposition 2. 1
1 If (1, m, δ) ∈ Int(F ) then val(P ) = val(D). If this common value is further finite, then the set of solutions to (D) is non-empty and bounded. Conversely, if val(D) is finite and the set of solutions to (D) is non-empty and bounded then (1, m, δ) ∈ Int(F ).
Lemma A. 4
4 For every y > 0, there exist a > 0, b ∈ R and c > 0 such that x 0 (a + bu + cu 2 )f (u)du ≥ 0 for all x ≥ 0 and y 0 (a + bu + cu 2 )f (u)du = 0 . Proof Let y > 0. Let us fix a > 0. The system (b, c) because y 2 y 0 uf (u)duy y 0 u 2 f (u)du > 0. From (A.3) we have c > 0 because a > 0 and y y 0 u 2 f (u)duy 2 y 0 uf (u)du < 0. Let us denote by P the function defined on R + by P (x)
problem (D) has at least one solution. We can therefore use optimality conditions on some solution to problem (D) in order to prove the theorem.Proof of Theorem 3.2 (i) Let us assume that d(x) > 0 or d(x) = 0 and x > K. Then by Lemma B.6, there exists (λ1 , λ 2 ) ∈ R 2 such that (0, λ 1 , λ 2 ) ∈ A and λ 1 m + λ 2 δ = )f (u)du .Then, it is easy to see that the measure µ defined by u)f (u)du. Hence, µ ∈ Sol(P ). This ends the proof of Theorem 3.2 (i).
Lemma B.4, we have λ 0 = 0, y = x and val(D) = m R x 0 uf (u)du x 0 ψ(u)f (u)du. So, we have λ = (0, λ 1 , λ 2 ) ∈ A and λ 1 m + λ 2 δ = m R x 0 uf (u)du x 0 ψ(u)f (u)du.From Lemma B.6, this can happen only in the case where d(x) > 0 or d(x) = 0 and x > K. We conclude that bind(λ) contains exactly two elements.
)f (u)du ((x 1x 0 )/ψ(x 1 )) [∆(x 0 ) -(x 0 + x 1 )M (x 0 ) + x 0 x 1 I(x 0 )] = x 0 [I(x 0 )∆(x 1 ) -∆(x 0 )I(x 1 )] + x 2 0 [I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] +M (x 1 )∆(x 0 ) -M (x 0 )∆(x 1 )i.e. by (B.28), f (u)du ((x 1x 0 )/ψ(x 1 )) [∆(x 0 ) -(x 0 + x 1 )M (x 0 ) + x 0 x 1 I(x 0 )] = (x 0m)[I(x 0 )∆(x 1 ) -I(x 1 )∆(x 0 )] + (x 2 0δ)[I(x 1 )M (x 0 ) -I(x 0 )M (x 1 )] .
0 ) -mI(x 0 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 ) . (B.29)
λ 0 +
0 λ 1 m + λ 2 δ = α x 0 0 [λ 0 + λ 1 u + λ 2 u 2 ]f (u)du + β f (u)du .It ensues then from (B.30), from the fact that x 0 < K and from (B.29)0 ) -mI(x 0 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 )
f (x)dµ(x) = M (x 0 ) -mI(x 0 ) I(x 0 )M (x 1 ) -I(x 1 )M (x 0 )x 1 0 ψ(u)f (u)du , so that µ ∈ Sol(P ). This ends the proof of Theorem 3.2 (ii) and completes the proof of Theorem 3.2.B.4 Proof of Proposition 3.1As the proof is very similar to the one of Theorem 3.1, we only give a sketch of it. First it is shown thatsup µ∈Cm ∞ 0 ψ(x)f (x)dµ(x) ≤ inf λ∈A 2 λ 0 + λ 1 m , where A 2 is the set of λ ∈ R 2 satisfying x 0 [λ 0 + λ 1 uψ(u)]f (u)du ≥ 0 for all x ∈ R + .It is easy to see that, if λ ∈ A 2 then λ 0 ≥ 0. Then recalling that m ≤ p 1 , one shows that, for all λ ∈ A 2 , we haveλ 0 + λ 1 m ≥ (m/p 1 ) (λ 0 + λ 1 p 1 ) ≥ (m/p 1 ) ∞ 0 ψ(x)f (x)dx .The proof is completed in the same way as the proof of Theorem 3.1 by considering
observe on table 1, that in general, val(P ) is much smaller than B 4 . This is false in 2 cases, where the strikes and the volatility are low (K = 300 or 350 and σ = 20%), but the values of val(P ) and B 4 are very close to each other. Hence, this example shows that when we consider equilibrium pricing probability measures, there is no need to put
(unrealistic) additional risk-neutral moments restrictions to improve Lo's bound. The bound that we obtain is very satisfactory since the relative deviation from the Black-Scholes price is less than 5%, expect in 4 cases among 15 where it is between 11% and 22%. The average relative deviation is about 6% whereas it is about 24% for B 4 and 48% for B Lo . Also notice that B P &R is much smaller than B Lo .
That proves that the lower bound in (B.26) is attained, i.e. that problem (D) has a solution and its value is given by val
Table 1 .
1 Black-Scholes price, equilibrium bound with 2 moment constraints, equilibrium bound with 1 moment constraint (Perrakis and Ryan), bound with 4 moment constraints, bound with 2 moment constraints (Lo), for different strike prices and volatilities.
σ K BS val(P ) (e) B P &R (e) B 4 (e) B Lo (e)
Proof We start with proving that λ 0 = 0. Assume for the moment that the following result holds: if λ 0 > 0, then, for all (a, b, c
Thus, if λ 0 > 0 then the vectors (1, m, δ) and We now prove the result that we have assumed above i.e. if λ 0 > 0 then (B.5) holds for all (a, b, c) ∈ R 3 . Let (a, b, c) ∈ R 3 be such that
Let us prove that there exists ε > 0 such that λ + (εa, εb, εc) ∈ A. Since λ is a solution to problem (D), it will follow that
i.e. a + bm + cδ ≥ 0 and hence, (B.5) will be proved.
Let ε > 0. For simplicity, we write G ε G λ+ε (a,b,c) . We have
Since λ 0 > 0, there exists ε 0 > 0 such that for all ε ∈ [0, ε 0 ], λ 0 + εa ≥ λ 0 /2 > 0. Since f is positive, it follows that there exists η > 0 such that, for all ε ∈ [0, ε 0 ],
G ε ≥ 0 on [0, η) . (B.7) |
01766425 | en | [
"sde.be",
"sde.es",
"sde.mcg"
] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01766425/file/Hoy%20et%20al%20JAE%20REVISIONS%20FINAL.pdf | Sarah R Hoy
Alexandre Millon
Steve J Petty
D Philip Whitfield
Xavier Lambin
email: [email protected]
Food availability and predation
Keywords: Accipiter gentilis, breeding decisions, breeding propensity, clutch size, juvenile survival, life-history trade-offs, northern goshawk, reproductive strategies, Strix aluco, tawny owl
à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
conditions (e.g. food availability or predation) varies according to its intrinsic attributes (e.g. age, previous allocation of resources towards reproduction).
2. We used 29 years of reproductive data from marked female tawny owls and natural variation in food availability (field vole) and predator abundance (northern goshawk) to quantify the extent to which extrinsic and intrinsic factors interact to influence owl reproductive traits (breeding propensity, clutch size and nest abandonment).
3.
Extrinsic and intrinsic factors appeared to interact to affect breeding propensity (which accounted for 83% of the variation in owl reproductive success). Breeding propensity increased with vole density, although increasing goshawk abundance reduced the strength of this relationship. Owls became slightly more likely to breed as they aged, although this was only apparent for individuals who had fledged chicks the year before.
4.
Owls laid larger clutches when food was more abundant. When owls were breeding in territories less exposed to goshawk predation, 99.5% of all breeding attempts reached the fledging stage. In contrast, the probability of breeding attempts reaching the fledging stage in territories more exposed to goshawk predation depended on the amount of resources an owl had already allocated towards reproduction (averaging 87.7% for owls with clutches of 1-2 eggs compared to 97.5% for owls with clutches of 4-6 eggs).
Introduction
Understanding how different factors influence reproductive decisions is a central issue in ecology and conservation biology, as the number of offspring produced is a key driver of population dynamics [START_REF] Nichols | Estimation of sexspecific survival from capture-recapture data when sex is not always known[END_REF][START_REF] Sedinger | Fidelity and breeding probability related to population density and individual quality in black brent geese Branta bernicla nigricans[END_REF]. The impact of some extrinsic factors on reproductive decisions, such as food availability, are well understood (reviewed in [START_REF] White | The role of food, weather and climate in limiting the abundance of animals[END_REF]. In contrast the impact of others, such as predation risk is more equivocal, even when the same predator and prey species are examined [START_REF] Sergio | Intraguild predation in raptor assemblages: a review[END_REF]. Quantifying the indirect effect of predation risk on prey reproductive decisions under natural conditions is difficult, but merits further investigation as it can theoretically destabilize predator-prey dynamics, under certain circumstances [START_REF] Kenward | Breeding suppression and predator-prey dynamics[END_REF].
Furthermore, despite the influence of food availability and predation risk on reproductive success being extensively studied, the extent to which these two extrinsic factors interact to affect reproductive decisions remains poorly understood (but see [START_REF] Sergio | Spatial refugia and the coexistence of a diurnal raptor with its intraguild owl predator[END_REF].
Food availability is frequently reported to have a positive influence on the proportion of individuals in the population breeding and the number of offspring produced [START_REF] Arcese | Effects of population density and supplemental food on reproduction in song sparrows[END_REF][START_REF] Pietiäinen | Seasonal and individual variation in the production of offspring in the Ural owl, Strix uralensis[END_REF][START_REF] Petty | Ecology of the Tawny Owl Strix Aluco in the Spruce Forests of Northumberland and Argyll[END_REF][START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. However, breeding individuals and individuals producing more offspring per breeding attempt are often more vulnerable to predation compared to non-breeding individuals [START_REF] Magnhagen | Predation risk as a cost of reproduction[END_REF][START_REF] Hoogland | Selective predation on Utah prairie dogs[END_REF] or those producing fewer offspring [START_REF] Ercit | Egg load decreases mobility and increases predation risk in female black-horned tree crickets (Oecanthus nigricornis)[END_REF]. Consequently, in years when predation risk is high, individuals of long-lived iteroparous species may attempt to minimize their vulnerability to predation by: i) refraining from breeding [START_REF] Spaans | Dark-bellied Brent geese Branta bernicla bernicla forego breeding when arctic foxes Alopex lagopus are present during nest initiation[END_REF]; ii) reducing the number or quality of offspring [START_REF] Doligez | Clutch size reduction as a response to increased nest predation rate in the collared flycatcher[END_REF][START_REF] Zanette | Perceived Predation Risk Reduces the Number of Offspring Songbirds Produce per Year[END_REF]; or iii) abandoning the breeding attempt at an early stage [START_REF] Sergio | Spatial refugia and the coexistence of a diurnal raptor with its intraguild owl predator[END_REF][START_REF] Chakarov | Mesopredator release by an emergent superpredator: a natural experiment of predation in a three level guild[END_REF]. Indeed, experimental studies have shown that individuals respond to variation in predation risk by making facultative decisions to alter their allocation of resources towards reproduction, so as to reduce their own, or their offspring's vulnerability to predators [START_REF] Ghalambor | Fecundity-survival trade-offs and parental risktaking in birds[END_REF][START_REF] Doligez | Clutch size reduction as a response to increased nest predation rate in the collared flycatcher[END_REF][START_REF] Fontaine | Parent birds assess nest predation risk and adjust their reproductive strategies[END_REF][START_REF] Zanette | Perceived Predation Risk Reduces the Number of Offspring Songbirds Produce per Year[END_REF]. However, according to life history theory, such changes in reproductive strategies should arise only when the losses incurred from not breeding, or not completing a breeding attempt, are compensated for by future reproductive success [START_REF] Stearns | The Evolution of Life Histories[END_REF]).
This intrinsic trade-off between current reproductive success and future reproductive potential is thought to be an important factor shaping reproductive decisions [START_REF] Stearns | The Evolution of Life Histories[END_REF].
For many long-lived species, the strength of this trade-off is thought to vary over an individual's lifetime [START_REF] Proaktor | Age-related shapes of the cost of reproduction in vertebrates[END_REF], as both survival- and reproduction-related traits are age-dependent, often declining in later life [START_REF] Nussey | Senescence in natural populations of animals: widespread evidence and its implications for biogerontology[END_REF]. Furthermore, changes in extrinsic conditions can also cause the strength of this intrinsic trade-off to vary, via their influence on survival probabilities and ultimately the individual's future reproductive potential [START_REF] Barbraud | Environmental conditions and breeding experience affect costs of reproduction in Blue Petrels[END_REF][START_REF] Hamel | Maternal characteristics and environment affect the costs of reproduction in female mountain goats[END_REF]. Consequently, an individual's reproductive response to changes in extrinsic conditions is predicted to vary according to their intrinsic attributes, with individuals becoming increasingly committed to their current reproductive attempt as they age, to compensate for the decline in future breeding prospects [START_REF] Clutton-Brock | Reproductive effort and terminal investment in iteroparous animals[END_REF]. However, few studies have examined whether intrinsic and extrinsic factors interact to explain variation in reproductive success (but see [START_REF] Wiklund | The adaptive significance of nest defence by merlin, Falco columbarius, males[END_REF][START_REF] Kontiainen | Aggressive ural owl mothers recruit more offspring[END_REF][START_REF] Rauset | Reproductive patterns result from age-related sensitivity to resources and reproductive costs in a mammalian carnivore[END_REF]), despite theory predicting such a link [START_REF] Williams | Natural selection, the cost of reproduction, and a refinement of lack's principle[END_REF][START_REF] Ricklefs | On the evolution of reproductive strategies in birds: Reproductive effort[END_REF].
In this study, we used 29 years of breeding data collected on an intensively monitored population of individually identifiable female tawny owls (Strix aluco) to examine the extent to which owl reproductive decisions varied in relation to two extrinsic factors: natural variation in the abundance of their main prey (field vole, Microtus agrestis; Petty 1999) and of their main predator (a diurnal raptor, the northern goshawk, Accipiter gentilis; [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]).
In another study site, predation by diurnal raptors was found to account for 73% of natural tawny owl mortality after the fledging stage, when parents are still provisioning food for their young [START_REF] Sunde | Diurnal exposure as a risk sensitive behaviour in tawny owls Strix aluco ?[END_REF] and in our study site predation on adult owls was biased towards breeding females [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. It is expected that breeders and parents of larger broods spend more time hunting to provision food for their offspring, which may make these parents more exposed to predation by goshawks. Consequently, in years when predation risk is high, individuals may attempt to minimise their vulnerability to predation by reducing the amount of resources they allocate towards reproduction (breeding less frequently or laying smaller clutches). However, as the seasonal peak in goshawk predation on tawny owls occurs after owls have already initiated breeding attempts [START_REF] Petty | The decline of common kestrels Falco tinnunculus in a forested area of northern England: the role of predation by Northern Goshawks Accipiter gentilis[END_REF], the main response of individuals to variation in predation risk may manifest itself as an increased tendency to abandon breeding attempts at an early stage. Therefore in this study we examined how three different reproductive decisions: i) breeding propensity; ii) clutch size; and iii) whether breeding attempts were completed to the fledging stage varied in relation to fluctuations in food availability and predation risk.
We also investigated whether owl reproductive decisions were related to the following intrinsic attributes: current and previous allocation of resources towards breeding (clutch size and reproductive success the year before, respectively) and the age of the individual, as life-history theory predicts an intrinsic trade-off between current and future allocation of resources towards reproduction [START_REF] Williams | Natural selection, the cost of reproduction, and a refinement of lack's principle[END_REF], and survival and reproductive rates are age-dependent in tawny owls [START_REF] Millon | Natal conditions alter age-specific reproduction but not survival or senescence in a long-lived bird of prey[END_REF].
Changes in extrinsic conditions are also likely to affect the probability of offspring being recruited into the breeding population, via their effect on juvenile owl survival [START_REF] Sunde | Diurnal exposure as a risk sensitive behaviour in tawny owls Strix aluco ?[END_REF][START_REF] Sunde | Predators control post-fledging mortality in tawny owls, Strix aluco[END_REF][START_REF] Koning | Long-term study on interactions between tawny owls Strix aluco , jackdaws Corvus monedula and northern goshawks Accipiter gentilis[END_REF][START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]. Thus, the influence of extrinsic conditions on juvenile survival should influence the adaptive basis for reproductive decisions, for instance, how beneficial it is to allocate resources towards a reproductive attempt. Consequently, we also examined how juvenile survival varied in relation to temporal fluctuations in food availability and predation risk.
Methods
Study site and owl monitoring
Tawny owl reproduction has been continuously monitored in a 176 km² central section of Kielder Forest (55°13′N, 2°33′W) since 1979, using nest boxes [START_REF] Petty | Value of nest boxes for population studies and conservation of owls in coniferous forests in Britain[END_REF]. Kielder
Forest, mainly planted with Sitka Spruce (Picea sitchensis), lacks natural tree cavities, therefore owls breed almost exclusively in nestboxes [START_REF] Petty | Value of nest boxes for population studies and conservation of owls in coniferous forests in Britain[END_REF]. Each year, all nest boxes were checked for occupancy, to record clutch size, the number of chicks fledging and to ring chicks. Tawny owls do not breed every year after becoming reproductively active and only breed once per year, but can re-lay if the first breeding attempt fails early (during laying or the early incubation period; [START_REF] Petty | Ecology of the Tawny Owl Strix Aluco in the Spruce Forests of Northumberland and Argyll[END_REF]). In such cases, we only included the second breeding attempt, such that each individual contributed only one breeding attempt per year to our analysis. In some cases, the monitoring of a nestbox resulted in owls abandoning their breeding attempts. We therefore excluded all such breeding attempts (N = 51/965) from all our analyses. Breeding females were captured every year using a modified angler's landing net which was placed over the entrance of the nestbox, when their chicks were 1-2 weeks old. The identity of breeding females was established from their metal ring numbers, and any unmarked breeding females (entering the population as immigrants) were ringed upon capture so that they would subsequently be individually identifiable. Tawny owls are highly site faithful, and in our study site >98% remained in the same territory where they first started breeding [START_REF] Petty | Ecology of the Tawny Owl Strix Aluco in the Spruce Forests of Northumberland and Argyll[END_REF]). Therefore we determined the identity of a female occupying a territory when no breeding took place or when the breeding attempt failed prior to trapping in the following way. When the same female was recorded breeding in a territory both before and after the year(s) where no female was caught, we assumed the same individual was involved.
However, when different females were recorded either side of a year(s) when females were not caught, we deemed the identity of the breeder unknown and excluded such breeding attempts from our analyses. A total of 914 breeding attempts took place between 1985 and 2013; the identity of the female was known, or could reasonably be assumed, in 89% of these cases (N = 813).
Analysis
To determine the extent to which owl breeding decisions were affected by fluctuating extrinsic and intrinsic factors, we examined: i) breeding propensity, ii) clutch size and iii) whether breeding attempts were completed, using generalised linear mixed-effects models (GLMMs) with the appropriate error structure in R version 3.0.3 (R Core Development Team 2014). The identity of the breeding female and the year of a breeding attempt were fitted as random effects to account for individuals breeding in more than one year, and for any residual temporal variation in response variables not attributable to the fitted temporal covariates of interest (food availability and predation risk). In all analyses both the additive and 2-way interactive effects of fixed-effect covariates were tested. We visually checked for any residual spatial autocorrelation in all response variables not explained by the covariates included in the selected best models, using correlograms [START_REF] Zuur | Mixed Effects Models and Extensions in Ecology with R[END_REF].
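For concreteness, the model structure described above can be written in R with the lme4 package. The sketch below is not the authors' code: the data frame `owls` and its column names (`bred`, `spring_voles`, `local_goshawk`, `female_id`, `year`) are hypothetical stand-ins for the variables described in the text.

```r
# Minimal sketch of the GLMM structure described above (hypothetical
# object and column names; one row per female-year).
library(lme4)

# Breeding propensity: binary response with a logit link, and crossed
# random intercepts for female identity and year.
m_propensity <- glmer(bred ~ spring_voles * local_goshawk +
                        (1 | female_id) + (1 | year),
                      data = owls, family = binomial)
summary(m_propensity)
```

Clutch size and breeding-attempt completion would be fitted analogously, swapping the response variable and the error structure (e.g. `family = binomial` again for the binary completion variable).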
We examined causes of variation in breeding propensity by analysing whether an individual bred or did not breed each year after becoming reproductively active, up until its last recorded breeding attempt (fitted as a binary covariate). We examined breeding propensity in this way for the following reasons. We excluded first-time breeding attempts, as the breeding propensity of such attempts would necessarily be one and this may bias the results. We did not include the years prior to the first breeding attempt because there is no way to identify a new recruit in a territory before it first bred, and it was unknown whether individuals had made a facultative decision not to breed in the year(s) before they first bred, or whether they were incapable of breeding regardless of extrinsic conditions. Furthermore, some individuals were only recorded breeding once, and we had no way of determining whether such individuals were alive and had decided not to breed in the subsequent year(s) after their only recorded breeding attempt, or whether these individuals were dead. When at least one egg was laid in a territory known to be occupied by a particular female, we recorded that as a breeding attempt. Less than 2% (N = 5) of the 268 different females recorded breeding in Kielder Forest were known to have skipped breeding for three or more consecutive years.
Therefore, we assumed an individual was dead if it had not been re-captured in the last 3 years of the study (i.e. after 2010). In this analysis, we excluded all individuals that could not be assumed dead or were known to be alive (i.e. were recorded breeding) in 2013 (N = 40), to remove any bias that unknown non-breeding events occurring in the last few years of the study period could induce.
To determine the extent to which owls adjust the amount of resources they allocate towards reproduction in response to variation in food availability and predation risk, we modelled variation in clutch size. In addition, we examined the decision or capability to continue with a breeding attempt by classifying each breeding attempt as "complete", if at least one chick fledged, or "incomplete" if not (fitted as a binary covariate). These two analyses were based on a different dataset to that used for the breeding propensity analysis, as it contained all breeding attempts by all known individuals (N = 241), including first-time breeders, between 1985 and 2013.
Measures of food availability and predation risk
Field voles are the main year-round prey of tawny owls in Kielder Forest, representing on average 62% of prey brought to the nestbox (N = 1423; Petty 1999). As tawny owls are vole specialists in our study site, variation in the abundance of alternative food sources probably had only a limited impact on owl breeding decisions. Field vole densities were monitored in spring and autumn at 17-21 sites within the owl monitoring area, every year since 1985 (for methods see Lambin, Petty & MacKinnon 2000). Vole densities in the spring and autumn were positively correlated (r = 0.65, N = 27, P < 0.001). The amount of vole prey available in early spring (prior to egg laying) has previously been shown to affect owl reproduction; in years of high food availability more pairs attempted to breed and clutch sizes were larger [START_REF] Petty | Ecology of the Tawny Owl Strix Aluco in the Spruce Forests of Northumberland and Argyll[END_REF][START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. Therefore, spring vole densities were used as a proxy for owl food availability in all analyses. Field vole densities were asynchronous but spatially structured across Kielder Forest (i.e. travelling waves; [START_REF] Lambin | Spatial asynchrony and periodic travelling waves in cyclic populations of field voles[END_REF]). However, this pattern has changed over time, with a gradual loss of spatial structure [START_REF] Bierman | Changes over time in the spatiotemporal dynamics of cyclic populations of field voles (Microtus agrestis L.)[END_REF].
Such changes in prey spatial synchrony may affect how easy it is for owls to predict the amount of food available in their territory, and hence influence their reproductive decisions. Therefore, we also examined the extent to which tawny owl breeding decisions were affected by changes in the spatial synchrony of field vole densities. To do so, we first calculated spatial variation in field vole densities as the coefficient of variation (standard deviation divided by the mean) in spring vole densities between survey sites, each year. However, spatial variation in vole densities may be less important in years when food is abundant, compared to when it is scarce. Therefore, we classified years as either being of low overall food abundance if the averaged spring vole density was below the median value for all years, or high if not. We then included an interaction between spatial variation in vole densities and the categorical covariate of overall vole densities to test this hypothesis.
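As an illustration, the two covariates described here can be derived in a few lines; this sketch assumes a data frame `voles` with columns `year`, `site` and `spring_density`, none of which are named in the original.

```r
# Sketch: spatial variation in vole densities (coefficient of variation
# across survey sites) and a median split into low/high vole years.
library(dplyr)

vole_year <- voles %>%
  group_by(year) %>%
  summarise(mean_density = mean(spring_density),
            spatial_cv   = sd(spring_density) / mean(spring_density))

vole_year$overall <- ifelse(
  vole_year$mean_density < median(vole_year$mean_density), "low", "high")

# The hypothesis is then tested by fitting spatial_cv * overall in the GLMMs.
```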
Northern goshawks (hereafter goshawks) have been continuously monitored since the first breeding attempt in 1973 [START_REF] Petty | Goshawks Accipiter gentilis. The Atlas of Breeding Birds in Northumbria[END_REF]. Each year, occupied goshawk home-ranges were identified, and over the last 40 years the number of occupied home-ranges has increased from one to 25-33. Goshawks are known predators of tawny owls, with breeding female owls being three times more likely to be killed than adult males; predation is also heavily biased towards juveniles [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. Goshawk dietary data collected in Kielder Forest suggest that as the breeding population of goshawks increased, the mean number of owls killed each year by goshawks also increased. An average of 5 [3-8; 95% CI] owls were killed each year when fewer than 15 goshawk home-ranges were occupied, compared to an average of 159 [141-176; 95% CI] owls killed each year when more than 24 goshawk home-ranges were occupied (see Appendix S1). Consequently, as predation on owls has increased with the abundance of goshawks in the forest, we used the total number of occupied goshawk home-ranges in a 964 km² area of Kielder Forest as a proxy of temporal variation in predation risk. However, as goshawks were monitored over a larger area than tawny owls, we also used an additional proxy of temporal variation in predation risk: local goshawk abundance, measured as the number of goshawk home-ranges whose nest sites were within 5.8 km (the estimated goshawk foraging distance) of the owl monitoring area, calculated in the same way as described in [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. Spatial variation in predation risk has also been found to influence reproductive decisions [START_REF] Sergio | Spatial refugia and the coexistence of a diurnal raptor with its intraguild owl predator[END_REF]. Therefore, we investigated the extent to which owl reproductive decisions varied in relation to two spatial proxies of predation risk: (i) the distance from an owl's nest to the nearest goshawk nest site; and (ii) the location of an owl's territory in relation to all goshawk nest sites (i.e. the connectivity of an owl territory to all goshawk nest sites). The connectivity measure of predation risk takes into account all goshawk nest sites, but weights the influence each goshawk nest site has on this index of predation risk according to its distance from the focal owl nest site (for further details and method see Appendix S2). These spatial covariates of predation risk were calculated for each owl territory, every year. Although common buzzards Buteo buteo are abundant in our study site and are known to kill tawny owls [START_REF] Mikkola | Owls killing and killed by other owls and raptors in Europe[END_REF], we did not include buzzards in any of our analyses of owl predation risk, because dietary data showed that buzzard predation on owls in our study site was negligible (unpublished data). None of the temporal proxies of food availability were significantly correlated with the temporal covariates of predation risk. However, no two proxies of predation risk, or two proxies of food availability, were included in the same model, as they were collinear (see Appendix S3 for all cross-correlation coefficients).
All temporal and spatial covariates were standardised (to a mean of 0 and a standard deviation of 1) to enable their effect sizes to be compared.
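The exact weighting behind the connectivity index is given in Appendix S2; purely as an illustration, indices of this kind are often built from a distance-decay kernel, and the sketch below uses a negative-exponential kernel with the 5.8 km foraging distance as its scale. Both choices are assumptions, not the authors' formula, and the object names are hypothetical.

```r
# Hedged sketch of a distance-weighted connectivity index (see Appendix S2
# for the actual method; the exponential kernel and alpha are assumptions).
# d: matrix of distances (km) from each owl nest (rows) to each occupied
# goshawk nest site (columns) in a given year.
connectivity <- function(d, alpha = 5.8) {
  rowSums(exp(-d / alpha))  # nearer goshawk nest sites contribute more
}

# Standardising a covariate to mean 0 and standard deviation 1:
owls$connectivity_z <- as.numeric(scale(owls$connectivity))
```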
Intrinsic attributes
When testing the hypothesis that the response of an individual to changes in extrinsic conditions varied according to age, we used the number of years elapsed since the individual's first recorded breeding attempt, because the exact age of 94 breeding females entering the population as adult immigrants was unknown. However, most (89%) female owls had commenced breeding by the time they were 3 years old [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF], and there had been no change in the mean age at first reproduction over the study period, neither for immigrants nor for local recruits entering the owl population (unpublished data).
Consequently, the number of years elapsed since an individual's first recorded breeding attempt is closely related to its age, and the length of an individual's breeding lifespan is also highly correlated with actual lifespan (r = 0.91; N = 163). We tested the hypothesis that previous investment in reproduction influenced an individual's current reproductive decisions in relation to changes in predation risk and food availability by fitting a binary covariate reflecting whether a female owl had successfully raised offspring to the fledging stage the previous year. Lastly, we investigated whether the likelihood of an individual completing a breeding attempt to the fledging stage was related to clutch size, taking clutch size as a proxy for the extent to which an individual had already allocated resources towards the current reproductive attempt. All descriptive statistics are shown with the standard deviation (SD).
Juvenile survival
As recapture data were not available for male owls in all years, our analysis of juvenile owl survival was based on female owls only, ringed as chicks between 1985 and 2012 (N = 1,082), with the last recapture of individuals in 2013. The sex of individuals never recaptured as adults or sexed as chicks using DNA was unknown, as juvenile owls cannot be accurately sexed without molecular analyses. However, the sex ratio of chicks born in our study site was even (1:1; N = 312, over 4 years; Appleby et al. 1997). Consequently, we randomly assigned half the number of chicks born each year, minus the number known to be female, as females, as done in previous analyses [START_REF] Nichols | Estimation of sexspecific survival from capture-recapture data when sex is not always known[END_REF][START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]. The rest of these chicks were assumed to be males and excluded from the analysis. Owls were only recaptured when breeding, and owls usually start breeding between the ages of 1 and 4 (89% before age 3; [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]). Recapture probabilities were therefore modelled as time-dependent and age-specific [(1, 2-3, 4+)], as done in [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]. This analysis was carried out in E-SURGE version 1.9.0 [START_REF] Choquet | Program E-SURGE: A Software Application for Fitting Multievent Models[END_REF]. Goodness-of-fit tests were carried out in U-CARE 2.3.2 [START_REF] Choquet | U-CARE 2.2 User's Manual[END_REF]. In this analysis only, rather than using spring vole densities (measured in March) as the measure of food availability, we used autumn densities of field voles (measured in September-October), as they have previously been shown to be more closely related to changes in juvenile tawny owl survival [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF][START_REF] Millon | Natal conditions alter age-specific reproduction but not survival or senescence in a long-lived bird of prey[END_REF][START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. Temporal proxies of predation risk were the same as those used in the previous analyses. Spatial proxies of predation risk were calculated as before, but using the natal nestbox, and were modelled as an individual covariate. Model selection in all of the above analyses was based on Akaike's information criterion corrected for small sample size (AICc; [START_REF] Burnham | Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd Editio[END_REF]).
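The cohort-wise random sex assignment described above can be made explicit as follows; the data frame `chicks` and its columns (`year`, `known_female`) are hypothetical names, not taken from the original analysis.

```r
# Sketch of the random sex-assignment rule: within each cohort, draw
# (half the cohort size minus the number of known females) chicks of
# unknown sex and treat them as females; the rest are assumed male.
set.seed(1)  # make the random draw reproducible
assign_females <- function(cohort) {
  n_target <- round(nrow(cohort) / 2) - sum(cohort$known_female)
  unknown  <- which(!cohort$known_female)
  n_draw   <- min(max(n_target, 0), length(unknown))
  picked   <- unknown[sample.int(length(unknown), n_draw)]
  cohort$assumed_female <- cohort$known_female
  cohort$assumed_female[picked] <- TRUE
  cohort
}
chicks_sexed <- do.call(rbind, lapply(split(chicks, chicks$year), assign_females))
```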
Results
Breeding propensity
When averaged across years, the probability of a female breeding after becoming reproductively active was 0.78 ± 0.17 (range: 0.21-0.99). Variation in breeding propensity appeared most strongly related to changes in extrinsic conditions (Table 1). In years when local goshawk abundance was relatively low (fewer than 10 home-ranges occupied) breeding propensity increased from an average of 0.33 ± 0.18, when food availability was also low, to an average of 0.95 ± 0.06 in years of high food availability (Fig. 1a). However, in years when goshawk abundance was high the relationship between breeding propensity and food availability was less apparent (Fig. 1a). Breeding propensity also appeared to vary according to intrinsic attributes (proxies of age and previous allocation of resources to reproduction); however the association between breeding propensity and intrinsic attributes was much weaker in comparison to the relationship with extrinsic factors (Fig. 1b; Table 1; Appendix S4). Breeding propensity was estimated to increase slightly as owls aged. However this trend was only observed for individuals who had successfully fledged chicks the year before.
Clutch size
Owl clutch size averaged 2.85 ± 0.82 (range: 1-6; N = 850), with 92.8% of clutches containing 2-4 eggs. The largest clutches were laid in years of high spring vole densities, with clutch size increasing from an average of 2.38 [2.28-2.48; 95% CI] in years when vole densities were below 50 voles ha⁻¹ to 2.98 [2.82-3.14; 95% CI] in years when vole densities were above 150 voles ha⁻¹ (Fig. 2). There was no evidence to suggest that variation in clutch size was related to predation risk or female age (Table 2; Appendix S5).
Completing a breeding attempt to the fledging stage
On average, 96% of breeding attempts (N = 813) were completed. Clutch size and connectivity to goshawk nest sites explained the most variation in whether a breeding attempt was completed (Table 3; Appendix S6). Irrespective of clutch size, the percentage of breeding attempts observed to reach the fledging stage was close to 100% (N = 193/194) for owls breeding in territories not well connected to goshawk nest sites, hence less exposed to predation (i.e. in territories not in close proximity to many goshawk nest sites; Fig. 3).
However, for owls breeding in territories relatively well connected to goshawk nest sites, hence more exposed to predation (i.e. in close proximity to several goshawk nest sites in that year), the probability of breeding attempts being completed decreased from 97.5% (N = 39/40 breeding attempts) when owls had clutches containing four or more eggs to 87.7% (N = 57/65 breeding attempts) when clutches contained 1-2 eggs (Fig. 3).
Juvenile survival
Juvenile survival averaged 0.18 ± 0.02 (SE). Autumn vole densities explained the most variation in juvenile survival (slope on logit scale: β = 0.42 ± 0.1; %Deviation = 34.5).
Juvenile survival was estimated to increase with autumn vole densities (Appendix S7). There was no evidence of a relationship between juvenile owl survival and any proxy of predation risk (Table 4).
Discussion
In this study we examined how reproduction in female tawny owls (breeding propensity, clutch size and nest abandonment) was influenced by both extrinsic (food availability and predation risk) and intrinsic factors (age, previous and current allocation of resources towards reproduction) and any interactions between these factors. Our main findings were as follows: i) breeding propensity was highest in years when food (field vole densities in spring) was abundant and predation risk (goshawk abundance) was low. However, in years when goshawk abundance was relatively high the association between breeding propensity and food availability was less apparent. Breeding propensity also appeared to be related to intrinsic attributes (but to a lesser extent than extrinsic factors), as owls which had successfully fledged chicks the year before were slightly more likely to breed as they aged compared to owls which had not fledged chicks. ii) Clutch size was positively associated with spring vole densities but was unrelated to predation risk or any intrinsic attributes examined.
iii) On average, 96% of breeding attempts were completed; however, owls with small clutches (1-2 eggs) breeding in territories more exposed to goshawk predation were less likely to complete their breeding attempt compared to owls with larger clutches breeding in less exposed territories. iv) Juvenile owl survival was positively correlated with food availability in the autumn but was unrelated to predation risk. Overall, these findings represent rare evidence of how extrinsic and intrinsic factors interact to shape reproductive decisions in a long-lived iteroparous predator.
Breeding propensity
Breeding propensity was closely correlated with food availability (measured as field vole densities in spring) in the early years of the study, when predation risk (goshawk abundance) was relatively low (Fig. 1a). However, as predator abundance increased over the study period, the positive effect of food availability on breeding propensity diminished. These results indicate that breeding propensity is not purely constrained by the amount of food available prior to the breeding season. They also suggest that owls may be capable of assessing changes in predation risk and of making facultative decisions about whether to allocate resources to reproduction, as shown for other species [START_REF] Sih | The effects of predators on habitat use, activity and mating behaviour of a semi-aquatic bug[END_REF][START_REF] Candolin | Reproduction under predation risk and the trade-off between current and future reproduction in the threespine stickleback[END_REF][START_REF] Ghalambor | Fecundity-survival trade-offs and parental risktaking in birds[END_REF][START_REF] Zanette | Perceived Predation Risk Reduces the Number of Offspring Songbirds Produce per Year[END_REF]. Unfortunately, we were unable to determine the exact nature of the link between food availability, predation risk and the observed changes in owl reproduction, as our approach was necessarily correlative, given the spatial scale of the processes considered. Therefore, we cannot rule out the possibility that changes other than average vole density in spring or goshawk abundance may have co-occurred to cause the observed variation in breeding propensity. However, we also examined whether changes in the spatial dynamics of food availability and predation risk were related to breeding propensity. Life history theory predicts that individuals should only forgo breeding when the cost of not breeding is compensated for by future reproductive gains [START_REF] Stearns | The Evolution of Life Histories[END_REF]. An analysis of breeding female owl survival in our study site suggests that it was lowest in years when goshawk abundance was relatively high and owl food availability was low (unpublished data). Consequently, we suggest that the higher breeding propensity observed in years when goshawks were abundant and food was scarce could plausibly reflect that these environmental conditions (being adverse for owls for a number of consecutive years towards the end of the study period) have made intermittent breeding a less beneficial strategy, as the cost of not breeding now is less likely to be compensated for in the future.
We also found evidence suggesting that a detectable but relatively small amount of variance in breeding propensity was associated with the age of the female owl and her previous allocation of resources towards reproduction, as breeding propensity increased slightly with age for females which had fledged chicks the previous year. This could indicate that some individuals are inherently of "high quality" and do not face a strong trade-off between current and future investment in reproduction. While the effect sizes were relatively small in comparison with the strength of the correlations between breeding propensity and extrinsic conditions (food availability and predation risk; Fig. 1), our results demonstrate the dual intrinsic and extrinsic influence on the decision to reproduce.
Clutch size
The strong positive effect of food availability on clutch size is concordant with results from several other studies (Fig. 2; [START_REF] Ballinger | Reproductive strategies: food availability as a source of proximal variation in a lizard[END_REF][START_REF] Crawford | The influence of food availability on breeding success of African penguins Spheniscus demersus at Robben Island, South Africa[END_REF][START_REF] Lehikoinen | The impact of climate and cyclic food abundance on the timing of breeding and brood size in four boreal owl species[END_REF]).
However, we found no evidence of an association between clutch size and any proxy of predation risk. Due to the latitude of our study site, nights are relatively long prior to the breeding season. Hence, there is little overlap in the activity-periods of nocturnal tawny owls and diurnal goshawks, compared to late spring and summer when nights are relatively short.
Furthermore, female goshawks are thought to leave Kielder Forest in winter, returning in February, just prior to owls laying (unpublished data). Therefore, predation risk for owls might potentially be relatively low prior to the breeding season, when female owls are building up the body reserves needed for breeding, which could, in part, explain why we found no evidence of a relationship between clutch size and predation risk.
Completing a breeding attempt to the fledging stage
As predicted by life-history theory, individuals who had allocated more towards reproduction (e.g. by laying larger clutches) were more likely to continue their breeding attempt to the fledging stage, a finding consistent with previous studies (e.g. [START_REF] Delehanty | Effect of clutch size on incubation persistence in male Wilson's Phalaropes (phalaropus tricolor[END_REF]).
Predation risk was the only extrinsic predictor of whether breeding attempts reached the fledging stage, with individuals breeding in territories more exposed to predation risk being less likely to complete a breeding attempt (Fig. 3); a result congruent with another study examining the effect of spatial variation in predation risk on reproductive success [START_REF] Sergio | Spatial refugia and the coexistence of a diurnal raptor with its intraguild owl predator[END_REF]. Goshawks start displaying over territories and building nests in late March and April in the UK [START_REF] Kenward | Breeding suppression and predator-prey dynamics[END_REF], hence are likely to become even more conspicuous to owls, after owls have already committed to breeding. Furthermore, predation risk for both adult and fledgling owls increased throughout the breeding season [START_REF] Petty | The decline of common kestrels Falco tinnunculus in a forested area of northern England: the role of predation by Northern Goshawks Accipiter gentilis[END_REF][START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. Therefore, the tendency of owls not to complete breeding attempts in territories where predation risk is presumably high, is consistent with females (having already commenced breeding), attempting to reduce their own vulnerability to predation as the breeding season progresses. Alternatively, as 23% of breeders which did not complete a breeding attempt were never recaptured in the study site again, the higher failure rates in territories well connected to areas of high goshawk activity could also reflect that some parents in those territories were predated by goshawks and hence were unable to complete the breeding attempt.
Juvenile survival
Our analysis confirmed that juvenile owl survival was positively related to food availability [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF][START_REF] Millon | Natal conditions alter age-specific reproduction but not survival or senescence in a long-lived bird of prey[END_REF][START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. Estimates of juvenile owl survival were lowest in low vole years (Appendix S7). If mothers were able to predict the food conditions that their offspring would experience they should be less inclined to allocate resources towards reproduction in low vole years, due to the reduced probability of these offspring being recruited into the population. This may in part explain why individuals allocated relatively few resources towards reproduction (i.e. smaller clutch sizes) in years when food was scarce.
Reproductive strategies in relation to changing environmental conditions
A reproductive strategy can be defined as the set of decisions which influence the number of offspring an individual produces. Owl breeding strategies appeared to change in response to extrinsic conditions. Individuals allocated more resources towards reproduction (in terms of breeding propensity and clutch size) in years when food was abundant (Fig. 1 & Fig. 2).
Although we found no evidence to support our prediction that owls would attempt to minimise their vulnerability to predation by breeding less frequently or laying smaller clutches in years when predation risk was high, we did find evidence to suggest that owls responded to changes in predation risk by making facultative decisions about whether to continue with their breeding attempt. However, the observed increase in incomplete nesting attempts with increasing predation risk may also be partly due to parent(s) being killed, hence being unable to complete the breeding attempt, rather than a facultative decision not to continue the attempt. There was no year-to-year collinearity between our temporal covariates of predation risk and food availability. However, when averaged over a larger time scale (5 years) these covariates were correlated, and hence both environmental conditions changed simultaneously in opposite ways, with spring vole densities decreasing and predation risk increasing over the course of the study period. Therefore, we were unable to fully disentangle the effects of food availability and predation risk on owl breeding decisions. As the overall percentage of failed breeding attempts was very low (4% on average), the main reproductive decisions influencing reproductive output were breeding propensity, followed by clutch size. Indeed, the proportion of the population breeding and average clutch size explained 83% and 16%, respectively, of the total variation in annual reproductive success of the tawny owl population (measured as the average number of chicks fledged per occupied owl territory), whereas whether breeding attempts were completed explained only 0.1% of the total variation in reproductive success (see Appendix S8). Consequently, food availability seemed to have a greater impact on breeding propensity than changing predation risk (Fig. 1, Table 1) and to be the main extrinsic factor driving variation in reproductive output, thus shaping reproductive strategies in tawny owls. However, the strength of the relationship between reproductive output and food availability weakened as predation risk increased.
As food availability declined (specifically, as vole populations switched from high- to low-amplitude cycles; [START_REF] Cornulier | Europe-wide dampening of population cycles in keystone herbivores[END_REF]) and predation risk increased, tawny owls seemed to breed more frequently, but invested less per breeding attempt. By spreading reproductive effort more evenly across years, a 'bet-hedging' reproductive strategy minimises variation in reproductive success and can actually increase an individual's fitness in certain situations [START_REF] Slatkin | Hedging one's evolutionary bets[END_REF][START_REF] Starrfelt | Bet-hedging-a triple trade-off between means, variances and correlations[END_REF]. Consequently, given that owl survival was lowest in years when food was scarce and goshawk abundance was high, our results could reflect that owls have switched from an intermittent reproductive strategy of saving resources to invest more in one or a few future reproductive attempts, to a 'bet-hedging' reproductive strategy.
Together our results suggest that extrinsic conditions and intrinsic attributes have a combined and interactive effect on reproductive decisions. Changes in extrinsic conditions, particularly food availability, were the main factors shaping owl reproductive decisions, as the association between intrinsic attributes and owl breeding decisions was relatively weak in comparison.
This could in part be due to environmental variation in this system being relatively high because of the cyclical dynamics of vole populations, and the relatively recent recovery of an apex predator, thus swamping the contribution of intrinsic attributes to reproductive strategies. Although many of our results were in line with previous studies and theoretical predictions, our comprehensive approach highlights the complex nature of how intrinsic and extrinsic trade-offs act in combination to shape tawny owl reproduction. Furthermore, the length of this study has enabled us to provide some empirical evidence, albeit correlative, of long-lived predators altering their life-history strategies in response to changes in multiple interacting environmental factors.
Fig. 1. Variation in the probability of adult female tawny owls breeding in relation to changes in food availability (spring vole densities) and predation risk (local goshawk abundance).

Fig. 3. The mean proportion of tawny owl breeding attempts which were observed to reach the fledging stage, in relation to clutch size and the connectivity of owl territories to goshawk nest sites.
Table 1. Parameter estimates and model selection examining how tawny owl breeding propensity varies in relation to fluctuations in predation risk (total goshawk abundance; local goshawk abundance; connectivity of the owl's territory to all predator nest sites; distance the owl was nesting from the nearest predator) and food availability (spring vole densities; spatial variation in vole densities across the study site). Breeding propensity was also analysed in relation to whether the individual had successfully bred the previous year and the number of years elapsed since the owl first started breeding (a measure of age). The most parsimonious model is emboldened.
Model np Estimate SE ΔAICc
1. Null 3 27.99
2. Total goshawk 4 0.40 0.24 27.37
3. Local goshawk 4 0.45 0.25 27.08
4. Connectivity to goshawks 4 -0.03 0.12 29.97
5. Nearest goshawk 4 0.05 0.10 29.75
6. Spring voles density 4 1.09 0.26 16.20
7. Categorical spring vole density (CSV) 6 -0.83 0.56 23.87
Spatial variation in vole densities (SVVD) -0.62 0.44
CSV x SVVD 0.03 0.60
8. Breeding success previous year (BS) 4 0.34 0.22 27.81
9. Years since 1st reproduction (Y1st) 4 0.07 0.03 24.45
10. Spring voles 5 1.14 0.23 10.84
+ Local goshawk 0.51 0.18
11. Spring voles (SV) 6 1.15 0.23 6.29
+ Local goshawk (LG) 0.14 0.21
SV x LG -0.68 0.26
12. Breeding success previous year 5 0.34 0.23 24.33
+ Years since 1st reproduction 0.07 0.03
13. Breeding success previous year 6 -0.30 0.35 21.03
Years since 1st reproduction -0.01 0.05
BS x Y1st 0.14 0.06
14. Breeding success previous year 9 -0.34 0.35 0
Years since 1st reproduction -0.01 0.05
BS x Y1st 0.13 0.06
Spring voles 1.17 0.23
Local goshawk 0.13 0.22
SV x LG -0.69 0.26
Table 2. Parameter estimates and model selection to determine whether variation in tawny owl investment in reproduction (clutch size) was related to proxies of predation risk (total goshawk abundance; local goshawk abundance; connectivity of the owl's territory to all predator nest sites; distance the owl was nesting from the nearest predator), food availability (spring vole densities; spatial variation in vole densities across the study site) and intrinsic attributes (whether the individual had successfully bred the previous year and the number of years since the individual's first breeding attempt). The most parsimonious model is highlighted in bold.
Model np Estimate SE ΔAICc
1. Null 3 17.11
2. Total goshawk 4 -0.035 0.032 17.99
3. Local goshawk 4 -0.017 0.033 18.88
4. Connectivity to goshawk 4 0.007 0.024 19.04
5. Nearest goshawk 4 -0.007 0.022 19.02
6. Spring vole density 4 0.125 0.023 0.00
7. Categorical spring vole density (CSV) 6 -0.130 0.059 6.52
Spatial variation in vole densities (SVVD) -0.068 0.036
CSV x SVVD -0.020 0.060
8. Breeding success previous year 4 0.028 0.046 18.75
9. Years since 1st reproduction 4 0.002 0.006 18.97
Table 4. Model selection for annual survival of female tawny owls in their first year of life between 1985 and 2013 in relation to predation risk (total goshawk abundance; local goshawk abundance; connectivity of the owl's territory to all predator nest sites; distance the owl was nesting from the nearest predator) and food availability (autumn vole density). Recapture probability was modelled as [a(1,2-3,4+)+t]. The most parsimonious model is emboldened.
Acknowledgments
We thank B. Sheldon and two anonymous reviewers for all their helpful comments on a previous version of the manuscript. Our thanks also go to M. Davison, B. Little, P. Hotchin, D. Anderson and all other field assistants for their help with data collection, and to Forest Enterprise, particularly Tom Dearnley and Neville Geddes, for facilitating work in Kielder Forest. This work was partly funded by Natural Research Limited and a Natural Environment Research Council studentship NE/J500148/1 to SH and grant NE/F021402/1 to XL. Forest Research funded all the fieldwork on goshawks, tawny owls and field voles during 1973-1996. In addition, we are grateful to English Nature and the BTO for issuing licences to visit goshawk nest sites.
Data accessibility
All data associated with the study which is not given in the text is available in the Dryad Digital Repository. http://dx.doi.org/10.5061/dryad.6n579.
Table 3. Model estimates and selection for analyses investigating the relationship between the probability of tawny owl breeding attempts being completed to the fledging stage and proxies of predation risk (total goshawk abundance; local goshawk abundance; connectivity of the owl's territory to all predator nest sites; distance the owl was nesting from the nearest predator), food availability (spring vole densities; spatial variation in vole densities across the study site) and attributes intrinsic to the breeder (whether they had successfully bred the previous year and the number of years since their first breeding attempt) and the breeding attempt (clutch size). The most parsimonious model is emboldened.
Supporting Information
The following supporting information is available for this article:
Appendix S1: Estimating the number of tawny owls killed each year by the goshawk population.
Appendix S2: Method used to calculate the connectivity measure of predation risk for each owl territory.
Nicolas Lieury
Nolwenn Drouet-Hoguet
Sandrine Ruette
email: [email protected]
Sébastien Devillard
Michel Albaret
Alexandre Millon
Rural populations of the red fox Vulpes vulpes show little evidence of reproductive senescence
Keywords: Litter size, Vulpes vulpes, Placental scar count, Embryo count, Reproductive senescence
The ageing theory predicts fast and early senescence for fast-living species. We investigated whether the pattern of senescence of a medium-sized, fast-living and heavily-culled mammal, the red fox (Vulpes vulpes), fits this theoretical prediction. We used cross-sectional data from a large-scale culling experiment of red fox conducted over six years in five study sites located in two regions of France to explore the age-related variation in reproductive output. We used both placental scar and embryo counts from 755 vixens' carcasses aged by the tooth cementum method (age range: 1-10) as proxies for litter size. Mean litter size per vixen was 4.7 ± 1.4. Results from Generalized Additive Mixed Models revealed a significant variation of litter size with age. Litter size peaked at age 4, with 5.0 ± 0.2 placental scars, and decreased thereafter by 0.5 cubs per year. Interestingly, we found a different age-specific variation when counting embryos, which reached a plateau at age 5-6 (5.5 ± 0.2) and decreased more slowly than placental scars across older ages, pointing out embryo resorption as a potential physiological mechanism of reproductive senescence in the red fox. Contrary to our expectation, reproductive senescence is weak, occurs late in life and takes place at an age reached by less than 11.7% of the population, such that very few females exhibit senescence in these heavily culled populations.
Introduction
Senescence, or ageing, is the gradual deterioration of physical condition and cellular functioning, which results in a decline in fitness with age [START_REF] Kirkwood | Why do we age?[END_REF][START_REF] Sharp | Reproductive senescence in a cooperatively breeding mammal[END_REF]. Ageing can be expressed as a reduction in survival probability and/or a deterioration of reproductive efficiency, including a decrease in the probability of giving birth and reduced litter size. It is now recognized that both reproductive and actuarial senescence are widespread in the wild. Senescence rates vary greatly across individuals [START_REF] Bouwhuis | Individual variation in rates of senescence: natal origin effects and disposable soma in a wild bird population[END_REF], populations [START_REF] Lemaître | Early-late life trade-offs and the evolution of ageing in the wild[END_REF] and species [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF][START_REF] Nussey | Senescence in natural populations of animals: widespread evidence and its implications for biogerontology[END_REF]. Life-history theory provides a framework for predicting the variability of ageing across species. Major life-history traits, such as the age at first reproductive event, reproductive lifespan, and the number and size of offspring, vary across species, even when body size is controlled for [START_REF] Bielby | The fast-slow continuum in mammalian life history: an empirical reevaluation[END_REF][START_REF] Gittleman | Carnivore life history patterns: Allometric, phylogenetic, and ecological associations[END_REF][START_REF] Harvey | Life history variation in Primates[END_REF][START_REF] Read | Life history differences among the eutherian radiations[END_REF][START_REF] Stearns | The influence of size and phylogeny on patterns of covariation among life-history traits in the mammals[END_REF]. Such variation led to the concept of a "fast-slow continuum" of life-history variation, which ranks species from short-lived, highly reproductive species to long-lived species showing reduced reproductive output [START_REF] Cody | A general theory of clutch size[END_REF][START_REF] Cole | The population consequences of life history phenomena[END_REF][START_REF] Dobzhansky | Evolution in the tropics[END_REF][START_REF] Gaillard | An analysis of demographic tactics in bird and mammals[END_REF][START_REF] Lack | The significance of clutch size[END_REF][START_REF] Promislow | Living fast and dying young: A comparative analysis of life-history variation among mammals[END_REF][START_REF] Read | Life history differences among the eutherian radiations[END_REF][START_REF] Stearns | The influence of size and phylogeny on patterns of covariation among life-history traits in the mammals[END_REF]. As synthesised by [START_REF] Gaillard | Life Histories, Axes of Variation[END_REF], the fast-slow continuum can be interpreted as the range of possible solutions to the trade-off between reproduction and survival. The variation in ageing pattern along this continuum has been assessed by [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF]. These authors showed that both age-specific mortality and fertility patterns were strongly heterogeneous among vertebrates.
Using data from 20 populations of intensively monitored vertebrates, they concluded that ageing is influenced by the species' position on the fast-slow continuum, establishing a continuum of senescence that predicts fast and early senescence for fast-living species [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF].
The red fox Vulpes vulpes is a medium-sized carnivore, known to have a fast reproductive rate, with high productivity and early sexual maturity [START_REF] Englund | Some aspects of reproduction and mortality rates in Swedish foxes (Vulpes vulpes), 1961 -63 and 1966 -69[END_REF][START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF][START_REF] Harris | Demography of two urban fox (Vulpes vulpes) populations[END_REF][START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF]. According to the life-history theory of ageing, the red fox is therefore expected to display early and fast senescence. To date, the demography of the red fox has been mainly studied in anthropogenic contexts, and evidence of senescence in this species is mixed [START_REF] Artois | Reproduction du renard roux (Vulpes vulpes) en France: rythme saisonnier et fécondité des femelles[END_REF][START_REF] Cavallini | Reproduction of the red fox Vulpes vulpes in Central Italy[END_REF][START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF][START_REF] Harris | Demography of two urban fox (Vulpes vulpes) populations[END_REF][START_REF] Marlow | Demographic characteristics and social organisation of a population of red foxes in a rangeland area in Western Australia[END_REF].
In France, red foxes are hunted, or even culled when locally classified as a pest species preying upon farmed and game species. Between 2002 and 2011, we conducted a fox culling experiment to measure the impact of removals on fox population dynamics in two rural regions [START_REF] Lieury | Compensatory Immigration Challenges Predator Control: An Experimental Evidence-Based Approach Improves Management[END_REF]. This landscape-scale experiment thus provided a unique opportunity to study the age-specific variation in reproduction. We addressed the variation in reproductive output with age, expecting an early onset of senescence. Recent papers have recommended a better understanding of heterogeneity among life-history traits in the wild, so as to improve the detection of cryptic senescence and its underlying mechanisms [START_REF] Hewison | Phenotypic quality and senescence affect different components of reproductive output in roe deer[END_REF][START_REF] Massot | An integrative study of ageing in a wild population of common lizards[END_REF][START_REF] Nussey | Senescence in natural populations of animals: widespread evidence and its implications for biogerontology[END_REF]. Thus, looking at a single reproductive trait might be misleading regarding senescence. We therefore analysed two proxies of litter size (counts of placental scars and embryos), which may shed light on the underlying physiology of reproductive senescence.
Material and Methods
Study area and data collection
Data were obtained from culling campaigns performed as part of a large-scale culling experiment on the red fox, conducted in two French regions over six years [START_REF] Lieury | Compensatory Immigration Challenges Predator Control: An Experimental Evidence-Based Approach Improves Management[END_REF]. The carcasses of 899 vixens were collected in five distinct rural study areas (average size: 246 ± 53 km²; Fig. 1). All sites were located within the same latitudinal range: in Brittany (sites A, B and C; ≥10 km apart; 48°10'N, 03°00'W) and Champagne (sites D and E, separated by the Seine River; 48°40'N, 04°20'E). The Brittany landscape was dominated by bocage mixing farmland and arable land, with little forested area. In contrast, the Champagne sites presented open-field systems (mostly cereals and vineyard) and a larger forest cover compared to Brittany. The study took place from 2002 to 2011 but was not synchronous across all five sites. Hunting occurred between October and February, and trapping occurred between December and April.
Culling at the den occurred in April. Night shooting occurred only in sites D-E between December and May (see Lieury et al., 2015 for details).
Reproductive parameters
Litter size could be estimated for 755 of the 899 vixens collected (84%; Table 1), i.e. reproductive females with an undamaged uterus. We used the number of embryos and the number of placental scars as two proxies for litter size (on 394 and 361 individuals, respectively). When counting embryos, only prenatal losses during early-pregnancy stages are considered, while with placental scar counts all losses between implantation and birth are taken into account. For pregnant females (i.e. females culled from February to April), embryos were counted. For the others, uteri were collected 12-48 h after the death of the animal, soaked in water before freezing and stored at -20°C until examination. Uterine horns were opened longitudinally and examined for placental scars [START_REF] Elmeros | Placental scar counts and litter size estimations in ranched red foxes (Vulpes vulpes)[END_REF][START_REF] Lindström | Placental scar in the red fox (Vulpes vulpes L.) revisited[END_REF]. When the evaluation of litter size was questionable, we used a staining method to facilitate the identification of active placental scars [START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF].
The staining method allows the identification of atypical scars, i.e. those with a singular aspect when compared to others from the same uterus or from other uteri at the same period of examination. However, it does not allow scars that may have persisted from earlier pregnancies to be distinguished from those due to resorption or abortion [START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF]. We therefore did not estimate resorption rates from counts of atypical placental scars.
Age determination and age classes
The age of foxes at death was determined from the carcasses based on the number of annual growth lines visible in the tooth cementum, the date of death and the expected date of birth on April 1st [START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF]. Canine teeth, or premolar teeth when canines were unavailable or damaged, were extracted from the lower jaw following Matson's Laboratories (Milltown, MT, USA) procedures [START_REF] Harris | Age determination in the red fox (Vulpes vulpes) -an evaluation of technique efficiency as applied to as sample of suburban fixes[END_REF]. Foxes were assigned to age classes based on their recruitment into the adult population on February 1st of the year following birth (i.e. at the age of 10 months). Animals between 10 and 22 months of age were classified as age-class 1 (yearlings), whereas older ones were classified as age-class 2, 3, and up to 10.
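This age-class rule can be encoded in a few lines. The sketch below is a hypothetical helper (not the authors' code), assuming the age at death in months has already been derived from the cementum annuli count and the April 1st birth-date convention.

    # Hypothetical helper: map age at death (in months) to the age classes used
    # here, assuming recruitment into the adult population at 10 months of age.
    age_class <- function(age_months) {
      ifelse(age_months < 10, NA_integer_,                # cubs not yet recruited
             pmin(1L + (age_months - 10L) %/% 12L, 10L))  # class 1 = 10-22 months,
    }                                                     # class 2 = 22-34 months, ... capped at 10
    age_class(c(8, 15, 23, 130))  # NA, 1, 2, 10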
Modelling and data analysis
Although the Poisson distribution has often been applied to counts of offspring such as litter size, the Gaussian distribution actually fits such reproductive data better, as they are typically associated with a narrower variance than expected under a Poisson distribution (Devenish-Nelson et al., 2013a;[START_REF] Mcdonald | A Comparison of regression models for small counts[END_REF]. We thus developed a model for age-dependent variation in litter size accounting for both among-site and among-year variability [START_REF] Artois | Reproduction du renard roux (Vulpes vulpes) en France: rythme saisonnier et fécondité des femelles[END_REF]Devenish-Nelson et al., 2013b;[START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF] with a Gaussian error distribution.
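As an illustration of this distributional argument, the sketch below (assuming a hypothetical data frame fox with a litter-size column LS) contrasts the two error structures; note that comparing AIC across a discrete and a continuous likelihood is only indicative.

    # Sketch: litter-size counts are typically under-dispersed relative to Poisson.
    var(fox$LS) / mean(fox$LS)               # a ratio well below 1 indicates under-dispersion
    m_gaus <- glm(LS ~ 1, family = gaussian, data = fox)
    m_pois <- glm(LS ~ 1, family = poisson,  data = fox)
    AIC(m_gaus, m_pois)                      # the Gaussian fit is expected to be favoured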
We used generalized additive mixed models (GAMM; [START_REF] Wood | Generalized additive models: an introduction with R[END_REF] to explore the relationship between vixen age and litter size without a priori hypothesis on its shape [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF]. Year and geographic area (study sites, 'Site', or region, 'Region') were tested as random factors to account for their potential confounding effects on litter size. Litter size may indeed depend on i) variations in habitat quality among sites or regions, ii) inter-annual variations in climate conditions or resources availability and iii) spatio-temporal variations of population densities between sites or regions.
Finally, we also tested the effect of the type of measure for litter size (i.e. placental scars vs. embryos) by adding a fixed effect 'Type' in the model.
We thus developed a full GAMM for the variation in litter size (LS) as follows:

LS = s(Age)×Type + Age|Site + 1|Year

The vertical bars indicate random effects: of 'Year' on the intercept (1|Year) and of 'Site' on the slope for Age (Age|Site). The parameterization s(Age)×Type denotes that the non-linear effect of vixen age was modelled independently for each type of litter-size proxy ('Type').
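A minimal sketch of this model in R, assuming a hypothetical data frame fox with columns LS, Age, Type, Site and Year (the last three coded as factors); mgcv::gamm() takes nlme-style random effects, which only approximate crossed Site and Year terms (nlme treats the list as nested).

    library(mgcv)
    # Full GAMM: one smooth of Age per litter-size proxy, a random slope of Age
    # among sites and a random intercept among years (nlme-style specification).
    full <- gamm(LS ~ Type + s(Age, by = Type),
                 random = list(Site = ~Age, Year = ~1),
                 data = fox)
    summary(full$gam)  # fixed and smooth terms
    summary(full$lme)  # random-effect variances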
Following [START_REF] Zuur | Mixed effects models and extensions in ecology with R[END_REF], we first started from the full random model and evaluated whether the age-specific variation in LS was similar among sites (random parameterisations: Age|Site vs. 1|Site), whether the spatial variation among regions was negligible when compared to the spatial variation among sites (1|Region vs. 1|Site) and whether the random effect of the year (1|Year) was important. According to [START_REF] Zuur | Mixed effects models and extensions in ecology with R[END_REF], parameters were estimated using Restricted Maximum Likelihood (REML) for random effects and Maximum Likelihood (ML) for fixed effects. Model selection was based on the AICc (Akaike Information Criterion corrected from small sample size; [START_REF] Burnham | Model selection and multimodel inference: a practical information-theoric approach[END_REF]. Once the random effects were selected, we performed an AICc-based model selection of fixed effects [START_REF] Zuur | Mixed effects models and extensions in ecology with R[END_REF] to test whether the type of measure affected age-specific variation in LS.
Finally, we estimated the rate of senescence using least-squares linear regression models fitted through the mean values of each litter-size proxy, from the onset of senescence onwards, as predicted by the most parsimonious GAMM. Each point was weighted by the inverse of its variance so as to account for the small number of individuals in the oldest age classes.
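Concretely, this last step could look like the sketch below, reusing the fitted object full from the previous sketch; the onset age (4) and the proxy label are illustrative.

    # Weighted least-squares slope through GAMM-predicted mean litter sizes,
    # from the onset of senescence (age 4) onwards.
    new <- data.frame(Age = 4:10, Type = factor("scars"))
    pr  <- predict(full$gam, newdata = new, se.fit = TRUE)
    wls <- lm(pr$fit ~ new$Age, weights = 1 / pr$se.fit^2)
    coef(wls)[2]  # rate of senescence (cubs per year)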
All analyses were carried out in R 2.15.1 using the packages mgcv and AICcmodavg (R Development Core Team, 2012;[START_REF] Wood | Generalized additive models: an introduction with R[END_REF]. Descriptive statistics of the data are presented as mean ± 1 SD and model estimates as mean ± 1 SE.
Results
Pooled over sites, years and ages, litter size averaged 4.9 ± 1.4 when based on embryo counts and 4.5 ± 1.4 when based on counts of placental scars (see Table 2 for detailed results by age class).
From the GAMMs, all models including the random effects of Year, Site or Region and the fixed effects of Age and Type had substantial support (ΔAICc < 2, Table 3). We retained the simplest of those models (Table 3). Placental scar counts increased up to 5.0 ± 0.2 at the age of 4 (black line and dots in Fig. 2). From the age of 4 onwards, they significantly declined at a rate of senescence of 0.5 ± 0.02 cubs per year (Fig. 2). This pattern was consistent across study areas (random effect '1|Site' retained; Table 3.A), thereby suggesting that senescence is likely to be a generalized process in red fox populations. We found divergence in senescence patterns between the two proxies of litter size (fixed effect s(Age)×Type retained; Table 3.B). Embryo counts peaked at age five, but the subsequent rate of senescence in embryo counts was much reduced compared to placental scars (0.1 ± 0.01 cubs per year; Fig. 2). Finally, only a small proportion of females were killed after the ages of 4 and 5 (11.7 and 5.6% respectively; median age at death: 2 years; Fig. 2), such that very few females exhibited senescence in these heavily culled populations.
Discussion
We took advantage of a large dataset collected over 10 years from a landscape-scale culling experiment in rural France to investigate the deterioration in reproductive output with age in the red fox. Contrary to our expectation, our results revealed a weak and late reproductive senescence in this species. The onset of senescence occurred late (four years old) relative to the age structure of the population (median age at death: two years old). The decline in litter size after four or five years old, depending on the proxy used, was significant but clearly more pronounced for placental scar counts than for embryo counts, suggesting increased embryo resorption as a likely physiological mechanism of senescence. This weak and late senescence concerned very few females in the populations (i.e. less than 11.7% of the females reached the age at the onset of senescence), so that the impact of senescence on the dynamics of these heavily culled populations is likely to be negligible.
Limits inherent to post-mortem and cross-sectional data for investigating senescence
Monitoring reproductive performance in the red fox is challenging on a large scale, due to its nocturnal, cryptic and elusive behaviour. We used post-mortem examination of carcasses to measure litter size and age. Although these methods may overcome some of the challenges of studying reproduction in free-ranging carnivore populations, we are aware of the inherent weaknesses in their application. First, we estimated red fox age from cementum annuli lines in teeth. Although the method is widely used in carnivore studies, for instance on red fox [START_REF] Harris | Age determination of badgers (Meles meles) from tooth wear: the need for a pragmatic approach[END_REF] or hyaena [START_REF] Van Horn | Age estimation and dispersal in the spotted hyena (Crocuta crocuta)[END_REF], misclassification has been noted for animals that did not develop a cementum line in a given year (Grau et al., 1970 on raccoons; King, 1991 on stoats; [START_REF] Matson | Progress in cementum aging of martens and fishers[END_REF]). Deposition of cementum annuli and tooth wear may also vary with diet, season and region ([START_REF] Costello | Reliability of the cementum annuli technique for estimating age of black bears in New Mexico[END_REF] on black bears). The method has not been applied to red foxes of known age; thus we could not rule out some misclassification, although it is not quantifiable. Working with dead animals, we used placental scar and embryo counts as proxies for litter size. Placental scar counts may overestimate litter size, due to embryo resorption, prenatal mortality and stillborn litters [START_REF] Vos | Reproductive performance of the red fox, Vulpes vulpes, in Garmish-Partenkirchen, Germany, 1987-1992[END_REF][START_REF] Elmeros | Placental scar counts and litter size estimations in ranched red foxes (Vulpes vulpes)[END_REF]. Conversely, some time postpartum, litter size might be underestimated by placental scar counts due to the regeneration of uterine tissues [START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF][START_REF] Harris | Demography of two urban fox (Vulpes vulpes) populations[END_REF][START_REF] Heydon | Demography of rural foxes (Vulpes vulpes) in relation to cull intensity in three contrasting regions of Britain[END_REF][START_REF] Lindström | Placental scar in the red fox (Vulpes vulpes L.) revisited[END_REF][START_REF] Marlow | Demographic characteristics and social organisation of a population of red foxes in a rangeland area in Western Australia[END_REF][START_REF] Mcilroy | The reproductive performance of female red foxes, Vulpes vulpes, in central-western New South Wales during and after a drought[END_REF][START_REF] Ruette | Reproduction of the red fox Vulpes vulpes in western France: does staining improve estimation of litter size from placental scar counts?[END_REF].
Our approach relies on data from large-scale culling experiments to investigate senescence in five population replicates. Yet, the inference of senescence from life-table studies using cross-sectional data has long been questioned. Indeed, the need to consider sources of heterogeneity, such as unequal probability of sampling, individual heterogeneity, climate, density or early-life conditions, advocates for following individuals throughout their life [START_REF] Gaillard | Senescence in natural populations of mammals: a reanalysis[END_REF][START_REF] Gaillard | An analysis of demographic tactics in bird and mammals[END_REF][START_REF] Nussey | Senescence in natural populations of animals: widespread evidence and its implications for biogerontology[END_REF][START_REF] Reid | Age-specific reproductive performance in red-billed choughs Pyrrhocorax pyrrhocorax: patterns and processes in a natural population[END_REF].
However, as non-selective culling methods (trapping and hunting) were used, there is no reason to expect a bias toward low- or high-reproducing individuals, since the age of adult foxes could not be visually assessed. Moreover, we took into account the variability between populations by using samples from two contrasting regions and over several years. Nevertheless, it is important to consider both within-individual (improvement, senescence) and between-individual (selective appearance and disappearance) processes in the estimation of patterns of age-dependent reproduction [START_REF] Reid | Age-specific reproductive performance in red-billed choughs Pyrrhocorax pyrrhocorax: patterns and processes in a natural population[END_REF][START_REF] Van De Pol | Age-dependent traits: a new statistical model to separate within and between individual effects[END_REF]. For instance, if individuals with high reproduction have poorer survival, mean reproduction may decline at older ages because only individuals that invest little in reproduction survive. Selective disappearance has thus been found to partly mask age-related changes in reproductive traits in ungulates [START_REF] Nussey | Measuring senescence in wild animal populations: towards a longitudinal approach[END_REF][START_REF] Nussey | The rate of senescence in maternal performance increases with early-life fecundity in red deer[END_REF].
We had no means to check for that kind of individual heterogeneity, determined by genetics and/or natal environmental conditions. However, we found senescence in both traits, i.e. numbers of placental scars and embryos, and have no reason to expect a different sampling bias in vixens collected before or after parturition. Moreover, we did not observe the reduction in litter-size variance with age expected in the case of selective appearance or disappearance processes (result not shown).
Besides, cross-sectional data are not systematically biased by individual heterogeneity, and earlier studies revealing reproductive senescence from such data have been validated a posteriori by longitudinal data [START_REF] Hanks | Reproduction of elephant, Loxodonta africana, in the Luangwa Valley, Zambia[END_REF]. Hence, we are confident that our approach provides a relatively accurate picture of the age-related pattern in red fox reproduction. However, we call for long-term, individual-based longitudinal datasets to confirm senescence in free-ranging red fox populations.
Reproductive senescence in the red fox
Age-related reproductive output in the red fox has long been discussed, but without unanimous findings regarding senescence. Our results confirmed the increase of litter size with age among the young age classes, with a maximum reached at the age of 4-5 years (see also [START_REF] Englund | Some aspects of reproduction and mortality rates in Swedish foxes (Vulpes vulpes), 1961 -63 and 1966 -69[END_REF][START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF][START_REF] Lindström | Food limitation and social regulation in a red fox population[END_REF]. However, a decrease in litter size for older vixens has rarely been evidenced [START_REF] Artois | Reproduction du renard roux (Vulpes vulpes) en France: rythme saisonnier et fécondité des femelles[END_REF][START_REF] Cavallini | Reproduction of the red fox Vulpes vulpes in Central Italy[END_REF][START_REF] Marlow | Demographic characteristics and social organisation of a population of red foxes in a rangeland area in Western Australia[END_REF]. Moreover, litter size estimated from placental scars was even reported to be independent of age in several red fox populations (France: [START_REF] Artois | Reproduction du renard roux (Vulpes vulpes) en France: rythme saisonnier et fécondité des femelles[END_REF]; Central Italy: Cavallini and Santini, 1996; Denmark: Elmeros et al., 2003; Western Australia: Marlow et al., 2000). Here we were able to reveal a weak senescence pattern in reproduction in vixens from five to ten years old, affecting reproduction at a rate of one cub less every two years when considering placental scars. [START_REF] Harris | Age-related fertility and productivity in red foxes, Vulpes vulpes, in suburban London[END_REF] and [START_REF] Harris | Demography of two urban fox (Vulpes vulpes) populations[END_REF] described reproductive senescence for the first time in a London urban fox population: in a sample of 192 vixens, litter size significantly decreased in their fifth and sixth breeding seasons. Our results, obtained in rural areas where fox densities are lower, are consistent with those findings. Interestingly, in South-East Australia, i.e. in a context of invasion, reproductive parameters peaked in fifth- and sixth-year vixens, but vixens over eight years of age produced as many cubs as first-year breeders did [START_REF] Mcilroy | The reproductive performance of female red foxes, Vulpes vulpes, in central-western New South Wales during and after a drought[END_REF].
Reproductive senescence has been identified in several natural populations of mammals, including ungulates, primates and domestic livestock [START_REF] Beehner | The ecology of conception and pregnancy failure in wild baboons[END_REF][START_REF] Ericsson | Age-related reproductive effort and senescence in free-ranging moose, Alces alces[END_REF][START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF][START_REF] Nussey | The rate of senescence in maternal performance increases with early-life fecundity in red deer[END_REF][START_REF] Promislow | Senescence in Natural Populations of Mammals: A Comparative study[END_REF]. To date, only little evidence of reproductive senescence exists in carnivores, with most studies focusing on long-lived species such as lions [START_REF] Packer | Reproductive success of lions[END_REF] and bears [START_REF] Schwartz | Reproductive maturation and senescence in the female brown bear[END_REF] (but see Dugdale et al., 2011 on badgers). Only recently has senescence been detected in the free-ranging American mink, Neovison vison, a short-lived species with an early age at first parturition [START_REF] Melero | Density-and age-dependent reproduction partially compensates culling efforts of invasive non-native American mink[END_REF].
The proposal formulated by [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF] that the magnitude of senescence is tightly associated with life history, mainly along the slow-fast continuum, has previously been verified in species with similar life-history traits such as marmots [START_REF] Berger | Agespecific survival in the socially monogamous alpines marmot (Marmota marmota): evidence of senescence[END_REF], meerkats [START_REF] Sharp | Reproductive senescence in a cooperatively breeding mammal[END_REF], ground squirrels [START_REF] Broussard | Senescence and age-related reproduction of female Columbian ground squirrels[END_REF], opossums [START_REF] Austad | Retarded senescence in an insular population of Virginia opossums (Didelphis virginiana)[END_REF], and badgers [START_REF] Dugdale | Age-specific breeding success in a wild mammalian population: selection, constraint, restraint and senescence[END_REF]. Our findings provide evidence of weak reproductive senescence in the fast-living red fox, occurring late (4-5 years old) relative to the age structure of our populations, and therefore do not fully support the proposal of [START_REF] Jones | Senescence rates are determined by ranking on the fast-slow life-history continuum[END_REF]. Furthermore, it concerned only very few females, since only a small proportion of vixens were killed after the ages of 4 and 5.
Increasing embryo resorption with age: a physiological mechanism underpinning reproductive senescence?
Interestingly, senescence was more pronounced for placental scars than for the number of embryos. This suggests that gestation failure, rather than a decrease in ovulation rate, is the most likely cause of the decline in red fox litter size. Spontaneous embryo resorption is an important issue in obstetrics, but also in livestock breeding and wildlife breeding programs. In wild species, increasing implantation failure with age has been identified in several taxa, such as roe deer (Borg, 1970; Hewison and Gaillard, 2011). Accordingly, reproductive senescence resulted from a combination of uterine defects and a reduction in oocyte numbers in the elephant [START_REF] Hanks | Reproduction of elephant, Loxodonta africana, in the Luangwa Valley, Zambia[END_REF]. The success of embryo development depends on a complex series of cellular and molecular mechanisms associated with hormonal balance [START_REF] Cross | Implantation and the placenta: key pieces of the developmental puzzle[END_REF][START_REF] Finn | The implantation reaction[END_REF].
According to the disposable soma theory of ageing, individuals that invested heavily in reproduction early in life should invest less effort in the maintenance of somatic tissues, reflecting the optimal allocation of resources among the various metabolic tasks [START_REF] Kirkwood | Evolution of ageing[END_REF][START_REF] Kirkwood | Evolution of senescence: late survival sacrificed for reproduction[END_REF]. In the case of the red fox, the ageing of the reproductive tract (mainly the uterus) probably plays an important role in the decrease of litter size with age.
Finally, our study highlighted that reproductive senescence occurs in red fox populations, although it is weak and occurs late in life. The consequences of reproductive senescence for red fox population dynamics might be negligible due to the low proportion of females in the population that reached the age at the onset of senescence. In the context of intensive removal through hunting and trapping acting on population densities [START_REF] Lieury | Compensatory Immigration Challenges Predator Control: An Experimental Evidence-Based Approach Improves Management[END_REF], a proper assessment of the effect of variation in population density and removal pressure over time and among populations on reproductive performance is needed to investigate processes such as compensatory reproduction.
Figure 1. Location of the sites where the landscape-scale culling experiments of red fox were conducted.

Figure 2. Variation in litter size of the red fox in relation to the age of vixens (in years). Lines represent GAMM predictions (plain) and their associated standard error (dashed).
Acknowledgements
We are grateful to the regional and local Hunters' Associations, especially Y. Desmidt, J.-L. Pilard, P. Hecht, C. Mercuzot, J. Desbrosse, and C. Urbaniak for sustaining the program. We warmly thank F. Drouyer, B. Baudoux, N. Haigron, C. Mangeard, and T. Mendoza for efficient support in the fieldwork, our colleagues working on hares, especially Y. Bray and J. Letty, and all local people in charge of hunting, and the hunters and trappers who helped in counting and collecting foxes. This study was partially funded by the Regional Hunters' Association of Champagne-Ardenne, and the Hunters' Associations of Aube and Ille-et-Vilaine.
Sébastien Devillard
Aurélien Besnard
Olivier Gimenez
Olivier Hameau
Cécile Ponchon
Alexandre Millon
email: [email protected]
Designing cost-effective capture-recapture surveys for improving the monitoring of survival in bird populations
Keywords: survey design, optimisation, statistical power, cost efficiency, stage-structured population

Running head: Cost-effective Capture-Recapture surveys
Population monitoring traditionally relies on population counts, whether or not they account for the issue of detectability. However, this approach does not give access to the underlying demographic processes. Capture-Recapture (CR) surveys have therefore become popular tools for scientists and practitioners willing to measure survival responses to environmental change or conservation actions. However, CR surveys are expensive and their design is often driven by the available resources, without any estimation of the precision they provide for detecting changes in survival, even though optimising resource allocation in wildlife monitoring is increasingly important. Investigating how CR surveys could be optimised by manipulating resource allocation among different design components is therefore critically needed. We conducted a simulation experiment exploring the statistical power of a wide range of CR survey designs to detect changes in the survival rate of birds. CR surveys differed in terms of number of breeding pairs monitored, number of offspring and adults marked, resighting effort and survey duration. We compared open-nest (ON) and nest-box (NB) monitoring types, using medium- and long-lived model species. Increasing survey duration and the number of pairs monitored increased statistical power. Long survey durations can provide accurate estimates for long-lived birds even for small population sizes (15 pairs). A cost-benefit analysis revealed that for long-lived ON species, ringing as many chicks as possible appears to be the most effective survey component, unless a technique for capturing breeding birds at low cost is available to compensate for reduced local recruitment. For medium-lived NB species, focusing the NB rounds on a period that maximises the chance of capturing breeding females inside nest-boxes is more rewarding than ringing all chicks. We show that integrating economic costs is crucial when designing CR surveys and discuss ways to improve efficiency by reducing duration to a time scale compatible with management and conservation issues.
Introduction
Studies aiming at detecting the response of wild populations to environmental stochasticity, anthropogenic threats or management actions (e.g. harvest, control or conservation) traditionally rely on the monitoring of population counts. Such data, however, suffer from variable detectability of individuals, which can alter the reliability of inferred temporal trends [START_REF] Williams | Analysis and Management of Animal Populations: Modeling, Estimation, and Decision Making[END_REF]. Methods have been developed to account for the issue of detectability, based on the measure of the observer-animal distance (Distance Sampling; [START_REF] Buckland | Introduction to Distance Sampling. Estimating abundance of biological populations[END_REF] or on multiple surveys (hierarchical modelling, [START_REF] Royle | Hierarchical Modeling and Inference in Ecology: the Analysis of Data from Populations, Metapopulations and Communities[END_REF]. Still, population size being the result of a balance between survival, recruitment, emigration and immigration, inferring population status from counts, whether detectability is accounted for or not, may impair the assignment of the demographic status of a population (source vs. sink; Furrer and Pasinelli 2016, [START_REF] Weegman | Integrated population modelling reveals a perceived source to be a cryptic sink[END_REF].
Surveys that consist of capturing, marking with permanent tags, releasing and then recapturing wild animals (i.e. capture-recapture surveys, hereafter CR surveys), to gather longitudinal data and hence derive survival rates while accounting for imperfect detection [START_REF] Lebreton | Modeling survival and testing biological hypotheses using marked animals: a unified approach with case studies[END_REF], have become highly popular tools in both applied and evolutionary ecology [START_REF] Clutton-Brock | Individuals and populations: the role of longterm, individual-based studies of animals in ecology and evolutionary biology[END_REF]. Opting for a mechanistic instead of a phenomenological approach has indeed proved particularly informative for identifying the response of a population to any perturbation, and ultimately makes it possible to pinpoint the appropriate management strategy. Over the last decade, an increasing number of practitioners have set up CR surveys with the aim of quantifying survival variation in response to i) changing environments such as climate or habitat loss [START_REF] Grosbois | Assessing the impact of climate variation on survival in vertebrate populations[END_REF], ii) hunting [START_REF] Sandercock | Is hunting mortality additive or compensatory to natural mortality ? Effects of experimental harvest on the survival and cause-specific mortality of willow ptarmigan[END_REF], iii) other anthropogenic mortality causes (e.g. collision with infrastructures; [START_REF] Chevallier | Retrofitting of power lines effectively reduces mortality by electrocution in large birds: an example with the endangered Bonelli's eagle[END_REF], and iv) the implementation of management/conservation actions [START_REF] Lindberg | A review of designs for capturemarkrecapture studies in discrete time[END_REF][START_REF] Koons | Effects of exploitation on an overabundant species: the lesser snow goose predicament[END_REF], review in Frederiksen et al. 2014). In all these contexts, the estimation of survival, and of its temporal variation, is particularly informative for building effective evidence-based conservation [START_REF] Sutherland | The need for evidencebased conservation[END_REF]. As an example, the high adult mortality due to electrocution in an Eagle owl Bubo bubo population of the Swiss Alps, as revealed by a CR survey, would not have been detected if the survey had been based solely on population counts, which remained stable over 20 years [START_REF] Schaub | Massive immigration balances high anthropogenic mortality in a stable eagle owl population: Lessons for conservation[END_REF].
The effectiveness of a CR survey in detecting and explaining changes in survival rates over time depends on the level of field effort dedicated to several survey components: i) the size of the sample population, ii) the proportion of offspring and adults marked, iii) the recapture/resighting rate of previously marked individuals and iv) the number of survey years (or survey duration; [START_REF] Yoccoz | Monitoring of biological diversity in space and time[END_REF][START_REF] Williams | Analysis and Management of Animal Populations: Modeling, Estimation, and Decision Making[END_REF]. In a conservation context, considering only the usual trade-off between the number of marked individuals and the number of surveyed years is of little help when designing a CR survey. Indeed, practitioners need to know as soon as possible whether survival is affected by a potential threat or has alternatively benefited from a management action. Implementing CR surveys is, however, particularly costly in terms of financial and human resources, as it requires skilled fieldworkers over an extensive time period. Therefore, most surveys are actually designed according to the level of available resources only, without any projection of the precision they will provide for estimating survival or of their statistical power for detecting survival variability.
The life-history characteristics (e.g. survival and recruitment rates) of the study species largely determine which of the different components of a CR survey will provide the most valuable data. For instance, low recruitment of locally-born individuals (due to high juvenile mortality and/or high emigration rates) limits the proportion of individuals marked as juveniles that recruit into the local population. In such a case, we expect that reducing the effort dedicated to marking offspring in favour of marking and resighting breeding individuals would improve survey efficiency. Therefore, manipulating both sampling effort and sampling design offers opportunities to optimise CR surveys. A few attempts have been made to improve the effectiveness of CR surveys according to species' life histories, though most of them remain species-specific [START_REF] Devineau | Planning Capture-Recapture Studies: Straightforward Precision, Bias, and Power Calculations[END_REF][START_REF] Williams | Cost-effective abundance estimation of rare animals: Testing performance of small-boat surveys for killer whales in British Columbia[END_REF][START_REF] Chambert | Heterogeneity in detection probability along the breeding season in Black-legged Kittiwakes: Implications for sampling design[END_REF][START_REF] Lindberg | A review of designs for capturemarkrecapture studies in discrete time[END_REF][START_REF] Lahoz-Monfort | Exploring the consequences of reducing survey effort for detecting individual and temporal variability in survival[END_REF]. Moreover, improving CR surveys with regard to the precision of survival estimates constitutes only one side of the coin; the quantification of economic costs in the optimisation process is currently lacking. Assessing costs and benefits is therefore critical if we are to provide cost-effective guidelines for designing CR surveys. This optimisation approach is increasingly considered an important step forward for improving the robustness of inferences in different contexts, such as population surveys [START_REF] Moore | Optimizing ecological survey effort over space and time[END_REF] or environmental DNA sampling [START_REF] Smart | Assessing the cost-efficiency of environmental DNA sampling[END_REF]).
Here we offer a simulation experiment investigating the relative efficiency of a wide array of CR survey designs in terms of statistical power to detect a change in survival rates. Alongside the usual how many and how long considerations, we focused our simulations on the how to and what to monitor. We further balanced the statistical benefit of each survey component with human/financial costs, derived from actual monitoring schemes. Our aim was to provide cost-effective guidelines for the onset of new CR surveys and the improvement of existing ones. Although our work was primarily based on the monitoring of bird populations, we discussed how this approach can be applied to improve the monitoring of other taxa.
Material and methods
2.1. Bird monitoring types and model species

Our simulation experiment encompassed the two most common types of bird monitoring, as applied to two different life-history strategies: long-lived, open-nesting species with high but delayed local recruitment vs. medium-lived, cavity-nesting species with rapid but low recruitment of locally-born individuals. These two types of monitoring are representative of what practitioners come across in the field and largely determine the nature of the survey and the level of resources needed. Moreover, another prerequisite of our simulations was the availability of detailed demographic data on the model species together with a precise estimation of the human and financial costs entailed by the monitoring.
In open-nesting (ON) surveys, chicks are typically ringed at the nest before fledging with a combination of coloured rings or a large engraved plastic ring with a simple alphanumeric code, in addition to conventional metal rings. Resightings can then be obtained without recapturing the birds, using binoculars or telescopes. The identification of breeding birds is typically obtained while monitoring breeding success. For our ON model species, we combined life-history and survey characteristics of two long-lived diurnal raptors, the Bonelli's eagle Aquila fasciata and the Egyptian vulture Neophron percnopterus [START_REF] Lieury | Relative contribution of local demography and immigration in the recovery of a geographically-isolated population of the endangered Egyptian vulture[END_REF][START_REF] Lieury | Geographically isolated but demographically connected: Immigration supports efficient conservation actions in the recovery of a range-margin population of the Bonelli's eagle in France[END_REF]. Monitoring typically consists of repeated visits to known territories during the breeding season to check whether breeding occurs and the identity of breeding birds, and eventually to ring chicks. Breeding birds are difficult to capture, which limits the number of newly marked breeders each year, although additional trapping effort can be deployed (adults are occasionally trapped, e.g. for fitting birds with GPS). Such captures are, however, highly time-consuming as they require monitoring several pre-baiting feeding stations.
The second, highly common, monitoring type concerns cavity-nesting birds, whose surveys typically involve artificial nest-boxes (hereafter NB). All NBs are checked at least once a year, and additional visits concentrate on the restricted set of occupied NBs for ringing/recapturing both chicks and breeding birds. For building simulations on the NB type of monitoring, we combined information on life-history and survey characteristics from two medium-lived nocturnal raptors, the barn owl Tyto alba [START_REF] Altwegg | Age-specific fitness components and their temporal variation in the barn owl[END_REF]) and the little owl Athene noctua (OH & AM, unpub. data). These two species are known to prefer NBs over natural or semi-natural cavities. NB monitoring typically consists of repeated visits to NBs during the breeding season to check whether breeding occurs, to catch breeding females in NBs and eventually to ring chicks. Breeding females are usually relatively easy to catch, thus allowing many newly marked adults to enter the CR dataset each year, in contrast to ON. Breeding males are typically more difficult to capture than females and require alternative, time-consuming types of trapping [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF].
For the two types of monitoring, the resighting probability of non-breeding individuals (hereafter floaters) is low as such individuals are not attached to a spatially restricted nesting area. Life-cycle graphs and values of demographic parameters are given in the appendix (Table S1; Fig. S1).
Definition of the main components of CR surveys
We designed a set of surveys for both types of monitoring by varying the level of effort dedicated to four main components (Fig. S2):

1. Survey duration: for each type of monitoring, we set two different durations corresponding to 1-2 and 3-4 generations of the model species (i.e. 10/20 years and 5/10 years for long- and medium-lived species respectively).

2. Number of breeding pairs surveyed: the number of pairs available for monitoring is usually lower in ON monitoring of long-lived species (with larger home-ranges) than in NB monitoring of medium-lived species. The number of breeding pairs varied between 15-75 and 25-100 for ON and NB monitoring respectively.

3. Proportion of monitored nests in which chicks are ringed: this proportion varied from 25 to 100% for both types of monitoring.

4. Proportion of breeders (re)captured/resighted: this proportion was set at three levels (0.50, 0.65, 0.80). For ON monitoring, breeding birds are not physically caught but resighted at distance. However, we evaluated the added value of a monitoring option consisting of capturing and ringing unmarked breeding adults, so as to compensate for the absence of ringed adults during the early years of the survey due to delayed recruitment in long-lived species (five adults caught every year during the first five years of the survey).

In order to reduce the number of computer-intensive simulations, we removed survey designs unlikely to be encountered in the field (e.g. only 25% of nests in which chicks are ringed when 25 breeding pairs are monitored for NB). Overall, a total of 132 and 66 sampling designs were built for ON and NB monitoring respectively (Fig. S2; an illustrative design grid is sketched below).
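The design grid itself can be generated mechanically. The sketch below is illustrative only: the exact factor levels and exclusion rules follow Fig. S2, so the totals obtained here need not match 132 and 66 exactly.

    # Illustrative design grids for the two types of monitoring.
    on_designs <- expand.grid(duration  = c(10, 20),            # years
                              n_pairs   = c(15, 25, 50, 75),
                              p_chicks  = c(0.25, 0.50, 0.75, 1),
                              p_resight = c(0.50, 0.65, 0.80),
                              adult_cap = c(FALSE, TRUE))
    nb_designs <- expand.grid(duration  = c(5, 10),
                              n_pairs   = c(25, 50, 75, 100),
                              p_chicks  = c(0.25, 0.50, 0.75, 1),
                              p_recap   = c(0.50, 0.65, 0.80))
    # Drop combinations unlikely to occur in the field, e.g.:
    nb_designs <- subset(nb_designs, !(n_pairs == 25 & p_chicks == 0.25))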
Simulating time-series of demographic rates and CR histories
The relevance of each sampling design was assessed from 3500 simulated CR datasets. As we were interested in exploring the ability of different sampling designs to detect changes in survival, each CR dataset was generated from a survival time-series that incorporated a progressive increase in survival, mimicking the effect of conservation actions. Note that simulating a decrease in survival would have led to similar results. The slope of the conservation effect was scaled in an additive way among ages and/or territorial statuses according to empirical estimates from populations having benefited from conservation plans (adult survival rate increased from 0.77 to 0.88 for Bonelli's eagle, Chevalier et al. 2015; from 0.84 to 0.93 for Egyptian vulture, [START_REF] Lieury | Relative contribution of local demography and immigration in the recovery of a geographically-isolated population of the endangered Egyptian vulture[END_REF]. This increase in survival rate corresponds to an increase of approximately 1.0 on the logit scale. We simulated a gradual implementation of the conservation action over the years (3 and 7 years for medium- and long-lived species respectively), which resulted in an increase of e.g. adult survival from 0.37 to 0.61 and from 0.81 to 0.92 for medium- and long-lived species respectively (Fig. S3). We checked that the range of survival rates obtained for medium-lived species fell within the temporal variation observed in the barn owl [START_REF] Altwegg | Age-specific fitness components and their temporal variation in the barn owl[END_REF]. For each simulated CR dataset, we added random environmental variation around average survival to match the variation observed in specific studies (standard deviation constant across ages on the logit scale: 0.072 for the ON long-lived species, [START_REF] Lieury | Relative contribution of local demography and immigration in the recovery of a geographically-isolated population of the endangered Egyptian vulture[END_REF]0.36 for the NB medium-lived species, [START_REF] Altwegg | Age-specific fitness components and their temporal variation in the barn owl[END_REF]. Individual CR histories were thus simulated based on survival trends (plus environmental noise) and according to the defined life-history stages (see online supplementary material for the detailed simulation procedure).
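The survival time-series underlying each dataset can be sketched as follows; this is a minimal illustration of the simulation step, with the ramp length and noise level as arguments. Note that plogis(qlogis(0.37) + 1.0) ≈ 0.61 and plogis(qlogis(0.81) + 1.0) ≈ 0.92, matching the values quoted above.

    # One survival time-series: a conservation effect of +1.0 on the logit scale,
    # phased in over `ramp` years, plus annual environmental noise.
    simulate_phi <- function(n_years, phi0, ramp, beta = 1.0, sd_env = 0.36) {
      effect <- beta * pmin(pmax(seq_len(n_years) - 1, 0), ramp) / ramp
      plogis(qlogis(phi0) + effect + rnorm(n_years, 0, sd_env))
    }
    set.seed(1)
    simulate_phi(10, phi0 = 0.37, ramp = 3)                   # medium-lived adults: ~0.37 -> ~0.61
    simulate_phi(20, phi0 = 0.81, ramp = 7, sd_env = 0.072)   # long-lived adults:   ~0.81 -> ~0.92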
CR analyses and contributions to statistical power
We analysed each simulated CR dataset using a multi-state (breeder, floater) CR model for ON monitoring and a single-state model for NB monitoring (detailed structures shown in Fig. S1, Table S1). We then ran three models with survival i) constant (Mcst), ii) varying over years (Mt) and iii) linearly related to the conservation action (Mcov). We used the ANODEV as a measure of the conservation effect on survival variation, as recommended by [START_REF] Grosbois | Assessing the impact of climate variation on survival in vertebrate populations[END_REF]. This statistic ensures a proper estimation of the effect of a temporal covariate whatever the level of the residual process variance. The ANODEV follows a Fisher-Snedecor distribution, with df1 = np(Mcov) − np(Mcst) and df2 = np(Mt) − np(Mcov) degrees of freedom, and was calculated as:

F = [(Dev(Mcst) − Dev(Mcov)) / (np(Mcov) − np(Mcst))] / [(Dev(Mcov) − Dev(Mt)) / (np(Mt) − np(Mcov))]

where Dev and np are, respectively, the deviance and the number of parameters of the models [START_REF] Skalski | Testing the significance of individual-and cohort-level covariates in animal survival studies[END_REF]. As a measure of the statistical power to detect a change in survival rate, we counted the number of simulations in which the ANODEV was significant. Given the limited number of years typically available in a conservation context, we chose an α-level of 0.2 to favour statistical power, at the expense of an inflated probability of type I error [START_REF] Yoccoz | Use, overuse, and misuse of significance tests in evolutionary biology and ecology[END_REF][START_REF] Grosbois | Assessing the impact of climate variation on survival in vertebrate populations[END_REF]. A specific CR survey was considered efficient when the proportion of significant ANODEV tests exceeded a threshold of 0.7 [START_REF] Cohen | Statistical Power Analysis for the Behavioral Sciences[END_REF].
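In practice, the test reduces to a few lines once the deviances and parameter counts of the three models are extracted; the sketch below assumes the model objects come from RMark or any other CR software.

    # ANODEV F-test from deviances (dev) and numbers of parameters (np) of the
    # constant (cst), covariate (cov) and time-dependent (t) survival models.
    anodev_p <- function(dev_cst, dev_cov, dev_t, np_cst, np_cov, np_t) {
      df1 <- np_cov - np_cst
      df2 <- np_t - np_cov
      Fstat <- ((dev_cst - dev_cov) / df1) / ((dev_cov - dev_t) / df2)
      pf(Fstat, df1, df2, lower.tail = FALSE)
    }
    # Statistical power of a design = proportion of its 3500 simulations with p < 0.2:
    # power <- mean(p_values < 0.2)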
For each design, we calculated the relative increase in power, Δpower, by dividing the difference between the power of a given sampling design and the minimum power across all scenarios by the difference between the maximum and minimum power across all scenarios. This ratio was used as a response variable in a linear model to quantify the effect of three explanatory variables: i) the proportion of monitored nests in which chicks are ringed, ii) the proportion of breeders (re)captured/resighted and iii) whether adult breeders were caught (in ON surveys only). The survey duration and the number of surveyed nests were fixed. As the explanatory variables explained 100% of the variance of Δpower, the coefficients of the linear model sum to 1. Therefore, the coefficients can be interpreted as the relative contribution of each design component to the increase in statistical power.
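A sketch of this decomposition, assuming a data frame designs restricted to one duration x number-of-pairs combination and holding the raw power of each design (the column names are hypothetical):

    # Relative power gain of each design, scaled between 0 and 1.
    designs$d_power <- (designs$power - min(designs$power)) /
                       (max(designs$power) - min(designs$power))
    # Linear decomposition: coefficients give the relative contribution of each
    # component to the increase in statistical power (summing to 1 when the
    # explanatory variables explain all the variance of d_power).
    contrib_power <- lm(d_power ~ p_chicks + p_resight + adult_cap, data = designs)
    coef(contrib_power)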
Calculating the cost of CR surveys
Human and financial costs of each design were derived from our own field experience. Costs included the number of working-days required to monitor a territorial pair (resighting for ON, capture/recapture for NB), to ring chicks and to capture territorial breeders (for ON only). For both types of monitoring, these costs were multiplied by the number of breeding pairs surveyed, the number of monitored nests in which chicks are ringed and the total number of breeders caught. The specific case of the resighting of breeders in ON monitoring required knowing the distribution of working-days needed to check whether a given breeder was ringed and to identify it (Fig. S4). Indeed, since not all territorial birds were ringed, some observations did not provide information for the CR dataset. To account for this issue, we recorded from the simulated demography the annual proportion of ringed breeders in the population and the number of observations. We then calculated the costs of all bird observations, ringed or not, by sampling the number of working-days from the observed distribution of working-days (mean = 3.7 ± 3.3 per bird, Fig. S3). Finally, we converted the total number of working-days required for each simulation into a financial cost in euros, according to the average wage of conservation practitioners in France, assuming no volunteer-based work and accounting for travel fees and supplementary materials (e.g. binoculars, traps). Note that we are interested in the relative, not absolute, cost of survey designs. Finally, as for statistical power, we calculated the relative contribution of the different components of a survey design to the increase in total cost by performing a linear model with Δcosts (calculated in the same way as Δpower) as the response variable.
Finally, we calculated the cost-effectiveness of each design component by dividing its relative contribution to the increase in statistical power by its relative contribution to the increase in cost. This allowed us to assess in which component one should preferentially invest to increase CR survey efficiency.
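The cost side can be sketched in the same spirit. The per-task costs below reuse the working-day figures quoted in the Results for ON monitoring (15 days per adult capture, 2 days per brood ringed), while the days per pair, daily wage and travel surcharge are hypothetical placeholders.

    # Total cost (euros) of one ON survey design; working-day costs for adult
    # capture (15 d) and brood ringing (2 d) follow the text, the rest are
    # illustrative assumptions.
    cost_on <- function(n_pairs, duration, n_broods_ringed, n_adults_caught,
                        days_per_pair = 4, wage = 150, travel = 0.10) {
      days <- n_pairs * days_per_pair * duration +  # monitoring/resighting rounds
              n_broods_ringed * 2 +                 # chick ringing
              n_adults_caught * 15                  # capture of territorial breeders
      days * wage * (1 + travel)
    }
    # Cost-effectiveness of a component = its contribution to the power increase
    # divided by its contribution to the cost increase:
    # efficiency <- coef(contrib_power)[-1] / coef(contrib_cost)[-1]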
All simulations and analyses were run with R 3.1.2 (R Core Team 2014). We used RMark (Laake 2013) package calling program MARK [START_REF] Cooch | Program MARK: a gentle introduction[END_REF] from R for CR analyses. We provided all R scripts as supplementary information (Appendices S2-S5).
Results
Survey components affecting the power to detect a change in survival for Open-Nesting monitoring
The survey duration and the number of nests surveyed were identified as the two major components for improving the ability of CR surveys in detecting a change in survival rates (Fig. 1). All long-duration surveys reached the power threshold, whereas the majority of short-duration surveys did not (44/66).
The capture of five territorial birds each year during the first five years greatly increased the effectiveness of CR surveys (Fig. 1). This component actually compensated for the absence of ringed territorial birds in the early years, a consequence of delayed recruitment in long-lived species. Most survey designs lacking the initial capture of territorial birds (27/33) failed to reach the power threshold in short-duration surveys. However, the benefit in terms of statistical power of this component diminished as i) the survey duration increased from 10 to 20 years and ii) the number of breeding pairs monitored increased. For example, when 25 breeding pairs were monitored, a survey involving the initial capture of territorial birds and 50% of nests with chicks ringed was more efficient than a survey involving 100% of nests with chicks ringed but no territorial birds caught. Similarly, initial captures of territorial birds were more valuable than increasing the proportion of breeders resighted, although this effect tended to vanish as the survey duration and/or the number of surveyed nests increased. These interactions arose from the fact that we considered an absolute number of captures, and not a fixed proportion of the birds monitored. The smaller the number of breeding pairs surveyed and the shorter the survey duration, the more valuable the initial capture of territorial breeders became. Interestingly, monitoring as few as 15 pairs might provide satisfactory statistical power, provided that the study is conducted over 20 years (Fig. 1).
Survey components affecting the power to detect a change in survival for Nest-Box monitoring
The substantial random environmental variation implemented in the simulations (equal to or larger than the conservation effect) produced a noisy relationship between statistical power and the level of effort dedicated to the different survey components (Fig. 2a,b). Indeed, survival of medium-lived species suffers from a high level of residual temporal variation compared to long-lived species, which reduces statistical power. A solution to this issue might be found in the addition of relevant environmental covariates (e.g. prey abundance, climate indices) to CR models, to increase the ability of analyses to detect the genuine effect of conservation actions [START_REF] Grosbois | Assessing the impact of climate variation on survival in vertebrate populations[END_REF].
Trends can nevertheless be extracted, and we provide an additional figure without environmental variation to ascertain these inferences (Fig. 2c,d). First, while the majority of long-duration survey designs reached the statistical power threshold (24/33), no sampling design did so in short-duration surveys. Second, monitoring 25 pairs provided little statistical power whatever the survey duration and the level of effort dedicated to other components. Overall, the proportion of nests in which chicks were ringed had virtually no effect, partly because this component increases the proportion in the CR dataset of young birds, which are subject to higher environmental stochasticity than adults. The number of nest-boxes monitored increased statistical power, and the threshold was reached for long-term survey designs including 50 nest-boxes monitored and an intermediate effort dedicated to the capture of breeding birds. The proportion of breeding birds caught appeared as the most effective component of NB surveys for medium-lived birds. This is essentially because capturing breeding birds allows ringing a large number of new birds, thereby enriching the CR dataset and compensating for the low recruitment rates of individuals ringed as chicks. It appeared more effective to increase the proportion of breeding birds caught (from 0.5 to 0.8) than to increase the number of pairs surveyed by 25, especially for short-duration surveys.
Cost of CR surveys
The number of working-days represented 97 and 88% of the total financial cost of CR surveys for ON and NB monitoring respectively. Due to the multiple visits needed to monitor breeding success, the number of nests surveyed contributed the most to the cost of CR surveys in both types of monitoring (Fig. 3). Survey duration also contributed largely to the overall costs, by multiplying this expense over the number of years (Fig. S5). In contrast, improving the recapture/resighting probability of breeders only marginally increased the survey cost. All other things being equal, the capture of territorial birds in ON monitoring was more costly than improving the proportion of territorial birds resighted or increasing the proportion of chicks ringed. For NB monitoring, increasing the proportion of chicks ringed was more costly than improving the recapture probability of breeders. This discrepancy between monitoring types can be explained by the cost difference for the same component (Table S2): capturing a breeder in ON monitoring was much more expensive than ringing chicks (15 vs. 2 working-days), compared to NB monitoring (25 vs. 40 min).
The identification of cost-effective surveys
The most efficient CR surveys were those that surveyed small numbers of nests over long durations. However, these durations generally exceed the timescale of management planning and do not represent an effective way to quickly adapt conservation actions in response to a threat affecting survival. Therefore, we have chosen here to focus on short-duration surveys to identify the key design components providing the highest added value.
For ON monitoring conducted on 50 breeding pairs of a long-lived species, the most important contribution to the increase in statistical power came from the initial capture of breeding birds (29%), but increasing the proportion of nests in which chicks are ringed also proved efficient (57% cumulative gain when passing from 25 to 100% of ringed chicks; Fig. 4a). Surprisingly, increasing the proportion of resighted territorial birds provided only a limited gain in power (14%). The contribution of these different components to the overall survey cost was highly heterogeneous, with the capture of territorial breeders being particularly expensive (58%), whereas ringing chicks was cheap (14%; Fig. 4b). When balancing costs and benefits, it turned out that investing in the ringing of chicks was the most rewarding option (Fig. 4c).
For NB monitoring conducted on 75 breeding pairs of a medium-lived species, the major contribution to the increase in statistical power was achieved through the proportion of breeding adults caught (97% cumulative gain), with the proportion of chicks ringed providing little added value (3%). This trend was reinforced when considering cost contributions, such that the proportion of breeding adults caught was unambiguously identified as the most rewarding component of a NB sampling design (Fig. 5d,e,f).
Discussion
We have offered a methodological framework for exploring the relative efficiency of alternative survey designs in detecting a change in survival, a key demographic parameter widely used by scientists and practitioners for monitoring animal populations. The set of sampling designs (N = 198) encompasses the most common types of monitoring dedicated to the demographic study of birds by capture-recapture (nest-box and open-nest), applied to medium- or long-lived species. More importantly, we conducted a cost-benefit analysis balancing the increase in statistical power against the costs in working-days entailed by the four main components of CR surveys (survey duration, number of breeding pairs surveyed, proportion of monitored nests in which chicks are ringed and proportion of breeders (re)captured/resighted). For long-lived open-nesting species, increasing the proportion of chicks ringed is the most valuable option once the survey duration is fixed to a conservation-relevant timescale. In contrast, for medium-lived species monitored in nest-boxes, dedicating resources to increasing the proportion of breeding adults caught reduces the number of pairs that must be monitored to reach adequate statistical power in short-duration surveys.
Our simulation experiment pointed out that extended survey durations (over 3-4 generation times) and/or high numbers of monitored breeding pairs (50-75) are often necessary to allow the detection of a change in survival. This is problematic, as long-duration surveys exceed the timescale of management planning, which is unsatisfactory regarding the implementation of conservation actions [START_REF] Yoccoz | Monitoring of biological diversity in space and time[END_REF]. Moreover, practitioners dealing with species of conservation concern have to make the best of limited resources. Thus, the answers to the classical questions how long and how many are highly constrained in a management context. On the one hand, practitioners need an answer as soon as possible, so as to ensure the success of the management action while limiting costs. On the other hand, the number of breeding pairs monitored is either dictated by the total number of pairs available when studying restricted populations or by the level of human/financial resources available. Overall, we believe that the questions how to and what to monitor can provide significant added value to the design of monitoring schemes in a conservation/management context. Below we discuss several ways to overcome issues regarding monitoring design, in relation to monitoring type and species life history.
On the relevance of ringing 100% of the offspring monitored
Based on our own experience, the ability of practitioners/scientists to ring all the monitored chicks is a common quality control of CR surveys. Here we challenge this view, as our simulation results showed that the validity of this 'gold standard' depends on the species' life-history. For long-lived species, with high recruitment of locally-born individuals, this surely constitutes a pertinent option given the low cost of this component. For species with lower local recruitment rates such as medium-lived species, however, our results showed that investing in the capture of breeding adults, instead of seeking exhaustive ringing of chicks, is more efficient. Specifically, this strategy would consist of increasing the number of nest-box rounds when breeding adults are most likely to be caught, at the expense of rounds dedicated to the ringing of the last broods.
It can be argued, however, that this strategy may reduce our ability to estimate juvenile survival. The population growth rate of short- and medium-lived species is theoretically more sensitive to juvenile than to adult survival (e.g. [START_REF] Altwegg | Age-specific fitness components and their temporal variation in the barn owl[END_REF]), although the actual contribution of different demographic traits to population dynamics may differ from theoretical expectations (e.g. Hunter et al. 2010). Therefore, it could be of prime importance to avoid CR surveys that fail to provide reliable estimates of juvenile survival for such species. Estimating juvenile survival however remains problematic (Gilroy et al. 2012). Indeed, standard CR surveys allow the estimation of apparent survival, i.e. the product of true survival and the probability of recruiting in the study area, the latter often being weak for juveniles. For NB-breeding species, apparent survival is further affected by the probability of breeding in a nest-box, and not in a natural cavity where birds are typically out-of-reach. Therefore, juvenile survival cannot be compared among study areas that differ in the proportion of pairs occupying nest-boxes, the latter being usually unknown. Overall, we suggest that the monitoring of new recruitment in NB surveys, achieved by the capture of breeding birds, may significantly contribute to the comprehension of population dynamics in the absence of reliable data on juvenile survival [START_REF] Karell | Population dynamics in a cyclic environment: consequences of cyclic food abundance on tawny owl reproduction and survival[END_REF].
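As a purely hypothetical numerical illustration of this point: if true first-year survival were 0.5 but only 20% of surviving juveniles recruited locally into monitored nest-boxes, apparent juvenile survival would be estimated at 0.5 × 0.2 = 0.10, with most of the shortfall reflecting emigration or breeding in natural cavities rather than mortality.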
Capturing breeding adults: the panacea?
For both ON long-lived species and NB medium-lived species, the capture of breeding adults greatly improved the probability of detecting a change in survival rates. Delayed recruitment in long-lived species is a major constraint on CR surveys, especially for species in which the probability of observing non-breeding birds is low. Our simulations showed that capturing some adults in the initial years greatly improved the ability of short-duration surveys to reach a satisfactory statistical power. However, the costs associated with this component vary across species and can severely reduce its effectiveness. For instance, in large ON raptors, this entails prohibitive costs as it requires the mobilisation of numerous highly-skilled people over a long time period. Alternative indirect techniques may however be implemented to reduce the costs of capturing adults (see below).
In contrast, capturing breeding birds in nest-boxes is relatively easy and cheap, and requires only knowledge of the breeding phenology. Females can be caught during late incubation or when brooding chicks, and therefore provide highly valuable CR data. This is especially true when considering medium-lived species in which the local recruitment rate is low.
Implementation and future directions
If we are to reliably inform management on a reasonably short time-scale, CR surveys maximising statistical power should be favoured. Unfortunately, such surveys often include costly components such as capturing breeding individuals in ON long-lived species. Our simulations included standard CR techniques, and alternative methods may be achievable to decrease the cost of the most effective but least cost-efficient design components. For instance, collecting biological material for the identification of individuals through DNA analyses might provide valuable data for ON long-lived species [START_REF] Marucco | Wolf survival and population trend using non-invasive capture-recapture techniques in the Western Alps[END_REF][START_REF] Bulut | Use of Noninvasive Genetics to Assess Nest and Space Use by White-tailed Eagles[END_REF][START_REF] Woodruff | Estimating Sonoran pronghorn abundance and survival with fecal DNA and capture-recapture methods[END_REF]. Feathers of breeding birds can be searched for when nests are visited for ringing chicks. Provided that specific microsatellite markers are already available, genetic CR data can be gathered at low cost (30-50 € per sample). Alternatively, RFID microchips embedded in plastic rings may also reduce the cost of recapture by recording the ID of the parents when they visit the nest, for both ON and NB monitoring (e.g. [START_REF] Ezard | The contributions of age and sex to variation in common tern population growth rate[END_REF]). Reducing the costs entailed by the number of nests surveyed, or the proportion of nests in which chicks are ringed, may be further achieved by optimising travelling costs as proposed by [START_REF] Moore | Optimizing ecological survey effort over space and time[END_REF].
Here we took advantage of data-rich study models to set our simulations. Many species of conservation concern may lack such data, but values for demographic traits can be gathered from the literature on species with similar life-history characteristics. Furthermore, the effect size of the conservation effect can be set according to the extent of temporal variation in survival, as we did for the NB example. Because we did not have other systems available combining field-derived knowledge of both demographic parameters and survey costs, we did not perform a full factorial treatment between life-history strategies and monitoring types. We believe, however, that our simulation framework enables generic statements on the way CR surveys should be designed, partly because the relative, not absolute, costs of the different components are likely to be similar whatever the species considered. Our conclusions regarding NB monitoring are largely insensitive to the type of life-history, as the capture of breeding adults remains feasible at low cost for species with either shorter (e.g. blue tit Cyanistes caeruleus, [START_REF] Garcia-Navas | The role of immigration and local adaptation on fine-scale genotypic and phenotypic population divergence in a less mobile passerine[END_REF]) or longer life expectancy (e.g. tawny owl Strix aluco, [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF]; Cory's shearwater Calonectris diomedea, Oppel et al. 2011). NB monitoring of passerines can entail colour ringing and resightings in addition to recapture. Regarding ON monitoring, our conclusions drawn for long-lived raptors may be altered when considering species with lower local recruitment rates and for which the capture of breeding adults (e.g. with mist-nets) might be easier/cheaper (e.g. ring ouzel Turdus torquatus; [START_REF] Sim | Characterizing demographic variation and contributions to population growth rate in a declining population[END_REF]). In such a case, it is likely that the cost-benefit analysis will promote the capture of adults. Finally, many cliff-nesting seabirds show monitoring types and life-history characteristics similar to our examples, and our guidelines are likely to apply equally. For instance, a recent post-study evaluation of a CR survey conducted on common guillemot Uria aalge found that resighting effort could be halved without altering the capacity to monitor survival [START_REF] Lahoz-Monfort | Exploring the consequences of reducing survey effort for detecting individual and temporal variability in survival[END_REF], in agreement with our results. The complete R scripts provided as electronic supplements can be modified to help design specific guidelines for other species.
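For readers wishing to adapt the approach, the sketch below shows, under simplified assumptions (constant survival phi and detection p, staggered entry, no trap effects), how capture histories can be simulated in R; the function and parameter names are ours and purely illustrative. Power would then be approximated by simulating many such datasets under a survival change, fitting CJS models with and without that change (e.g. with the RMark or marked packages), and recording the proportion of significant tests.

# Simulate capture histories under a simple CJS-type process.
# phi = apparent survival, p = recapture probability (hypothetical values).
simulate_ch <- function(n_ind = 75, n_occ = 10, phi = 0.7, p = 0.5) {
  ch <- matrix(0L, nrow = n_ind, ncol = n_occ)
  first <- sample.int(n_occ - 1, n_ind, replace = TRUE)  # staggered entry
  for (i in seq_len(n_ind)) {
    ch[i, first[i]] <- 1L                        # individual marked at entry
    alive <- TRUE
    for (t in (first[i] + 1):n_occ) {
      alive <- alive && (runif(1) < phi)         # survives the interval?
      if (alive && runif(1) < p) ch[i, t] <- 1L  # detected if alive
    }
  }
  ch
}
set.seed(1)
ch <- simulate_ch()  # one simulated dataset of 75 individuals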
Finally, the different components of CR design considered in our simulations are somewhat specific to bird ecology and may not directly apply to other vertebrates such as mammals, reptiles or amphibians. For instance, in carnivorous mammals, CR surveys are limited by the difficulty of capturing/recapturing individuals with elusive behaviour. Survival estimates often rely on GPS/VHF tracking, which is not well suited to long-term monitoring. Camera-trapping and DNA-based identification are increasingly used to improve CR surveys in such species [START_REF] Marucco | Wolf survival and population trend using non-invasive capture-recapture techniques in the Western Alps[END_REF][START_REF] Cubaynes | Importance of accounting for detection heterogeneity when estimating abundance: the case of French wolves[END_REF][START_REF] O'connell | Camera traps in animal ecology: methods and analyses[END_REF], and we believe that a cost-efficiency approach may be helpful for carefully designing optimal surveys in such monitoring. For example, one could simulate sampling designs varying in trap number, inter-trap distance and area covered, for carnivores with small or large home-ranges, to assess the effect of these components on the detection of survival variation. The path is, therefore, open for developing cost-effective CR surveys and improving the output of wildlife monitoring in all management situations.
Acknowledgments
We would like to thank all the practitioners we have worked with for sharing their experiences on the monitoring of wild populations. NL received a PhD Grant from École Normale Supérieure/EDSE Aix-Marseille Université. Comments from two anonymous reviewers helped us to improve the quality of the manuscript. Sonia Suvelor kindly edited the English.
01744592 | en | ["sdv.mhep.csc", "sdv.mhep.phy", "sdv.mhep.em"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01744592/file/COMPREHENSIVE-PHYSIOL.pdf | Bénédicte Gaborit, PhD; Coralie Sengenes, PhD; Patricia Ancel; Alexis Jacquier, MD, PhD; Anne Dutour
Role of epicardial adipose tissue in health and disease: a matter of fat?
Epicardial adipose tissue (EAT) is a small but very biologically active ectopic fat depot that surrounds the heart. Given its rapid metabolism, thermogenic capacity, unique transcriptome, secretory profile, and simple measurability, epicardial fat has drawn increasing attention among researchers attempting to elucidate its putative role in health and cardiovascular diseases. The cellular cross-talk between epicardial adipocytes and cells of the vascular wall or myocytes is intense and suggests a local role for this tissue. The balance between protective and proinflammatory/profibrotic cytokines, chemokines, and adipokines released by EAT seems to be a key element in atherogenesis and could represent a future therapeutic target. EAT amount has been found to predict clinical coronary outcomes. EAT can also modulate cardiac structure and function. Its amount has been associated with atrial fibrillation, coronary artery disease, and sleep apnea syndrome. Conversely, a beiging fat profile of EAT has been identified. In this review, we describe the current state of knowledge regarding the anatomy, physiology and pathophysiological role of EAT, and the factors more globally leading to ectopic fat development. We will also highlight the most recent findings on the origin of this ectopic tissue, and its association with cardiac diseases.
Didactic synopsis
Major teaching points:
- EAT is an ectopic fat depot located between the myocardium and the visceral pericardium, with no fascia separating the tissues, allowing local interaction and cellular cross-talk between myocytes and adipocytes
- Given the lack of standard terminology, a distinction must be made between epicardial and pericardial fat to avoid confusion: pericardial fat refers to the combination of epicardial fat and paracardial fat (located on the external surface of the parietal pericardium)
- Imaging techniques such as echocardiography, computed tomography or magnetic resonance imaging are necessary to study EAT distribution in humans
- Very little EAT is found in rodents compared with humans
- EAT displays high rates of fatty acid metabolism (lipogenesis and lipolysis), thermogenic (beiging) features, and mechanical properties (a protective framework for cardiac autonomic nerves and vessels)
- Compared with visceral fat, EAT is likely to have predominantly local effects
- EAT secretes numerous bioactive factors, including adipokines, fibrokines, growth factors and cytokines, that can be either protective or harmful depending on the local microenvironment
- Human EAT has a unique transcriptome enriched in genes implicated in extracellular matrix remodeling, inflammation, immune signaling, beiging, thrombosis and apoptosis pathways
- Epicardial adipocytes have a mesothelial origin and derive mainly from the epicardium; cells originating from the Wt1+ mesothelial lineage can differentiate into EAT, and this "epicardium-to-fat transition" fate can be reactivated after myocardial infarction
- Factors leading to cardiac ectopic fat deposition may include dysfunctional subcutaneous adipose tissue, fibrosis, inflammation, hypoxia, and aging
- Periatrial EAT has a specific transcriptomic signature, and its amount is associated with atrial fibrillation
- EAT is likely to play a role in the pathogenesis of cardiovascular disease and coronary artery disease
- EAT amount is a strong independent predictor of future coronary events
- EAT is increased in obesity, type 2 diabetes, hypertension, metabolic syndrome, nonalcoholic fatty liver disease, and obstructive sleep apnea (OSA)
Introduction
Obesity and type 2 diabetes have become increasingly prevalent in recent years, and are strongly associated with cardiovascular diseases, which remain a major contributor to total global mortality despite advances in research and clinical care (195). Organ-specific adiposity has renewed scientific interest because it probably contributes to the pathophysiology of cardiometabolic diseases ([START_REF] Despres | Body Fat Distribution and Risk of Cardiovascular Disease: An Update[END_REF],321). Better phenotyping of obese individuals, improving knowledge of individual risk, and identifying new therapeutic targets are therefore decisive. Epicardial adipose tissue (EAT) is the visceral fat depot of the heart, in direct contact with the myocardium and coronary arteries. Its endocrine and metabolic activity is outstanding, and its key localization allows a singular cross-talk with cardiomyocytes and cells of the vascular wall.
Despite the small amount of EAT found in rodents, human EAT is readily measured using imaging methods, which has generated more than 1000 publications in the past decade. In this review, we discuss the recent basic and clinical research regarding (i) EAT anatomy, (ii) physiology, (iii) origin, and (iv) development, (v) clinical applications of EAT measurements, and (vi) its role in pathophysiology, in particular in atrial fibrillation, heart function, coronary artery disease (CAD) and obstructive sleep apnea syndrome.
Systematic review criteria
We searched MEDLINE and PubMed for original articles published over the past ten years, focusing on epicardial adipose tissue. The search terms we used, alone or in combination, were "cardiac ectopic fat", "cardiac adiposity", "fatty heart", "ectopic cardiovascular fat", "ectopic fat depots", "ectopic fat deposits", "epicardial fat", "epicardial adipose tissue", "pericardial fat", and "pericardial adipose tissue". All articles identified were English-language, full-text papers. We also searched the reference lists of identified articles for further relevant studies.
EAT IN HEALTH
Anatomy of EAT
Definitions and distinction between pericardial and epicardial fat
Epicardial fat is the true visceral fat deposit of the heart (111,253,265). It is most commonly defined as adipose tissue surrounding the heart, located between the myocardium and the visceral pericardium (Figure 1). It should be distinguished from paracardial fat (adipose tissue located external to the parietal pericardium) and pericardial fat (often defined as paracardial fat plus epicardial fat) [START_REF] Gaborit | Epicardial fat: more than just an "epi" phenomenon?[END_REF](126). However, it should be noted that in the literature there is often some confusion between the terms pericardial and epicardial, so it is prudent to review carefully the definitions of the adipose tissues measured by imaging in any individual study.
Distribution of EAT in humans and other species
Even though the adipose tissue of the heart was neglected for a long time, anatomists observed early on that it varies in extent and distribution pattern in humans. EAT constitutes on average 20% of heart weight in autopsy series ([START_REF] Corradi | The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts[END_REF],253,259). However, it has been shown to vary widely among individuals, from 4% to 52%, and to be preferentially distributed over the base of the heart, the left ventricular apex, the atrioventricular and interventricular grooves, along the coronary arteries and veins, and over the right ventricle (RV), in particular its free wall (253). In our postmortem study, age, waist circumference and heart weight were the main determinants of EAT increase, the latter covering the entire epicardial surface of the heart in some cases (284). Importantly, a close functional and anatomical relationship exists between EAT and the myocardium. Both share the same microcirculation, with no fascia separating the adipose tissue from the myocardial layers, allowing cellular cross-talk between adipose tissue and cardiac muscle (127). In species other than humans, such as pigs, rabbits or sheep, EAT is relatively abundant, which contrasts with the very small amount of EAT found in rodents (Figure 2) (127). Initially, these findings did not support a critical role for EAT in normal heart physiology and partly explain why EAT has been so poorly studied. However, there is a growing body of evidence that, beyond the amount of EAT, its metabolic and endocrine activity is also crucial.
Physiology of EAT
The current understanding of EAT physiology is still in its infancy. The main anatomical and supposed physiological properties of epicardial fat are summarized in Table 1. One of the major limitations in studying the physiology of EAT is that samples can only be obtained from patients with cardiac diseases undergoing cardiac surgery, as sampling healthy volunteers would be unethical.
Histology
In humans, EAT has a smaller adipocyte size than subcutaneous or peritoneal adipose tissue [START_REF] Bambace | Adiponectin gene expression and adipocyte diameter: a comparison between epicardial and subcutaneous adipose tissue in men[END_REF]. But EAT is composed of far more than simply adipocytes. It also contains inflammatory, stromal and immune cells, as well as nervous and nodal tissue (206). It has been suggested that EAT may serve as a protective framework for cardiac autonomic nerves and ganglionated plexi (GP). Accordingly, nerve growth factor (NGF), which is essential for the development and survival of sensory neurons, is highly expressed in EAT (266). Atrial EAT is thus often the target of radiofrequency ablation for arrhythmias (see section EAT and atrial fibrillation).
Metabolism
Up to now, our understanding of EAT physiology in humans remains quite limited, and data regarding lipid storage (lipogenesis) and release (lipolysis) come mainly from animal studies.
In guinea pigs, Marchington et al. reported that EAT exhibits an approximately two-fold higher metabolic capacity for fatty acid incorporation, breakdown, and release relative to other intra-abdominal fat depots (198). Considering that free fatty acids (FFA) are the major source of fuel for the contracting heart muscle, EAT may act as a local energy supply and an immediate ATP source for the adjacent myocardium during times of energy restriction (199).
Conversely, due to its high lipogenic activity and high expression of fatty acid transporters specialized in intracellular lipid trafficking, such as FABP4 (fatty-acid-binding protein 4) (325), EAT could serve as a buffer against toxic levels of FFA during times of excess energy intake. How FFAs are transported from EAT into the myocardium remains, however, to be elucidated. One hypothesis is that FFAs could diffuse bidirectionally in interstitial fluid across concentration gradients (265).
Secretome
EAT is more than a fat storage depot. Indeed, it is now widely recognized to be an extremely active endocrine organ and a major source of adipokines, chemokines and cytokines that can be either protective or harmful depending on the local microenvironment (127,206). The human secretome of EAT is wide and is described in Table 2. This richness probably reflects the complex cellularity of EAT and its cross-talk with neighboring structures. Interleukin (IL)-1β, IL-6, IL-8, IL-10, tumor necrosis factor α (TNF-α), monocyte chemoattractant protein 1 (MCP-1), adiponectin, leptin, visfatin, resistin, phospholipase A2 (sPLA2), and plasminogen activator inhibitor 1 (PAI-1) are examples of bioactive molecules secreted by EAT [START_REF] Cherian | Cellular cross-talk between epicardial adipose tissue and myocardium in relation to the pathogenesis of cardiovascular disease[END_REF][START_REF] Dutour | Secretory Type II Phospholipase A2 Is Produced and Secreted by Epicardial Adipose Tissue and Overexpressed in Patients with Coronary Artery Disease[END_REF](206,268). Given the lack of anatomical barriers, adipokines produced by EAT are thought to interact with vascular cells or myocytes in two ways: paracrine and/or vasocrine.
Interaction with cardiomyocytes is likely to be paracrine, as close contact between epicardial adipocytes and myocytes exists and fatty infiltration into the myocardium is not rare ([START_REF] Corradi | The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts[END_REF],193,308). Interactions with cells of the vascular wall seem to be paracrine or vasocrine. In paracrine signalling, it is hypothesized that EAT-derived adipokines diffuse directly through the layers of the vessel wall via the interstitial fluid to interact with smooth muscle cells and endothelium, probably influencing the initiation of inflammation and atherogenesis (see EAT and Coronary artery disease (CAD)). An alternative vasocrine signalling mechanism has been proposed, in which EAT-derived adipokines directly enter the lumen of closely apposed adventitial vasa vasorum and are thus transported downstream into the arterial wall (126,265). Apart from the classical "inside-out" cross-talk across the endothelial and intimal layers, this would suggest the existence of an opposite "outside-in" cellular cross-talk (111,124,266).
Supposed Protective functions
Mechanical protective effects have been attributed to epicardial fat. EAT is supposed to act as a shock absorber to protect coronary arteries against the torsion induced by the arterial pulse wave and cardiac contraction (253). A permissive role of EAT on vessel expansion and positive remodeling of coronary vessels, to maintain the arterial lumen, has been reported (251). Given its high metabolic activity, EAT is likely to be involved in the regulation of fatty acid homeostasis in the coronary microcirculation (199). Some adipokines, such as adiponectin, adrenomedullin and omentin, may have protective effects on the vasculature, by regulating arterial vascular tone (vasodilation), reducing oxidative stress, improving endothelial function, and increasing insulin sensitivity [START_REF] Cheng | Adipocytokines and proinflammatory mediators from abdominal and epicardial adipose tissue in patients with coronary artery disease[END_REF][START_REF] Fain | Identification of omentin mRNA in human epicardial adipose tissue: comparison to omentin in subcutaneous, internal mammary artery periadventitial and visceral abdominal depots[END_REF][START_REF] Gaborit | Human epicardial adipose tissue has a specific transcriptomic signature depending on its anatomical peri-atrial, periventricular, or peri-coronary location[END_REF](283). EAT is also considered an immunological tissue that serves to protect the myocardium and vessels against pathogens ([START_REF] Fain | Identification of omentin mRNA in human epicardial adipose tissue: comparison to omentin in subcutaneous, internal mammary artery periadventitial and visceral abdominal depots[END_REF],266). Hence, under physiological conditions EAT can exert cardioprotective actions through the production of anti-atherogenic cytokines. However, the shift of EAT towards a more pro-inflammatory or pro-fibrosing phenotype may favor many pathophysiological states (see EAT in diseases). Determining the factors that regulate this fragile balance is a major challenge for the coming years.
Transcriptome
EAT has a unique transcriptomic signature when compared to subcutaneous fat [START_REF] Gaborit | Human epicardial adipose tissue has a specific transcriptomic signature depending on its anatomical peri-atrial, periventricular, or peri-coronary location[END_REF]188).
Using a pangenomic approach, we identified that EAT is particularly enriched in extracellular matrix remodeling, inflammation, immune signaling, beiging, coagulation, thrombosis and apoptosis pathways [START_REF] Gaborit | Human epicardial adipose tissue has a specific transcriptomic signature depending on its anatomical peri-atrial, periventricular, or peri-coronary location[END_REF]. Omentin (ITLN1) was the most upregulated gene in EAT, as confirmed by others ([START_REF] Fain | Identification of omentin mRNA in human epicardial adipose tissue: comparison to omentin in subcutaneous, internal mammary artery periadventitial and visceral abdominal depots[END_REF],102), and network analysis revealed that its expression level was related to that of many other genes, supporting an important role for this cardioprotective adipokine (273). Remarkably, we observed a specific transcriptomic signature for EAT taken at different anatomical sites. EAT taken from the periventricular area overexpressed genes implicated in Notch/p53, inflammation, ABC transporters and glutathione metabolism. EAT taken from around the coronary arteries overexpressed genes implicated in proliferation, O-N glycan biosynthesis, and sphingolipid metabolism. Finally, EAT taken from the atria overexpressed genes implicated in oxidative phosphorylation, cell adhesion, cardiac muscle contraction and the intracellular calcium signalling pathway, suggesting a specific contribution of periatrial EAT to cardiac muscle activity. These findings further support the importance of the microenvironment on the EAT gene profile. Just as abdominal adipose tissue comprises many different depots, there is not one but rather many epicardial adipose tissues.
Thermogenesis
The thermogenic and browning potential of epicardial fat has received increasing attention, and has been recently reviewed elsewhere [START_REF] Chechi | Thermogenic potential and physiological relevance of human epicardial adipose tissue[END_REF]. Brown adipose tissue (BAT) generates heat in response to cold temperatures and activation of the autonomic nervous system. The heat generation is due to the expression of uncoupling protein 1 (UCP-1) in the mitochondria of brown adipocytes (183). Until quite recently, BAT was thought to be of metabolic importance only in hibernating mammals and human newborns. However, recent studies using positron emission tomography (PET) have reported the presence of metabolically active BAT in human adults ([START_REF] Cypess | Identification and importance of brown adipose tissue in adult humans[END_REF],224). Interestingly, Sacks et al. reported that UCP-1 expression was fivefold higher in EAT than in substernal fat, and undetectable in subcutaneous fat, suggesting that EAT could have "brown" fat properties to defend the myocardium and coronary arteries against hypothermia [START_REF] Chechi | Brown fat like gene expression in the epicardial fat depot correlates with circulating HDL-cholesterol and triglycerides in patients with coronary artery disease[END_REF]. The authors further demonstrated that the structure and architecture of EAT differ among neonates, infants and children, with more genes implicated in the control of thermogenesis in EAT of neonates, and a shift towards lipogenesis through infancy (230).
Further studies identified that EAT has a beige or brite profile, with the expression of beige markers such as CD137 (267). Besides, we reported that periventricular EAT could be more sensitive to browning, as it expressed more UCP-1 than other epicardial fat stores [START_REF] Bellows | Influence of BMI on level of circulating progenitor cells[END_REF]. Furthermore, several genes upregulated in periventricular EAT encode enzymes of the glutathione metabolism pathway. These enzymes have a specific signature in brown adipose tissue, due to the uncoupling of the respiratory chain and the increase in oxidative metabolism (246). The 'brite' (i.e. brown in white) or 'beige' adipocytes are multilocular adipocytes located within white adipose tissue islets, which have the capacity to be recruited and to express UCP-1, mainly upon cold exposure ([START_REF] Cousin | Occurrence of brown adipocytes in rat white adipose tissue: molecular and morphological characterization[END_REF],282,339). It has been suggested that beige adipose tissue in EAT originates from the recruitment of white adipocytes that produce UCP-1 in response to browning factors such as myokines like irisin, cardiac natriuretic peptides, or fibroblast growth factor 21 (FGF21) [START_REF] Bordicchia | Cardiac natriuretic peptides act via p38 MAPK to induce the brown fat thermogenic program in mouse and human adipocytes[END_REF]. Whether these factors have a direct beiging effect on EAT and can stimulate its thermogenic potential remains to be addressed. A recent study demonstrated that increased reactive oxygen species (ROS) production in epicardial fat of CAD patients was possibly associated with brown-to-white transdifferentiation of adipocytes within EAT [START_REF] Dozio | Increased reactive oxygen species production in epicardial adipose tissues from coronary artery disease patients is associated with brown-to-white adipocyte trans-differentiation[END_REF]. Accordingly, another study revealed that an increase in brown EAT was associated with a lack of progression of coronary atherosclerosis in humans [START_REF] Ahmadi | Aged garlic extract with supplement is associated with increase in brown adipose, decrease in white adipose tissue and predict lack of progression in coronary atherosclerosis[END_REF]. These results point to a beneficial role of EAT browning in CAD development. Whether these beige adipocytes within white epicardial adipocytes could serve as a therapeutic target to improve cardiac health and metabolism remains to be explored.
The origin of epicardial adipose tissue
In recent years, there has been growing interest in the distribution and function of adipocytes and the developmental origins of white adipose tissue (WAT) ([START_REF] Billon | Developmental origins of the adipocyte lineage: new insights from genetics and genomics studies[END_REF],109,168,244).
Since adipocytes are located close to the microvasculature, it has been suggested that white adipocytes could have an endothelial origin (307,315). However, this hypothesis has been challenged by recent lineage-tracing experiments that revealed the epicardium as the origin of epicardial fat cells ([START_REF] Chau | Visceral and subcutaneous fat have different origins and evidence supports a mesothelial source[END_REF],180,343). Chau et al. used genetic lineage tracing to identify descendants of cells expressing the Wilms' tumor gene Wt1 (Wt1-Cre mice), and found a major ontogenetic difference between VAT and WAT [START_REF] Chau | Visceral and subcutaneous fat have different origins and evidence supports a mesothelial source[END_REF]. The authors observed that epicardial fat and five other visceral fat depots (gonadal, mesenteric, perirenal, retroperitoneal, and omental) appearing postnatally received a significant contribution from cells that once expressed Wt1 late in gestation. By contrast, Wt1-expressing cells did not contribute to the development of inguinal WAT or brown adipose tissue (BAT). Wt1 is a major regulator of mesenchymal progenitors in the developing heart. During development, Wt1 expression is restricted mainly to the intermediate mesoderm, parts of the lateral plate mesoderm, the tissues that derive from these, and the mesothelial layer that lines the visceral organs and the peritoneum (201). In their experiments, a subset of visceral WAT continued postnatally to arise from Wt1-expressing cells, consistent with the finding that Wt1 marks a proportion of cell populations enriched in WAT progenitors [START_REF] Chau | Visceral and subcutaneous fat have different origins and evidence supports a mesothelial source[END_REF]. Depending on the depot, Wt1+ cells comprised 4-40% of the adult progenitor population, being most abundant in omental and epicardial fat. This suggested heterogeneity in the visceral WAT lineage. Finally, using FACS analysis the authors showed that Wt1-expressing mesothelial cells expressed accepted markers of adipose precursors (CD29, CD34, Sca1). In addition, cultures of epididymal appendage explants gave rise to adipocytes from Wt1+ cells, confirming that Wt1-expressing mesothelium can produce adipocytes [START_REF] Chau | Visceral and subcutaneous fat have different origins and evidence supports a mesothelial source[END_REF]. The concept of a mesothelial origin of epicardial fat cells has been supported by contemporaneous lineage-tracing studies from Liu et al., using the double transgenic mouse line Wt1-CreER; Rosa26 RFP/+ to trace epicardium-derived cells (EPDCs), and an adenovirus expressing Cre under the epicardium-specific promoter Msln (180). They demonstrated that epicardial fat descends from embryonic epicardial progenitors expressing Wt1 and Msln. They referred to this as epicardium-to-fat transition (ETFT).
Furthermore, cells of the epicardium in adult animals gave rise to epicardial adipocytes following myocardial infarction, but not during normal heart homeostasis (180). Another group confirmed these results and further established IGF1R signaling as a key pathway that governs EAT formation after myocardial injury by redirecting the fate of Wt1+ lineage cells (349). Taken together, this suggested that while embryonic epicardial cells contribute to EAT, there is minimal ETFT in the normal adult heart, but the process can be reactivated after myocardial infarction or severe injury (Figure 3). This important discovery provides new insights into the treatment of cardiovascular diseases and into regenerative medicine and stem cell therapy, as isolated human epicardial adipose-derived stem cells (ADSCs) showed the highest cardiomyogenic potential compared with the pericardial and omental subtypes (340).
Further investigations are awaited in humans to decipher the mechanisms of ETFT reactivation in the setting of metabolic and cardiovascular diseases.
Another study clarified the discrepancy in EAT abundance among species (343). The authors confirmed in mice that EAT originates from the epicardium and that the adoption of the adipocyte fate in vivo requires the transcription factor PPARγ (peroxisome proliferator-activated receptor gamma). By stimulating PPARγ at the time of epicardium-mesenchymal transformation, they were indeed able to induce this adipocyte fate ectopically in the ventricular epicardium of embryonic and adult mice (343). Human embryonic ventricular epicardial cells natively express PPARγ, which explains the abundant presence of fat seen in human hearts at birth and throughout life, whereas in mice EAT remains small and confined to the atrio-ventricular groove.
Whereas EAT seems to have an epicardial origin, adipocytes present in the myocardium could have a different one (Figure 3). Indeed, adipocytes interspersed with the right ventricular muscle fibres are commonly seen at necropsy (308). This is thought to reflect the normal physiological process of involution that occurs with ageing, and is distinct from the accumulation of triglycerides in cardiomyocytes (namely steatosis). A recent study identified an endocardial origin of intramyocardial adipocytes during development (351). Nevertheless, the endocardium of the postnatal heart did not contribute to intramyocardial adipocytes during homeostasis or after myocardial infarction, suggesting that the endocardium-to-fat transition cannot be recapitulated after myocardial infarction. It remains unknown, however, whether endocardial cells could give rise to excessive adipocytes in other types of cardiovascular disease such as arrhythmogenic right ventricular cardiomyopathy. In this genetic disease, excessive adipose tissue replaces the myocardium of the right ventricle, leading to ventricular arrhythmias and sudden death (182).
Taken together, these findings indicate that further lineage studies are needed to better understand whether mesothelial progenitors contribute to epicardial adipocyte hyperplasia in obesity, type 2 diabetes or cardiovascular diseases.
What drives the development of ectopic fat in the heart?
It is likely that genetic, epigenetic and environmental factors are involved in this process.
EAT has been found to vary among populations of different ethnicities [START_REF] Baba | CT Hounsfield units of brown adipose tissue increase with activation: preclinical and clinical studies[END_REF][START_REF] Bachar | Epicardial adipose tissue as a predictor of coronary artery disease in asymptomatic subjects[END_REF][START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF][START_REF] Bakkum | The impact of obesity on the relationship between epicardial adipose tissue, left ventricular mass and coronary microvascular function[END_REF][START_REF] Bambace | Adiponectin gene expression and adipocyte diameter: a comparison between epicardial and subcutaneous adipose tissue in men[END_REF][START_REF] Bapat | Depletion of fat-resident Treg cells prevents age-associated insulin resistance[END_REF][START_REF] Barandier | Mature adipocytes and perivascular adipose tissue stimulate vascular smooth muscle cell proliferation: effects of aging and obesity[END_REF]. EAT volume or thickness was reported to be lower in South Asians and Southeast and East Asians than in Caucasians [START_REF] Barandier | Mature adipocytes and perivascular adipose tissue stimulate vascular smooth muscle cell proliferation: effects of aging and obesity[END_REF], and higher in White or Japanese individuals than in Blacks or African Americans [START_REF] Bachar | Epicardial adipose tissue as a predictor of coronary artery disease in asymptomatic subjects[END_REF][START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF][START_REF] Bambace | Adiponectin gene expression and adipocyte diameter: a comparison between epicardial and subcutaneous adipose tissue in men[END_REF].
In a genome-wide association analysis including 5,487 individuals of European ancestry from the Framingham Heart Study (FHS) and the Multi-Ethnic Study of Atherosclerosis (MESA), a unique locus (rs10198628) near TRIB2 (Tribbles homolog 2 gene) was identified as associated with cardiac ectopic fat deposition, reinforcing the concept that there are unique genetic underpinnings to ectopic fat distribution [START_REF] Baroja-Mazo | The NLRP3 inflammasome is released as a particulate danger signal that amplifies the inflammatory response[END_REF]. Animal studies have also revealed the possible effects of fetal programming, such as late-gestation undernutrition, on predisposition to visceral adiposity [START_REF] Barone-Rochette | Left ventricular remodeling and epicardial fat volume in obese patients with severe obstructive sleep apnea treated by continuous positive airway pressure[END_REF]. Other environmental factors such as aging, excess caloric intake, sedentary lifestyle, pollutants, and microbiota may also modulate ectopic fat deposition [START_REF] Bastarrika | Relationship between coronary artery disease and epicardial adipose tissue quantification at cardiac CT: comparison between automatic volumetric measurement and manual bidimensional estimation[END_REF][START_REF] Batal | Left atrial epicardial adiposity and atrial fibrillation[END_REF]. In obesity and type 2 diabetes, increased amounts of ectopic fat stores have been consistently reported, but the mobilization of these ectopic fat depots seems to be site-specific [START_REF] Bellows | Influence of BMI on level of circulating progenitor cells[END_REF][START_REF] Bidault | LMNA-linked lipodystrophies: from altered fat distribution to cellular alterations[END_REF][START_REF] Billon | Developmental origins of the adipocyte lineage: new insights from genetics and genomics studies[END_REF].
Studying the cellular mechanisms that favor ectopic fat accumulation has therefore become an important focus of research.
Factors leading to ectopic fat development
Expandability hypothesis: dysfunctional subcutaneous fat
There are several potential mechanisms that might explain the tendency to deposit ectopic fat, but one convincing hypothesis is that an individual's capacity to store lipids in subcutaneous adipose tissue has a set maximal limit. When this limit is exceeded, increased import and storage of lipids in visceral adipose tissue and in non-adipose tissues occur. This is the adipose tissue expandability hypothesis (323). The limited capacity of the subcutaneous adipose tissue to expand induces a "lipid spillover" to other cell types, leading to ectopic lipid deposition, which, in turn, drives insulin resistance and the collective pathologies that encompass the metabolic syndrome (319).
There is some intriguing evidence from human studies that supports the adipose tissue expandability hypothesis. In LMNA-linked lipodystrophies, the lack of subcutaneous adipose tissue results in severe insulin resistance, hypertriglyceridemia and increased ectopic fat deposition in the liver and the heart [START_REF] Bidault | LMNA-linked lipodystrophies: from altered fat distribution to cellular alterations[END_REF][START_REF] Galant | A Heterozygous ZMPSTE24 Mutation Associated with Severe Metabolic Syndrome, Ectopic Fat Accumulation, and Dilated Cardiomyopathy[END_REF]. Data from animal studies have revealed that transplantation of SAT or removal of VAT in obese mice reversed the adverse metabolic effects of obesity, and improved glucose homeostasis and hepatic steatosis ([START_REF] Foster | Removal of intra-abdominal visceral adipose tissue improves glucose tolerance in rats: role of hepatic triglyceride storage[END_REF](117). These data place adipose tissue function at the center of ectopic lipid deposition.
Fibrosis
Adipocytes are surrounded by a network of extracellular matrix (ECM) proteins, which provide mechanical support and respond to various signaling events (151,223). During adipogenesis, both the formation and expansion of the lipid droplet require dramatic morphological changes, involving both cellular and ECM remodeling (208). Throughout the progression from the lean to the obese state, adipose tissue has been reported to actively change its ECM to accommodate growth [START_REF] Alligier | Subcutaneous adipose tissue remodeling during the initial phase of weight gain induced by overfeeding in humans[END_REF][START_REF] Divoux | Architecture and the extracellular matrix: the still unappreciated components of the adipose tissue[END_REF](244). Moreover, it has been shown that metabolically dysfunctional adipose tissue exhibits a higher degree of fibrosis, characterized by abundant ECM proteins and particularly abnormal collagen deposition (151). Therefore, as obesity progresses, ECM rigidity, composition and remodeling impact adipose tissue expandability by physically limiting adipocyte hypertrophy, thus promoting lipotoxicity and ectopic fat deposition. Indeed, genetic ablation of collagen VI (a highly enriched ECM constituent of adipose tissue (137)) in mouse models of genetic or dietary obesity impaired ECM stability, reduced adipose tissue fibrosis and dramatically ameliorated glucose and lipid metabolism (151). In this mouse model, the lack of collagen VI allowed adipocytes to increase their size without ECM constraints, which favored lipid storage and minimized ectopic lipid accumulation in non-adipose tissues. Such results suggest that adipose tissue fibrosis is likely to induce systemic metabolic alterations, as does fibrosis in the liver, heart or kidney. Moreover, it appears that maintaining a high degree of ECM elasticity allows adipose tissue to expand in a healthy manner, without adverse metabolic consequences (299).
Though hypertrophic adipocytes exhibit a profibrotic transcriptome (114), the contribution and identity of the different cell types responsible for fibrotic deposits in adipose tissue are difficult to determine. However, we and others have demonstrated that macrophages are master regulators of fibrosis in adipose tissue [START_REF] Bourlier | TGFbeta family members are key mediators in the induction of myofibroblast phenotype of human adipose tissue progenitor cells by macrophages[END_REF](150,299). They produce high levels of transforming growth factor β1 (TGF-β1), and we and others demonstrated that they can directly activate preadipocytes (the so-called adipose progenitor cells) to differentiate towards a myofibroblast-like phenotype, thus promoting fibrosis in adipose tissue during its unhealthy excessive development [START_REF] Bourlier | TGFbeta family members are key mediators in the induction of myofibroblast phenotype of human adipose tissue progenitor cells by macrophages[END_REF](150). Importantly, it has recently been demonstrated that the transcription factor interferon regulatory factor 5 (Irf5), known to polarize macrophages toward an inflammatory phenotype (162), directly represses TGF-β1 expression in macrophages, thereby controlling ECM deposition [START_REF] Dalmas | Irf5 deficiency in macrophages promotes beneficial adipose tissue expansion and insulin sensitivity during obesity[END_REF]. Importantly, IRF5 expression in obese individuals is negatively associated with insulin sensitivity and collagen deposition in visceral adipose tissue (162).
It has been proposed that fibrosis development in adipose tissue promotes adipocyte necrosis, which in turn induces the infiltration of immune cells to remove cell debris, thus leading to a low-grade inflammation state. Whether fibrosis is a cause or a consequence of adipose tissue inflammation in obesity is still a matter of intense debate (258). That being said, it is undisputed that there is a close relationship between fibrosis and inflammation in adipose tissue.
Inflammation
The link between obesity and adipose tissue inflammation was first suspected with the finding that levels of the proinflammatory cytokine TNF-α were increased in obese adipose tissue, the blockade of which improved insulin sensitivity (120, 121). Subsequently, macrophages were found to infiltrate obese adipose tissue (329, 341), which led to the general concept that obesity is a chronic, unmitigated inflammation with insidious results, in which adipose tissue releases proinflammatory cytokines and adipokines that impair insulin sensitivity in metabolic tissues [START_REF] Cildir | Chronic adipose tissue inflammation: all immune cells on the stage[END_REF]. Very importantly, of the various fat depots, visceral adipose tissue has been shown to be the predominant source of chronic systemic inflammation (140). Under lean conditions, adipose tissue houses a number of immune cells, mostly M2-like macrophages (with a 4:1 M2:M1 ratio (186)), as well as eosinophils and regulatory T cells, which secrete IL-4/IL-13 and IL-10 respectively, polarizing macrophages toward an anti-inflammatory phenotype (185, 331). Of note, the M2-like phenotype of macrophages has been reported to be maintained by both immune cells and adipocytes (203). Importantly, the polarization of macrophages from an M2 to a pro-inflammatory M1-like phenotype has been considered a key event in the induction of visceral adipose tissue inflammation in obesity [START_REF] Bourlier | Remodeling phenotype of human subcutaneous adipose tissue macrophages[END_REF][START_REF] Castoldi | The Macrophage Switch in Obesity Development[END_REF](185,240). However, the crucial trigger of such polarization, as well as of the increase of immune cells in adipose tissue, is still unclear, but is likely to be derived from adipocytes. As already mentioned above, as adipose tissue mass increases, several morphological changes occur, leading to the activation of several stress pathways within adipose tissue, such as endoplasmic reticulum stress, oxidative stress and inflammasome activation [START_REF] Clement | Weight of Pericardial Fat on Coronaropathy[END_REF][START_REF] Cypess | Identification and importance of brown adipose tissue in adult humans[END_REF]. Meanwhile, adiponectin production drops, that of leptin increases, and adipose tissue produces inflammatory mediators including IL-1β, IL-6, IL-8, IL-10, TGF-β, TNF-α, MCP-1, plasminogen activator inhibitor-1 (PAI-1), macrophage migration inhibitory factor, metallothionein, osteopontin, chemerin, and prostaglandin E2 (140,196). The drop in adiponectin results in decreased glucose uptake, while altered leptin signaling affects satiety signals but also the immune system. Indeed, the leptin receptor (LEP-R) is expressed on most immune cells (331), and increased leptin production by adipose tissue could dramatically promote immune cell expansion (236). Mice that are leptin-deficient (ob/ob) or leptin receptor-deficient (db/db) are obese and exhibit a strong reduction in functional immune cells (regulatory T cells, NK cells and dendritic cells (166, 214)). Paradoxically, very provocative recent data argue that a reduced ability of adipocytes to sense and respond to proinflammatory stimuli decreases the capacity for healthy adipose tissue expansion and remodeling. As for fibrosis, such inability would result in increased high-fat-diet-induced ectopic fat accumulation and metabolic dysfunction.
Moreover, the authors demonstrated that proinflammatory responses in adipose tissue are essential for both proper ECM remodeling and angiogenesis, two processes known to facilitate adipogenesis, thus favoring healthy adipose tissue expansion (332). Finally, new regulatory players in adipose tissue homeostasis have been identified: innate lymphoid type 2 cells (ILC2s) and IL-33. ILC2s are a regulatory subtype of ILCs, immune cells that lack a specific antigen receptor and can produce a spectrum of effector cytokines matching T helper cell subsets (294). ILC2s are activated by IL-33 and produce large amounts of the type 2 cytokines IL-5 and IL-13 (217).
Upon binding to its receptor (ST2), IL-33 induces the production of large amounts of anti-inflammatory cytokines by adipose tissue ILC2s, as well as the polarization of macrophages toward an M2 phenotype, which results in both adipose tissue mass reduction and improvement of insulin resistance (110).
Considerable changes in the composition and phenotype of immune cells occur in adipose tissue during the onset of obesity, suggesting that they are actively involved in releasing secretory products along with adipocytes. In contrast to chronic systemic inflammation, which interferes with optimal metabolic fitness, potent acute adipose tissue inflammation is an adaptive response to stress-inducing conditions with beneficial effects, since it enables healthy adipose tissue remodeling and expansion.
Hypoxia
In attempts to identify the trigger of adipose dysfunction in obesity, the theory of insufficient angiogenesis to maintain normoxia in the developing fat pad during obesity has also been proposed (316,345). Interestingly, parallels exist between the excessive development of adipose tissue and tumors, in that both situations are challenged to vascularize growing tissue to provide sufficient O2 and nutrients (298). Various arguments strongly support the idea of "hypoxia in adipose tissue". First, white mature hypertrophic adipocytes can reach a diameter of up to 200 µm in obese patients (205,286), and the normal diffusion distance of O2 across tissues is 100 to 200 µm [START_REF] Brahimi-Horn | Oxygen, a source of life and stress[END_REF]. Second, although lean subjects exhibit a postprandial rise in blood flow to adipose tissue, obese individuals do not ([START_REF] Goossens | Increased Adipose Tissue Oxygen Tension in Obese Compared With Lean Men Is Accompanied by Insulin Resistance, Impaired Adipose Tissue Capillarization, and Inflammation[END_REF],148), indicating that O2 delivery to adipose tissue is indeed impaired in obesity. Third, various studies in different murine models of obesity have robustly shown that in obese mice, hypoxia-responsive gene expression is increased, hypoxic foci are more numerous (detected with hydroxyprobe systems such as pimonidazole), and adipose tissue oxygen partial pressure is lower (256,344,347). As a result of this hypoxic state, hypoxia-inducible factor (HIF) 1α, which has been described as the "master regulator of oxygen homeostasis" (261, 274, 317), is induced in adipose tissue. The molecular and cellular responses of mature adipocytes to reduced O2 tension have been intensively investigated (336). Hypoxia has been shown to dramatically modify the expression and/or release of leptin (increase), adiponectin (decrease) and inflammation-related proteins (IL-6, IL-1β, MCP-1), indicating the establishment of an inflammatory state (336). For that reason, hypoxia is postulated to explain the development of inflammation and is considered a major initiating factor for ECM production, thus triggering the subsequent metabolic dysfunction of adipose tissue in obesity (299, 317). Other described functional changes concern the rates of lipolysis and lipogenesis: lipolysis seems to be increased [START_REF] Geiger | Identification of hypoxia-induced genes in human SGBS adipocytes by microarray analysis[END_REF], while both lipogenesis and fatty acid uptake are decreased (232); hypoxia may also directly impair adipocyte insulin sensitivity (257). Other cell types present in adipose tissue have been shown to respond to hypoxia. Indeed, it has been clearly demonstrated that hypoxia induces a proinflammatory phenotype in macrophages (218). Moreover, macrophages have been localized to hypoxic areas of adipose tissue in obese mice, which augments their inflammatory response (256). In addition to macrophages, preadipocytes have been demonstrated to largely increase their production of both VEGF and leptin under hypoxic culture conditions. Conversely, PPARγ expression was reported to be dramatically diminished, thus reducing preadipocyte adipogenic abilities in a hypoxic environment (153).
Aging
With aging, adipose tissue changes in abundance, distribution, cell composition and endocrine signaling. Indeed, through middle/early old age, body fat percentage increases in both men and women (107,165,211), and fat shifts from subcutaneous depots to intra-abdominal visceral depots [START_REF] Enzi | Subcutaneous and visceral fat distribution according to sex, age, and overweight, evaluated by computed tomography[END_REF](235). Moreover, the aging process is accompanied by changes in adipose tissue metabolic functions, such as decreased insulin responsiveness and altered lipolysis, which could cause excessive free fatty acid release with subsequent ectopic lipid deposition and lipotoxicity [START_REF] Das | Caloric restriction, body fat and ageing in experimental models[END_REF][START_REF] Fukagawa | Loss of skeletal muscle mass with aging: effect on glucose tolerance[END_REF](287). From a metabolic point of view, the balance between fat storage and oxidation is disrupted with aging, and the capacity of tissues to oxidize fat progressively decreases. Therefore, it is likely that the increase in adiposity with aging is also due to a positive energy balance, with decreased physical activity and basal metabolic rate but maintained caloric intake [START_REF] Enzi | Subcutaneous and visceral fat distribution according to sex, age, and overweight, evaluated by computed tomography[END_REF](245). Thus, fat aging is associated with age-related diseases, lipotoxicity, and reduced longevity (216, 309). The aged adipose tissue is also characterized by reduced adipocyte size, fibrosis, endothelial dysfunction and diminished angiogenic capacity [START_REF] Donato | The impact of ageing on adipose structure, function and vasculature in the B6D2F1 mouse: evidence of significant multisystem dysfunction[END_REF]. Importantly, extensive changes in preadipocyte functions occur with aging [START_REF] Djian | Influence of anatomic site and age on the replication and differentiation of rat adipocyte precursors in culture[END_REF](154,155). These include decreased preadipocyte replication [START_REF] Djian | Influence of anatomic site and age on the replication and differentiation of rat adipocyte precursors in culture[END_REF], diminished adipogenic abilities (155), increased susceptibility to lipotoxicity (108), and increased production of pro-inflammatory cytokines, chemokines and ECM-modifying proteases [START_REF] Cartwright | Aging, depot origin, and preadipocyte gene expression[END_REF](310).
As in obesity, inflammation is a common feature of aging (215,295). In association with this low-grade inflammatory state, macrophages have been reported to accumulate with age in subcutaneous adipose tissue. Although no significant change was observed in the visceral depot, the ratio of pro-inflammatory M1 macrophages to anti-inflammatory M2 macrophages has been shown to increase with aging [START_REF] Garg | Changes in adipose tissue macrophages and T cells during aging[END_REF](185,187). Interestingly, T cell populations have also been reported to change with aging. Specifically, Treg cells accumulate to unusually high levels as a function of age and exacerbate both the decline of adipose metabolic function and the rise in insulin resistance [START_REF] Bapat | Depletion of fat-resident Treg cells prevents age-associated insulin resistance[END_REF](187). Aging is also linked to immunosenescence, a process leading to dysregulation of innate and adaptive immune responses (106, 241). Notably, T cell dysfunction has been described and might also lead to systemic increases in TNF-α, IL-6 and acute-phase proteins such as C-reactive protein and serum amyloid A [START_REF] Bruunsgaard | Age-related inflammatory cytokines and disease[END_REF](270). The "redox stress hypothesis" has also been proposed, whereby age-related redox imbalance activates various pro-inflammatory signaling pathways, leading to tissue "inflammaging" and immune deregulation (288). Of note, considerable accumulation of senescent cells has been reported in aging adipose tissue (309). Among the various changes that occur in senescent cells, the secretion of multiple cytokines, chemokines, growth factors, matrix metalloproteinases and other senescence-associated secretory phenotype (SASP) proteins has been shown to induce or sustain the age-related inflammatory state [START_REF] Coppe | Senescence-associated secretory phenotypes reveal cellnonautonomous functions of oncogenic RAS and the p53 tumor suppressor[END_REF](187,235,342). It was recently shown that removing senescent cells from older mice improves adipogenesis and metabolic function (342). The authors propose that senescent cell removal may facilitate healthy adipose tissue expansion, less ectopic fat formation and improved insulin sensitivity (235).
Circulating adipose stem/stromal cells
Ectopic fat deposition can also take the form of mature adipocytes, which "infiltrate" non-adipose organs such as muscle, pancreas and heart. In contrast to ectopic lipid formation, the causes and mechanisms responsible for ectopic adipocyte formation are largely unknown [START_REF] Bluher | Adipose tissue dysfunction in obesity[END_REF], as are their cellular origin and the mechanisms controlling their metabolic activity [START_REF] Addison | Intermuscular Fat: A Review of the Consequences and Causes[END_REF](248,313). As already discussed in the present review, adipose tissue depots undergo active remodeling throughout adulthood. Such remodeling requires the presence of precursor cells exhibiting adipogenic potential (272). A population of multipotent progenitors, the adipose-derived stem/stromal cells (ASCs, long identified as preadipocytes), has been shown by various studies, including ours, to exhibit such abilities [START_REF] Gimble | Adipose-derived adult stem cells: isolation, characterization, and differentiation potential[END_REF](204,205,262,275,352). ASCs, like their bone marrow counterparts the mesenchymal stem/stromal cells (MSCs), are endowed with multilineage mesodermal differentiation potential as well as regenerative abilities, leading to their extensive investigation from a therapeutic and tissue engineering perspective [START_REF] Ferraro | Adipose Stem Cells: From Bench to Bedside[END_REF][START_REF] Gimble | Human adipose-derived cells: an update on the transition to clinical translation[END_REF](158). Adipose tissue remodeling is frequently reported to be associated with the infiltration of various cell populations (226, 329). However, adipose tissue is rarely seen as a reservoir of exportable cells. Indeed, cell export, the so-called mobilization process, has essentially been studied in bone marrow (169). For instance, in response to stress or injury, hematopoietic stem/progenitor cells lose their anchorage in the bone marrow microenvironment and are increasingly mobilized into the circulation. Cell mobilization involves chemoattractants and adhesion molecules; among these factors, the chemokine CXCL12 and its receptor CXCR4 are dominant in controlling stem/progenitor cell trafficking [START_REF] Döring | The CXCL12/CXCR4 chemokine ligand/receptor axis in cardiovascular disease[END_REF](170,171). Interference with CXCL12/CXCR4-mediated retention is a fundamental mechanism of stem/progenitor cell mobilization. Such interference can be obtained by inducing (i) a CXCL12 decrease in the microenvironment through proteolysis by the protease dipeptidyl-peptidase 4 (DPP4, also known as CD26) [START_REF] Christopherson Kw 2nd | Cell surface peptidase CD26/DPPIV mediates G-CSF mobilization of mouse progenitor cells[END_REF], (ii) CXCL12 destabilization by MMP9, neutrophil elastase or cathepsin G (175), (iii) an increase in CXCL12 plasma levels, which favors CXCL12-induced migration of stem/progenitor cells into the circulation over their retention in the bone marrow (213), and (iv) CXCR4 antagonism, with AMD3100 for instance, which induces the fast release of stem/progenitor cells from the bone marrow into the circulation [START_REF] Dar | Rapid mobilization of hematopoietic progenitors by AMD3100 and catecholamines is mediated by CXCR4-dependent SDF-1 release from bone marrow stromal cells[END_REF].
We and others have reported that both human and murine native (freshly harvested) ASCs express functional CXCR4 [START_REF] Gil-Ortega | Native adipose stromal cells egress from adipose tissue in vivo: evidence during lymph node activation[END_REF](276). Moreover, we have also demonstrated for the first time that in vivo administration of AMD3100 (a CXCR4 antagonist) induces the rapid mobilization of ASCs from subcutaneous adipose tissue into the circulation [START_REF] Gil-Ortega | Ex vivo microperfusion system of the adipose organ: a new approach to studying the mobilization of adipose cell populations[END_REF][START_REF] Gil-Ortega | Native adipose stromal cells egress from adipose tissue in vivo: evidence during lymph node activation[END_REF].
Interestingly, obesity has been associated with an increased systemic circulation of MSCs, the tissue origin of which has not been identified [START_REF] Bellows | Influence of BMI on level of circulating progenitor cells[END_REF]. Moreover, while a reduction in CXCL12 levels has been demonstrated in adipose tissue with obesity (227), plasma CXCL12 levels have been shown to increase dramatically in the context of type 2 diabetes (147, 181).
Therefore, since we showed that subcutaneous adipose tissue releases adipose progenitors via a CXCL12/CXCR4-dependent mechanism, one can speculate that the unhealthy development of subcutaneous adipose tissue might trigger the aberrant release of adipose progenitors into the circulation and their further infiltration into non-adipose tissues, leading to ectopic adipocyte formation (Figure 4).
To sum up, the mechanisms driving the development of ectopic fat deposition and its consequences are summarized in Figure 4. What drives the development of one ectopic fat depot rather than another remains unknown and needs to be explored further in clinical and experimental settings.
EAT IMAGING
Noninvasive Imaging Quantification of EAT
EAT can be relatively easily assessed by a variety of imaging techniques, whose characteristics are summarized in Table 3. Epicardial fat quantification is usually performed on an examination acquired during a clinical work-up for a condition other than fat distribution assessment. In research settings, quantification of EAT is of major interest in several cardiac and metabolic diseases. The pericardium is the anatomical limit between epicardial and paracardial fat. As outlined earlier in this review, these two tissues have different embryonic origins (see the paragraph on EAT origin) and different vascularization, and their hypertrophy has different origins and consequences (265). The main problem for the quantification of epicardial fat is the precise definition of the anatomical limit of the pericardium. The normal pericardium is a very thin layer and requires cardiac ultrasound, gated MRI sequences or synchronized CT acquisitions to be depicted. Beyond the need for an acquisition that correctly depicts the pericardial layer, manual quantification of epicardial fat volume is time-consuming. Several teams have recently developed analysis software allowing semi-automatic quantification of epicardial fat (192,222,229). These tools are now available to the research community, and further progress should shorten the analysis phase.
Echocardiography
Quantification of epicardial fat using transthoracic echocardiography (TTE) is limited to measurements of fat thickness surrounding the right ventricle through one echoic window. Indeed, EAT is visible as an echo-free space between the outer wall of the myocardium and the visceral layer of the pericardium (Figure 5). The thickness of this space is measured on the right ventricular free wall in the parasternal long- and short-axis views, where EAT is thought to be thickest. This technique, which is the most accessible and affordable imaging modality, was described by the group of Iacobellis (125). Since the pericardium can be distinguished in a normal patient using TTE, differentiation of epicardial from paracardial fat is feasible with this modality.
Computed Tomography (CT)
CT is widely used for thoracic or cardiac diseases. The majority of clinical studies to date examining associations of epicardial fat depots with cardiovascular disease have utilized CT.
With high spatial resolution, pericardial fat can be readily and reproducibly identified with CT (Figure 6). Pericardial fat quantification is possible on non-synchronized images, but motion artefacts may impair the clear distinction between epicardial and paracardial fat [START_REF] Britton | Body fat distribution, incident cardiovascular disease, cancer, and all-cause mortality[END_REF].
Synchronized acquisitions such as calcium scoring and coronary CT angiography are now well-established examinations in clinical practice with a large number of indications. Distinction of the pericardial layer is facilitated by the excellent spatial definition and by the high contrast between the chest, pericardium, EAT and heart. Synchronized images show fewer artifacts, provide a more precise quantification of fat volume, and should be considered the standard of reference for fat volume quantification using CT (174). Iodine injection is not required for fat quantification, and acquisitions such as calcium scoring can be used for this purpose [START_REF] Cheng | Pericardial fat burden on ECG-gated noncontrast CT in asymptomatic patients who subsequently experience adverse cardiovascular events[END_REF]. Technical progress over the past 10 years has dramatically decreased the radiation exposure of a standard acquisition, with doses below 1 mSv for calcium scoring and coronary CT. Nevertheless, radiation exposure still limits the broad use of CT for fat quantification. Recent studies suggested that epicardial fat quantification can be performed semi-automatically with good accuracy, reducing the time required for quantification to less than 2 min [START_REF] Cheng | Pericardial fat burden on ECG-gated noncontrast CT in asymptomatic patients who subsequently experience adverse cardiovascular events[END_REF](292).
Magnetic Resonance Imaging (MRI)
MRI offers excellent spatial resolution and is considered today as the standard of reference for epicardial fat quantification (192). Furthermore, MRI is a valuable tool to assess other cardiac parameters such as function, myocardial fibrosis or intramyocardial fat content using proton spectroscopy [START_REF] Gaborit | Effects of bariatric surgery on cardiac ectopic fat: lesser decrease in epicardial fat compared to visceral fat loss and no change in myocardial triglyceride content[END_REF][START_REF] Gaborit | Assessment of epicardial fat volume and myocardial triglyceride content in severely obese subjects: relationship to metabolic profile, cardiac function and visceral fat[END_REF]. MRI acquisition does not involve ionizing radiation, making it the ideal imaging method for follow-up. Fat tissues have a low T1 value and appear as high signal on most sequences. Usually, cine steady-state free precession (SSFP) sequences are used to quantify fat volume: the contrast on SSFP images allows a precise distinction between paracardial and epicardial fat, and coverage of the whole ventricles is always performed in a standard cardiac MR acquisition (161). Recently, novel 3D Dixon acquisitions using cardiac synchronization and respiratory triggering have provided high accuracy and reproducibility for pericardial and epicardial fat quantification (118). The pericardium is usually well delineated on either the end-diastolic or end-systolic phase (Figure 7). Areas obtained for each slice are summed together and multiplied by the slice thickness to yield epicardial fat volume. Consistency between measurements at two different time points requires the definition of anatomical landmarks and the use of the same imaging parameters [START_REF] Gaborit | Effects of bariatric surgery on cardiac ectopic fat: lesser decrease in epicardial fat compared to visceral fat loss and no change in myocardial triglyceride content[END_REF]. Recently, software providing automatic quantification of epicardial fat has been described, with no difference compared to manual contouring and significant time savings, but to date these tools are not broadly available [START_REF] Torrado-Carvajal | Automated quantification of epicardial adipose tissue in cardiac magnetic resonance imaging[END_REF].
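In practice, the slice-summation method described above reduces to a simple discrete integral. A minimal worked form, with notation introduced here purely for illustration (A_i, the segmented EAT area on slice i; Δz, the slice thickness; N, the number of slices covering the heart):

$$V_{\mathrm{EAT}} \;=\; \sum_{i=1}^{N} A_{i}\,\Delta z$$

For example, with 12 slices of 8 mm thickness and a mean segmented area of 10 cm² per slice, this yields V_EAT ≈ 12 × 10 cm² × 0.8 cm = 96 cm³ (≈ 96 mL); these figures are arbitrary and chosen only to show the order of magnitude of the computation.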
What Should be Measured and How?
MRI is the only technique that has been validated in vivo in animal models (192,225). Mahajan et al. imaged 10 merino sheep at 1.5 T using cine steady-state free precession sequences in short axis covering the whole heart. End-diastolic images were used to quantify ventricular, atrial and total pericardial fat. Correlations between MRI and autopsy were strong (ICC > 0.8), and inter-observer 95% limits of agreement were 7.2% for total pericardial adipose tissue (192). No study has validated CT against histological quantification of adipose tissue but, based on current knowledge, one can assume that results would be similar to MRI. MRI and CT are the two techniques that can quantify the total amount of epicardial, paracardial and pericardial fat. Nevertheless, MRI should be preferred when possible because it does not involve ionizing radiation. Ultrasound is limited to the assessment of fat thickness in a single region. A recent study including 311 patients validated TTE against CT with the use of a high-frequency linear probe (r = 0.714, p < 0.001) (116). By contrast, one recent paper found no correlation between epicardial fat thickness measured using TTE and epicardial fat volume measured using MRI (281). This could be explained by the wide anatomical variability of cardiac fat distribution [START_REF] Bastarrika | Relationship between coronary artery disease and epicardial adipose tissue quantification at cardiac CT: comparison between automatic volumetric measurement and manual bidimensional estimation[END_REF]. Nevertheless, the localized thickness of epicardial fat might be a measure of interest to assess clinical risk. A recent paper showed that EAT thickness at the left atrioventricular groove, assessed on CT performed for calcium scoring, was the only parameter correlated with the number of vessels exhibiting ≥50% stenosis (338). Furthermore, some investigators found that epicardial fat thickness measured at the left atrioventricular groove was the best predictor of obstructive coronary artery disease (116,338). This finding was confirmed in a meta-analysis, but confirmation is needed in populations other than Asians (337).
EAT IN DISEASES
EAT and atrial fibrillation
Atrial fibrillation (AF) is caused by an interaction between an initiating trigger and the underlying atrial substrate, the latter being structural or electrical. AF is the most prevalent cardiac arrhythmia seen in clinical practice and is associated with increased morbidity and mortality, including stroke and heart failure (144,160,334). Previous studies have highlighted that obesity is an independent risk factor for new-onset AF (311, 327). In the general population, obesity increases the risk of developing AF by 49%, and the risk escalates in parallel with increasing BMI (326). Recently, evolving evidence has implicated EAT in the pathogenesis of AF. Numerous studies have confirmed the association between EAT abundance and AF risk, severity, and recurrence after ablation or electrical cardioversion [START_REF] Chekakie | Pericardial fat is independently associated with human atrial fibrillation[END_REF][START_REF] Chao | Epicardial adipose tissue thickness and ablation outcome of atrial fibrillation[END_REF][START_REF] Cho | Impact of duration and dosage of statin treatment and epicardial fat thickness on the recurrence of atrial fibrillation after electrical cardioversion[END_REF](219,221,312,335). This has been particularly observed in patients with persistent compared to paroxysmal AF [START_REF] Chekakie | Pericardial fat is independently associated with human atrial fibrillation[END_REF][START_REF] Batal | Left atrial epicardial adiposity and atrial fibrillation[END_REF](280). This association was found to be independent of total adiposity or left atrial enlargement (3).
In the Framingham Heart Study cohort including 3217 participants, CT-measured pericardial fat (but not VAT) was an independent predictor of prevalent AF, even after adjusting for established AF risk factors (age, sex, systolic blood pressure, PR interval, clinically significant valvular disease) and other measures of adiposity such as BMI or intrathoracic fat volume (312).
Interestingly, several studies have shown that EAT surrounding the atria, in particular, was linked to AF recurrence after catheter ablation (219,221,318). But what are the mechanisms involved in this association between EAT and AF? Does EAT modulate the trigger (initiation) or the substrate (maintenance) of AF?
Direct mechanisms
Histologically, there are no fascial boundaries separating EAT from the myocardium. Hence, direct infiltration of adipocytes within the atrial myocardium is not rare, as we observed in human atria (Figure 8). This could contribute to a remodeled atrial substrate and lead to conduction defects (conduction slowing or inhomogeneity) (112,335). In a diet-induced obese sheep model, Mahajan et al. showed major fatty infiltration of the atrial musculature (posterior left atrial wall) in obese sheep compared to controls (193). This sub-epicardial adipocyte infiltration, interspersed between cardiac myocytes, was associated with reduced posterior left atrial voltage and increased voltage heterogeneity in this region, suggesting that EAT could be a unique feature of the AF substrate (193). Such EAT infiltration could promote loss of side-to-side cell connections and conduction abnormalities in a way similar to microfibrosis (291). In 30 patients in sinus rhythm, prior to an AF ablation procedure, left atrial EAT was associated with lower bipolar voltage and electrogram fractionation (350). In the Framingham Heart Study cohort, Friedman et al. showed that pericardial fat was significantly associated with several P wave indices, such as P wave duration, even after adjustment for visceral and intrathoracic fat [START_REF] Friedman | Pericardial fat is associated with atrial conduction: the Framingham Heart Study[END_REF]. P wave indices (PWI) indeed represent a summation of the electrical vectors of atrial depolarization, reflecting the atrial activation sequence; they are also known as markers of atrial remodeling (249). Another small study, using a unique 3D merging process with dominant-frequency left atrial maps, found that EAT locations corresponded to sites of high dominant frequency during AF. High-dominant-frequency sites are key electrophysiological markers reflecting microreentrant circuits or sites of focal firing that drive AF [START_REF] Atienza | Mechanisms of fractionated electrograms formation in the posterior left atrium during paroxysmal atrial fibrillation in humans[END_REF](302).
Therefore, the overlap between EAT locations and high-dominant-frequency sites implies that EAT is likely to harbor high-frequency sites, producing favorable conditions for the perpetuation of AF. In vitro, incubation of isolated rabbit left atrial myocytes with EAT modulated the electrophysiological properties of the cells, leading to higher arrhythmogenicity (178). Altogether, these data suggest a possible role of EAT in shaping the electrophysiological substrate of AF.
Another important point is that EAT is the anatomical site of the intrinsic cardiac autonomic nervous system, namely the ganglionated plexi (GP) and interconnecting nerves, especially in the posterior wall around the pulmonary vein ostia (124). These ganglia are a critical element in the initiation and maintenance of AF [START_REF] Coumel | Paroxysmal atrial fibrillation: a disorder of autonomic tone?[END_REF](250). GP activation includes both parasympathetic and sympathetic stimulation of the atria and pulmonary veins adjacent to the GP.
Parasympathetic stimulation shortens the action potential duration, and sympathetic stimulation increases calcium loading and calcium release from the sarcoplasmic reticulum.
The combination of short action potential duration and prolonged calcium release induces triggered firing resulting from delayed after-depolarizations of the atria/pulmonary veins, manifested as high-dominant-frequency sites. Pulmonary vein isolation and radiofrequency ablation target sites for substrate modification overlap with most EAT sites (179,250,301). It has been suggested that EAT may have a physiological role in protecting these ganglia against the mechanical forces of cardiac contraction (266). By contrast, recent clinical data showed that periatrial EAT is an independent predictor of AF recurrence after ablation (157,202,219,296), supporting the notion that EAT may have a pro-arrhythmic influence.
Furthermore, since the electrical conductivity of fat is lower than that of atrial tissue, EAT volume may directly decrease the chances of procedural success (297).
Finally, a mechanical effect of EAT on left atrial pressure, stretch and wall stress, which are known to favor arrhythmias, cannot be excluded.
Indirect mechanisms
EAT is an endocrine organ and a source of pro-inflammatory cytokines (such as TNF-α, IL-1β, IL-6 and monocyte chemoattractant protein-1 (MCP-1)) and profibrotic factors (such as TGF-βs and MMPs) acting in a paracrine way on the myocardium (111,115,206). These molecules are thought to diffuse into the pericardial sac and contribute to the structural remodeling of the atria. Indeed, using a unique organo-culture model, we showed that the human EAT secretome induced marked fibrosis of rat atrial myocardium and favored the differentiation of fibroblasts into myofibroblasts (322). This effect was mediated in part by activin A, a member of the TGF-β family, and was blocked by an anti-activin A antibody (322). Constitutive TGF-β1 overexpression in a transgenic mouse model produces increased atrial fibrosis and episodes of inducible AF while the ventricle remains normal (220,231). These data suggest that EAT could interfere with cardiac electrical activity and with the electrophysiological remodeling of the atria. Consistent with this, we previously demonstrated using a transcriptomic approach that periatrial EAT has a unique signature, expressing genes implicated in cardiac muscle contraction and the intracellular calcium signaling pathway. Fibrosis is a central process in the alteration of the functional and structural properties of the atrial myocardium [START_REF] Burstein | Atrial fibrosis: mechanisms and clinical relevance in atrial fibrillation[END_REF](172). It causes interstitial expansion between bundles of myocytes. Dense and disorganized collagen weave fibrils physically separate cardiomyocytes and can create a barrier to impulse propagation (285,300). Other pro-fibrotic factors known to be secreted by EAT may also contribute to remodeling of the atrial myocardium. Matrix metalloproteinases (MMPs), key regulators of extracellular matrix turnover, are known to contribute to atrial fibrosis, are upregulated during AF, and their secretion is increased in EAT compared to SAT [START_REF] Boixel | Fibrosis of the left atria during progression of heart failure is associated with increased matrix metalloproteinases in the rat[END_REF](322).
Local inflammatory pathways may also influence structural changes in the left atrium and the occurrence of AF. EAT secretes a myriad of pro-inflammatory cytokines, such as IL-6, IL-8, IL-1β, TNF-α and MCP-1, that may have local effects on the adjacent atrial myocardium and may induce the migration of monocytes and immune cells (146,206). The pro-inflammatory activity of EAT adjacent to the left atrium, atrioventricular groove and left main artery, assessed with positron emission tomography (PET), was confirmed to be higher in AF compared with non-AF patients (207).
EAT is also an important source of reactive oxygen species (ROS), with a high oxidative stress activity that could be involved in the genesis of AF (271). Ascorbate, an antioxidant and peroxynitrite decomposition catalyst, has been shown to decrease atrial pacing-induced peroxynitrite formation in dogs and the incidence of postoperative AF in humans [START_REF] Carnes | Ascorbate attenuates atrial pacing-induced peroxynitrite formation and electrical remodeling and decreases the incidence of postoperative atrial fibrillation[END_REF]. This points to a role of oxidative stress and EAT-produced cytokines in atrial remodeling and arrhythmogenesis.
Taken together, these studies provide evidence that EAT, through mechanical, fibrotic, inflammatory and oxidative stress mechanisms, may affect both the atrial substrate and AF triggers (summarized in Figure 9). An improved understanding of how EAT modifies atrial electrophysiology and structure may yield novel approaches to preventing AF in obesity.
EAT and cardiac geometry and function
EAT has local effects on the structure and function of the heart. Numerous clinical studies have unveiled the association between EAT volume and early defects in cardiac structure, volume and function [START_REF] Corradi | The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts[END_REF][START_REF] Dabbah | Epicardial fat, rather than pericardial fat, is independently associated with diastolic filling in subjects without apparent heart disease[END_REF][START_REF] Fontes-Carvalho | Influence of epicardial and visceral fat on left ventricular diastolic and systolic functions in patients after myocardial infarction[END_REF][START_REF] Gaborit | Assessment of epicardial fat volume and myocardial triglyceride content in severely obese subjects: relationship to metabolic profile, cardiac function and visceral fat[END_REF](123,128,131,143,177,328,333). An increased amount of EAT has been associated with increased left ventricular (LV) mass and abnormal right ventricular geometry or subclinical dysfunction [START_REF] Gökdeniz | Relation of epicardial fat thickness to subclinical right ventricular dysfunction assessed by strain and strain rate imaging in subjects with metabolic syndrome: a twodimensional speckle tracking echocardiography study[END_REF](330). This is in accordance with initial necropsy and echographic studies showing that the increase in LV mass was strongly related to EAT, irrespective of CAD or hypertrophy [START_REF] Corradi | The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts[END_REF](128,131). In a study of 208 non-CAD patients evaluated by [15O]H2O hybrid positron emission tomography (PET)/CT imaging, EAT volume was associated with LV mass independently of BMI [START_REF] Bakkum | The impact of obesity on the relationship between epicardial adipose tissue, left ventricular mass and coronary microvascular function[END_REF]. EAT thickness and EAT volume were then associated with right ventricular and LV diastolic dysfunction, initially in severely obese patients and afterwards in various cohorts of subjects with impaired glucose tolerance and no apparent heart disease [START_REF] Dabbah | Epicardial fat, rather than pericardial fat, is independently associated with diastolic filling in subjects without apparent heart disease[END_REF][START_REF] Gaborit | Assessment of epicardial fat volume and myocardial triglyceride content in severely obese subjects: relationship to metabolic profile, cardiac function and visceral fat[END_REF](128,143,152,177,194,228,238,328). In 75 men with or without metabolic syndrome, the amount of EAT correlated negatively with all parameters of LV diastolic function (LV mass-to-volume ratio; end-diastolic, end-systolic, and indexed stroke volumes) and was an independent determinant of the LV early peak filling rate (228).
After myocardial infarction, EAT volume was also associated with LV diastolic function after adjustment for classical risk factors and other adiposity parameters [START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF]. By contrast, other studies have reported that myocardial fat, but not EAT, was independently associated with cardiac output and work [START_REF] Gaborit | Assessment of epicardial fat volume and myocardial triglyceride content in severely obese subjects: relationship to metabolic profile, cardiac function and visceral fat[END_REF](134). Myocardial fat can be assessed by proton magnetic resonance spectroscopy (1H-MRS). In another report, EAT was associated with LV mass and peak longitudinal and circumferential strains, and was a better indicator of cardiac remodeling and dysfunction than BMI z-score or VAT (139). Another study found a persistent association between regional EAT and LV function beyond serum levels of adipokines, which argues for a local EAT effect rather than a systemic VAT effect (122).
Healthy men aged 19-94 years were evaluated using speckle-tracking echocardiography (STE) to study the profile of the healthy aging heart. EAT was associated with longitudinal STE LV dyssynchrony, longitudinal strain, circumferential LV dyssynchrony, and LV twist [START_REF] Crendal | Increased myocardial dysfunction, dyssynchrony, and epicardial fat across the lifespan in healthy males[END_REF]. Furthermore, EAT and hepatic triglyceride content correlated negatively with peak circumferential systolic strain and diastolic strain rate in type 2 diabetes (174). However, this is not consistent with other studies reporting no link between EAT and geometry alterations or LV diastolic dysfunction [START_REF] Bonapace | Nonalcoholic fatty liver disease is associated with left ventricular diastolic dysfunction in patients with type 2 diabetes[END_REF](100,247,252). EAT has been associated with myocardial and hepatic steatosis, which are confounding factors (133,197). Whether EAT, VAT, hepatic fat or myocardial fat is the best predictor of LV function merits further evaluation, and large population studies assessing each ectopic fat depot are needed.
The impact of EAT on cardiac function is less evident at more advanced stages of disease.
Interestingly, reduced amounts of EAT were found in patients with congestive heart failure (HF) compared to patients with preserved systolic function [START_REF] Doesch | Epicardial adipose tissue in patients with heart failure[END_REF][START_REF] Doesch | Bioimpedance analysis parameters and epicardial adipose tissue assessed by cardiac magnetic resonance imaging in patients with heart failure[END_REF](132). Furthermore, EAT reduction was predictive of cardiac death in these patients [START_REF] Doesch | Bioimpedance analysis parameters and epicardial adipose tissue assessed by cardiac magnetic resonance imaging in patients with heart failure[END_REF]. A reduction of EAT volume with increasing severity of right ventricular systolic dysfunction was also demonstrated in patients with chronic obstructive pulmonary disease (145). EAT reduction might reflect a global fat mass reduction due to the disease (124). Burgeiro et al. found reduced glucose uptake, lipid storage and inflammation-related gene expression in EAT of patients with heart failure compared to SAT [START_REF] Burgeiro | Glucose uptake and lipid metabolism are impaired in epicardial adipose tissue from heart failure patients with or without diabetes[END_REF]. However, the triggering factors causing EAT diminution and phenotype modification in heart failure are still under investigation.
How can EAT participate in and initiate LV dysfunction? First, EAT could mechanically enhance LV afterload, which could lead to increased LV output and stroke volume to enable adequate myocardial perfusion. EAT may act as a local energy supplier and/or as a buffer against toxic levels of free fatty acids in the myocardium (198). EAT was found to have enhanced adrenergic activity, with increased catecholamine levels and expression of catecholamine biosynthetic enzymes, so that EAT could directly contribute to the sympathetic nervous system hyperactivity in the heart that accompanies and fosters myocardial sympathetic denervation. Indeed, Parisi et al. studied the relationship between EAT and sympathetic nerve activity assessed by 123I-metaiodobenzylguanidine (123I-MIBG) in patients with HF (237). They found that EAT thickness correlated with cardiac sympathetic denervation and that EAT represented an important source of norepinephrine, whose levels were 2-fold higher than those found in plasma. Because of the proximity of EAT to the myocardium, the increased catecholamine content of this tissue could exert negative feedback on cardiac sympathetic nerves, thus inducing a functional and anatomic denervation of the heart (237).
Alternatively, secretory products of EAT and an imbalance between anti-inflammatory and pro-inflammatory adipocytokines could participate in myocardial remodeling [START_REF] Gaborit | Epicardial fat: more than just an "epi" phenomenon?[END_REF]. The contribution of EAT to cardiac fibrosis, a substratum widely recognized to impair cardiac function, has recently been demonstrated (see also EAT and atrial fibrillation above) (322). EAT, through its capacity to produce and secrete adipo-fibrokines and miRNAs, could be a main mechanism contributing to the excess deposition of extracellular matrix proteins, which distorts organ architecture, induces pathological signaling and impairs the mechano-electric coupling of cardiomyocytes (163, 291). However, heart fibrosis and the molecular characteristics of EAT have never been studied simultaneously in humans. In vitro studies from the group of Eckel have demonstrated in both guinea pigs and humans that secreted factors from EAT can affect contractile function and insulin signaling in cardiomyocytes (103, 104). High-fat feeding of guinea pigs induces qualitative alterations in the secretory profile of EAT, which contributes to impaired rat cardiomyocyte function, as illustrated by defects in insulin signaling, sarcomere shortening, cytosolic Ca2+ metabolism and SERCA2a expression (104). Likewise, rat cardiomyocytes treated with the secretome of EAT from diabetic patients showed reductions in sarcomere shortening, cytosolic Ca2+ fluxes and expression of sarcoplasmic/endoplasmic reticulum Ca2+ ATPase 2a. This suggests that EAT could contribute to the pathogenesis of cardiac dysfunction in type 2 diabetes, even though the development of cardiac dysfunction is likely to be multifactorial, with insulin resistance, myocardial fibrosis, endothelial dysfunction, autonomic dysfunction and myocyte damage probably implicated. The reciprocal crosstalk between EAT, myocardium and epicardium is even more complex than first suggested. Indeed, as described above in the paragraph on EAT origin, signals from necrotic cardiomyocytes could induce epicardium-to-fat transition, which may increase EAT volume and in turn modulate heart disease evolution.
Altogether, the available studies in humans do not imply causality but suggest that the accumulation of EAT is at least an indirect marker of early cardiac dysfunction at selected stages of disease progression. Large cohorts extensively evaluating all ectopic fat depots and comprehensively characterizing cardiac geometry and function across the lifespan are needed.
EAT and coronary artery disease
Histological and radiological evidence
Despite our limited understanding of the physiological role of EAT, many studies published in recent years have underscored the strong association of EAT with the onset and development of coronary artery disease (CAD) in humans [START_REF] Chechi | Thermogenic potential and physiological relevance of human epicardial adipose tissue[END_REF][START_REF] Clement | Weight of Pericardial Fat on Coronaropathy[END_REF](234). Initially, a plausible role of EAT in CAD was supported by the histological observation that segments of coronary arteries running in a myocardial bridge (i.e. free of any immediately adjacent epicardial fat) tended to be free from atherosclerosis (135,260). Necropsy studies then demonstrated that EAT was more abundant in patients who died from CAD and correlated with CAD staging (284). Since then, and although correlations do not necessarily prove causation, a growing body of imaging studies using echocardiography (thickness), computed tomography (CT, reviewed elsewhere (293)) or magnetic resonance imaging (MRI) has confirmed the association of EAT with CAD [START_REF] Gorter | Relation of epicardial and pericoronary fat to coronary atherosclerosis and coronary artery calcium in patients undergoing coronary angiography[END_REF](101,105,156,190,212,264,305,324). Initial large population studies, including the Framingham Heart Study and the Multi-Ethnic Study of Atherosclerosis, identified pericardial fat as an independent predictor of cardiovascular risk [START_REF] Ding | The association of pericardial fat with incident coronary heart disease: the Multi-Ethnic Study of Atherosclerosis (MESA)[END_REF](191). Compared to the Framingham Risk Score, a pericardial fat volume >300 cm3 was by far the strongest predictor of coronary atherosclerosis (OR 4.1, 95% CI 3.63-4.33) (101).
Other studies highlighted the add-on predictive value of EAT compared to CAD scores such as the coronary calcium score (CAC) (113,138,173). EAT significantly correlated with the extent and severity of CAD, chest pain, unstable angina and coronary flow reserve (233, 269).
In addition, case-control studies identified pericardial fat volume as a strong predictor of myocardial ischemia (113,305). By contrast, some studies did not find such an association between EAT and the extent of CAD in intermediate-to-high-risk patients, suggesting that the relationship is not constant at more advanced stages (263,306). Interestingly, in the positive studies linking EAT with CAD and the development of high-risk obstructive plaques, the association was independent of adiposity measures, BMI and the presence of coronary calcifications (128,136). Recent studies indicated that EAT could also serve as a marker of the presence and severity of atherosclerotic burden in asymptomatic patients [START_REF] Bachar | Epicardial adipose tissue as a predictor of coronary artery disease in asymptomatic subjects[END_REF](346), with a threshold EAT thickness identified at 2.4 mm [START_REF] Bachar | Epicardial adipose tissue as a predictor of coronary artery disease in asymptomatic subjects[END_REF]. All these findings are highly suggestive of a role for EAT in promoting the early stages of atherosclerotic plaque formation. In highly selected healthy volunteers, we reported that a higher EAT volume was associated with a decreased coronary microvascular response, suggesting that EAT could participate in endothelial dysfunction [START_REF] Gaborit | Epicardial fat volume is associated with coronary microvascular response in healthy subjects: a pilot study[END_REF]. Using intravascular ultrasound, it was demonstrated that plaques develop most frequently with a pericardial spatial orientation, suggesting a permissive role of EAT (251).
EAT and clinical outcomes
More recently, the Heinz Nixdorf Recall study, including more than 4000 patients from the general population, confirmed the predictive role of EAT on clinical outcomes over 8 years (189). In this prospective trial, EAT volume significantly predicted fatal and nonfatal coronary events independently of cardiovascular risk factors and CAC score. Subjects in the highest EAT quartile had a 4-fold higher risk of coronary events when compared to subjects in the lowest quartile (4.7 versus 0.9 %, p<0.001). In addition, a doubling of EAT volume was associated with a 1.5-fold adjusted risk of coronary events [hazard ratio (HR), 1.54; 95% CI, 1.09-2.19] (189). A recent meta-analysis evaluating 411 CT studies confirmed EAT as a prognostic metric for future adverse clinical events (binary cut-off of 125 mL) (293). This cut-off needs to be evaluated further in prospective cohorts in order to assess the relevance of its introduction into clinical care. To date, there is a lack of agreement on the EAT threshold value associated with increased CAD risk, as various methods are used for its assessment (see the Imaging paragraph). In conclusion, across all these clinical studies, EAT volume emerges as a strong independent predictor of CAD. Nevertheless, whether a reduction in the amount of EAT could reduce CAD in humans remains to be established.
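To put the per-doubling hazard ratio reported above in perspective, and assuming the log-linear dose-response model that such a per-doubling HR implies (an assumption on our part, not a result reported by the study), the adjusted risk scales multiplicatively with successive doublings of EAT volume:

$$\mathrm{HR}\!\left(2^{k} V_{0}\ \text{vs}\ V_{0}\right) = 1.54^{\,k}, \qquad \text{e.g. } \mathrm{HR}\!\left(4 V_{0}\ \text{vs}\ V_{0}\right) = 1.54^{2} \approx 2.4 .$$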
Pathophysiology of EAT in CAD
The mechanisms by which EAT can cause atherosclerosis are complex and not completely understood. Epicardial fat might alter the coronary arteries through multiple pathways, including oxidative stress, endothelial dysfunction, vascular remodeling, macrophage activation, innate inflammatory responses, and plaque destabilization (124, 243).
1/ EAT has a specific profile in coronary artery disease
EAT in CAD displays a pro-inflammatory phenotype, high levels of ROS and a specific pattern of microRNAs. Epicardial adipocytes have intrinsic proinflammatory and atherogenic secretion profiles [START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF][START_REF] Cheng | Adipocytokines and proinflammatory mediators from abdominal and epicardial adipose tissue in patients with coronary artery disease[END_REF]. In 2003, Mazurek et al. first reported that, in CAD patients, EAT exhibited significantly higher levels (gene expression and protein secretion) of chemokines such as monocyte chemotactic protein-1 (MCP-1) and of several inflammatory cytokines (IL-6, IL-1β, and TNF-α) than SAT (206). They also observed the presence of an inflammatory cell infiltrate including macrophages, lymphocytes and mast cells in EAT compared to SAT. The presence of these inflammatory mediators was hypothesized to accentuate vascular inflammation, plaque instability via apoptosis (TNF-α), and neovascularization (MCP-1).
Peri-adventitial application of endotoxin, MCP-1, IL-1β, or oxidized LDL induces inflammatory cell influx into the arterial wall, coronary vasospasm, or intimal lesions, which suggests that bioactive molecules from the pericoronary tissues may alter arterial homeostasis (279). These observations support the concept of "outside-to-inside" cellular crosstalk, or "vasocrine/paracrine signaling", whereby inflammatory mediators or free fatty acids produced by EAT adjacent to the coronary artery may have a locally toxic effect on the vasculature, diffusing passively or via the vasa vasorum through the arterial wall, as depicted in Figure 10 (38,266,348). Migration of immune cells between EAT and the adjacent adventitia may also occur (133). Nevertheless, direct proof that these mechanisms operate in vivo is lacking. Since then, other groups have confirmed that EAT is a veritable endocrine organ and a source of a myriad of bioactive, locally acting molecules (266). The adiponectin content and release of EAT were consistently found to be decreased in CAD patients, suggesting that an imbalance between antiatherogenic, insulin-sensitizing and harmful adipocytokines secreted by EAT could initiate inflammation in the vascular wall [START_REF] Cheng | Adipocytokines and proinflammatory mediators from abdominal and epicardial adipose tissue in patients with coronary artery disease[END_REF](129,278). Innate immunity represents one of the potential pathways for proinflammatory cytokine release. Innate immunity can be activated via the toll-like receptors (TLRs), which recognize antigens such as lipopolysaccharide (LPS) (141). Activation of TLRs leads to the translocation of NFκB into the nucleus to initiate the transcription and release of IL-6, TNF-α, and resistin [START_REF] Creely | Lipopolysaccharide activates an innate immune system response in human adipose tissue in obesity and type 2 diabetes[END_REF](164). Remarkably, Baker et al. showed that NFκB is activated in EAT of CAD patients [START_REF] Baker | Epicardial adipose tissue as a source of nuclear factor-kappaB and c-Jun N-terminal kinase mediated inflammation in patients with coronary artery disease[END_REF].
TLR-2, TLR-4 and TNF-α gene expression was higher in EAT of CAD patients and was closely linked to the presence of activated macrophages in the EAT. In another study, EAT amount positively correlated with CD68+ and CD11c+ cell numbers and with NLRP3 inflammasome, IL-1β, and IL-1R expression. The NLRP3 inflammasome is a sensor in the nod-like receptor family of the innate immune cell system that activates caspase-1 and mediates the processing and release of IL-1β, and thereby has a central role in the inflammatory response [START_REF] Baroja-Mazo | The NLRP3 inflammasome is released as a particulate danger signal that amplifies the inflammatory response[END_REF]. Interestingly, the ratio of proinflammatory M1 macrophages to anti-inflammatory M2 macrophages in EAT was reported to be shifted toward the M1 phenotype in patients with CAD (115). More recently, Patel et al. nicely demonstrated the implication of the renin-angiotensin system in EAT inflammation (239). In a model of mice lacking angiotensin-converting enzyme 2 (ACE2) submitted to a high-fat diet (HFD), loss of ACE2 resulted in decreased weight gain but increased glucose intolerance and EAT inflammation. Ang 1-7 treatment ameliorated EAT inflammation, reduced cardiac steatosis and lipotoxicity, and improved cardiac function (239). MicroRNAs could also be an important actor of this crosstalk between EAT and the coronary artery wall. Indeed, miRNAs are small, non-coding RNAs acting as post-transcriptional regulators of gene expression, either interfering with protein translation or reducing transcript levels (176). An integrative miRNA and whole-genome analysis of EAT identified a signature of miRNAs in EAT of CAD patients (320). The authors described that EAT in CAD displays affected metabolic pathways, with suppression of lipid- and retinoid-sensing nuclear receptor transcriptional activities, increased inflammatory infiltrates, activation of innate and adaptive immune responses, enhanced chemokine signalling (CCL5, CCL13, and CCL5R) and decreased miR-103-3p as prominent features (320).
Furthermore, higher levels of reactive oxygen species (ROS) and lower expression of antioxidant enzymes (such as catalase) have been observed in EAT of individuals with CAD compared with SAT (Figure 10) (271). On the other hand, EAT might also contribute to the accumulation of oxidized lipids within atherosclerotic plaques, as we evidenced increased expression and secretion of secretory type II phospholipase A2 (sPLA2-IIa) in EAT of CAD patients [START_REF] Dutour | Secretory Type II Phospholipase A2 Is Produced and Secreted by Epicardial Adipose Tissue and Overexpressed in Patients with Coronary Artery Disease[END_REF].
2/ EAT plays a pivotal role in the initiation of atherosclerosis
The negative impact of the EAT secretome on adjacent coronary arteries in CAD has been clearly demonstrated. In vitro studies revealed that EAT-secreted fatty acids, inflammatory and stress mediators, together with migrated immune cells, may induce endothelial dysfunction and vascular remodeling. EAT can affect the endothelium by inducing cell-surface expression of adhesion molecules such as VCAM-1, and it enhances the migration of monocytes to coronary artery endothelial cells (146). Moreover, it has been demonstrated that the permeability of endothelial cells in vitro was significantly increased after exposure to EAT supernatant from patients with acute coronary syndrome, an effect normalized by anti-resistin antiserum (167).
Payne et al. showed that perivascular EAT-derived leptin selectively impaired coronary endothelium-dependent dilation in Ossabaw swine with metabolic syndrome (242). Other in vitro studies support a role of perivascular adipose tissue in vascular remodeling (243).
Conditioned medium of cultured perivascular adipocytes from HFD rats was found to significantly stimulate vascular smooth muscle cell proliferation [START_REF] Barandier | Mature adipocytes and perivascular adipose tissue stimulate vascular smooth muscle cell proliferation: effects of aging and obesity[END_REF]. Other in vitro studies highlighted the role of peri-adventitial fat in neointimal formation after angioplasty (303,304). Finally, in a recent study involving Ossabaw miniature swine, selective surgical excision of EAT surrounding the left anterior descending artery was associated with slower progression of coronary atherosclerosis over a period of 3 months on an atherogenic diet (210). Although this study was preliminary and lacked controls, these results support the hypothesis that EAT could locally contribute to the initiation of coronary atherosclerosis, and further suggest that targeting its reduction could slow CAD progression.
To conclude, EAT is not simply a marker of CAD but seems to play a key role in the initiation of coronary atherosclerosis. EAT has also been studied in obese patients with severe obstructive sleep apnea (OSA) treated by continuous positive airway pressure (CPAP), in whom it was related to left ventricular remodeling [START_REF] Barone-Rochette | Left ventricular remodeling and epicardial fat volume in obese patients with severe obstructive sleep apnea treated by continuous positive airway pressure[END_REF]. These data are consistent with previous studies supporting a negative role of EAT on cardiac function [START_REF] Cavalcante | Association of epicardial fat, hypertension, subclinical coronary artery disease, and metabolic syndrome with left ventricular diastolic dysfunction[END_REF][START_REF] Dabbah | Epicardial fat, rather than pericardial fat, is independently associated with diastolic filling in subjects without apparent heart disease[END_REF][START_REF] Fontes-Carvalho | Influence of epicardial and visceral fat on left ventricular diastolic and systolic functions in patients after myocardial infarction[END_REF](128,130,143,174,238).
The prognostic impact of EAT reduction by CPAP therapy on cardiovascular outcomes needs to be further explored in large prospective studies. In all, EAT is increased in OSA patients and correlates with OSA severity. Additionally, CPAP therapy can significantly reduce the amount of EAT. Further large prospective studies are needed to evaluate the effect of CPAP therapy on EAT quantity, phenotype, and secretome.
Conclusion and perspectives
To conclude, the unique anatomical location of epicardial adipose tissue likely translates into a unique physiological relevance and pathophysiological role for this cardiac ectopic depot. Far from being an inert and uniform tissue, EAT has been shown to be a dynamic organ with highly developed functions and a unique transcriptome, determined by its developmental epicardial origin, its regenerative potential, and its molecular structure. It was poorly studied for a long time because of the small amount of EAT found in rodents and because biological studies require samples collected during open cardiac surgery. Since then, imaging studies have provided new non-invasive tools for EAT quantification, and recent studies have paved the way for identifying new cellular characteristics of EAT by measuring its radiodensity [START_REF] Baba | CT Hounsfield units of brown adipose tissue increase with activation: preclinical and clinical studies[END_REF][START_REF] Franssens | Relation between cardiovascular disease risk factors and epicardial adipose tissue density on cardiac computed tomography in patients at high risk of cardiovascular events[END_REF][START_REF] Gaborit | Looking beyond ectopic fat amount: A SMART method to quantify epicardial adipose tissue density[END_REF].
In addition, an increase in epicardial fat results in an increased propensity not only for the onset but also for the progression and severity of CAD and atrial fibrillation in humans. Many intervention studies have shown that EAT is flexible and modifiable through weight loss induced by diet, GLP-1 receptor agonists or bariatric surgery [START_REF] Dutour | Exenatide decreases Liver fat content and Epicardial Adipose Tissue in Patients with obesity and Type 2 Diabetes: A prospective randomised clinical trial using Magnetic Resonance Imaging and Spectroscopy[END_REF](254). The type of intervention, in addition to the amount of weight loss achieved, is predictive of the amount of EAT reduction. Hence, this depot represents a therapeutic target for the management of CAD and should be further assessed to help identify CAD risk. Whether its reduction will lead to a reduction in cardiac events or cardiac rhythm disorders needs to be addressed in randomized controlled studies. The effect of EAT on cardiac autonomic nerves and the cardiac conduction system also needs to be further explored.
Furthermore, EAT has a beige profile that decreases with age and CAD. In support of this, there is evidence of brown-to-white adipocyte trans-differentiation in CAD patients, with a decrease in thermogenic genes and up-regulation of white adipogenesis [START_REF] Aldiss | Browning" the cardiac and peri-vascular adipose tissues to modulate cardiovascular risk[END_REF][START_REF] Dozio | Increased reactive oxygen species production in epicardial adipose tissues from coronary artery disease patients is associated with brown-to-white adipocyte trans-differentiation[END_REF].

Figure 4. Main factors leading to ectopic fat deposition in humans. FFA: free fatty acids. In an obesogenic environment and chronic positive energy balance, the ability of subcutaneous adipose tissue (SAT) to expand and to store the excess free fatty acids is crucial in preventing the accumulation of fat in ectopic sites and the development of obesity complications. Healthy SAT and gynoid obesity are associated with a protective phenotype, with less ectopic fat and metabolically healthy obesity, while dysfunctional SAT and android obesity are associated with more visceral fat and ectopic fat accumulation and an increased risk of type 2 diabetes, metabolic syndrome and coronary artery disease (CAD). Inflammation, profibrotic processes, hypoxia and aging could also contribute to ectopic fat development. Mobilization and release of adipose progenitors (adipose-derived stem/stromal cells, ASCs) into the circulation, and their further infiltration into non-adipose tissues leading to ectopic adipocyte formation, also cannot be excluded.

Figure 9. This figure summarizes the possible mechanisms that could link EAT with atrial fibrillation. EAT expansion-induced mechanical stress, direct adipocyte infiltration within the atrial myocardium, inflammation, oxidative stress, and EAT-produced adipo-fibrokines are thought to participate in the structural and electrical remodeling of the atria and in cardiac autonomic nervous system activation, hence promoting arrhythmogenesis.
Figure 10. This figure illustrates a transversal and longitudinal view of EAT surrounding a coronary artery. As there is no fascia separating EAT from the vessel wall, free fatty acids or proinflammatory cytokines produced by EAT could diffuse passively or via the vasa vasorum through the arterial wall and participate in the early stages of atherosclerotic plaque formation (endothelial dysfunction, ROS production, oxidized LDL uptake, monocyte transmigration, smooth muscle cell proliferation, transformation of macrophages into foam cells). An imbalance between antiatherogenic and harmful adipocytokines secreted by EAT could initiate inflammation in the intima. Innate immunity can be activated via the toll-like receptors (TLRs), which recognize antigens such as lipopolysaccharide (LPS). Activation of TLRs leads to the translocation of NFκB into the adipocyte nucleus to initiate the transcription and release of proinflammatory molecules such as IL-6, TNF-α, and resistin. The NLRP3 inflammasome is a sensor in the nod-like receptor family of the innate immune cell system that activates caspase-1 and mediates the processing and release of IL-1β by the adipocyte, and thereby has a central role in the EAT-induced inflammatory response.
contributing to the excess deposition of extracellular matrix proteins, which distort organ architecture, induce pathological signaling and impair the mechano-electric coupling of cardiomyocytes (163, 291). However, a concomitant study of heart fibrosis and of EAT molecular characteristics has never been performed in humans. In vitro studies from the group of Eckel have demonstrated, in both guinea pigs and humans, that factors secreted from EAT can affect contractile function and insulin signaling in cardiomyocytes (103, 104). High-fat feeding of guinea pigs induces qualitative alterations in the secretory profile of EAT, which contributes to the induction of impaired rat cardiomyocyte function, as illustrated by impairments in insulin signaling, sarcomere shortening, cytosolic Ca2+ metabolism and SERCA2a expression (104). Rat cardiomyocytes treated with the secretome of EAT from diabetic patients showed reductions in sarcomere shortening, cytosolic Ca2+ fluxes, and expression of sarcoplasmic/endoplasmic reticulum Ca2+ ATPase 2a. This result suggests that EAT could contribute to the pathogenesis of cardiac dysfunction in type 2 diabetes, even though the development of cardiac dysfunction is likely to be multifactorial, with insulin resistance, myocardial fibrosis, endothelial dysfunction, autonomic dysfunction and myocyte damage probably implicated.
in patients with CAD (115). More recently, Patel et al. nicely demonstrated the implication of the renin-angiotensin system in the inflammation of EAT (239). In a model of mice lacking angiotensin-converting enzyme 2 (ACE2) subjected to a high-fat diet (HFD), loss of ACE2 resulted in decreased weight gain but increased glucose intolerance and EAT inflammation. Ang 1-7 treatment resulted in ameliorated EAT inflammation, reduced cardiac steatosis and lipotoxicity, and improved cardiac function (239).
Figure 1: Layers of the heart and pericardium. Scheme demonstrating epicardial fat between …
Figure 2: Epicardial adipose tissue among species; anterior and posterior heart photographic …
Figure 3: The origin of epicardial adipose tissue. Epicardial adipocytes derived from …
Figure 4: Main factors leading to ectopic fat deposition in humans. FFA: free fatty acids; …
Figure 7: MR short axis cine sequences at the diastolic phase A, with contouring of the heart …
Figure 1. Teaching points: a variety of terms including "epicardial", "pericardial", "paracardial" and "intra-thoracic" have been used in the literature to describe ectopic fat depots in proximity to the heart or within the mediastinum. The use of these terms appears to be a point of confusion, as there is varied use of definitions. Of particular confusion is the term used to define the adipose tissue located within the pericardial sac, between the myocardium and the visceral pericardium. This has previously been described in the literature as "pericardial fat", while other groups have referred to it as "epicardial fat". As illustrated in Figure 1, the most accurate term for the adipose tissue fully enclosed in the pericardial sac that directly surrounds the myocardium and coronary arteries is EAT. Pericardial fat (PeriF) refers to paracardial fat (ParaF) plus all adipose tissue located internal to the parietal pericardium. PeriF = ParaF + EAT.
Figure 2. This figure illustrates the relative amount of epicardial adipose tissue among species. Humans and swine have much more EAT than rodents.
Figure 3. This figure illustrates the origin of epicardial adipose tissue. Epicardial adipocytes have a mesothelial origin and derive mainly from the epicardium. Cells originating from the Wt1+ (Wilms' tumor gene Wt1) mesothelial lineage can differentiate into EAT, and this epicardium-to-fat transition (ETFT) fate can be reactivated after myocardial infarction.
Figure 4. This figure illustrates the mechanisms driving the development of ectopic fat deposition and its consequences. In an obesogenic environment and chronic positive energy balance, the ability of subcutaneous adipose tissue (SAT) to expand and to store the free fatty acids in excess is crucial in preventing the accumulation of fat in ectopic sites and the development of obesity complications. Healthy SAT and gynoid obesity are associated with a protective phenotype with less ectopic fat and metabolically healthy obesity, while dysfunctional SAT and android obesity are associated with more visceral fat and ectopic fat accumulation, with an increased risk of type 2 diabetes, metabolic syndrome and coronary artery disease (CAD). Inflammation or profibrotic processes, hypoxia, and aging could also contribute to ectopic fat development. Mobilization and release of adipose progenitors (adipose-derived stem/stromal cells, ASCs) into the circulation and their further infiltration into non-adipose tissues, leading to ectopic adipocyte formation, also cannot be excluded.
Figures 5 to 7. These figures illustrate imaging techniques for EAT quantification. MRI remains the standard reference for adipose tissue quantification. The major advantage of this technique is its excellent spatial resolution and the possible distinction between paracardial and epicardial fat. The major limitation of echocardiography is its 2D approach (thickness measurement). The major limitation of computed tomography remains its radiation exposure.
Figure 8. This figure illustrates microscopic images of human atrial epicardial adipose tissue and myocardium. One can observe fatty infiltration of the myocardium with EAT, i.e. direct adipocyte infiltration into the underlying atrial myocardium, associated with fibrosis. Such direct adipocyte infiltration separating myocytes is thought to induce a remodeled atrial substrate and lead to conduction defects (conduction slowing or inhomogeneity).
Myocardial steatosis (assessed by proton magnetic resonance spectroscopy, 1H-MRS) refers to the storage of triglyceride droplets within cardiomyocytes, which can generate toxic lipid intermediates (i.e. ceramides), endoplasmic reticulum stress, mitochondrial dysfunction and lipotoxicity (209). In the physiologically aging male heart, myocardial triglyceride content increased in association with the decline in diastolic function and could thus be a potential confounding factor (133). Although these clinical studies do not infer causality, they point to a possible early impact of cardiac adiposity on LV remodeling and function.
More recently, using newly innovative methods such as speckle tracking echocardiography (STE) or cardiovascular magnetic resonance (CMR) displacement-encoded imaging, subtle changes in cardiac structure, contractile dysfunction and myocardial dyssynchrony were associated with EAT volume. Indeed, cardiac mechanics (strain, torsion, and synchrony of contraction) are more sensitive measures of heart function that may detect subtle abnormalities preceding clinical manifestations. Using CMR in 41 obese children, Jing et al. showed that, early in life, obese children develop contractile dysfunction with higher LV mass indexed to height compared to healthy-weight children (139). In this study, EAT was linked to
of atherosclerosis, by secreting locally many bioactive molecules such as fatty acids, inflammatory, immune, and stress factors, cytokines or chemokines. Current investigations are being done to comprehensively understand how factors produced by EAT are able to cross the vessel wall, and what initiates or precedes the change in EAT phenotype. An imbalance between the protective and the deleterious factors secreted by EAT, and between the pro- and anti-inflammatory immune cells, is likely to trigger CAD development. Despite all the described findings, the pathophysiological link between EAT and CAD needs to be elucidated further, and we really need interventional studies to investigate whether EAT reduction could reduce clinical outcomes.
EAT and obstructive sleep apnea
Obstructive sleep apnea (OSA) is a sleep disorder characterized by repetitive episodes of upper airway obstruction during sleep, resulting in decreased oxygen saturation, disruption of sleep, and daytime somnolence (71). Repetitive apneic events disrupt the normal physiologic interactions between sleep and the cardiovascular system (289, 314). Such sleep fragmentation and cyclic upper airway obstruction may result in hypercapnia and chronic intermittent hypoxemia, which have been linked to increased sympathetic activation, vascular endothelial dysfunction, increased oxidative stress, inflammation, decreased fibrinolytic activity, and metabolic dysregulation (62, 142, 149, 255). Hence OSA could contribute to the initiation and progression of cardiac and vascular disease. Conclusive data implicate OSA in the development of hypertension, CAD, congestive heart failure, and cardiac arrhythmias (277, 290). We previously reported that EAT is sensitive to OSA status and that bariatric surgery had little effect on epicardial fat volume (EFV) loss in OSA patients (86). It is tempting to hypothesize that OSA-induced chronic intermittent hypoxia could modify the phenotypic features of EAT and may be an initiator of adipose tissue remodeling (fibrosis or inflammation). However, this has never been investigated in EAT yet.
Two recent studies have reported a relationship between epicardial fat thickness and OSA severity (184, 200). Mariani et al. reported a significant positive correlation between EFT and the apnea/hypopnea index (AHI), and EFT values were significantly higher in the moderate and severe OSA groups compared to the mild OSA group (200). A similar study was conducted by Lubrano et al. in 171 obese patients with and without metabolic syndrome, in which EFT rather than BMI was the best predictor of OSA (184). Treatment of OSA with continuous positive airway pressure (CPAP) during 24 weeks significantly reduced EFT in 28 symptomatic OSA patients with AHI > 15, without significant change in BMI or waist circumference (36). A shorter term of CPAP treatment (3 months) in 25 compliant OSA patients also reduced EFT (159), but in another study EAT remained higher in CPAP-treated obese OSA patients (n=19, mean BMI 38 ± 4 kg/m2) compared to age-matched healthy subjects (n=12), and CPAP was not sufficient to alleviate left ventricular concentric hypertrophy, as assessed by mass-cavity ratio, the latter being independently correlated with EAT.
The thermogenic potential of EAT may represent a useful beneficial property, and another unique target for therapeutic interventions. This is an attractive avenue of research in that the understanding of EAT browning and of the factors able to induce the browning of fat is mounting daily. Further experimental research is hence warranted to enhance our understanding of the thermogenic and whole-body energy expenditure potential of EAT, as well as its potential flexibility with lifestyle, medical or surgical treatments.
Finally, additional research and understanding on adipose tissue biology in general, and on the mechanisms responsible for ectopic fat formation, are needed in the future. Whether epicardium-to-fat-transition reactivation exists in humans, whether unhealthy subcutaneous adipose tissue could trigger the release of adipose progenitors such as adipose-derived stem/stromal cells into the circulation, and whether these adipogenic cells could reach the heart and give rise to new adipocyte development in EAT is a fascinating area of interest for the next years.
Tables
Table 1. Main anatomical and physiological properties of EAT
Localization: Between the myocardium and the visceral layer of the pericardium
Anatomical and functional proximity: Myocardium, coronary arteries, nerves and ganglionated plexi
Origin: Epicardium
Blood supply: Branches of the coronary arteries
Color: White and beige
Cells: Small adipocytes; mixed cellularity with stromal preadipocytes, fibroblasts, macrophages, mast cells, lymphocytes (immune cells)
Metabolism: High lipogenesis and lipolysis; thermogenesis
Secretome: Source of a myriad of adipocytokines, chemokines, growth factors, FFA
Way of action: Mainly local: paracrine and vasocrine
Transcriptome: Extracellular matrix remodeling, inflammation, immune signaling, coagulation, thrombosis, beiging and apoptosis enriched pathways
Protective actions: Arterial pulse wave, vasomotion; thermogenic potential; autonomic nervous system; immune defence; regeneration potential (epicardium-to-fat transition)
Table 2. Human EAT bioactive molecules
Category | Biomarkers | Expression | Pathological state | References
 | α1-glycoprotein | mRNA | CAD | Fain et al., 2010
 | Chemerin | protein, mRNA | CAD | Spiroglou et al., 2010
 | CRP | secretion | CAD | Baker et al., 2006
 | Haptoglobin | mRNA | CAD | Fain et al., 2010
Proinflammatory cytokines | sICAM-1 | mRNA | CAD | Karastergiou et al., 2010
 | IL-1β | protein, mRNA, secretion | CAD | Mazurek et al., 2003
 | IL-1Rα | secretion | CAD, obesity | Karastergiou et al., 2010
 | IL-6 | protein, mRNA, secretion | CAD | Mazurek et al., 2003; Kremen et al., 2006
Acknowledgements
We are grateful to Michel Grino, Marc Barthet, Marie Dominique Piercecchi-Marti, and Franck Thuny for their help in collecting rat, swine, and human pictures.
Cardiovasc Imaging 8, 2015.
101. Greif M, Becker A, von Ziegler F, Lebherz C, Lehrke M, Broedl UC, Tittus J, Parhofer K, Becker C, Reiser M, Knez A, Leber AW. Pericardial adipose tissue determined by dual source CT is a risk factor for coronary atherosclerosis. Arterioscler Thromb Vasc Biol 29: 781-786, 2009.
102. Greulich S, Chen WJY, Maxhera B, Rijzewijk LJ, van der Meer RW, Jonker JT, Mueller H, de Wiza DH, Floerke R-R, Smiris K, Lamb HJ, de Roos A, Bax JJ, Romijn JA, Smit JWA, Akhyari P, Lichtenberg A, Eckel J, Diamant M, Ouwens DM. Cardioprotective properties of omentin-1 in type 2 diabetes: evidence from clinical and in vitro studies. PloS One 8: e59697, 2013.
103. Greulich S, Maxhera B, Vandenplas G, de Wiza DH, Smiris K, Mueller H, Heinrichs J, Blumensatt M, Cuvelier C, Akhyari P, Ruige JB, Ouwens DM, Eckel J. Secretory products from epicardial adipose tissue of patients with type 2 diabetes mellitus induce cardiomyocyte dysfunction. Circulation 126: 2324-2334, 2012.
104. Greulich S, de Wiza DH, Preilowski S, Ding Z, Mueller H, Langin D, Jaquet K, Ouwens DM, Eckel J. Secretory products of guinea pig epicardial fat induce insulin resistance and impair primary adult rat cardiomyocyte function. J Cell Mol Med 15: 2399-2410, 2011.
105. Groves EM, Erande AS, Le C, Salcedo J, Hoang KC, Kumar S, Mohar DS, Saremi F, Im J, Agrawal Y, Nadeswaran P, Naderi N, Malik S. Comparison of epicardial adipose tissue volume and coronary artery disease severity in asymptomatic adults with versus without diabetes mellitus. Am J Cardiol 114: 686-691, 2014.
106. Gruver AL, Hudson LL, Sempowski GD. Immunosenescence of ageing. J Pathol 211: 144-156, 2007.
107. Guo SS, Zeller C, Chumlea WC, Siervogel RM. Aging, body composition, and lifestyle: the Fels Longitudinal Study. Am J Clin Nutr 70: 405-411, 1999.
108. Guo W, Pirtskhalava T, Tchkonia T, Xie W, Thomou T, Han J, Wang T, Wong S, Cartwright A, Hegardt FG, Corkey BE, Kirkland JL. Aging results in paradoxical susceptibility of fat cell progenitors to lipotoxicity. Am J Physiol Endocrinol Metab 292: E1041-51, 2007.
109. Gupta OT, Gupta RK. Visceral Adipose Tissue Mesothelial Cells: Living on the Edge or Just Taking Up Space? Trends Endocrinol Metab TEM 26: 515-523, 2015.
110. Han JM, Wu D, Denroche HC, Yao Y, Verchere CB, Levings MK. IL-33 Reverses an Obesity-Induced Deficit in Visceral Adipose Tissue ST2+ T Regulatory Cells and Ameliorates Adipose Tissue Inflammation and Insulin Resistance. J Immunol 194: 4777-
Cross references
Ectopic lipid and inflammatory mechanisms of insulin resistance |
01677497 | en | ["chim", "chim.mate"] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01677497/file/Michau_19836.pdf | A Michau
F Maury
F Schuster
R Boichot
M Pons
E Monsifrot
email: [email protected]
Chromium carbide growth at low temperature by a highly efficient DLI-MOCVD process in effluent recycling mode
Keywords: MOCVD process, bis(arene)chromium
The effect of direct recycling of effluents on the quality of CrxCy coatings grown by MOCVD using direct liquid injection (DLI) of bis(ethylbenzene)chromium(0) in toluene was investigated. The results are compared with those obtained using non-recycled solutions of precursor. Both types of coatings exhibit the same features. They are amorphous in the temperature range 673-823 K. They exhibit a dense and glassy-like microstructure and a high hardness (> 23 GPa). Analyses at the nanoscale revealed a nanocomposite microstructure consisting of free-C domains embedded in an amorphous Cr7C3 matrix characterized by strong interfaces and leading to an overall carbon content slightly higher than in Cr7C3. The stiffness and strength of these interfaces are mainly due to at least two types of chemical bonds between Cr atoms and free-C: (i) Cr intercalation between graphene sheets and (ii) hexahapto η6-Cr bonding on the external graphene sheets of the free-C domains. The density of these interactions was found to increase with decreasing concentration of the injected solution, as occurs when using a recycled solution. As a result, "recycled" coatings exhibit a higher nanohardness (29 GPa) than "new" coatings (23 GPa). This work demonstrates that, using bis(arene)M(0) precursors, direct recycling of effluents is an efficient route to improve the conversion yield of the DLI-MOCVD process, making it cost-effective and competitive to produce protective carbide coatings of transition metals which share the same metal zero chemistry.
Introduction
For a better control of production cost of manufactured objects that comprise CVD coatings, the economic performance of deposition processes is an important need. The increasing use of metalorganic precursors is a way to reduce the cost of large-scale CVD process because this greatly lowers the deposition temperatures leading to substantial energy savings. This is evidenced for instance by the growth of metallic Cr at 673 K by DLI-MOCVD [START_REF] Michau | Evidence for a Cr metastable phase as a tracer in DLI-MOCVD chromium hard coatings usable in high temperature environment[END_REF] in comparison with the industrial chromizing method of pack cementation which operates at about 1273 K.
A way to reduce the cost of CVD products is to repair the coating or to recycle the substrate. Indeed, though the coating and the substrate generally form strong and inseparable pairs there are examples where the substrate can be separated and recycled in CVD process to reduce the production cost. For instance in diamond coated cutting tools the worn coating was removed to apply a new one by the same CVD process [START_REF] Liu | Recycling technique for CVD diamond coated cutting tools[END_REF] and, in the graphene CVD synthesis the Cu substrate used as catalyst was recycled after delamination because it is an expensive substrate [START_REF] Wang | Electrochemical delamination of CVD-grown graphene film: toward the recyclable use of copper catalyst[END_REF].
Another way to improve the economic performance of CVD is to implement the recycling of effluents. Recycling in CVD processes is barely mentioned in basic books on the technique, although it is important for applications [START_REF] Rees | Introduction[END_REF]. When expensive molecular precursors are used, as for the deposition of precious metals, the by-products are collected at the exit of the CVD reactor, leading recyclers and traders to develop in parallel complex effluent treatments, either to refine and reuse the collected precursor or to transform the by-products and reuse the pure metal [START_REF]Recycling International, Japanese recycling process for ruthenium precursors[END_REF]. This approach is also applied in high volume CVD production facilities. For instance, a hydrogen recycle system was proposed recently for CVD of poly-Si [START_REF] Revankar | CVD-Siemens reactor process hydrogen recycle system[END_REF]; in this case it is the carrier gas which is recycled. Also in the growth of Si for solar cells, the exhaust gases (H2, HCl, chlorosilanes) were collected, separated and recycled [START_REF]Poly plant project, off-gas recovery & recycling[END_REF]. Generally these strategies reduce the production cost, but they do not act directly on the CVD process itself, since the precursor is not directly recycled in a loop.
One of the advantages of CVD processes is the deposition of uniform coatings on 3D components with a high conformal coverage. This is achieved when the process operates in the chemical kinetic regime, i.e. at low pressure and low temperature. However, under these particular conditions the conversion efficiency of the reactants is low (typically < 30%). Consequently, to develop large-scale CVD processes using expensive reactants, recycling of the precursor becomes necessary to achieve a high conversion yield. For instance, in the CVD production of boron fibers the selective condensation of unconverted BCl3 is reused directly in the growth process [START_REF] Rees | Introduction[END_REF]. Also the gas mixture CH4/H2 recycling for diamond growth was reported [START_REF] Lu | Economical deposition of a large area of high quality diamond film by a high power DC arc plasma jet operating in a gas recycling mode[END_REF] and a closed gas recycling CVD process has been proposed for solar grade Si [START_REF] Noda | Closed recycle CVD process for mass production of SOG-Si from MG-Si[END_REF]. Furthermore, it was shown that a recycle loop is very useful for the management of the axial coating thickness uniformity of poly-Si in a horizontal low pressure CVD reactor [START_REF] Collingham | Effect of recycling on the axial distribution of coating thickness in a low pressure CVD reactor[END_REF], in agreement with the fact that the regime of the reactor is close to that of a Continuous Stirred Tank Reactor, as previously demonstrated [START_REF] Jensen | Modeling and analysis of low pressure CVD reactors[END_REF]. In these few examples the precursor is a hydride or a halide. Metalorganic precursors have become very important CVD sources thanks to the diversity of their molecular structures, which allows control of their chemical, physical and thermal properties. This allows satisfying the stringent requirements of the CVD process, e.g. low deposition temperature, high quality of the coatings, etc. Direct recycling of effluents using metalorganic precursors had not been reported because the growth occurs at lower temperature than in hydride and halide chemistry and, in this condition, the quality of the layer strongly depends on the metal source, which motivates many studies on molecular precursors [START_REF] Rees | Introduction[END_REF][START_REF] Maury | Selection of metalorganic precursors for MOCVD of metallurgical coatings: application to Cr-based coatings[END_REF][START_REF] Jones | CVD of Compound Semiconductors[END_REF][START_REF] Kodas | The Chemistry of Metal CVD[END_REF]. Furthermore, these compounds generally undergo complex decomposition mechanisms producing many unstable metal-containing by-products. Kinetics plays a major role and the growth occurs far from thermodynamic equilibrium. Examples of the complexity of the decomposition pathways of Cr precursors are reported in [START_REF] Rees | Introduction[END_REF][START_REF] Kodas | The Chemistry of Metal CVD[END_REF][START_REF] Maury | Evaluation of tetra-alkylchromium precursors for OMCVD: Ifilms grown using Cr[CH 2 C(CH 3 ) 3 ] 4[END_REF]. The bis(arene)M(0) precursors, where M is a transition metal in the oxidation state zero of columns 5 and 6, are an important family of CVD precursors for the low temperature deposition of carbides, nitrides and even metal coatings. This is supported by several works using these precursors for carbides of V [START_REF] Abisset | Low temperature MOCVD of V-C-N coatings using bis(arene) vanadium as precursors[END_REF], Nb [17], Ta [17], Cr [START_REF] Anantha | Chromium deposition from dicumene-chromium to form metal-semiconductor devices[END_REF][START_REF] Maury | Structural characterization of chromium carbide coatings deposited at low temperature by LPCVD process using dicumene chromium[END_REF][START_REF] Schuster | Influence of organochromium precursor chemistry on the microstructure of MOCVD chromium carbide coatings[END_REF][START_REF] Polikarpov | Chromium films obtained by pyrolysis of chromium bisarene complexes in the presence of chlorinated hydrocarbons[END_REF], Mo [START_REF] Whaley | Carbonaceous solid bodies and processes for their manufacture[END_REF] and W [START_REF] Whaley | Carbonaceous solid bodies and processes for their manufacture[END_REF], nitrides of V [START_REF] Abisset | Low temperature MOCVD of V-C-N coatings using bis(arene) vanadium as precursors[END_REF] and Cr [START_REF] Schuster | Characterization of chromium nitride and carbonitride coatings deposited at low temperature by OMCVD[END_REF], and metallic V [START_REF] Abisset | Low temperature MOCVD of V-C-N coatings using bis(arene) vanadium as precursors[END_REF] and Cr [START_REF] Maury | Low temperature MOCVD routes to chromium metal thin films using bis(benzene)chromium[END_REF][START_REF] Luzin | Chromium films produced by pyrolysis of its bis-arene complexes in the presence of sulfur-containing additives[END_REF], as well as nanostructured multilayer Cr-based coatings [START_REF] Maury | Multilayer chromium based coatings grown by atmospheric pressure direct liquid injection CVD[END_REF].
Chromium carbides are of great interest as tribological coatings for the protection of steel and metallic alloy components owing to their good resistance to corrosion and wear and their high hardness and melting point. They are used in many fields such as transports (automobile, shipping, aeronautic), mechanical and chemical industries and tools [START_REF] Drozda | Tool and manufacturing engineers handbook[END_REF][START_REF] Bryskin | Innovative processing technology of chromium carbide coating to apprise performance of piston rings[END_REF].
Our growing knowledge of the growth mechanisms of Cr-based coatings [START_REF] Maury | Low temperature MOCVD routes to chromium metal thin films using bis(benzene)chromium[END_REF][START_REF] Vahlas | A thermodynamic approach to the chemical vapor deposition of chromium and of chromium carbides starting from Cr(C 6 H 6 ) 2[END_REF], thermodynamic modeling without [START_REF] Vahlas | A thermodynamic approach to the chemical vapor deposition of chromium and of chromium carbides starting from Cr(C 6 H 6 ) 2[END_REF] and with direct liquid injection (DLI) to feed the reactor [START_REF] Douard | Thermodynamic simulation of Atmospheric DLI-CVD processes for the growth of chromium based hard coatings using bis(benzene) chromium as molecular source[END_REF], and the determination of a kinetic model and the simulation of the CVD process [START_REF] Michau | Chromium Carbide Growth by Direct Liquid Injection Chemical Vapor Deposition in Long and Narrow Tubes, Experiments, Modeling and Simulation[END_REF] led us to study the effect of direct recycling of effluents on the quality of chromium carbide (CrxCy) coatings grown by DLI-CVD using bis(ethylbenzene)chromium(0) as a representative of this family. The results are compared with those obtained using a non-recycled solution of precursor. Both types of coatings exhibit the same features (composition, structure, hardness), demonstrating that, using this specific chemical system, direct recycling of effluents is an efficient route to improve the conversion yield of the DLI-MOCVD process, making it very competitive for the development of industrial applications. The barely significant difference in hardness is commented on, and selection criteria for molecular precursors are also discussed so that they can be implemented in CVD processes with recycling of effluents.
Experimental
Deposition process
The growth was carried out at low temperature by direct liquid injection of metalorganic precursors in a CVD reactor (namely DLI-MOCVD process). It is a horizontal, hot-wall, Pyrex tubular reactor (300 mm long and 24 mm in internal diameter) with an isothermal zone around 150 mm. Stainless steel (304 L) plates and Si(100) wafers passivated by an amorphous SiN x thin layer acting as a barrier were used as substrates. They were placed on a planar horizontal sample-holder in the isothermal zone. More details are reported elsewhere [START_REF] Michau | Chromium Carbide Growth by Direct Liquid Injection Chemical Vapor Deposition in Long and Narrow Tubes, Experiments, Modeling and Simulation[END_REF]. The total pressure was automatically monitored and kept constant at 6.7 kPa and deposition temperature was set at 723 K.
Commercial bis(ethylbenzene)chromium (BEBC) from Strem (CAS 12212-68-9) was used as the chromium precursor. It is in fact a viscous liquid mixture of several bis(arene)chromium compounds with the general formula [(C2H5)xC6H6−x]2Cr where x = 0-4, BEBC being the major constituent. A solution in anhydrous toluene (99.8%) from Sigma-Aldrich (CAS 108-88-3) was prepared under inert atmosphere with a concentration of 3 × 10⁻¹ mol·L⁻¹ (4 g of BEBC in 50 mL of toluene). This precursor solution was injected into a flash vaporization chamber heated at 473 K using a Kemstream pulsed injector device. A liquid flow rate of 1 mL·min⁻¹ was set by adjusting the injection parameters in the ranges: frequency 1-10 Hz and opening time 0.5-5 ms. Nitrogen was used as carrier gas with a 500 sccm mass flow rate and was heated to approximately 453 K before entering the flash vaporization chamber to prevent condensation.
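As a quick plausibility check of these figures, the sketch below recomputes the solution molarity, taking the dominant bis(ethylbenzene)chromium species Cr(C8H10)2 as representative of the mixture, and illustrates how a mean liquid flow maps to injection frequency; the volume-per-pulse constant is a hypothetical calibration value for a given opening time, not a specification of the Kemstream injector (Python):

# Molarity check: 4 g of BEBC in 50 mL of toluene, taking Cr(C8H10)2 as the
# dominant species of the bis(arene)chromium mixture (x = 0-4).
M_BEBC = 51.996 + 2 * (8 * 12.011 + 10 * 1.008)   # g/mol, ~264.3
c = 4.0 / M_BEBC / 0.050
print(f"Concentration: {c:.2f} mol/L")             # ~0.30 mol/L, as stated

# Flow-rate tuning: mean flow = frequency x volume per pulse.
# VOLUME_PER_PULSE_ML is a hypothetical calibration constant used only to
# illustrate the tuning logic of a pulsed injector.
VOLUME_PER_PULSE_ML = 2.5e-3
for f_hz in (2.0, 5.0, 8.0):
    flow = f_hz * VOLUME_PER_PULSE_ML * 60.0       # mL/min
    print(f"{f_hz:.0f} Hz -> {flow:.2f} mL/min")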
In this paper, "new" coatings refer to coatings elaborated using a freshly prepared liquid solution of as-received precursor and solvent, while "recycled" coatings concern coatings deposited using directly a recycled liquid solution of precursor, by-products and solvent. The same experimental parameters were used for new and recycled coatings: temperatures, pressure, injection parameters, carrier gas flow rate (the deposition time was about 1 h to inject about 160 mL of solution). The only difference was that the precursor concentration of the recycled solution was significantly lower due to consumption during previous runs. As a result, the growth rate in recycling mode was significantly lower. No attempt was made to change the deposition parameters in order to compare the characteristics of the coatings under identical growth conditions. The main CVD conditions are reported in Table 1.
The recycling mode investigated at this stage was based on an open loop. This means the gaseous by-products leaving the CVD reactor were forced to pass through a liquid nitrogen trap. Thus, undecomposed molecules of BEBC and solvent were condensed in a tank with the reaction by-products (except light hydrocarbons such as C2 species, whose trapping is not effective because of their high volatility). After returning to room temperature, a homogeneous "daughter" liquid solution was obtained and stored in a pressurized tank under argon before further use. Several CVD runs were required to recover a sufficient amount of "daughter" solution for a recycled run. For example, we succeeded in obtaining a 1.5 μm thick coating with a recycled solution originating from two "new" deposition experiments which had each produced a 5 μm thick coating.
Each trapped solution could also be analyzed, for instance by UV spectrophotometry, to determine its precursor concentration. Indeed, BEBC exhibits a characteristic absorption band around 315 nm that could be used to measure the concentration according to the Beer-Lambert law (Supplementary material, Fig. S1). Also, as an improvement of the process, a closed-loop recycling system could be installed and automated (currently in progress).
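A minimal sketch of this concentration check is given below; the molar absorptivity EPS_315 is a placeholder that would in practice come from a calibration curve such as Fig. S1 (it is not a published constant for BEBC), and a dilution step is assumed since the trapped solutions are far too concentrated for direct UV measurement (Python):

# Beer-Lambert estimate of the residual BEBC concentration: A = eps * l * c
EPS_315 = 1.0e3     # L.mol-1.cm-1, hypothetical calibration value at 315 nm
PATH_CM = 1.0       # standard 1 cm cuvette

def bebc_concentration(absorbance, dilution_factor=1.0):
    """BEBC concentration (mol/L) of the stock from absorbance at 315 nm."""
    return dilution_factor * absorbance / (EPS_315 * PATH_CM)

c = bebc_concentration(0.45, dilution_factor=100.0)
print(f"{c:.3f} mol/L ({100 * c / 0.30:.0f} % of the fresh 0.3 mol/L solution)")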
Coating characterization
The surface morphology and cross sections of coatings were characterized by scanning electron microscopy (SEM; Leo-435VP), and by electron probe micro-analysis (EPMA; Cameca SXFive) for the chemical composition. The crystalline structure was investigated at room temperature and ambient atmosphere by X-ray diffraction (XRD) in 2θ range [8-105°] using a Bruker D8-2 diffractometer equipped with a graphite monochromator (Bragg-Brentano configuration; Cu K α radiation). The microstructure of the coatings was also studied by transmission electron microscopy (TEM; Jeol JEM 2100 equipped with a 200 kV FEG, and a Bruker AXS Quantax EDS analyzer). For TEM observations, the samples were cut and thinned perpendicular to the surface by mechanical polishing, then prepared by a dimpling technique and thinned using a precision ion polishing system (PIPS, Gatan). By this method, the electron transparent area that can be observed is a cross section including the coating and the interface with the substrate.
The chemical environment of each element of the coating was investigated by X-ray photoelectron spectroscopy (XPS; Thermo Scientific K-Alpha) equipped with a monochromatic Al X-ray source and a low energy Ar + gun (1 keV) for surface cleaning and depth profile analysis. Raman spectroscopy (Horiba Jobin Yvon LabRAM HR 800 with a 532 nm laser) was also used to analyze chemical bonding, in particular CeC bonds.
Hardness and Young's modulus were determined by nanoindentation using a Nano Scratch Tester (CSM Instruments). The loading/unloading cycle was from 0 to 30 mN at 60 mN·min⁻¹, a pause of 30 s, then from 30 to 0 mN at 60 mN·min⁻¹. With this cycle the indenter penetration depth was lower than 1/10 of the coating thickness. For the thickest coatings, Vickers hardness was also measured using a BUEHLER OmniMet 2100 tester.
Results
General appearance and morphology
New and recycled coatings grown in the conditions reported in Table 1 exhibit the same glassy and dense microstructure, typical of an amorphous material. Interestingly, no grain boundary is observed by SEM, even at higher magnification than in Fig. 1, and this will be confirmed by the TEM analysis in Section 3.3. They have a metallic glossy appearance and a mirror-like surface morphology. The surface roughness of CrxCy coatings on Si substrates measured by AFM gave similar values for new and recycled coatings, typically rms = 18 ± 2 nm. Both types of coatings exhibit a very good conformal coverage on substrates with high surface roughness and on non-planar surfaces (edges, trenches…; not shown here). This is a great advantage of low pressure DLI-MOCVD, which combines a high diffusivity of the gaseous reactants with decomposition at low temperature, leading to growth in the reaction-controlled regime. The only difference at this stage is the growth rate: recycled coatings were deposited at a lower growth rate than new coatings because of the lower precursor concentration of the recycled solution. Of course, this can be adjusted later.

Structure

A typical XRD pattern of a new coating 3.4 μm thick grown at 723 K on a Si substrate is presented and compared to a recycled coating in Fig. 2. In both cases, there is no evidence of diffraction peaks of polycrystalline phases. The pattern of the new coating is characteristic of an amorphous material, with 4 weak and broad bumps around 2θ = 13.8°, 28.6°, 42.5° and 79.0° (better seen in the inset zoom). No difference was found for the recycled coating, except that, due to its lower thickness, a broad peak originating from the substrate (amorphous SiNx barrier layer) is observed at about 69° and the small bump at 79° is not as well marked.

The last two bumps from the coating, at 42.5° and 79.0°, correspond to amorphous chromium carbides such as Cr7C3 (JCPDS 00-036-1482) and Cr3C2 (JCPDS 35-804), which both exhibit their main diffraction peaks in these two angular ranges. The FWHM of the most intense bump at 42.5° gives an average size of coherent domains close to 1 nm using Scherrer's formula, confirming the amorphous character of this carbide phase.

In carbon-containing materials the presence of graphite crystallites is evidenced by the diffraction of the (002) plane, expected at 2θ = 26.6° in well-crystallized graphite. However, disorder in the hexagonal graphite structure (e.g. inside and between basal planes, stacking order between the graphene sheets, slipping out of alignment, folding…) leads to broadening and shifting of this peak. For instance, pyrolytic carbon can exhibit a more or less disordered turbostratic or graphitic structure. Consequently, the bump at 28.6° is assigned to pyrolytic carbon nanoparticles, namely free-C.

The first bump, at 2θ = 13.8°, is related neither to amorphous chromium carbides nor to free-C. At this small angle (large interplanar spacing), this could be a compound with a lamellar structure derived from graphite, such as graphite oxide (GO). Indeed, it has been reported that the GO (001) plane diffracts from 2θ = 2 to 12° depending on the presence of oxygen-containing groups [START_REF] Blanton | Characterization of X-ray irradiated graphene oxide coatings using X-ray diffraction, X-ray photoelectron spectroscopy, and atomic force microscopy[END_REF]. However, the oxygen content of our coatings is lower than 5 at.% (Table 2), which is too low to support this hypothesis. Therefore the first bump at 13.8° was assigned to another derivative of graphite: the intercalation of Cr atoms between two graphene sheets, as in a graphite intercalation compound (GIC). Recently there have been several reports dealing with the interactions between Cr and different forms of carbon, including graphene, nanotubes and fullerenes. For instance, a structural feature of the functionalization of the graphene surface is the grafting of Cr, which recreates locally the same type of bonding as in bis(arene)chromium, i.e. with an η6-bonding to the aromatic cycles [START_REF] Bui | Graphene-Cr-Graphene intercalation nanostructures: stability and magnetic properties from density functional theory investigations[END_REF][START_REF] Sarkar | Organometallic chemistry of[END_REF]. On the other hand, in sandwich graphene-Cr-graphene nanostructures, the representative distance of an ordered stacking is twice the spacing between two consecutive graphene sheets, i.e. 6.556 Å, because Cr cannot be intercalated in two consecutive interlayer spaces, as described in [START_REF] Bui | Graphene-Cr-Graphene intercalation nanostructures: stability and magnetic properties from density functional theory investigations[END_REF][START_REF] Sarkar | Organometallic chemistry of[END_REF]. This corresponds to a diffraction angle 2θ = 13.5°, considering the (001) plane, which is very close to our 13.8° experimental value (Fig. 2).
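As a hedged illustration of the Scherrer estimate quoted above for the 42.5° bump, the sketch below converts the bump width into a coherent-domain size; the FWHM value is illustrative (read from the broad bump of Fig. 2, not tabulated in the text), and K = 0.9 is the usual shape-factor assumption (Python):

import math

WAVELENGTH_NM = 0.15406  # Cu K-alpha
K = 0.9                  # Scherrer shape factor, usual assumption

def scherrer_size(two_theta_deg, fwhm_deg):
    """Average size (nm) of coherent diffraction domains from peak width."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)  # peak breadth approximated by the FWHM
    return K * WAVELENGTH_NM / (beta * math.cos(theta))

# A FWHM of ~8.5 deg for the bump at 2theta = 42.5 deg gives ~1 nm domains,
# consistent with the amorphous character of the carbide phase.
print(f"{scherrer_size(42.5, 8.5):.2f} nm")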
Microstructure
At magnifications ranging from approximately those of the SEM images (Fig. 1) up to high resolution, TEM images showed the same glassy microstructure for the CrxCy coatings, as can be seen in Fig. 3a. Again, no significant difference was found for recycled coatings. A high resolution TEM analysis (Supplementary material, Fig. S2) revealed a dense and very finely granular structure with a homogeneous and monodisperse distribution of contrasted domains. The average size of these domains is of the order of magnitude of 1 nm, in good agreement with the value found by XRD.
The selected area electron diffraction pattern of the micrograph shown in Fig. 3a revealed two diffuse rings (Fig. 3b), as for the high resolution TEM analysis in Fig. S2. They are centered on interplanar distances of 2.097 Å and 1.223 Å. In accordance with the Bragg relation, they correspond to theoretical XRD diffraction angles of 2θ = 43.1° and 78.1°. Therefore, the inner and outer rings on the TEM diffraction patterns correspond to the first and the second CrxCy bumps on the XRD pattern, found at 2θ = 42.5° and 79.0°, respectively (Fig. 2). This is supported by the fact that both crystalline Cr7C3 and Cr3C2 phases show their strongest XRD contributions in the 2θ range 39-44° and also exhibit a second bunch of peaks with a lower intensity around 2θ = 80°. The two bumps on the XRD pattern at 2θ = 13.8° and 28.6°, assigned to GIC and free-C respectively, were not seen on the TEM diffraction pattern, likely because they were too weak and diffuse for this technique.
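The conversion from TEM ring spacings to equivalent XRD angles is a direct application of the Bragg relation; the short sketch below reproduces the two values quoted above for Cu Kα radiation (Python):

import math

WAVELENGTH = 1.5406  # angstrom, Cu K-alpha, to compare TEM rings with XRD

def two_theta_from_d(d_spacing):
    """Equivalent XRD diffraction angle 2-theta (deg) for a d-spacing (angstrom)."""
    return 2.0 * math.degrees(math.asin(WAVELENGTH / (2.0 * d_spacing)))

# Diffuse rings measured on the TEM diffraction pattern of Fig. 3b
for d in (2.097, 1.223):
    print(f"d = {d} A  ->  2theta = {two_theta_from_d(d):.1f} deg")
# -> 43.1 and 78.1 deg, matching the two Cr-C bumps of the XRD pattern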
Chemical composition
The atomic compositions of new and recycled coatings determined by EPMA are reported in Table 2. No significant difference is observed between both coatings; the composition is typically Cr0.64C0.33O0.03. The level of oxygen contamination is slightly higher in recycled coatings but it does not exceed about 5 at.%. This was attributed to the handling of the recycled solution, which was stored in a pressurized tank. Although handled under Ar atmosphere, this container had to be opened and closed several times in order to recover enough solution after several deposition experiments for further use in a recycling CVD run. Neglecting traces of oxygen, the total carbon content of these carbide coatings (C:Cr = 0.50 ± 0.02) is intermediate between Cr7C3 (C:Cr = 0.43) and Cr3C2 (C:Cr = 0.67), but closer to Cr7C3. The overall atomic composition is consistent with a nanocomposite structure consisting of an amorphous carbide matrix a-CrxCy and free-C, as supposed from the XRD and TEM results, and the ratio y:x in the matrix is lower than 0.50, i.e. even closer to the Cr7C3 stoichiometry.
XPS analyses did not reveal significant differences between both types of coatings, except a higher contribution of oxygen bonded to Cr for recycled coatings, in agreement with the EPMA data. As-deposited samples exhibit Cr 2p3/2 peaks characteristic of Cr(III) oxide (575.8 eV) and Cr-OH (576.8 eV), O 1s peaks were assigned to Cr2O3 (530.8 eV) and C-O/OH (532.0 eV), and the C 1s core level was characteristic of adventitious carbon with the components C-C/C-H at 284.8 eV and O-C=O at 288.0 eV. A depth profile analysis of C 1s showed that Ar+ ion etching of the sample for about a minute at 1 keV readily removes the surface contamination without significant secondary sputtering effects (Fig. 4a). The C 1s region of the as-deposited sample shows the main features of adventitious carbon with the C-C/C-H and O-C=O components (Fig. 4b). After removal of the contamination layer by ion etching for 220 s, the C 1s peak reveals the two forms of carbon present in the coating: the carbide (282.8 eV) and the free-C (~284 eV).
In carbon materials, correlations were established between the development of the disorder from sp 2 structural model (graphite) to sp 3 (diamond) and the variation of the intensity ratio I(D)/I(G) as well as the FWHM and the position of the G band [START_REF] Ferrari | Raman spectroscopy of amorphous, nanostructured, diamond-like carbon, and nanodiamond[END_REF][START_REF] Chu | Characterization of amorphous and nanocrystalline carbon films[END_REF][START_REF] Cançado | Quantifying defects in graphene via Raman spectroscopy at different excitation energies[END_REF]. Fig. 5 shows that the intensity ratio I(D)/I(G) significantly increases from new coatings (~0.6) to recycled coatings (~1.2), suggesting that the disorder within graphitic nanostructures is higher for the recycled sample. The fact that the relative intensity of the bands at 1220 and 1460 cm -1 assigned to trans-polyacetylene and possibly to C sp 3 does not change from a new coating to a recycled one suggests that the evolution of the disorder cannot be interpreted in terms of C sp 3 proportion. As a result, it is more appropriate to consider interactions between Cr and free-C as structural defects (both in-plane and between graphene sheets) that cause increasing disorder when their number increases.
The average size of graphitic nanoparticles in the basal plane determined from the FWHM of the G band, namely L a [START_REF] Ferrari | Raman spectroscopy of amorphous, nanostructured, diamond-like carbon, and nanodiamond[END_REF], remains constant at around 35 nm for both new and recycled coatings. On the other hand, a disorder measurement L D can be determined from the intensity ratio I(D)/I(G); it represents the average distance between two point defects in graphene planes [START_REF] Cançado | Quantifying defects in graphene via Raman spectroscopy at different excitation energies[END_REF]. As suggested above, Cr grafting on graphene sheet can be considered as a defect which induces disorder. Thus, L D can be considered as the average distance between two grafted Cr and it was found decreasing from 15.5 to 6.2 nm for new and recycled coatings respectively. This means that despite an identical overall atomic composition, recycled coatings exhibit a higher defect density, as interactions at the free-C/a-Cr 7 C 3 interfaces, e.g. both as hexahapto-η 6 -Cr grafting on external graphene sheets and Cr intercalation between graphene layers.
Hardness
For the thickest new CrxCy coatings (~5 μm), a Vickers hardness around 2300 HV was measured, which is quite high for CVD chromium carbide coatings. Values in the range 700-1600 HV were previously reported for polycrystalline MOCVD chromium carbide coatings [START_REF] Aleksandrov | Vapor-phase deposition of coatings from bis-arene chromium compounds on aluminum alloys[END_REF][START_REF] Yurshev | Surface hardening of tools by depositing a pyrolytic chromium carbide coating[END_REF], while electrodeposited CrxCy coatings did not exceed 1300 HV100 [START_REF] Zeng | Tribological and electrochemical behavior of thick Cr-C alloy coatings electrodeposited in trivalent chromium bath as an alternative to conventional Cr coatings[END_REF][START_REF] Protsenko | Improving hardness and tribological characteristics of nanocrystalline Cr-C films obtained from Cr(III) plating bath using pulsed electrodeposition[END_REF] and electroplated hard Cr is lower than 1200 HV [START_REF] Lausmann | Electrolytically deposited hardchrome[END_REF][START_REF] Liang | Structure characterization and tribological properties of thick chromium coating electrodeposited from a Cr(III) electrolyte[END_REF]. Only PVD processes manage to reach even higher hardness, from 2000 to 3000 HV [START_REF] Aubert | Hard chrome and molybdenum coatings produced by physical vapour deposition[END_REF][START_REF] Cholvy | Characterization and wear resistance of coatings in the Cr-C-N ternary system deposited by physical vapour deposition[END_REF][START_REF] Wang | Synthesis of Cr 3 C 2 coatings for tribological applications[END_REF].
For the thinnest CrxCy coatings (≤ 3.5 μm), hardness was determined by nanoindentation on five samples corresponding to three new and two recycled coatings. The values of hardness (H) and Young's modulus (E) are reported in Table 3. While there is no difference in the Young's modulus between new and recycled coatings (285 and 295 GPa, respectively), the nanoindentation hardness of recycled coatings could be considered slightly higher (29 GPa) than that of new coatings (23 GPa), despite the standard deviations. This will be discussed in the next section. The ratio H³/E² is often referred to as a durability criterion; it is proportional to the contact loads needed to induce plasticity and consequently characterizes the resistance to plastic deformation [START_REF] Tsui | Nanoindentation and nanoscratching of hard carbon coatings for magnetic disks[END_REF][START_REF] Musil | Hard and superhard Zr-Ni-N nanocomposite films[END_REF]. Comparison of both types of coatings revealed a better behavior of the recycled coatings as a result of their higher hardness, the ratio H³/E² being increased almost 3-fold.
Discussion
Chromium carbide coatings were successfully deposited using directly recycled solutions under the same DLI-MOCVD conditions as with new bis(ethylbenzene)chromium solutions. The growth rate in effluent recycling mode was found to be lower (around 0.5-1 μm·h⁻¹ instead of 5-10 μm·h⁻¹), essentially because the recycled BEBC solution in toluene was less concentrated due to consumption in previous runs. Chemical and structural characterizations of both types of coatings did not reveal significant differences. The coatings exhibit a smooth surface morphology, a dense and glassy-like microstructure and an amorphous structure (XRD and TEM analyses). The overall atomic composition was found to be Cr0.64C0.33O0.03 (Table 2). Interestingly, both coatings exhibit a high conformal coverage on non-planar surfaces at a relatively low deposition temperature (723 K).
A high hardness
This section discusses on the one hand the high values of hardness of the coatings whatever the precursor solution injected (new or recycled) and, on the other hand, on the difference of nanohardness between recycled and new coatings, assuming therefore that the difference is significant enough.
Both Vickers hardness (2300 HV) and nanoindentation hardness (23-29 GPa) revealed high values, at the level of those previously reported for PVD CrxCy coatings [START_REF] Su | Effect of chromium content on the dry machining performance of magnetron sputtered Cr x C coatings[END_REF][START_REF] Esteve | Cathodic chromium carbide coatings for molding die applications[END_REF][START_REF] Romero | Nanometric chromium nitride/chromium carbide multilayers by R.F. magnetron sputtering[END_REF]. A great advantage of DLI-MOCVD CrxCy coatings is that they are amorphous, without grain boundaries, while those deposited by other processes are polycrystalline. It is generally reported that crystalline chromium carbide coatings grown by PVD [START_REF] Esteve | Cathodic chromium carbide coatings for molding die applications[END_REF], cathodic arc evaporation [START_REF] Esteve | Cathodic chromium carbide coatings for molding die applications[END_REF] and electrodeposition [START_REF] Zeng | Tribological and electrochemical behavior of thick Cr-C alloy coatings electrodeposited in trivalent chromium bath as an alternative to conventional Cr coatings[END_REF] are harder than amorphous ones. Interestingly, the hardness of our amorphous coatings is already at the level of these polycrystalline CrxCy coatings. Consequently, their amorphous structure cannot explain their high hardness. We are aware that Cr3C2 is the hardest phase of the Cr-C system and therefore its presence, even in the amorphous state, should significantly increase the hardness. However, we have shown that the matrix of our coatings has the a-Cr7C3 stoichiometry. It is also known that for nanocrystalline CrxCy coatings the nanohardness increases with decreasing average grain size [START_REF] Protsenko | Improving hardness and tribological characteristics of nanocrystalline Cr-C films obtained from Cr(III) plating bath using pulsed electrodeposition[END_REF]. Without evidence for a nanocrystalline structure, this claim does not hold for our coatings. It was also reported that a high hardness is achieved for high Cr contents [START_REF] Su | Effect of chromium content on the dry machining performance of magnetron sputtered Cr x C coatings[END_REF], or that stoichiometric Cr-C phases must be privileged, meaning a C excess must be avoided [START_REF] Romero | Nanometric chromium nitride/chromium carbide multilayers by R.F. magnetron sputtering[END_REF]. We will discuss below that in our case a C excess, compared to the Cr7C3 stoichiometry, on the contrary plays a key role.
Among the factors that influence the hardness of coatings, residual stresses are probably the most important [START_REF] Nowak | The effect of residual stresses on nanoindentation behavior of thin W-C based coatings[END_REF]. The influence of other factors such as coating thickness, growth conditions, and micro- and nanostructure has been reported, but these act both on the stresses and on the hardness, so their effects are difficult to decouple.
For two new coatings 3.5 and 35.0 μm thick, the nanohardness and Young's modulus were found constant at 23.6 ± 2.0 GPa and approximately 293 GPa, respectively. This means the nanohardness is independent of the thickness for values higher than 3.5 μm. At this stage no data are available for thinner coatings to comment on the possible influence of thicknesses lower than 3.5 μm. It is noteworthy that our experimental value of the Young's modulus is in good agreement with the theoretical value of 302 GPa for Cr7C3 [START_REF] Xiao | Mechanical properties and chemical bonding characteristics of Cr 7 C 3 type multicomponent carbides[END_REF].
In coating-substrate systems prepared by CVD, residual stresses (σr) originate from the sum of thermal stresses (σt), induced by the mismatch of thermal expansion between the coating and the substrate, and intrinsic stresses (σi), induced by the growth mechanism. The residual stresses were determined for two "new" coatings (thickness tf = 4.0 and 6.0 μm) deposited on 304L steel strips 0.5 mm thick (ts). As the ratio tf/ts (or Ef′tf/Es′ts) is ≤ 1% [START_REF] Klein | How accurate are Stoney's equation and recent modifications[END_REF], the deformation of the substrate can be considered negligible. The Ei′ are the biaxial moduli (Ei/(1 − νi)), the ti are the thicknesses, the νi are the Poisson's ratios, and the subscripts s and f denote the substrate and the film respectively, which leads to Ef′ = 363 GPa and Es′ = 278 GPa using νCrC = 0.2 and ECrC = 290 GPa (average data of Cr7C3 and Cr3C2). Stoney's equation is applicable with an error which does not exceed 5%. From the measurement of the change of curvature before and after deposition, compressive residual stresses of -1.20 and -1.25 GPa were found for the 4.0 and 6.0 μm thick coatings, respectively. Reliable data could not be obtained by this method for recycled coatings because they were too thin.
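A minimal sketch of the Stoney estimate is given below; the curvature values are illustrative placeholders chosen only to show the order of magnitude, since the measured curvatures are not reported here (Python):

# Stoney's equation: sigma_f = E_s' * t_s^2 / (6 * t_f) * delta_kappa
E_S_BIAXIAL = 278e9    # Pa, biaxial modulus of the 304L substrate
T_S = 0.5e-3           # m, steel strip thickness
T_F = 4.0e-6           # m, coating thickness

def stoney_stress(kappa_before, kappa_after):
    """Residual film stress (Pa) from the change of substrate curvature (1/m)."""
    return E_S_BIAXIAL * T_S ** 2 / (6.0 * T_F) * (kappa_after - kappa_before)

print(f"{stoney_stress(0.0, -0.41) / 1e9:.2f} GPa")   # ~ -1.2 GPa, compressive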
Assuming a rigid substrate, the maximum thermal stress can be calculated according to the equation:

σt = Ef′ (αf − αs) ΔT

where ΔT is the variation of temperature and the αi are the thermal expansion coefficients (αs = 18.3 × 10⁻⁶ K⁻¹ and αf = 10.1 × 10⁻⁶ K⁻¹). For ΔT = 430 K, the calculated thermal stress is -1.32 GPa. Such values are generally found for ceramic coatings such as TiN on stainless steel substrates [START_REF] Wu | Modified curvature method for residual thermal stress estimation in coatings[END_REF]. These results confirm the dominant contribution of the thermal stresses to the residual stresses. The comparison of the hardness of new (23 ± 2 GPa) and recycled (29 ± 4 GPa) coatings reveals a small but significant difference, taking into account the standard deviations, which raises the question: why should recycled coatings be harder? This could be discussed in terms of residual stresses, but no data are available for recycled coatings. However, the residual stresses are largely dominated by the thermal stress, which has the same value for both coatings since they were deposited at the same temperature and their thicknesses are relatively close (3.5 and 1.0 μm, respectively). Consequently this is likely not a major factor in explaining the difference in hardness. One of the best ways to comment on this difference in hardness is to focus on the specific nanocomposite structure of these hard coatings, since it is known that this influences the hardness.
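For reference, the numeric evaluation below reproduces this thermal stress estimate from the stated constants; the small difference with the quoted -1.32 GPa comes from rounding of the biaxial modulus (Python):

# sigma_t = E_f' * (alpha_f - alpha_s) * delta_T, rigid-substrate limit
E_F_BIAXIAL = 363e9    # Pa, biaxial modulus of the coating
ALPHA_S = 18.3e-6      # K-1, 304L substrate
ALPHA_F = 10.1e-6      # K-1, CrxCy coating
DELTA_T = 430.0        # K, cooling from deposition to room temperature

sigma_t = E_F_BIAXIAL * (ALPHA_F - ALPHA_S) * DELTA_T
print(f"Thermal stress: {sigma_t / 1e9:.2f} GPa")   # about -1.3 GPa, compressive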
A specific amorphous nanocomposite structure
The morphology, microstructure and composition of new and recycled coatings are the same. Furthermore, XRD, XPS and Raman analyses gave evidence for an amorphous nanocomposite structure. The only significant differences between both types of coatings were found by Raman spectroscopy (Fig. 5).
Basically, the microstructure is composed of two phases with interfaces acting as strong interphases. The dominant phase is an amorphous carbide matrix with the Cr7C3 stoichiometry (namely a-Cr7C3). Nanometric free-C domains (La = 35 nm in-plane correlation length) are embedded in this amorphous carbide matrix. They are related to pyrolytic C, which means they exhibit a disordered graphitic structure (turbostratic stacking) with likely some covalent bonding between graphene sheets via open cycles at the edges (generating C sp3 sites). Furthermore, some graphene sheets are also connected by trans-polyacetylene chains at the edges of these C domains. The relative amount of free-C does not exceed 20% of the total carbon (XPS and EPMA data). An important finding was to identify Raman and XRD signatures revealing interactions between Cr and free-C. This particular nanostructure is shown schematically in Fig. 6. Due to their layered structure, the free-C domains exhibit two types of interfaces with the a-Cr7C3 matrix: the one which is parallel to the graphene sheets (parallel interface) and that which is roughly perpendicular to the stacking of graphene planes (perpendicular interface). Cr atoms from the amorphous carbide matrix can be grafted on external graphene sheets as hexahapto η6-Cr complexes of graphene [START_REF] Sarkar | Organometallic chemistry of[END_REF]. These specific bonds are very similar to those in the BEBC precursor. They contribute to the strengthening of the parallel interfaces. Also, individual Cr atoms can be intercalated between consecutive graphene sheets, as in graphite intercalation compounds, as supported by the XRD data (Fig. 2) [START_REF] Bui | Graphene-Cr-Graphene intercalation nanostructures: stability and magnetic properties from density functional theory investigations[END_REF][START_REF] Sarkar | Organometallic chemistry of[END_REF]. All these Cr interactions can be considered as point defects in ideal graphene sheets. Despite a graphitic base structure, these interactions and interconnections between free-C and a-Cr7C3, through C sp3, trans-polyacetylene and η6-Cr bonding, rigidify the free-C domains, strengthen the interfaces and consolidate a 3D structural network between the carbide matrix and free-C through strong interphases.
The defect density on the external graphene sheets of free-C nanostructures has been estimated from Raman data as the average distance L D between two point defects in graphene sheet (Fig. 6). Interestingly L D was found to decrease from 15.5 to 6.2 nm for new and recycled coatings, respectively, while the average size of graphene sheet given by in-plane correlation length L a is constant (35 nm). This means the defect density, i.e. the density of interactions between the carbide matrix and free-C is significantly higher for recycled coatings than for the new ones. This trend suggests a correlation with the higher hardness of recycled coatings (29 ± 4 GPa) compared to the new coatings (23 ± 2 GPa). The nanohardness would increase with the density of chemical bonds both within graphene sheets of the free-C nanostructures and between these free-C domains and the amorphous carbide matrix.
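To put numbers on this trend, one can convert L_D into an areal defect density with the usual point-defect relation n_D = 1/(π L_D²); this relation is an assumption we import from the Raman literature on graphitic carbons, not a result of the present work:

```python
import math

def defect_density(L_D_nm):
    # Areal density of point defects (per nm^2) for a mean inter-defect distance L_D
    return 1.0 / (math.pi * L_D_nm ** 2)

n_new, n_recycled = defect_density(15.5), defect_density(6.2)
print(f"{n_new:.2e} nm^-2, {n_recycled:.2e} nm^-2, ratio = {n_recycled / n_new:.1f}")
# The recycled coating carries roughly 6 times more matrix/free-C interaction points.
```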
Basically, the growth mechanism is the same for "new" and "recycled" coatings. A simple chemical mechanism reduced to 4 limiting reactions (1 homogeneous, 3 heterogeneous) was proposed for kinetic modeling and simulation of the process. It is based on site competition reactions [START_REF] Michau | Chromium Carbide Growth by Direct Liquid Injection Chemical Vapor Deposition in Long and Narrow Tubes, Experiments, Modeling and Simulation[END_REF]. The lower concentration of recycled BEBC solutions would induce a lower supersaturation of BEBC near the growing surface. As a result, this would favor a higher mobility of adsorbed chemical species or would influence adsorption competition, and finally would facilitate locally the formation of chemical bonds both within the C domains (C sp3 and trans-polyacetylene bridges) and at the parallel and perpendicular interfaces with the amorphous carbide matrix (Cr grafting and intercalation, respectively). Subsequently, the nanostructure of the coating is overall strengthened and its nanohardness is increased.
Another hypothesis about strong interphases instead of sharp and weak interfaces, not supported here by experimental data, relies on the crystallographic structure of Cr7C3, in which carbon atoms sit in trigonal prisms connected in chains, while it was reported that in amorphous Cr1-xCx for x > 33% carbon progressively fills octahedral interstitial sites as the C content increases, suggesting that C coexists in both prismatic and octahedral sites [START_REF] Bauer-Grosse | Thermal stability and crystallization studies of amorphous TM-C films[END_REF]. It is reasonable to assume that at the a-Cr7C3/free-C interface a carbon enrichment of the carbide matrix is possible by the gradual occupation of both prismatic and octahedral sites. For instance, in C-rich amorphous CrxCy grown by PVD, C atoms were located in a mixture of prismatic and octahedral sites with a distribution depending on the total C content [START_REF] Magnuson | Electronic structure and chemical bonding of amorphous chromium carbide thin films[END_REF]. These polyhedral units are characterized by strong covalent Cr 3d-C 2p bonding. Locally, at the a-Cr7C3/free-C interface, the proportion of C atoms filling octahedral sites probably depends on the growth conditions. If the growth rate of the a-Cr7C3 matrix is slow enough, for instance because the mole fraction of precursor is low, C can diffuse in a competitive pathway to fill octahedral sites and thus strengthen the interphase.

At this stage it is not reasonable to speculate more on the difference of hardness between "new" and "recycled" coatings, because the difference is not so large and it must be confirmed by other experiments. However, it can be retained that both the density of interactions between the carbide matrix and free-C (grafting and intercalation of Cr), supported by experimental data, and the assumed gradual occupation of prismatic and octahedral sites of the carbide by carbon generate strong interphases which influence the mechanical properties.

Key points making recycling possible: selection of precursor

A barrier to the implementation of MOCVD recycling is that the decomposition of the metalorganic precursor is complex and often produces many by-products which, if recycled, significantly affect the composition and microstructure of the coatings. Consequently, a tedious and expensive separation of the by-products is necessary to recover the precursor which has not reacted. The key is therefore to use metalorganic precursors whose decomposition mechanism is very simple, and which do not produce metalorganic by-products that could modify the growth mechanism. This is the case of bis(arene)M(0) compounds where the metal M is in the zero valence state, as in the deposited metal or carbide coatings. This important family of precursors was used for low temperature MOCVD of carbides of V [START_REF] Abisset | Low temperature MOCVD of V-C-N coatings using bis(arene) vanadium as precursors[END_REF], Nb [17], Ta [17], Cr [START_REF] Anantha | Chromium deposition from dicumene-chromium to form metal-semiconductor devices[END_REF][START_REF] Maury | Structural characterization of chromium carbide coatings deposited at low temperature by LPCVD process using dicumene chromium[END_REF][START_REF] Schuster | Influence of organochromium precursor chemistry on the microstructure of MOCVD chromium carbide coatings[END_REF][START_REF] Polikarpov | Chromium films obtained by pyrolysis of chromium bisarene complexes in the presence of chlorinated hydrocarbons[END_REF], Mo [START_REF] Whaley | Carbonaceous solid bodies and processes for their manufacture[END_REF] and W [START_REF] Whaley | Carbonaceous solid bodies and processes for their manufacture[END_REF]. In the deposition process the metal, and in particular Cr, stays in the zero valence state. For instance, no hexavalent Cr(VI) compound is formed, which entirely satisfies the European regulation REACH and related rules. The ligands are stable aromatic molecules; they are readily released by selective bond breaking during the deposition process without undergoing significant pyrolysis [START_REF] Maury | Low temperature MOCVD routes to chromium metal thin films using bis(benzene)chromium[END_REF][START_REF] Travkin | Thermal decomposition of bisarene compounds of chromium[END_REF]. It is then recommended in DLI-MOCVD to use a solvent of the same family as the ligands (e.g. toluene for BEBC) to avoid uncontrolled side-reactions.

The main characteristics of the coatings are independent of the nature of the ligands, as deduced from the use of Cr(C6H6)2, Cr(C6H5iPr)2 and Cr(C6H5Et)2 [START_REF] Maury | Structural characterization of chromium carbide coatings deposited at low temperature by LPCVD process using dicumene chromium[END_REF][START_REF] Schuster | Influence of organochromium precursor chemistry on the microstructure of MOCVD chromium carbide coatings[END_REF]. As a result, a mixture of different bis(arene)Cr precursors can be used, as in [START_REF] Devyatykh | Composition of impurities in bis-ethylbenzene chromium produced according to the Friedel-Crafts method[END_REF][START_REF] Gribov | Super-pure materials from metal-organic compounds[END_REF] and in this work. Also, the nature of the solvent is not very important provided that it is non-reactive and thermally stable in the deposition temperature range, and therefore does not participate in the growth mechanism, as found using toluene and cyclohexane [START_REF] Maury | Multilayer chromium based coatings grown by atmospheric pressure direct liquid injection CVD[END_REF][START_REF] Douard | Dépôt de carbures, nitrures et multicouches nanostructurées à base de chrome sous pression atmosphérique par DLI-MOCVD: nouveaux procédés et potentialités de ces revêtements métallurgiques[END_REF].

The solution recovered in the cold trap at the exit of the CVD reactor contains undecomposed BEBC, toluene (solvent) and a mixture of organic by-products originating from the released ligands and from the heterogeneous decomposition of a small part of them, producing ethylbenzene, diethylbenzene, benzene, ethyltoluene and toluene [START_REF] Travkin | Thermal decomposition of bisarene compounds of chromium[END_REF], as well as lighter and non-aromatic hydrocarbons and hydrogen [START_REF] Maury | Low temperature MOCVD routes to chromium metal thin films using bis(benzene)chromium[END_REF]. The lighter hydrocarbons and hydrogen are not efficiently trapped because of their high volatility. The organic by-products originating from the ligands are of the same family as the solvent. Consequently, the trapped solution, which can be directly recycled, contains unreacted BEBC and a solvent constituted of a mixture of several aromatic hydrocarbons. The major difference with the new solution is that the BEBC concentration in the recycled solution is lower due to its consumption. Finally, direct recycling of all effluents can be implemented using this chemical system in a closed loop to reach a conversion rate of the precursor near 100% (currently in progress).
Conclusions
The impact of the high cost of metalorganic precursors on the economic viability of MOCVD can be overcome by maximizing the conversion yield. It was demonstrated that direct recycling of effluent is possible using appropriate bis(arene)Cr(0) precursors.
Chromium carbide coatings were deposited by DLI-MOCVD using either a new bis(ethylbenzene)chromium solution in toluene or a recycled solution recovered at the exit of the reactor. Chemical and structural characteristics of both types of coatings are very similar. They are amorphous, with a C content slightly higher than that of Cr7C3. The nanohardness is particularly high, with values in the range 23-29 GPa. This high hardness is essentially due to the nanocomposite microstructure, without grain boundaries, and to strong interphases between free-C domains embedded in an amorphous Cr7C3 matrix. The slightly higher hardness of recycled coatings was assigned to a higher density of chemical bonds both within the C domains (C sp3 and trans-polyacetylene bridges) and at the interfaces with the amorphous carbide matrix (Cr grafting and intercalation). A gradual filling of prismatic and octahedral C sites of the matrix also likely plays a role in strengthening the interphase.
It is a breakthrough for MOCVD because the process can be extended to metals of columns 5 and 6, for which the same M(0) chemistry can be implemented and whose carbides also have many practical applications as protective metallurgical coatings. Recycling in a closed loop is currently in progress to reach a conversion rate near 100% in a one-step CVD run.
Fig. 1. Cross section of CrxCy coatings grown at 723 K and 6.7 kPa on Si substrates using (a) a new BEBC solution in toluene and (b) a recycled solution. The lower thickness in (b) originates from the lower concentration of the recycled solution.
Fig. 2. Typical XRD pattern of a "new" CrxCy coating grown by DLI-MOCVD with a new solution of BEBC in toluene (black) compared to that of a "recycled" coating (grey) grown in the same conditions.
Fig. 3. (a) TEM micrograph of a new CrxCy coating observed in cross section; (b) corresponding selected area electron diffraction showing two diffuse rings of the amorphous carbide phase.
minority component (Fig. 4c). After the surface cleaning by ion etching, the O 1s intensity is significantly decreased and only one component is found at 530.8 eV (Cr-O). Regarding the Cr 2p3/2 region, the oxygenated components have almost disappeared and the peak is shifted to 574.0 eV, as for Cr metal or carbide (Cr-C bonds). This XPS analysis confirms the presence of free-C and a carbidic form in the coatings, as observed by XRD. After in situ surface cleaning, the atomic composition of the surface analyzed by XPS is Cr0.57C0.33O0.10, in good agreement with EPMA data (Table 2). From the relative intensity of the two components of the C 1s peak of Fig. 4c, the proportion of free-C to the total C is approximately 20%. On the other hand, considering the EPMA composition Cr0.64C0.33O0.03 (Table 2) as a representative formula and neglecting the oxygen content, comparison with the stoichiometric Cr7C3 phase reveals a carbon excess as free-C of 18 at.%. These two values of the relative content of free-C determined by XPS and EPMA are in good agreement and confirm the presence of the free-C nanostructures identified by XRD. Due to the presence of free-C, it is confirmed that the matrix a-CrxCy has the composition a-Cr7C3.
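A quick cross-check of these two estimates (our own back-of-the-envelope computation, using only the compositions quoted above):

```python
# EPMA composition Cr0.64 C0.33 (oxygen neglected): carbon bound in a
# stoichiometric Cr7C3 matrix amounts to (3/7) of the Cr content.
Cr, C_total = 0.64, 0.33
C_carbidic = (3 / 7) * Cr            # ~0.274 atomic fraction
C_free = C_total - C_carbidic        # ~0.056 atomic fraction
print(f"free C = {C_free:.3f}, i.e. {100 * C_free / C_total:.0f}% of the total carbon")
# ~17-18% of the total carbon, consistent with the ~20% estimated by XPS.
```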
Fig. 4. XPS analysis of a new CrxCy coating: (a) depth profile of C 1s components, (b) C 1s spectra of the as-received sample (0 s ion etching time) and (c) C 1s spectra after 220 s ion etching time.
Reported H and E values are the average of at least ten successful indentations per sample. Considering the standard deviations of the measurements, new and recycled coatings have substantially comparable H and E values. Nanoindentation measurements confirmed the Vickers hardness tests. With values in the range 23-29 GPa for coatings on 304 L thicker than 1 μm, both types of coatings exhibit nanoindentation hardness as high as those of CrxCy coatings deposited by PVD, e.g. 24.2 GPa [50], 22 GPa [51] and 21 GPa [52], as well as by MOCVD, 25 GPa [53]. It is noteworthy that our coatings are amorphous, whereas those in the cited references were polycrystalline.
Fig. 5. Raman spectra of CrxCy coatings grown on a Si substrate with a new BEBC solution (a) (c), and a recycled one (b) (d): spectral range 200-1800 cm-1 (a) (b), and the C region (c) (d). The proposed deconvolution of the C bands is commented in the text.
Fig. 6. Schematic representation of the amorphous and nanocomposite microstructure of CrxCy coatings deposited by DLI-MOCVD showing the main structural features at the interface between free-C nanostructures embedded in an amorphous Cr7C3 matrix (Cr atoms are the red circles). The L_a and L_D distances shown are discussed in the text. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Table 1. Experimental DLI-CVD conditions of new CrxCy coatings. The growth conditions in recycling mode are the same, except that the BEBC solution injected, resulting from cryogenic trapping during previous CVD runs, had a lower concentration.
T (K): 723; P (kPa): 6.7; BEBC in toluene (mol/L): 0.3; BEBC gas flow rate (sccm): 9; toluene gas flow rate (sccm): 216; N2 gas flow (sccm): 500; injection frequency (Hz): 1-10; opening time (ms): 0.5-5.
Table 2. Atomic composition of coatings grown with a new and a recycled solution (EPMA data).
Table 3. Nanoindentation results for new and recycled coatings deposited on 304 L stainless steel substrate.

Coating          Thickness (μm)   H (GPa)   E (GPa)    H³/E² (GPa)
New CrxCy        3.5              23 ± 2    285 ± 20   1.4 × 10⁻¹
Recycled CrxCy   1.0              29 ± 4    295 ± 40   3.9 × 10⁻¹
Acknowledgements
This work was supported by the Centre of Excellence of Multifunctional Architectured Materials "CEMAM" [grant number AN-10-LABX-44-01]. We thank Sofiane Achache and Raphaël Laloo for their help in hardness measurements, Jerome Esvan and Olivier Marsan for their assistance in XPS and Raman spectroscopies.
Appendix A. Supplementary data
Supplementary data to this article can be found online at https:// doi.org/10.1016/j.surfcoat.2017.06.077. |
01766530 | en | [ "info.info-fl" ] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01766530/file/main.pdf | Thomas Chatain
Maurice Comlan
David Delfieu
Loïg Jezequel
Olivier H Roux
Pomsets and Unfolding of Reset Petri Nets
Reset Petri nets are a particular class of Petri nets where transition firings can remove all tokens from a place without checking if this place actually holds tokens or not. In this paper we look at partial order semantics of such nets. In particular, we propose a pomset bisimulation for comparing their concurrent behaviours. Building on this pomset bisimulation we then propose a generalization of the standard finite complete prefixes of unfolding to the class of safe reset Petri nets.
Introduction
Petri nets are a well suited formalism for specifying, modeling, and analyzing systems with conflicts, synchronization and concurrency. Many interesting properties of such systems (reachability, boundedness, liveness, deadlock,. . . ) are decidable for Petri nets. Over time, many extensions of Petri nets have been proposed in order to capture specific, possibly quite complex, behaviors in a more direct manner. These extensions offer more compact representations and/or increase expressive power. One can notice, in particular, a range of extensions adding new kinds of arcs to Petri nets: read arcs and inhibitor arcs [START_REF] Baldan | Contextual Petri nets, asymmetric event structures and processes[END_REF][START_REF] Montanari | Contextual nets[END_REF] (allowing to read variables values without modifying them), and reset arcs [START_REF] Araki | Some decision problems related to the reachability problem for Petri nets[END_REF] (allowing to modify variables values independently of their previous value). Reset arcs increase the expressiveness of Petri nets, but they compromise analysis techniques. For example, boundedness [START_REF] Dufourd | Boundedness of reset P/T nets[END_REF] and reachability [START_REF] Araki | Some decision problems related to the reachability problem for Petri nets[END_REF] are undecidable. For bounded reset Petri nets, more properties are decidable, as full state spaces can be computed.
Full state-space computations (i.e. using state graphs) do not preserve partial order semantics. To face this problem, Petri nets unfolding has been proposed and has gained the interest of researchers in verification [START_REF] Esparza | Unfoldings -A Partial-Order Approach to Model Checking[END_REF], diagnosis [START_REF] Benveniste | Diagnosis of asynchronous discreteevent systems: a net unfolding approach[END_REF], and planning [START_REF] Hickmott | Planning via Petri net unfolding[END_REF]. This technique keeps the intrinsic parallelism and prevents the combinatorial interleaving of independent events. While the unfolding of a Petri net can be infinite, there exist algorithms for constructing finite prefixes of it [START_REF] Esparza | An improvement of McMillan's unfolding algorithm[END_REF][START_REF] Mcmillan | Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits[END_REF]. Unfolding have the strong interest of preserving more behavioral properties of Petri nets than state graphs. In particular they preserve concurrency and its counterpart: causality. Unfolding techniques have also been developed for extensions of Petri nets, and in particular Petri nets with read arcs [START_REF] Baldan | Efficient unfolding of contextual Petri nets[END_REF].
Our contribution: Reachability analysis is known to be feasible on bounded reset Petri nets, however, as far as we know, no technique for computing finite prefixes of unfolding exists yet, and so, no technique preserving concurrency and causality exists yet. This is the aim of this paper to propose one. For that, we characterise the concurrent behaviour of reset Petri nets by defining a notion of pomset bisimulation. This has been inspired by several works on pomset behaviour of concurrent systems [START_REF] Best | Concurrent bisimulations in Petri nets[END_REF][START_REF] Van Glabbeek | Equivalence notions for concurrent systems and refinement of actions[END_REF][START_REF] Vogler | Bisimulation and action refinement[END_REF]. From this characterization we can then express what should be an unfolding preserving the concurrent behaviour of a reset Petri net. We show that it is not possible to remove reset arcs from safe reset Petri nets while preserving their behaviours with respect to this pomset bisimulation. Then we propose a notion of finite complete prefixes of unfolding of safe reset Petri nets that allows for reachability analysis while preserving pomset behaviour. As a consequence of the two other contributions, these finite complete prefixes do have reset arcs.
This paper is organized as follows: We first give basic definitions and notations for (safe) reset Petri nets. Then, in Section 3, we propose the definition of a pomset bisimulation for reset Petri nets. In Section 4 we show that, in general, there is no Petri net without resets which is pomset bisimilar to a given reset Petri net. Finally, in Section 5 -building on the results of Section 4 -we propose a finite complete prefix construction for reset Petri nets.
Reset Petri nets
Definition 1 (structure). A reset Petri net structure is a tuple (P , T , F , R) where P and T are disjoint sets of places and transitions, F ⊆ (P × T ) ∪ (T × P ) is a set of arcs, and R ⊆ P × T is a set of reset arcs.
An element x ∈ P ∪ T is called a node and has a preset •x = {y ∈ P ∪ T : (y, x) ∈ F} and a postset x• = {y ∈ P ∪ T : (x, y) ∈ F}. If, moreover, x is a transition, it has a set of resets, denoted R(x) = {y ∈ P : (y, x) ∈ R}.
For two nodes x, y ∈ P ∪ T, we say that: x is a causal predecessor of y, noted x ≺ y, if there exists a sequence of nodes x_1 . . . x_n with n ≥ 2 so that ∀i ∈ [1..n−1], (x_i, x_{i+1}) ∈ F, x_1 = x, and x_n = y. If x ≺ y or y ≺ x we say that x and y are in causal relation. The nodes x and y are in conflict, noted x#y, if there exist two sequences of nodes x_1 . . . x_n with n ≥ 2 and ∀i ∈ [1..n−1], (x_i, x_{i+1}) ∈ F, and y_1 . . . y_m with m ≥ 2 and ∀i ∈ [1..m−1], (y_i, y_{i+1}) ∈ F, so that x_1 = y_1 is a place, x_2 ≠ y_2, x_n = x, and y_m = y.
Figure 1 (left) is a graphical representation of a reset Petri net. It has five places (circles) and three transitions (squares). Its set of arcs contains seven elements (arrows) and there is one reset arc (line with a diamond).
Fig. 1. A reset Petri net (left) and one of its processes (right)
A marking is a set M ⊆ P of places. It enables a transition t ∈ T if ∀p ∈ •t, p ∈ M. In this case, t can be fired from M, leading to the new marking
M′ = (M \ (•t ∪ R(t))) ∪ t•.
The fact that M enables t and that firing t leads to M′ is denoted by M[t⟩M′.
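This firing rule translates directly into code; the sketch below is ours (set-based encoding, illustrative names), not taken from the paper:

```python
def enabled(M, t, pre):
    # A marking M (a set of places) enables t iff every place of its preset is marked.
    return pre[t] <= M

def fire(M, t, pre, post, res):
    # One firing step: M [t> M' = (M \ (pre(t) | R(t))) | post(t).
    assert enabled(M, t, pre)
    return (M - (pre[t] | res[t])) | post[t]

# Toy net: firing t empties p2 through a reset arc, whether p2 is marked or not.
pre, post, res = {"t": {"p1"}}, {"t": {"p3"}}, {"t": {"p2"}}
print(fire({"p1", "p2"}, "t", pre, post, res))  # {'p3'}
print(fire({"p1"}, "t", pre, post, res))        # {'p3'} as well
```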
Definition 2 (reset Petri net).
A reset Petri net is a tuple (P, T, F, R, M_0) where (P, T, F, R) is a reset Petri net structure and M_0 is a marking called the initial marking. A marking M is said to be reachable in a reset Petri net if there exists a sequence M_1 . . . M_n of markings so that: ∀i ∈ [1..n−1], ∃t ∈ T, M_i[t⟩M_{i+1} (each marking enables a transition that leads to the next marking in the sequence), M_1 = M_0 (the sequence starts from the initial marking), and M_n = M (the sequence leads to M). The set of all markings reachable in a reset Petri net N_R is denoted by [N_R⟩.
A reset Petri net with an empty set of reset arcs is simply called a Petri net.
Definition 3 (underlying Petri net). Given N R = (P , T , F , R, M 0 ) a reset Petri net, we call its underlying Petri net the Petri net N = (P , T , F , ∅, M 0 ).
The above formalism is in fact a simplified version of the general formalism of reset Petri nets: arcs have no multiplicity and markings are sets of places rather than multisets of places. We use it because it suffices for representing safe nets.
Definition 4 (safe reset Petri net). A reset Petri net (P , T , F , R, M 0 ) is said to be safe if for any reachable marking M and any transition
t ∈ T, if M enables t then (t• \ (•t ∪ R(t))) ∩ M = ∅.
The reader familiar with Petri nets will notice that our results generalize to larger classes of nets: unbounded reset Petri nets for our pomset bisimulation (Section 3), and bounded reset Petri nets for our prefix construction (Section 5).
In the rest of the paper, unless the converse is specified, we consider reset Petri nets so that the preset of each transition t is non-empty: •t ≠ ∅. Notice that this is not a restriction to our model: one can equip any transition t of a reset Petri net with a place p_t so that p_t is in the initial marking and •p_t = p_t• = {t}. One may need to express that two (reset) Petri nets have the same behaviour. This is useful in particular for building minimal (or at least small, that is with few places and transitions) representatives of a net, or for building simple (such as loop-free) representatives of a net. A standard way to do so is to define a bisimulation between (reset) Petri nets, and state that two nets have the same behaviour if they are bisimilar.
The behaviour of a net will be an observation of its transition firing, this observation being defined thanks to a labelling of nets associating to each transition an observable label or the special unobservable label ε.
Definition 5 (labelled reset Petri net). A labelled reset Petri net is a tuple (N R , Σ, λ) so that: N R = (P , T , F , R, M 0 ) is a reset Petri net, Σ is a set of transition labels, and λ : T → Σ ∪ {ε} is a labelling function.
In such a labelled net we extend the labelling function λ to sequences of transitions in the following way: given a sequence t_1 . . . t_n (with n ≥ 2) of transitions, λ(t_1 . . . t_n) = λ(t_1)λ(t_2 . . . t_n) if λ(t_1) ∈ Σ, and λ(t_1 . . . t_n) = λ(t_2 . . . t_n) otherwise (that is, if λ(t_1) = ε).
From that, one can define bisimulation as follows.
Definition 6 (bisimulation). Let (N R,1 , Σ 1 , λ 1 ) and (N R,2 , Σ 2 , λ 2 ) be two labelled reset Petri nets with N R,i = (P i , T i , F i , R i , M 0,i ). They are bisimilar if and only if there exists a relation ρ ⊆
[N_{R,1}⟩ × [N_{R,2}⟩ (a bisimulation) so that:
1. (M_{0,1}, M_{0,2}) ∈ ρ,
2. if (M_1, M_2) ∈ ρ, then
(a) for every transition t ∈ T_1 so that M_1[t⟩M_{1,n} there exist a sequence t_1 . . . t_n of transitions from T_2 and a sequence M_{2,1} . . . M_{2,n} of markings of N_{R,2} so that: M_2[t_1⟩M_{2,1}[t_2⟩ . . . [t_n⟩M_{2,n}, λ_2(t_1 . . . t_n) = λ_1(t), and (M_{1,n}, M_{2,n}) ∈ ρ,
(b) the other way around (for every transition t ∈ T_2 . . . )
Fig. 2. Two bisimilar nets
This bisimulation however hides an important part of the behaviours of (reset) Petri nets: transition firings may be concurrent when transitions are not in causal relation nor in conflict. For example, consider Figure 2 where N R,1 and N R,2 are bisimilar (we identify transition names and labels). In N R,1 , t 1 and t 2 are not in causal relation while in N R,2 they are in causal relation.
To avoid this loss of information, a standard approach is to define bisimulations based on partially ordered sets of transitions rather than totally ordered sets of transitions (the transition sequences used in the above definition). Such bisimulations are usually called pomset bisimulations.
Pomset bisimulation for reset Petri nets
In this section, we propose a definition of pomset bisimulation for reset Petri nets. It is based on an ad hoc notion of processes (representations of the executions of a Petri net, concurrent counterpart of paths in automata).
Processes of reset Petri nets
We recall a standard notion of processes of Petri nets and show how it can be extended to reset Petri nets. As a first step, we define occurrence nets which are basically Petri nets without loops.
Definition 7 (occurrence net). An occurrence net is a (reset) Petri net (B, E, F O , R O , M O 0 ) so that, ∀b ∈ B, ∀x ∈ B ∪ E: (1) | • b| ≤ 1, (2) x is not in causal relation with itself, (3) x is not in conflict with itself, (4) {y ∈ B ∪E : y ≺ x} is finite, (5) b ∈ M O 0 if and only if • b = ∅.
Places of an occurrence net are usually referred to as conditions and transitions as events. In an occurrence net, if two nodes x, y ∈ B ∪ E are so that x = y, are not in causal relation, and are not in conflict, they are said to be concurrent. Moreover, in occurrence net, the causal relation is a partial order.
There is a price to pay for having reset arcs in occurrence nets. With no reset arcs, checking if a set E of events together form a feasible execution (i.e. checking that the events from E can all be ordered so that they can be fired in this order starting from the initial marking) is linear in the size of the occurrence net (it suffices to check that E is causally closed and conflict free). With reset arcs the same task is NP-complete as stated in the below proposition.
Proposition 1. The problem of deciding if a set E of events of an occurrence net with resets forms a feasible execution is NP-complete.
Proof. (Sketch) Graph 3-coloring reduces to executability of an occurrence net.
The branching processes of a Petri net are then defined as particular occurrence nets linked to the original net by homomorphisms.
Definition 8 (homomorphism of nets). Let N 1 and N 2 be two Petri nets such that
N i = (P i , T i , F i , ∅, M 0,i ). A mapping h : P 1 ∪ T 1 → P 2 ∪ T 2 is an homomorphism of nets from N 1 to N 2 if ∀p 1 ∈ P 1 , ∀p 2 ∈ P 2 , ∀t ∈ T 1 : (1) h(p 1 ) ∈ P 2 , (2) h(t) ∈ T 2 , (3) p 2 ∈ • h(t) ⇔ ∃p 1 ∈ • t, h(p 1 ) = p 2 , (4) p 2 ∈ h(t) • ⇔ ∃p 1 ∈ t • , h(p 1 ) = p 2 , (5) p 2 ∈ M 0,2 ⇔ ∃p 1 ∈ M 0,1 , h(p 1 ) = p 2 . Definition 9 (processes of a Petri net). Let N = (P , T , F , ∅, M 0 ) be a Petri net, O = (B, E, F O , ∅, M O 0
) be an occurrence net, and h be an homomorphism of nets from
O to N . Then (O, h) is a branching process of N if ∀e 1 , e 2 ∈ E, ( • e 1 = • e 2 ∧ h(e 1 ) = h(e 2 )) ⇒ e 1 = e 2 . If, moreover, ∀b ∈ B, |b • | ≤ 1, then (O, h) is a process of N .
Finally, a process of a reset Petri net is obtained by adding reset arcs to a process of the underlying Petri net (leading to what we call below a potential process) and checking that all its events can still be enabled and fired in some order.
Definition 10 (potential processes of a reset Petri net). Let N R = (P , T , F , R, M 0 ) be a reset Petri net and N be its underlying Petri net, let O = (B, E, F O , R O , M O 0 ) be an occurrence net, and h be an homomorphism of
nets from O to N_R. Then (O, h) is a potential process of N_R if (1) (O′, h) is a process of N with O′ = (B, E, F_O, ∅, M_0^O), and (2) ∀b ∈ B, ∀e ∈ E, (b, e) ∈ R_O if and only if (h(b), h(e)) ∈ R.

Definition 11 (processes of a reset Petri net). Let N_R = (P, T, F, R, M_0) be a reset Petri net, O = (B, E, F_O, R_O, M_0^O) be an occurrence net, and h be an homomorphism of nets from O to N_R. Then (O, h) is a process of N_R if (1) (O, h) is a potential process of N_R, and (2) if E = {e_1, . . . , e_n} then ∃M_1, . . . , M_n ⊆ B so that M_0^O[e_{k_1}⟩M_1[e_{k_2}⟩ . . . [e_{k_n}⟩M_n with {k_1, . . . , k_n} = {1, . . . , n}.

Notice that processes of reset Petri nets and processes of Petri nets do not exactly have the same properties. In particular, two properties are central in defining pomset bisimulation for Petri nets and do not hold for reset Petri nets.
Property 1. In any process of a Petri net with set of events E, consider any sequence of events e 1 e 2 . . . e n (1) that contains all the events in E and (2) such that ∀i, j ∈ [1..n] if e i ≺ e j then i < j. Necessarily, there exist markings M 1 , . . . , M n so that
M_0^O[e_1⟩M_1[e_2⟩ . . . [e_n⟩M_n. This property (which, intuitively, expresses that processes are partially ordered paths) is no longer true for reset Petri nets. Consider for example the reset Petri net of Figure 1 (left). Figure 1 (right) is one of its processes (the occurrence net with the homomorphism h below). As e_2 is not a causal predecessor of e_1, there should exist markings M_1, M_2 so that M_0[e_1⟩M_1[e_2⟩M_2. However, M_0 = {c_1, c_3} indeed enables e_1, but the marking M_1 such that M_0[e_1⟩M_1 is {c_2}, which does not enable e_2.
Property 2. In a process of a Petri net, all the sequences of events e_1 e_2 . . . e_n verifying (1) and (2) of Property 1 lead to the same marking (i.e. M_n is always the same), thus uniquely defining a notion of maximal marking of a process. This property defines the marking reached by a process. As a corollary of Property 1 not holding for reset Petri nets, there is no uniquely defined notion of maximal marking in their processes. Back to the example, {c_2} is somehow maximal (no event can be fired from it), as well as {c_2, c_4}.
To transpose the spirit of Properties 1 and 2 to processes of reset Petri nets, we define below a notion of maximal markings in such processes.
Definition 12 (maximal markings). Let P = (O, h) be a process with set of events E = {e_1, . . . , e_n} and initial marking M_0^O of a reset Petri net. The set M_max(P) of maximal markings of P contains exactly the markings M so that ∃M_1, . . . , M_{n−1} verifying M_0^O[e_{k_1}⟩M_1[e_{k_2}⟩ . . . M_{n−1}[e_{k_n}⟩M for some {k_1, . . . , k_n} = {1, . . . , n}.
In other words, the maximal markings of a process are all the markings that are reachable in it using all its events. This, in particular, excludes {c_2} in the above example.
Abstracting processes
We show how processes of labelled reset Petri nets can be abstracted as partially ordered multisets (pomsets) of labels.
Definition 13 (pomset abstraction of processes). Let (N_R, Σ, λ) be a labelled reset Petri net and (O, h) be a process of N_R with O = (B, E, F_O, R_O, M_0^O). Define E′ = {e ∈ E : λ(h(e)) ≠ ε}. Define λ′ : E′ → Σ as the function so that ∀e ∈ E′, λ′(e) = λ(h(e)). Define moreover <′ ⊆ E′ × E′ as the relation so that e_1 <′ e_2 if and only if e_1 ≺ e_2 (e_1 is a causal predecessor of e_2 in O). Then, (E′, <′, λ′) is the pomset abstraction of (O, h). This abstraction (E′, <′, λ′) of a process is called its pomset abstraction because it can be seen as a multiset of labels (several events may have the same associated label by λ′) that are partially ordered by the <′ relation. In order to compare processes with respect to their pomset abstractions, we also define the following equivalence relation.
Definition 14 (pomset equivalence).
Let (E, <, λ) and (E′, <′, λ′) be the pomset abstractions of two processes P and P′. These processes are pomset equivalent, noted P ≡ P′, if and only if there exists a bijection f : E → E′ so that ∀e_1, e_2 ∈ E: (1) λ(e_1) = λ′(f(e_1)), and (2) e_1 < e_2 if and only if f(e_1) <′ f(e_2).
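For small processes, this definition can be checked by brute force; the sketch below (ours, exponential in the number of events, for illustration only) searches for such a bijection:

```python
from itertools import permutations

def pomset_equivalent(E1, lt1, lab1, E2, lt2, lab2):
    # lt1, lt2: sets of pairs encoding the strict orders; lab1, lab2: label maps.
    E1, E2 = list(E1), list(E2)
    if len(E1) != len(E2):
        return False
    for perm in permutations(E2):
        f = dict(zip(E1, perm))
        if all(lab1[e] == lab2[f[e]] for e in E1) and \
           all(((a, b) in lt1) == ((f[a], f[b]) in lt2) for a in E1 for b in E1):
            return True
    return False

# Two events with the same labels, ordered in one process but concurrent in the other:
print(pomset_equivalent({1, 2}, {(1, 2)}, {1: "t1", 2: "t2"},
                        {1, 2}, set(),     {1: "t1", 2: "t2"}))  # False
```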
Intuitively, two processes are pomset equivalent if their pomset abstractions define the same pomset: same multisets of labels with same partial orderings. Finally, we also need to be able to abstract processes as sequences of labels.
Definition 15 (linear abstraction). Let (N_R, Σ, λ) be a labelled reset Petri net, let P = (O, h) be a process of N_R with O = (B, E, F_O, R_O, M_0^O), and let M be a reachable marking in O. Define λ′ : E → Σ ∪ {ε} as the function so that ∀e ∈ E, λ′(e) = λ(h(e)). The linear abstraction of P with respect to M is the set lin(M, P) so that a sequence ω is in lin(M, P) if and only if in O there exist markings M_1, . . . , M_{n−1} and events e_1, . . . , e_n so that M_0^O[e_1⟩M_1[e_2⟩ . . . M_{n−1}[e_n⟩M and λ′(e_1 . . . e_n) = ω.
Pomset bisimulation
We now define a notion of pomset bisimulation between reset Petri nets, inspired by [START_REF] Best | Concurrent bisimulations in Petri nets[END_REF][START_REF] Van Glabbeek | Equivalence notions for concurrent systems and refinement of actions[END_REF][START_REF] Vogler | Bisimulation and action refinement[END_REF]. Intuitively, two reset Petri nets are pomset bisimilar if there exists a relation between their reachable markings so that the markings that can be reached by pomset equivalent processes from two markings in relation are themselves in relation. This is formalized by the below definition.
Definition 16 (pomset bisimulation for reset nets). Let (N R,1 , Σ 1 , λ 1 ) and (N R,2 , Σ 2 , λ 2 ) be two labelled reset Petri nets with N R,i = (P i , T i , F i , R i , M 0,i ).
They are pomset bisimilar if and only if there exists a relation ρ ⊆ [N_{R,1}⟩ × [N_{R,2}⟩ (called a pomset bisimulation) so that:
1. (M_{0,1}, M_{0,2}) ∈ ρ,
2. if (M_1, M_2) ∈ ρ, then
(a) for every process P_1 of (P_1, T_1, F_1, R_1, M_1) there exists a process P_2 of (P_2, T_2, F_2, R_2, M_2) so that P_1 ≡ P_2 and
– ∀M′_1 ∈ M_max(P_1), ∃M′_2 ∈ M_max(P_2) so that (M′_1, M′_2) ∈ ρ,
– ∀M′_1 ∈ M_max(P_1), ∀M′_2 ∈ M_max(P_2), (M′_1, M′_2) ∈ ρ ⇒ lin(M′_1, P_1) = lin(M′_2, P_2),
(b) the other way around (for every process P_2 . . . ).
Notice that, in the above definition, taking the processes P_1 and P_2 bisimilar (using the standard bisimulation relation for Petri nets) rather than comparing lin(M′_1, P_1) and lin(M′_2, P_2) would lead to an equivalent definition.
Remark that pomset bisimulation implies bisimulation, as expressed by the following proposition. The converse is obviously not true. Proposition 2. Let (N R,1 , Σ 1 , λ 1 ) and (N R,2 , Σ 2 , λ 2 ) be two pomset bisimilar labelled reset Petri nets, then (N R,1 , Σ 1 , λ 1 ) and (N R,2 , Σ 2 , λ 2 ) are bisimilar.
Proof. It suffices to notice that Definition 6 can be obtained from Definition 16 by restricting the processes considered, taking only those with exactly one transition whose label is different from ε.
From now on, we consider that (reset) Petri nets are finite, i.e. their sets of places and transitions are finite.
Fig. 3. A remarkable pattern N^pat_R and its structural transformation N^pat_str, a labelled reset Petri net N^0_R including the pattern N^pat_R, and a finite complete prefix F^0_R of N^0_R. Transition labels are given on transitions.
In this section, we prove that it is, in general, not possible to remove reset arcs from safe reset Petri nets while preserving their behaviours with respect to this pomset bisimulation. More precisely, we prove that it is not possible to build a safe labelled Petri net (while this is out of the scope of this paper, the reader familiar with Petri nets may notice that this is the case for bounded labelled Petri net) without reset arcs which is pomset bisimilar to a given safe labelled reset Petri net. For that, we exhibit a particular pattern -Figure 3 (left) -and show that a reset Petri net including this pattern cannot be pomset bisimilar to a Petri net without reset arcs.
As a first intuition of this fact, let us consider the following structural transformation that removes reset arcs from a reset Petri net.
Definition 17 (Structural transformation). Let (N_R, Σ, λ) be a labelled reset Petri net such that N_R = (P, T, F, R, M_0). Its structural transformation is the labelled Petri net (N_{R,str}, Σ_str, λ_str) where N_{R,str} = (P_str, T_str, F_str, ∅, M_{0,str}) so that:
P_str = P ∪ P̄ with P̄ = {p̄ : p ∈ P ∧ ∃t ∈ T, (p, t) ∈ R},
T_str = T ∪ T̄ with T̄ = {t̄ : t ∈ T ∧ R(t) ≠ ∅},
F_str = F ∪ {(p, t̄) : t̄ ∈ T̄, (p, t) ∈ F} ∪ {(t̄, p) : t̄ ∈ T̄, (t, p) ∈ F} (1)
∪ {(p̄, t) : p̄ ∈ P̄, (t, p) ∈ F} ∪ {(t, p̄) : p̄ ∈ P̄, (p, t) ∈ F} (2)
∪ {(p̄, t̄) ∈ P̄ × T̄ : (t, p) ∈ F} ∪ {(t̄, p̄) ∈ T̄ × P̄ : (p, t) ∈ F} (3)
∪ {(p, t), (t, p̄), (p̄, t̄), (t̄, p̄) : (p, t) ∈ R}, (4)
M_{0,str} = M_0 ∪ {p̄ ∈ P̄ : p ∉ M_0},
and moreover, Σ_str = Σ, ∀t ∈ T, λ_str(t) = λ(t), and ∀t̄ ∈ T̄, λ_str(t̄) = λ(t).
Intuitively, in this transformation, for each reset arc (p, t), a copy p̄ of p and a copy t̄ of t are created. The two places are such that p̄ is marked if and only if p is not marked; the transition t performs the reset when p is marked and t̄ performs it when p is not marked (i.e. when p̄ is marked). For that, new arcs are added to F so that: t̄ mimics t (1), the link between p and p̄ is enforced (2, 3), and the resets are performed either by t or by t̄ depending on the markings of p and p̄ (4). This is exemplified in Figure 3 (left and middle left).
Lemma 1. A labelled reset Petri net (N_R, Σ, λ) and its structural transformation (N_{R,str}, Σ_str, λ_str) as defined in Definition 17 are bisimilar.
Proof. (Sketch) The bisimulation relation is ρ ⊆ [N_R⟩ × [N_{R,str}⟩ defined by (M, M_str) ∈ ρ if and only if ∀p ∈ P, M(p) = M_str(p) and, for all p ∈ P such that p̄ ∈ P̄, M_str(p) + M_str(p̄) = 1.
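The construction of Definition 17 is mechanical enough to be written down directly; the following sketch is ours (sets for places and transitions, pairs for arcs, a tagging function standing in for the bars), with the arc orientations following items (1)-(4) as repaired above:

```python
def bar(x):
    return ("bar", x)  # illustrative encoding of the complement copies

def structural_transformation(P, T, F, R, M0):
    P_bar = {bar(p) for p in P if any((p, t) in R for t in T)}
    T_bar = {bar(t) for t in T if any((p, t) in R for p in P)}
    F_str = set(F)
    for t in T:                          # (1): t-bar mimics t on ordinary places
        if bar(t) in T_bar:
            F_str |= {(p, bar(t)) for p in P if (p, t) in F}
            F_str |= {(bar(t), p) for p in P if (t, p) in F}
    for p in P:                          # (2)-(3): keep p-bar marked iff p is not
        if bar(p) in P_bar:
            F_str |= {(bar(p), t) for t in T if (t, p) in F}
            F_str |= {(t, bar(p)) for t in T if (p, t) in F}
            F_str |= {(bar(p), bar(t)) for t in T if bar(t) in T_bar and (t, p) in F}
            F_str |= {(bar(t), bar(p)) for t in T if bar(t) in T_bar and (p, t) in F}
    for (p, t) in R:                     # (4): t resets a marked p, t-bar an empty one
        F_str |= {(p, t), (t, bar(p)), (bar(p), bar(t)), (bar(t), bar(p))}
    M0_str = set(M0) | {bar(p) for p in P if bar(p) in P_bar and p not in M0}
    return P | P_bar, T | T_bar, F_str, M0_str
```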
For the transformation of Definition 17, a reset Petri net and its transformation are bisimilar but not always pomset bisimilar. This can be remarked on any safe reset Petri net including the pattern N^pat_R of Figure 3. Indeed, this transformation adds in N^pat_str a causality relation between the transition labelled by t_1 and each of the two transitions labelled by t_3. From the initial marking of N^pat_str, for any process whose pomset abstraction includes both t_1 and t_3, these two labels are causally ordered, while, from the initial marking of N^pat_R, there is a process whose pomset abstraction includes both t_1 and t_3 but does not order them. We now generalize this result.
Let us consider the labelled reset Petri Net N 0 R of Figure 3 (middle right). It uses the pattern N pat R of Figure 3 in which t 1 and t 3 can be fired in different order infinitely often. In this net, the transitions with labels t 1 and t 3 are not in causal relation. Proposition 3. There is no finite safe labelled Petri net (i.e. without reset arc) which is pomset bisimilar to the labelled reset Petri net N 0 R . Proof. We simply remark that any finite safe labelled Petri net with no reset arcs which is bisimilar to N 0 R has a causal relation between two transitions labelled by t 1 and t 3 respectively (Lemma 2). From that, by Proposition 2, we get that any such labelled Petri net N which would be pomset bisimilar to N 0 R would have a process from its initial marking whose pomset abstraction is such that some occurrence of t 1 and some occurrence of t 3 are ordered, while this is never the case in the processes of N 0 R . This prevents N from being pomset bisimilar to N 0 R , and thus leads to a contradiction, proving the proposition. Lemma 2. Any safe labelled Petri net with no reset arcs which is bisimilar (see definition 6) to N 0 R has a causal relation between two transitions labelled by t 1 and t 3 respectively.
Proof. (Sketch) The firing of t 3 prevents the firing of t 2 ; then t 3 and t 2 are in conflict and share an input place which has to be marked again after the firing of t 1 . This place generates a causality between t 1 and t 3 .
In this section, we propose a notion of finite complete prefixes of unfolding of safe reset Petri nets preserving reachability of markings and pomset behaviour. As a consequence of the previous section, these finite complete prefixes do have reset arcs.
The unfolding of a Petri net is a particular branching process (generally infinite) representing all its reachable markings and ways to reach them. It also preserves concurrency.
Definition 18 (Unfolding of a Petri net). The unfolding of a net can be defined as the union of all its branching processes [START_REF] Esparza | Unfoldings -A Partial-Order Approach to Model Checking[END_REF] or equivalently its largest branching process (with respect to inclusion).
In the context of reset Petri nets, no notion of unfolding has been defined yet. Accordingly to our notion of processes for reset Petri nets and because of Proposition 4 below we propose Definition 19. In it and the rest of the paper, nets and labelled nets are identified (each transition is labelled by itself) and labellings of branching processes are induced by homomorphisms (as for pomset abstraction).
Definition 19 (Unfolding of a reset Petri net). Let N_R be a safe reset Petri net and N be its underlying Petri net. Let U be the unfolding of N. The unfolding of N_R is U_R, obtained by adding reset arcs to U according to (2) in Definition 10.
Proposition 4. Any safe (labelled) reset Petri net N_R and its unfolding U_R are pomset bisimilar.
Proof. (Sketch) This extends a result of [START_REF] Van Glabbeek | Petri net models for algebraic theories of concurrency[END_REF], stating that two Petri nets having the same unfolding (up to isomorphism) are pomset bisimilar (for a notion of bisimulation coinciding with ours in the absence of resets).
Petri nets unfolding is however impractical for studying Petri nets behaviour, as it is generally an infinite object. In practice, finite complete prefixes of it are preferred [START_REF] Mcmillan | Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits[END_REF][START_REF] Esparza | An improvement of McMillan's unfolding algorithm[END_REF].
Definition 20 (finite complete prefix, reachable marking preservation). A finite complete prefix of the unfolding of a safe Petri net N is a finite branching process (O, h) of N verifying the following property of reachable marking preservation: a marking M is reachable in N if and only if there exists a reachable marking M′ in O so that M = {h(b) : b ∈ M′}.
In this section, we propose an algorithm for construction of finite complete prefixes for safe reset Petri nets. For that, we assume the existence of a black-box algorithm for building finite complete prefixes of safe Petri nets (without reset arcs). Notice that such algorithms indeed do exist [START_REF] Mcmillan | Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits[END_REF][START_REF] Esparza | An improvement of McMillan's unfolding algorithm[END_REF].
Because of Proposition 3, we know that such finite prefixes should have reset arcs to preserve pomset behaviour. We first remark that directly adding reset arcs to finite complete prefixes of underlying nets would not work. Proposition 5. Let U be the unfolding of the underlying Petri net N of a safe reset Petri net N_R, and let F be one of its finite and complete prefixes. Let F′ be the object obtained by adding reset arcs to F according to (2) in Definition 10. The reachable marking preservation is in general not verified by F′ (with respect to N_R).
The proof of this proposition relies on the fact that some reachable markings of N_R are not represented in F′. This suggests that this prefix is not big enough. We however know an object that contains, for sure, every reachable marking of N_R along with a way to reach each of them: its structural transformation N_{R,str} (Definition 17). We thus propose to compute finite prefixes of reset Petri nets from their structural transformations: in the below algorithm, F_str is used to determine the deepness of the prefix (i.e. the length of the longest chain of causally ordered transitions).
Algorithm 1 (Finite complete prefix construction for reset Petri nets) Let N R be a safe reset Petri net, (step 1) compute the structural transformation N R,str of N R , (step 2) compute a finite complete prefix F str of N R,str , (step 3) compute a finite prefix F of U (the unfolding of the underlying net N ) that simulates F str (a labelled net N 2 simulates a labelled net N 1 if they verify Definition 6 except for condition 2.b.), (step 4) compute F R by adding reset arcs from N R to F according to (2) in Definition 10. The output of the algorithm is F R .
Applying this algorithm to the net N 0 R of Figure 3 (middle right) -using the algorithm from [START_REF] Esparza | An improvement of McMillan's unfolding algorithm[END_REF] at step 2 -leads to the reset Petri net F 0 R of Figure 3 (right).
Notice that the computation of F str -step 1 and 2 -can be done in exponential time and space with respect to the size of N R . The computation of F from F str (step 3) is linear in the size of F. And, the addition of reset arcs (step 4) is at most quadratic in the size of F.
We conclude this section by showing that Algorithm 1 actually builds finite complete prefixes of reset Petri nets. Proposition 6. The object F R obtained by Algorithm 1 from a safe reset Petri net N R is a finite and complete prefix of the unfolding of N R .
Proof. Notice that if N R is safe, then N R,str is safe as well. Thus F str is finite by definition of finite complete prefixes of Petri nets (without reset arcs). F str is finite and has no node in causal relation with itself (i.e. no cycle), hence any net bisimilar with it is also finite, this is in particular the case of F. Adding reset arcs to a finite object does not break its finiteness, so F R is finite.
Moreover, F str is complete by definition of finite complete prefixes of Petri nets (without reset arcs). As F simulates F str it must also be complete (it can only do more). The reset arcs addition removes semantically to F only the unexpected sequences (i.e. the sequence which are possible in F but not in F str ). Therefore, F R is complete.
Our contribution in this paper is three-fold. First, we proposed a notion of pomset bisimulation for reset Petri nets. This notion is, in particular, inspired from a similar notion that has been defined for Petri nets (without reset arcs) in [START_REF] Best | Concurrent bisimulations in Petri nets[END_REF]. Second, we have shown that it is not possible to remove reset arcs from safe reset Petri nets while preserving their behaviours with respect to this pomset bisimulation. And, third, we proposed a notion of finite complete prefixes of unfolding of safe reset Petri nets that allows for reachability analysis while preserving pomset behaviour. As a consequence of the two other contributions, these finite complete prefixes do have reset arcs.
01766650 | en | [ "math.math-pr", "math.math-fa", "math.math-sp" ] | 2024/03/05 22:32:13 | 2020 | https://hal.science/hal-01766650/file/GH.pdf | Nathaël Gozlan
Ronan Herry
MULTIPLE SETS EXPONENTIAL CONCENTRATION AND HIGHER ORDER EIGENVALUES
Introduction
Let (M, g) be a smooth compact connected Riemannian manifold with its normalized volume measure µ and its geodesic distance d. The Laplace-Beltrami operator ∆ is then a non-positive operator whose spectrum is discrete. Let us denote by λ^{(k)}, k = 0, 1, 2, . . ., the eigenvalues of −∆ written in increasing order. With these notations, λ^{(0)} = 0 (achieved for constant functions) and (by connectedness) λ^{(1)} > 0 is the so-called spectral gap of M.
The study of the spectral gap of Riemannian manifolds is, by now, a very classical topic which has found important connections with numerous geometrical and analytical questions and properties. The spectral gap constant λ (1) is for instance related to Poincaré type inequalities and governs the speed of convergence of the heat flow to equilibrium. It is also related to Ricci curvature via the classical Lichnerowicz theorem [START_REF] Lichnerowicz | Géométrie des groupes de transformations[END_REF] and to Cheeger isoperimetric constant via Buser's theorem [START_REF] Buser | A note on the isoperimetric constant[END_REF]. We refer to [START_REF] Bakry | Analysis and geometry of Markov diffusion operators[END_REF][START_REF] Chavel | Eigenvalues in Riemannian geometry[END_REF] and the references therein for a complete picture.
Another important property of the spectral gap constant, first observed by Gromov and Milman [START_REF] Gromov | A topological application of the isoperimetric inequality[END_REF], is that it controls the exponential concentration of measure phenomenon for the reference measure µ. The result states as follows. Define, for all Borel sets A ⊂ M, the r-enlargement A_r of A as the (open) set of all x ∈ M such that there exists y ∈ A with d(x, y) < r. Then, for any A ⊂ M such that µ(A) ≥ 1/2 it holds
µ(A_r) ≥ 1 − b e^{−a √(λ^{(1)}) r}, ∀r > 0,
where a, b > 0 are some universal constants (according to [START_REF] Ledoux | The concentration of measure phenomenon[END_REF], Theorem 3.1, one can take b = 1 and a = 1/3). Note that this implication is very general and holds on any metric space supporting a Poincaré inequality (see [START_REF] Ledoux | The concentration of measure phenomenon[END_REF], Corollary 3.2). See also [6,[START_REF] Schmuckenschläger | Martingales, Poincaré type inequalities, and deviation inequalities[END_REF][START_REF] Aida | Moment estimates derived from Poincaré and logarithmic Sobolev inequalities[END_REF][START_REF] Nathael Gozlan | From dimension free concentration to the Poincaré inequality[END_REF] for alternative derivations, generalizations or refinements of this result. This note is devoted to a multiple sets extension of the above result. Roughly speaking, we will see that if A_1, . . . , A_k are sets which are pairwise separated, in the sense that d(A_i, A_j) := inf{d(x, y) : x ∈ A_i, y ∈ A_j} > 0 for any i ≠ j, and A is their union, then the probability of A_r goes exponentially fast to 1 at a rate given by √(λ^{(k)}) as soon as r is such that the sets A_{i,r}, i = 1, . . . , k, remain separated. More precisely, it follows from Theorem 1.1 (whose setting is actually more general) that, if A_1, . . . , A_k are such that µ(A_i) ≥ 1/(k+1) and d(A_{i,r}, A_{j,r}) > 0 for all i ≠ j, then, denoting A = A_1 ∪ . . . ∪ A_k, it holds
(0.1) µ(A_r) ≥ 1 − (1/(k+1)) exp(−c min(r² λ^{(k)} ; r √(λ^{(k)}))),
for some universal constant c. This kind of probability estimate first appeared, in a slightly different but essentially equivalent formulation, in the work of Chung, Grigor'yan and Yau [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF][START_REF] Chung | Eigenvalues and diameters for manifolds and graphs[END_REF] (see also the related paper [START_REF] Friedman | Laplacian eigenvalues and distances between subsets of a manifold[END_REF] by Friedman and Tillich). Nevertheless, the method of proof we use to arrive at (0.1) (based on the Courant-Fischer min-max formula for the λ^{(k)}'s) is quite different from the one of [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF][START_REF] Chung | Eigenvalues and diameters for manifolds and graphs[END_REF] and seems more elementary and general. This is discussed in detail in Section 1.5. The paper is organized as follows. In Section 1, we prove (0.1) in an abstract metric space framework. This framework contains, in particular, the compact Riemannian case equipped with the Laplace operator presented above. Section 1.5 contains a detailed comparison of our result with the one of Chung, Grigor'yan & Yau. In Section 2, we recall various bounds on eigenvalues on several non-negatively curved manifolds. Section 3 gives an extension of (0.1) to discrete Markov chains on graphs. In Section 4, we give a functional formulation of the results of Sections 1 and 3. As a corollary of this functional formulation, we obtain a deviation inequality as well as an estimate for the difference of two Lipschitz extensions of a Lipschitz function given on k subsets. Finally, Section 5 discusses open questions related to this type of concentration of measure phenomenon.
Multiple sets exponential concentration in abstract spaces
1.1. Courant-Fischer formula and generalized eigenvalues in metric spaces. Let us recall the classical Courant-Fischer min-max formula for the k-th eigenvalue (k ∈ N) of -∆, noted λ (k) , on a compact Riemannian manifold (M, g) equipped with its (normalized) volume measure µ:
(1.1) λ^{(k)} = inf_{V ⊂ C^∞(M), dim V = k+1} sup_{f ∈ V \ {0}} (∫ |∇f|² dµ) / (∫ f² dµ),
where ∇f is the Riemannian gradient, defined through the Riemannian metric g (see e.g. [START_REF] Chavel | Eigenvalues in Riemannian geometry[END_REF]) and |∇f|² = g(∇f, ∇f). The formula (1.1) above does not explicitly refer to the differential operator ∆. It can therefore be easily generalized to a more abstract setting, as we shall see below.
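A discrete analogue makes these min-max values concrete: for a graph Laplacian (the setting of Section 3), the λ^{(k)}'s are simply the ordered eigenvalues of a symmetric matrix. The sketch below is ours (a cycle graph chosen purely for illustration) and exhibits λ^{(0)} = 0 followed by the spectral gap:

```python
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n):                       # adjacency of the n-cycle
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian
eig = np.sort(np.linalg.eigvalsh(L))
print(eig[:4])                           # [0., 0.586, 0.586, 2.]: lambda^(0)=0, then the gap
```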
In all what follows, (E, d) is a complete, separable metric space and µ a reference Borel probability measure on E. Following [START_REF] Cheeger | Differentiability of Lipschitz functions on metric measure spaces[END_REF], for any function f : E → R and x ∈ E, we denote by |∇f |(x) the local Lipschitz constant of f at x, defined by
|∇f |(x) = 0 if x is isolated lim sup y→x |f (x)-f (y)| d(x,y)
otherwise.
Note that when E is a smooth Riemannian manifold, equipped with its geodesic distance d, then, the local Lipschitz constant of a differentiable function f at x coincides with the norm of ∇f (x) in the tangent space T x E. With this notion in hand, a natural generalization of (1.1) is as follows (we follow [23, Definition 3.1]):
(1.2) λ (k) d,µ := inf V ⊂H 1 (µ) dim V =k+1 sup f ∈V \{0} ´|∇f | 2 dµ ´f 2 dµ , k ≥ 0,
where H 1 (µ) denotes the space of functions f ∈ L 2 (µ) such that ´|∇f | 2 dµ < +∞. In order to avoid heavy notations, we drop the subscript and we simply write
λ (k) instead of λ (k)
d,µ within this section.
a i + k j=1 a j ≥ 1, ∀i ∈ {1, . . . , k}.
Recall the classical notation
d(A, B) = inf{d(x, y) : x ∈ A, y ∈ B} of the distance between two sets A, B ⊂ E.
The following theorem is the main result of the paper and is proved in Section 1.3.
Theorem 1.1.
There exists a universal constant c > 0 such that, for any k ≥ 1 and for all sets
A 1 , . . . , A k ⊂ E such that min i =j d(A i , A j ) > 0 and (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k , the set A = A 1 ∪ A 2 ∪ • • • ∪ A k satisfies µ(A r ) ≥ 1 -(1 -µ(A)) exp -c min(r 2 λ (k) ; r λ (k) ) , for all 0 < r ≤ 1 2 min i =j d(A i , A j )
, where λ (k) ≥ 0 is defined by (1.2). Note that, since (1/(k + 1), . . . , 1/(k + 1)) ∈ ∆ k , Theorem 1.1 immediately implies Inequality (0.1).
Inverting our concentration estimate, we obtain the following statement that provides a bound on the λ (k) 's. (E,d,µ) be a metric measured space and λ (k) be defined as in (1.2). Let A 1 , . . . , A k be measurable sets such that (µ(A 1 ), . .
Proposition 1.2. Let
. , µ(A
k )) ∈ ∆ k , then, with r = 1 2 min i =j d(A i , A j ) and A 0 = E \ (∪A i ) r , λ (k) ≤ 1 r 2 ψ 1 c min i ln µ(A i ) µ(A 0 ) ,
where ψ(x) = max(x, x 2 ).
Proof. Let A = ∪ i A i . Inverting the formula in Theorem 1.1, we obtain
λ (k) ≤ 1 r 2 ψ 1 c ln 1 -µ(A) 1 -µ(A r ) , where ψ(x) = max(x, x 2 ). By definition of ∆ k , 1 -µ(A) = 1 - i µ(A i ) ≤ min i µ(A i ).
Therefore, letting A 0 = E \ A r , we obtain the announced inequality by non-decreasing monotonicity of ψ and ln.
The collection of sets ∆ k , k ≥ 1 has the following useful stability property:
Lemma 1.3. Let I 1 , I 2 , . . . , I n be a partition of {1, . . . , k}, k ≥ 1. Let a = (a 1 , . . . , a k ) ∈ R k and define b = (b 1 , . . . , b n ) ∈ R n by setting b i = j∈I i a j , i ∈ {1, . . . , n}. If a ∈ ∆ k then b ∈ ∆ n .
Proof. The proof is obvious and left to the reader.
Thanks to this lemma it is possible to iterate Theorem 1.1 and to obtain a general bound for µ(A r ) for all values of r > 0. This bound will depend on the way the sets A 1,r , . . . , A k,r coalesce as r increases. This is made precise in the following definition.
Definition 1.1 (Coalescence graph of a family of sets). Let A 1 , . . . , A k be subsets of E. The coalescence graph of this family of sets is the family of graphs G r = (V, E r ), r > 0, where V = {1, 2, . . . , k} and the set of edges E r is defined as follows: {i, j}
∈ E r if d(A i,r , A j,r ) = 0. Corollary 1.4. Let A 1 , . . . , A k be subsets of E such that min i =j d(A i , A j ) > 0 and (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k .
For any r > 0, let N (r) be the number of connected components in the coalescence graph G r associated to A 1 , . . . , A k . The function (0, ∞) → {1, . . . , k} : r → N (r) is non-increasing and right-continuous. Define r i = sup{r > 0 : N (r) ≥ k -i + 1}, i = 1, . . . , k and r 0 = 0 then it holds
(1.3) µ(A r ) ≥ 1 -(1 -µ(A)) exp -c k i=1 φ [r ∧ r i -r i-1 ] + λ (k-i+1) , ∀r > 0,
where φ(x) = min(x; x 2 ), x ≥ 0 and c is the universal constant appearing in Theorem 1.1.
Observe that, contrary to usual concentration results, the bound given above depends on the geometry of the set A.
µ(A r ) ≥ 1 -(1 -µ(A)) exp -cφ(r λ (k) ) , for all 0 < r ≤ 1 2 min i =j d(A i , A j ). Let k 1 = N ( 1 2 min i =j d(A i , A j )) and let i 1 = k -k 1 .
Then, for all i ∈ {1, . . . , i 1 }, r i = 1 2 min i =j d(A i , A j ). So that, for all 0 < r ≤ r i 1 , the preceding bound can be rewritten as follows (note that only the term of index i = 1 gives a non zero contribution)
µ(A r ) ≥ 1 -(1 -µ(A)) exp -c i 1 i=1 φ [r ∧ r i -r i-1 ] + λ (k-i+1) = 1 -(1 -µ(A)) exp -c k i=1 φ [r ∧ r i -r i-1 ] + λ (k-i+1) (1.4)
which shows that (1.3) is true for 0 < r ≤ r i 1 . Now let I 1 , . . . , I k 1 be the connected components of G r 1 and define, for all i ∈ {1, . . . , k 1 },
B i = ∪ j∈I i A j,r 1 . It follows eas- ily from Lemma 1.3 that (µ(B 1 ), . . . , µ(B k 1 )) ∈ ∆ k 1 . Since min i =j d(B i , B j ) > 0, the induction hypothesis implies that µ(B s ) ≥ 1 -(1 -µ(B)) exp -c k 1 i=1 φ [s ∧ s i -s i-1 ] + λ (k 1 -i+1) , ∀s > 0,
where
B = B 1 ∪ • • • ∪ B k 1 = A r 1 and s i = sup{s > 0 : N ′ (s) ≥ k 1 -i + 1}, i ∈ {1, . . . , k 1 } (s 0 = 0) with N ′ (s) the number of connected components in the graph G ′ s associated to B 1 , . . . , B k 1 . It is easily seen that r i 1 +i = r i 1 + s i , for all i ∈ {0, 1 . . . , k 1 }. Therefore, we have that, for r > r i 1 , µ(A r ) ≥ µ(B r-r i 1 ) ≥ 1 -(1 -µ(A r i 1 )) exp -c k i=i 1 +1 φ [r ∧ r i -r i-1 ] + λ (k-i+1) ≥ 1 -(1 -µ(A)) exp -c k i=1 φ [r ∧ r i -r i-1 ] + λ (k-i+1) ,
where the last line is true by (1.4).
To prove Theorem 1.1, we need some preparatory lemmas. Given a subset A ⊂ E, and x ∈ E, the minimal distance from x to A is denoted by
d(x, A) = inf y∈A d(x, y). Lemma 1.5. Let A ⊂ E and ǫ > 0, then (E \ A ǫ ) ǫ ⊂ E \ A. Proof. Let x ∈ (E \ A ǫ ) ǫ . Then, there exists y ∈ E \ A ǫ (in particular d(y, A) ≥ ǫ) such that d(x, y) < ǫ. Since the function z → d(z, A) is 1-Lipschitz, one has d(x, A) ≥ d(y, A) -d(x, y) > 0 and so x ∈ E \ A. Remark 1. In fact, we proved that (E \ A ǫ ) ǫ ⊂ E \ Ā. The converse is, in general, not true. Lemma 1.6. Let A 1 , . . . , A k be a family of sets such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k and r := 1 2 min i =j d(A i , A j ) > 0. Let 0 < ǫ ≤ r and set A = ∪ 1≤i≤k A i and A 0 = E \ (A ǫ ). Then, (1.5) max i=0,...,k µ(A i,ǫ ) µ(A i ) ≤ 1 -µ(A) 1 -µ(A ǫ ) .
Proof. First, this is true for i = 0. Indeed, by definition A 0 = E \ (A ǫ ) and, according to Lemma 1.5, (A 0 ) ǫ ⊂ A c (the equality is not always true), which proves (1.5) in this case. Now, let us show (1.5) for the other values of i. Since ǫ ≤ r, the A j,ǫ 's are disjoint sets. Thence, (1.5) is equivalent to
1 - k j=1 µ(A j,ǫ ) µ(A i,ǫ ) ≤ 1 - k j=1 µ(A j ) µ(A i ).
This inequality is true as soon as
(1 -µ(A i,ǫ ) -m i ) µ(A i,ǫ ) ≤ (1 -µ(A i ) -m i ) µ(A i ), denoting m i = k j =i µ(A j ). The function f i (u) = (1 -u -m i )u, u ∈ [0, 1], is decreasing on the interval [(1 -m i )/2, 1]. We conclude from this that (1.5) is true for all i ∈ {1, . . . , k}, as soon as µ(A i ) ≥ (1 -m i )/2 for all i ∈ {1, . . . , k} which amounts to (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k . For p > 1, we define the function χ p : [0, ∞[→ [0, 1] by χ p (x) = (1 -x p ) p , for x ∈ [0, 1] and χ p (x) = 0 for x > 1. It is easily seen that χ p (0) = 1, χ ′ p (0) = χ p (1) = χ ′ p (1) = 0, that χ p takes values in [0, 1]
and that χ p is continuously differentiable on [0, ∞[. We use the function χ p to construct smooth approximations of indicator functions on E, as explained in the next statement.
Lemma 1.7. Let A ⊂ E and consider the function
f (x) = χ p (d(x, A)/ǫ), x ∈ E, where ǫ > 0 and p > 1. For all x ∈ E, it holds |∇f |(x) ≤ p 2 ǫ -1 1 Aǫ\A
Proof. Thanks to the chain rule for the local Lipschitz constant (see e.g. [2, Proposition 2.1]),
∇χ p d(•, A) ǫ (x) ≤ ǫ -1 χ ′ p d(•, A) ǫ |∇d(•, A)|(x).
The function d(•, A) being Lipschitz, its local Lipschitz constant is ≤ 1 and, thereby,
|∇f |(x) ≤ χ ′ p d(x, A) ǫ .
In particular, thanks to the aforementioned properties of χ, |∇f | vanishes on A (and even on A) and on {x ∈ E : d(x, A) ≥ ǫ} = E \ A ǫ . On the other hand, a simple calculation shows that |χ ′ p | ≤ p 2 which proves the claim.
Proof of Theorem 1.1. Take Borel sets A 1 , . . . , A k with 1 2 min i =j d(A i , A j ) ≥ r > 0 and (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k and consider A = A 1 ∪ • • • ∪ A k . Let us show that, for any 0 < ǫ ≤ r, it holds (1.6) 1 + λ (k) ǫ 2 (1 -µ(A ǫ )) ≤ (1 -µ(A)). Let A 0 = E \ (A ǫ ) and set f i (x) = χ p (d(x, A i )/ǫ), x ∈ E, i ∈ {0, . . . , k}, where p > 1.
According to Lemma 1.7 and the fact that
f i = 1 on A i , we obtain (1.7) ˆ|∇f i | 2 dµ = p 4 ǫ 2 µ(A i,ǫ \ A i ) and ˆf 2 i dµ ≥ µ(A i ).
Since the f i 's have disjoint supports they are orthogonal in L 2 (µ) and, in particular, they span a k + 1 dimensional subspace of H 1 (µ). Thus, by definition of λ (k) ,
λ (k) ≤ sup a∈R k+1 ´|∇ k i=0 a i f i | 2 dµ ´ k i=0 a i f i 2 dµ ≤ sup a∈R k+1 ´ k i=0 |a i ||∇f i | 2 dµ ´ k i=0 a i f i 2 dµ ,
where the second inequality comes from the following easy to check sub-linearity property of the local Lipschitz constant:
|∇ (af + bg) | ≤ |a||∇f | + |b||∇g|.
Since the f ′ i s and the |∇f i | ′ s are two orthogonal families, we conclude using (1.7), that
λ (k) ǫ 2 p 4 ≤ sup a∈R k+1 k i=0 a 2 i (µ(A i,ǫ ) -µ(A i )) k i=0 a 2 i µ(A i )
, which amounts to
(1.8) 1 + λ (k) ǫ 2 p 4 ≤ max i=0,...,k µ(A i,ǫ ) µ(A i ) .
Applying Lemma 1.6 and sending p to 1 gives (1.6). Now, if n ∈ N and 0 < ǫ are such that nǫ ≤ r, then iterating (1.6) immediately gives
1 + λ (k) ǫ 2 n (1 -µ(A nǫ )) ≤ 1 -µ(A).
Optimizing this bound over n for a fixed ε gives
(1 -µ(A r )) ≤ (1 -µ(A)) exp -sup ⌊r/ǫ⌋ log 1 + λ (k) ǫ 2 : ǫ ≤ r .
Thus, letting
(1.9) Ψ(x) = sup ⌊t⌋ log 1 + x t 2 : t ≥ 1 , x ≥ 0, it holds (1 -µ(A r )) ≤ (1 -µ(A)) exp -Ψ λ (k) r 2 .
Using Lemma 1.8 below, we deduce that Ψ λ (k) r 2 ≥ c min(r 2 λ (k) ; r √ λ (k) ), with c = log(5)/4, which completes the proof.
Lemma 1.8. The function Ψ defined by (1.9) satisfies
Ψ(x) ≥ log(5) 4 min(x; √ x), ∀x ≥ 0.
Proof. Taking t = 1, one concludes that Ψ(x) ≥ log(1 + x), for all x ≥ 0. The function x → log(1 + x) being concave, the function x → log(1+x)
x is non-increasing. Therefore, log(1 + x) ≥ log (5) 4 x for all x ∈ [0, 4]. Now, let us consider the case where x ≥ 4. Observe that ⌊t⌋ ≥ t/2 for all t ≥ 1 and so, for x ≥ 4,
Ψ(x) ≥ 1 2 sup t≥1 t log 1 + x t 2 ≥ log(5) 4 √ x, by choosing t = √ x/2 ≥ 1. Thereby, Ψ(x) ≥ log(5) 4 x1 [0,4] (x) + √ x1 [4,∞) (x) ≥ log(5) 4 min(x; √ x),
which completes the proof.
Remark 2. The conclusion of Lemma Lemma 1.8 can be improved. Namely, it can be shown that
Ψ(x) = max 1 + ⌊ √ x a ⌋ log 1 + x 1 + ⌊ √ x a ⌋ 2 ; ⌊ √ x a ⌋ log 1 + x ⌊ √ x a ⌋ 2 ,
(the second term in the maximum being treated as 0 when √ x < a) where 0 < a < 2 is the unique point where the function (0, ∞) → R : u → log(1 + u 2 )/u achieves its supremum. Therefore,
Ψ(x) ∼ log(1 + a 2 ) a √ x
when x → ∞. The reader can easily check that log(1+a 2 ) a ≃ 0.8. In particular, it does not seem possible to reach the constant c = 1 in Theorem 1.1 using this method of proof. 1.4. Two more multi-set concentration bounds. The condition (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k can be seen as the multi-set generalization of the condition, standard in concentration of measure, that the size of the enlarged set has to be bigger than 1/2. Indeed, the reader can easily verify that ( 1 k+1 , . . . , 1 k+1 ) ∈ ∆ k . However, in practice, this condition can be difficult to check. We provide two more multi-set concentration inequalities that hold in full generality. The method of proof is the same as for Theorem 1.1 and is based on (1.8). Proposition 1.9. Let (E, d, µ) be a metric measured space and λ (k) be defined as in (1.2). Let (A 1 , . . . , A k ) be k Borel sets, A = ∪ i A i and A 0 = E \ A r . Then, with a (1) = min 1≤i≤k µ(A i ), the following two bounds hold:
1 -µ(A r ) ≤ (1 -µ(A)) 1 k i=1 µ(A i ) exp -c min r 2 λ (k) , r λ (k) ; 1 -µ(A r ) ≤ (1 -µ(A)) 1 µ(A) µ(A)/a (1) exp -c min r 2 λ (k) , r λ (k) .
Proof. Fix N ∈ N and ǫ > 0 such that N ǫ ≤ r. For i = 1, . . . , k and n ≤ N , we define
α i (n) = µ(A i,nǫ ) µ(A i,(n-1)ǫ )
;
M n = max 1≤i≤k α i (n) ∨ 1 -µ(A (n-1)ǫ ) 1 -µ(A nǫ ) ; L n = {i ∈ {1, . . . , k}|M n = α i (n)}; N i = ♯{n ∈ {1, . . . , N }|i = inf L n }; N 0 = N - k i=1 N i .
Roughly speaking, the number N i (0 ≤ i ≤ k) counts the number of time where the set A i growths in iterating (1.8). Lemma 1.6 asserts that in the case where (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k , then N 0 = N . However, we still obtain from (1.8), for 1
≤ i ≤ k, (1.10) 1 µ(A i ) ≥ N n=1 α i (n) ≥ 1 + λ (k) ǫ 2 N i .
The first inequality is true because µ(A i,N ǫ ) ≤ 1 and a telescoping argument. The second inequality is true because, as n ranges from 1 to N , by definition of the number N i and (1.8), there are, at least N i terms appearing in the product that can be bounded by (1 + λ (k) ǫ 2 ). The other terms are bounded above by 1. The case of i = 0 is handled in a similar fashion and we obtain:
1 -µ(A N ǫ ) ≤ (1 -µ(A)) 1 + λ (k) ǫ 2 -N 0 = (1 -µ(A)) 1 + λ (k) ǫ 2 -N k i=1 1 + λ (k) ǫ 2 N i .
(1.11)
The announced bounds will be obtain by bounding the product appearing in the righthand side and an argument similar to the end of the proof of Theorem 1.1. From (1.10), we have that,
(1.12)
k i=1 1 + λ (k) ǫ 2 N i ≤ 1 k i=1 µ(A i )
.
Also, from (1.10),
µ(A i,N ǫ ) ≥ 1 + λ (k) ǫ 2 N i µ(A i ).
Because N ǫ ≤ r, the sets A 1,N ǫ , . . . , A k,N ǫ are pairwise disjoint and, thereby,
1 ≥ µ(A i,N ǫ ) ≥ k i=1 1 + λ (k) ǫ 2 N i µ(A i ).
Fix θ > 0 to be chosen later. By convexity of exp,
1 + (1 -µ(A)) 1 + λ (k) ǫ 2 θ ≥ exp k i=1 µ(A i )N i + (1 -µ(A))θ log 1 + λ (k) ǫ 2 ≥ exp a (1) k i=1 N i + (1 -µ(A))θ log 1 + λ (k) ǫ 2 .
Finally, with p = 1 -µ(A) and t = θ log(1 + λ (k) ǫ 2 ), we obtain
k i=1 1 + λ (k) ǫ 2 N i ≤ e -pt
+p e (1-p)t 1/a (1) .
We easily check that, the quantity in the right-hand side is minimal for t = log 1 1-p at which it takes the value (1 -p) p-1 = µ(A) -µ(A)/a (1) . Thus, (1.13) (1) .
k i=1 (1 + λ (k) ǫ 2 ) N i ≤ 1 µ(A) µ(A)/a
Combining (1.12) and (1.13) with (1.11) and the same argument as for (1.9), we obtain the two announced bounds.
From Proposition 1.9, we can derive bounds on the λ (k) 's. The proof is the same as the one of Proposition 1.2 and is omitted. Proposition 1.10. Let (E, d, µ) be a metric measured space and λ (k) be defined as in (1.2). Let A 1 , . . . , A k be measurable sets, then, with r = 1 2 min i =j d(A i , A j ) and
A 0 = E \ (∪A i ) r , λ (k) ≤ 1 r 2 ψ 1 c ln a (1) µ(A 0 ) + 1 c k ln 1 a (1) ; λ (k) ≤ 1 r 2 ψ 1 c ln a (1) µ(A 0 ) + 1 c µ(A) a (1) ln 1 µ(A) ,
where ψ(x) = max(x, x 2 ) and a (1) = min 1≤i≤k µ(A i ).
1.5. Comparison with the result of Chung-Grigor'yan-Yau. In [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF], the authors obtained the following result:
Theorem 1.11 (Chung-Grigoryan-Yau [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF]). Let M be a compact connected smooth Riemannian manifold equipped with its geodesic distance d and normalized Riemannian volume µ. For any k ≥ 1 and any family of sets A 0 , . . . , A k , it holds
(1.14) λ (k) ≤ 1 min i =j d 2 (A i , A j ) max i =j log ( 4 µ(A i )µ(A j ) ) 2
,
where 1 = λ (0) ≤ λ (1) ≤ • • • λ (k) ≤ • • • denotes the discrete spectrum of -∆.
Let us translate this result in terms of concentration of measure. Let A 1 , . . . , A k be sets such that r = 1 2 min 1≤i<j≤k d(A i , A j ) > 0 and define , so that (1.15) is equivalent to the following statement:
A = A 1 ∪ • • • ∪ A k and A 0 = M \A s , for some 0 < s ≤ r. Then,
(1.16) µ(A s ) ≥ 1 - 4 a (1) exp(-λ (k) s), ∀s ∈ [min(s o , r); r].
We note that (1.16) holds for any family of sets, whereas the inequality given in Theorem 1.1 is only true when (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k . Also due to the fact that the constant c appearing in Theorem 1.1 is less than 1, (1.16) is asymptotically better than ours (see also Remark 2 above). On the other hand, one sees that (1.16) is only valid for s large enough (and its domain of validity can thus be empty when s o > r) whereas our inequality is true on the whole interval (0, r]. It does not seem also possible to iterate (1.16) as we did in Corollary 1.4. Finally, observe that the method of proof used in [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF] and [START_REF] Chung | Eigenvalues and diameters for manifolds and graphs[END_REF] is based on heat kernel bounds and is very different from ours. Let us translate Theorem 1.11 in a form closer to our Proposition 1.2. Fix k sets A 1 , . . . , A k such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k . Let 2r = min d(A i , A j ), where the infimum runs on i, j = 1, . . . , k with i = j. We have to choose a (k + 1)-th set. In view of Theorem 1.11, the most optimal choice is to choose A 0 = E \ (∪A i ) r . Indeed, it is the biggest set (in the sense of inclusion) such that min d(A i , A j ) = r where this time the infimum runs on i, j = 0, . . . , k and i = j. We let a (0) = µ(A 0 ) and we remark that if (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k then a (0) ≤ a [START_REF] Aida | Moment estimates derived from Poincaré and logarithmic Sobolev inequalities[END_REF] . The bound (1.14) can be read: for all r > 0,
λ (k) ≤ 1 r 2 log 4 a (1) a (0) 2 .
Therefore, to compare it to our bound, we need to solve
φ -1 1 c log a (1) a (0) 2 ≤ log 4 a (1) a (0) 2 .
Because the right-hand side is always ≥ 1, taking the square root and composing with the non-decreasing function φ yields
1 c log a (1) a (0) ≤ log 4 a (1) a (0)
.
That is a 1+c
(1) ≤ 4 c a 1-c (0) . In other words, on some range our bound is better and in some other range their bound is better. However, if the constant c = 1 could be attained in Theorem 1.1, this would show that our bound is always better. Note that comparing the bounds obtained in Proposition 1.10 and the one of [START_REF] Chung | Upper bounds for eigenvalues of the discrete and continuous Laplace operators[END_REF] is not so clear as, without the assumption that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k it is not necessary that a (0) ≤ a [START_REF] Aida | Moment estimates derived from Poincaré and logarithmic Sobolev inequalities[END_REF] and in that case we would have to compare different sets.
Eigenvalue estimates for non-negatively curved spaces
We recall the values of the λ (k) 's that appear in Theorem 1.1 in the case of two important models of positively curved spaces in geometry. Namely:
(i) The n-dimensional sphere of radius n-1 ρ , S n,ρ endowed with the natural geodesic distance d n,ρ arising from its canonical Riemannian metric and its normalized volume measure µ n,ρ which has constant Ricci curvature equals to ρ and dimension n.
(ii) The n-dimensional Euclidean space R n endowed with the n-dimensional Gaussian measure of covariance ρ -1 Id,
γ n,ρ (dx) = ρ n/2 e -ρ|x| 2 /2 (2π) n/2 dx.
This space has dimension ∞ and curvature bounded below by ρ in the sense of [START_REF] Bakry | Diffusions hypercontractives[END_REF]. These models arise as weighted Riemannian manifolds without boundary having a purely discrete spectrum. In that case, it was proved in [START_REF] Milman | Spectral Estimates, Contractions and Hypercontractivity[END_REF]Proposition 3.2] that the λ k 's of (1.2) are exactly the eigenvalues (counted with multiplicity) of a self-adjoint operator that we give explicitly in the following. Using a comparison between eigenvalues of [START_REF] Milman | Spectral Estimates, Contractions and Hypercontractivity[END_REF], we obtain an estimates for eigenvalues in the case of log-concave probability measure over the Euclidean R n .
Example 1 (Spheres). On S n,ρ , the eigenvalues of minus the Laplace-Beltrami operator (see for instance [START_REF] Atkinson | Spherical harmonics and approximations on the unit sphere: an introduction[END_REF]Chapter 3]) are of the form ρ -2 (n -1) 2 l(l + n -1) for l ∈ N and the dimension of the corresponding eigenspace
H l,n is dim H l,n = 2l + n -1 l l + n -2 l -1 , if l > 0and dim H l,n = 1, if l = 0.
Consequently,
D l,n := dim l l ′ =0 H l ′ ,n = n + l l + n + l -1 l -1 ,
and
λ (k) = ρ -2 (n -1) 2 l(l + n -1) if and only if D l-1,n < k ≤ D l,n where λ (k)
is the k-th eigenvalues of -∆ S n,ρ and coincides with the variational definition given in (1.2).
Example 2 (Gaussian spaces). On the Euclidean space R n , equipped with the Gaussian measure γ n,ρ , the corresponding weighted Laplacian is ∆ γn,ρ = ∆ R n -ρx • ∇. The eigenvalues of -∆ γn,ρ are exactly of the form ρ 2 q and the dimension of the associated eigenspace H q,n is dim H q,n = n + q -1 q .
Consequently,
D q,n := dim q q ′ =0 H q ′ ,n = n + q q ,
and λ (k) = ρ -2 q if and only if D q-1,n < k ≤ D q,n where λ (k) is the k-th eigenvalues of -∆ γn,ρ and coincides with the variational definition given in (1.2).
Example 3 (Log-concave Euclidean spaces). We study the case where E = R n , d is the Euclidean distance and µ is a strictly log-concave probability measure. By this we mean that µ(dx) = e -V (x) dx, where Proposition 4] that such a condition on V implies that the semigroup generated by the solution of the stochastic differential equation dX t = √ 2dB t -∇V (X t )dt, where B is a Brownian motion on R n , satisfies the curvature-dimension CD(∞, K) of Bakry-Emery and, therefore, holds the log-Sobolev inequality, for all
V : R n → R such that V is C 2 and satisfying ∇ 2 V ≥ K for some K > 0. It is a consequence of [4,
f ∈ C ∞ c (R n ), Ent µ f 2 ≤ 2 K ˆ|∇f (x)| 2 µ(dx).
Such an inequality implies the super-Poincaré of [27, Theorem 2.1] that in turns implies that the self-adjoint operator L = -∆ + ∇V • ∇ has a purely discrete spectrum. In that case, the λ (k) of (1.2) corresponds to these eigenvalues and [START_REF] Milman | Spectral Estimates, Contractions and Hypercontractivity[END_REF] showed that
λ (k) ≥ λ (k) γn,ρ , where λ (k)
γn,ρ is the eigenvalues of -∆ γn,ρ of the previous example.
Extension to Markov chains
As in the classical case (see [START_REF] Ledoux | The concentration of measure phenomenon[END_REF]Theorem 3.3]), our continuous result admits a generalization on finite graphs or more broadly in the setting of Markov chains on a finite state space. We consider a finite set E and X = (X n ) n∈N be a irreducible time-homogeneous Markov chain with state space E. We write p(x, y) = P(X 1 = y|X 0 = x) and we regard p as a matrix. We assume that p admits a reversible probability measure µ on E : p(x, y)µ(x) = p(y, x)µ(y) for all x, y ∈ E (which implies in particular that µ is invariant). The Markov kernel p induces a graph structure on E by the following procedure. Set the elements of E as the vertex of the graph and for x, y ∈ E connect them with an edge if p(x, y) > 0. As the chain is irreducible, this graph is connected. We equip E with the induced graph distance d. We write L = p -I, where I stands for the identity matrix. The operator -L is a symmetric positive operator on L 2 (µ). We let λ (k) be the eigenvalues of this operator. Then, our Theorem 1.1 extends as follows:
Theorem 3.1. For any k ≥ 1 and for all sets A 1 , . . . , A k ⊂ E such that min i =j d(A i , A j ) ≥ 1 and (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k the set B = A 1 ∪ A 2 ∪ • • • ∪ A k satisfies µ(B n ) ≥ 1 -(1 -µ(B)) 1 + λ (k) -n , for all 1 ≤ n ≤ 1 2 min i =j d(A i , A j ) where λ (k)
is the k-th eigenvalue of the operator -L acting on L 2 (µ).
Proof. We let Π(x, y) = p(x, y)µ(x) and
E (f, g) = 1 2 (f (y) -f (x))(g(y) -g(x))Π(x, y) = f, -Lg µ .
For any set A, we define the discrete boundary of A as
∂A = A 1 \ A ∪ (A C ) 1 \ A C .
Let (X n ) be the Markov chain with transition kernel p and initial distribution µ. By reversibility of µ, (X 0 , X 1 ) is an exchangeable pair of law Π whose the marginals are given by µ. Then, for a set U , we have
E (1 U ) = E1 U (X 0 )(1 U (X 0 ) -1 U (X 1 )) = P(X 0 ∈ U, X 1 ∈ U ) ≤ P(X 1 ∈ ∂U ) = µ(∂U ).
Observe that if d(U, V ) ≥ 1, U and V are disjoint and U × V ∈ supp Π so that E (1 U , 1 V ) = 0. By Courant-Fischer's min-max theorem
λ (k) = min dim V =k+1 max f ∈V E (f, f ) µ(f 2 ) . Choose sets A 1 , . . . , A k with d(A i , A j ) ≥ 2n (i = j) and (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k . Set f i = 1 A i .
The f i 's have disjoint support and so they are orthogonal in L 2 (µ). By the previous variational representation of λ (k) , we have
λ (k) ≤ sup a i E k i=0 a i f i ´ k i=0 a i f i 2 dµ = sup a i a i a i ′ E (f i , f i ′ ) a i a i ′ ´fi f i ′ dµ = sup a i k i=0 a 2 i E (f i ) k i=0 a i ´f 2 i dµ .
In other words,
λ (k) ≤ max i=0,...,k µ((A i ) 1 ) + µ((A C i ) 1 ) -1 µ(A i ) ≤ µ((A i ) 1 ) -µ(A i ) µ(A i ) ,
where the last inequality comes from the fact that, by Lemma 1.5,
µ(E \ (E \ A) 1 ) ≥ µ(A). Consider the set B = ∪ k i=1 A i and choose A 0 = E \B 1 .
In that case, by Lemma 1.6 with ǫ = 1, we have
max i=0,...,k µ((A i ) 1 ) µ(A i ) ≤ 1 -µ(B) 1 -µ(B 1
) .
Thus, we proved that
(1 + λ (k) )(1 -µ(B 1 )) ≤ (1 -µ(B)).
We derive the announced result by an immediate recursion.
Functional forms of the multiple sets concentration property
We investigate the functional form of the multi-sets concentration of measure phenomenon results obtained in Sections 1 and 3. (1) For all Borel sets A 1 , . . . ,
A k ⊂ E such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k , the set A = A 1 ∪ • • • ∪ A k satisfies (4.1) µ(A r ) ≥ 1 -(1 -µ(A))α k (r), ∀0 < r ≤ 1 2 min i =j d(A i , A j ).
(2) For all 1-Lipschitz functions f 1 , . . . , f k : E → R such that the sublevel sets
A i = {f i ≤ 0} are such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k , the function f * = min(f 1 , . . . , f k ) satisfies µ(f * < r) ≥ 1 -µ(f * ≤ 0)α k (r), ∀0 < r ≤ 1 2 min i =j d(A i , A j ).
Together with Theorem 1.1 or Theorem 3.1, one thus sees that the presence of multiple wells can improve the concentration properties of a Lipschitz function.
Proof. It is clear that (2) implies (1) when applied to f i (x) = d(x, A i ), in which case A i = {f i ≤ 0} and f * (x) = d(x, A). The converse is also very classical. First, observe that {f * < r} = ∪ k i=1 {f i < r}. Then, since f i is 1-Lipschitz, it holds A i,r ⊂ {f i < r} with A i = {f i ≤ 0} and so letting A = A 1 ∪ • • • ∪ A k , it holds A r ⊂ {f * < r}. Therefore, applying [START_REF] Aida | Moment estimates derived from Poincaré and logarithmic Sobolev inequalities[END_REF] to this set A gives (2). When (4.1) holds, we will say that the probability metric space (E, d, µ) satisfies the multi-set concentration of measure property of order k with the concentration profile α k .
In the usual setting (k = 1), the concentration of measure phenomenon implies deviation inequalities for Lipschitz functions around their median. The next result generalizes this well known fact to k > 1.
Proposition 4.2. Let (E, d, µ) be a probability metric space satisfying the multi-set concentration of measure property of order k with the concentration profile α k and f :
E → R be a 1-Lipschitz function. If I 1 , . . . , I k ⊂ R are k disjoint Borel sets such that (µ(f ∈ I 1 ), . . . , µ(f ∈ I k )) ∈ ∆ k , then it holds µ f ∈ ∪ k i=1 I i,r ≥ 1 -(1 -µ(f ∈ ∪ k i=1 I i ))α k (r), ∀0 < r ≤ 1 2 min i =j d(I i , I j )
Proof. Let ν be the image of µ under the map f . Since f is 1-Lipschitz, the metric space (R, | • |, ν) satisfies the multi-set concentration of measure property of order k with the same concentration profile α k as µ. Details are left to the reader.
Let us conclude this section by detailing an application of potential interest in approximation theory. Suppose that f : E → R is some 1-Lipschitz function and A 1 , . . . , A k are (pairwise disjoint) subsets of E such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k . Let us assume that the restrictions f |A i , i ∈ {1, . . . , k} are known and that one wishes to estimate or reconstruct f outside A = ∪ k i=1 A i . To that aim, one can consider an explicit 1-Lipschitz extension of f |A , that is to say a 1-Lipschitz function g : E → R (constructed based on our knowledge of f on A exclusively) such that f = g on A. There are several canonical ways to perform the extension of a Lipschitz function defined on a sub domain (known as Kirszbraun-McShane-Whitney extensions [START_REF] Kirszbraun | Uber die zusammenziehende und lipschitzsche transformationen[END_REF][START_REF] Mcshane | Extension of range of functions[END_REF][START_REF] Whitney | Analytic extensions of differentiable functions defined in closed sets[END_REF]). One can consider for instance the functions
g + (x) = inf y∈A {f (y) + d(x, y)} or g -(x) = sup y∈A {f (y) -d(x, y)}, x ∈ E.
It is a very classical fact that functions g -and g + are 1-Lipschitz extensions of f |A and moreover that any extension g of f |A satisfies g -≤ g ≤ g + (see e.g [START_REF] Heinonen | Lectures on Lipschitz analysis[END_REF]).
The following simple result shows that, for any 1-Lipschitz extension g of f |A , the probability of error µ(|f -g| > r) is controlled by the multi-set concentration profile α k . In particular, in the framework of our Theorem 1.1, this probability of error is expressed in terms of λ (k) . Proposition 4.3. Let (E, d, µ) be a probability metric space satisfying the multi-set concentration of measure property of order k with the concentration profile α k and f :
E → R be a 1-Lipschitz function. Let A 1 , . . . A k be subsets of E such that (µ(A 1 ), . . . , µ(A k )) ∈ ∆ k ; then for any 1-Lipschitz extension g of f |A , it holds µ(|f -g| ≥ r) ≤ (1 -µ(A))α k (r/2), ∀0 < r ≤ min i =j d(A i , A j ).
Proof.
: [0, ∞) → [0, ∞) and β k : [0, ∞) k → [0, ∞] such that for all Borel sets A 1 , . . . , A k ⊂ E, the set A = A 1 ∪ • • • ∪ A k satisfies µ(A r ) ≥ 1 -β k (µ(A 1 ), • • • , µ(A k ))α k (r), ∀0 < r ≤ 1 2 min i =j d(A i , A j ).
This framework contains the preceding one, by choosing β k (a) = 1 -k i=1 a i if a = (a 1 , . . . , a k ) ∈ ∆ k and +∞ otherwise. It also contains the concentration bounds obtained in Proposition 1.9, corresponding respectively to
β k (a) = 1 -k i=1 a i k i=1 a i , and β k (a) = 1 -k i=1 a i k i=1 a i k i=1 a i / min(a 1 ,••• ,a k )
, a = (a 1 , . . . , a k ).
Open questions
We list open questions related to the multi-set concentration of measure phenomenon. γn,ρ denotes the kth eigenvalue of the n-dimensional centered Gaussian measure with covariance matrix ρ -1 Id. Since the measure µ satisfies the log-Sobolev inequality, it is well known that it satisfies a (classical) Gaussian concentration of measure inequality. Therefore, it is natural to conjecture that µ satisfies a multi-set concentration of measure property of order k ≥ 1 with a profile of the form β k (r) = exp -C k,ρ,n r 2 , r ≥ 0, for some constant C k,ρ,n depending solely on its arguments. In addition, it would be interesting to see how usual functional inequalities (Log-Sobolev, transport-entropy, . . . ) can be modified to catch such a concentration of measure phenomenon.
Equivalence between multi-set concentration and lower bounds on eigenvalues in non-negative curvature.
Let us quickly recall the main finding of E. Milman [START_REF] Milman | On the role of convexity in isoperimetry, spectral gap and concentration[END_REF][START_REF] Milman | Isoperimetric and concentration inequalities: equivalence under curvature lower bound[END_REF], that is, under non-negative curvature assumptions, a concentration of measure estimate implies a bound on the spectral gap. Let µ be a probability measure with a density of the form e -V on a smooth connected Riemannian manifold M with V a smooth function such that (5.1) Ric + Hess V ≥ 0.
Assume that µ satisfies a concentration inequality of the form: for all A ⊂ M such that µ(A) ≥ 1/2 µ(A r ) ≥ 1 -α(r), r ≥ 0, where α is a function such that α(r o ) < 1/2 for at least one value r o > 0. Then, letting λ (1) be the first non zero eigenvalue of the operator -∆ + ∇V • ∇, it holds λ (1) ≥ 1 . It would be very interesting to extend Milman's result to a multiset concentration setting. More precisely, if µ satisfies the curvature condition (5.1) and the multi-set concentration of measure property of order k with a profile of the form α k (r) = exp(-min(ar 2 , √ ar)), r ≥ 0, can we find a universal function ϕ k such that λ (k) ≥ ϕ k (a)? This question already received some attention in recent works by Funano and Shioya [START_REF] Funano | Estimates of eigenvalues of the Laplacian by a reduced number of subsets[END_REF][START_REF] Funano | Concentration, Ricci curvature, and eigenvalues of Laplacian[END_REF]. In particular, let us mention the following improvement of the Chung-Grigor'yan-Yau inequality obtained in [START_REF] Funano | Estimates of eigenvalues of the Laplacian by a reduced number of subsets[END_REF]. There exists a universal constant c > 1 such that if µ is a probability measure satisfying the non-negative curvature assumption (5.1), it holds: for any family of sets A 0 , A 1 , . . . , A l with 1 ≤ l ≤ k (5.2)
λ (k) ≤ c k-l+1 1 min i =j d 2 (A i , A j ) max i =j log ( 4 µ(A i )µ(A j ) ) 2 .
Note that the difference with (1.14) is that λ (k) is estimated by a reduced number of sets. Using (5.2) (with l = 1) together with Milman's result recalled above, Funano showed that there exists some constant C k depending only on k such that under the curvature condition (5.1), it holds λ (k) ≤ C k λ (1) (recovering the main result of [START_REF] Funano | Concentration, Ricci curvature, and eigenvalues of Laplacian[END_REF]). The constant C k is explicit (contrary to the constant of [START_REF] Funano | Concentration, Ricci curvature, and eigenvalues of Laplacian[END_REF]) and grows exponentially when k → ∞. This result has been then improved by Liu [START_REF] Liu | An optimal dimension-free upper bound for eigenvalue ratios[END_REF], where a constant C k = O(k 2 ) has been obtained. As observed by Funano [START_REF] Funano | Estimates of eigenvalues of the Laplacian by a reduced number of subsets[END_REF], a positive answer to the open question stated above would yield that under (5.1) the ratios λ (k+1) /λ (k) are bounded from above by a universal constant.
Proposition 4 . 1 .
41 Let (E, d) be a metric space equipped with a Borel probability measure µ. Let α k : [0, ∞) → [0, ∞). The following properties are equivalent:
5. 1 .
1 Gaussian multi-set concentration. Using the terminology introduced in Section 4, Theorem 1.1 and the material exposed in Section 2 tell us that, if µ has a density of the form e -V with respect to Lebesgue measure on R n with a smooth function V such that Hess V ≥ ρ > 0, then the probability metric space (R n , | • |, µ) satisfies the multi-set concentration of measure property of order k with the concentration profile α k (r) = exp -c min(r 2 λ (k) γn,ρ ; r λ
4 1 -
1 2α(ro) ro 2
Optimizing over y∈ A gives that h(x) ≤ 2d(x, A). Therefore {h ≥ r} ⊂ {x : d(x, A) ≥ r/2} = A r/2 c and so, if 0 < r ≤ min i =j d(A i , A j ), it holds µ(|f -g| ≥ r) ≤ (1 -µ(A))α k (r/2).
Remark 3. Let us remark that Propositions 4.1 to 4.3 can be immediately extended
under the following more general (but notationally heavier) multi-set concentration of
measure assumption: there exists functions α k
The function h : E → R defined by h(x) = |f -g|(x), x ∈ E, is 2-Lipschitz and vanishes on A. Therefore, for any x ∈ E and y ∈ A, it holds h(x) ≤ h(y) + 2d(x, y) = 2d(x, y). |
01766657 | en | [
"phys.phys.phys-optics",
"spi.opti"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01766657/file/Tench_Romano_Delavaux_SPIE_Photonics_West_2018_13%20page%20manuscript_11212017.pdf | Robert E Tench
email: [email protected]
Clément Romano
Jean-Marc Delavaux
Optimized Design and Performance of a Shared Pump Single Clad 2 µm TDFA
We report the design, experimental performance, and simulation of a single stage, co-and counter-pumped Tmdoped fiber amplifier (TDFA) in the 2 μm signal wavelength band with an optimized 1567 nm shared pump source. We investigate the dependence of output power, gain, and efficiency on pump coupling ratio and signal wavelength. Small signal gains of >50 dB, an output power of 2 W, and small signal noise figures of <3.5 dB are demonstrated. Simulations of TDFA performance agree well with the experimental data. We also discuss performance tradeoffs with respect to amplifier topology for this simple and efficient TDFA.
Introduction
Simplicity and optimization of design are critical for the practical realization of wide bandwidth, high power single clad Thulium-doped fiber amplifiers (TDFAs) for 2 µm telecommunications applications. Recent TDFAs [START_REF] Romano | Simulation and design of a multistage 10 W thulium-doped double clad silica fiber amplifier at 2050 nm[END_REF][START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF] have reported 2 µm band amplifiers with output power > 2W, gain > 55 dB, noise figure < 4 dB, and optical bandwidth greater than 120 nm. While these designs achieve high optical performance, they employ two or more optical stages and multiple pump sources. Therefore, it is desirable to investigate designs using one amplifier stage and one pump source. In this paper we report on the design, simulation, and experimental performance of a one-stage single clad TDFA using an L-band (1567 nm) shared fiber laser pump source, as a function of pump coupling ratio, active fiber length, pump power, and signal wavelength. Our one-stage TDFA data compare well with recently reported performance of multi-stage, multi-pump amplifiers [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. In addition, the simplicity of the single clad pump shared design and its potential for cost reduction offer a broad selection of performance for different applications.
The paper is organized as follows: Section 2 presents our experimental setup, a single stage TDFA with variable coupling in the pump ratio between co-pumping and counter-pumping the active fiber. Section 3 covers the dependence of simulated amplifier performance on active fiber length, pump coupling ratio, slope efficiency, and signal wavelength. Section 4 compares measurement and simulation of the TDFA performance. Section 5 contrasts our simple TDFA design with performance of a two-stage, three-pump amplifier as reported previously in [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. Finally, Section 6 discusses design parameter tradeoffs for different TDFA applications.
Experimental Setup for Shared Pump Amplifier
The optical design of our one-stage single pump TDFA is shown in Figure 1. A single frequency 2 µm DML source (Eblana Photonics) is coupled through attenuator A and into the active fiber F1. Pump light from a multiwatt fiber laser P1 at 1567 nm is split by coupler C1 with a variable coupling ratio k (%).The two pump signals WDM = Wavelength Division Multiplexer. co-pump and counter-pump fiber F1, with k = 100% and k = 0% corresponding to all counter-pumping and all co-pumping, respectively. The value of k was changed in the simulations and experiments to optimize amplifier performance. Isolators I1 and I2 ensure unidirectional operation and suppress spurious lasing. Input and output signal powers, and co-pump and counter-pump powers, are respectively referenced to the input and output of Tm-doped fiber F1 (7 meters of OFS TmDF200).
Simulated Amplifier Performance
We begin the design of a high performance optical amplifier by studying the critically important variations of fiber signal gain (G) and output power (Pout) as a function of active fiber length (L) and input signal power (Ps).
To do this we turn to the simulated amplifier performance [START_REF] Romano | Characterization of the 3F4 -3H6 Transition in Thulium-doped Silica Fibres and Simulation of a 2µm Single Clad Amplifier[END_REF][START_REF] Jackson | Theoretical modeling of Tm-doped silica fiber lasers[END_REF] shown in Figure 2, where G is plotted vs. L for four input signal power levels. Here the total 1567 nm pump power (co-+ counter-) (Pp) is 2.5 W, the signal wavelength λs is 1952 nm, and the coupling ratio k = 50%. We note that a similar set of gain curves can be generated for different wavelength bands of the TDFA, and this behavior will be investigated later in the section.
In Figure 2 we have measured the dependence of G vs. L for a 32 dB input dynamic range in signal power. The different Ps values illustrate the amplifier operating from a linear/unsaturated regime (Ps= -30 dBm) to a highly saturated regime (Ps = + 2 dBm). The equally important dependence of noise figure on these parameters will be dealt with later.
The first observation drawn from Figure 2 is that for low input signals (e.g. for Ps = -30 dBm) G is maximized for long fiber lengths of 12 meters or greater, while for saturating input powers (e.g. Ps = +2 dBm) G reaches a maximum value for lengths of about 2 meters. It is also clear that for small signal or unsaturated gain, most of the gain (i.e. more than 80%) is achieved in the first 5 meters of the fiber, while for saturated gain most of the gain occurs within the first 1.5 meters. The second observation is that saturated gain varies only slightly with active fiber length for values greater than 3 meters, indicating that a wide range of fiber lengths can be chosen for design of a power booster amplifier. However, later we will see that the choice of the fiber length affects the useful amplifier bandwidth. The next design study for the shared pump amplifier is to examine the dependence of the saturated output power on active fiber length L and coupling ratio k. To study this issue, we plot the output signal power with pump coupling ratio k for four active fiber lengths (i.e. L = 3, 5, 7 and 9 m) as shown in Figure 3. In this simulation Ps is set to +2 dBm at 1952 nm to saturate the amplifier, with the total pump set at Pp = 2.5 W at 1567 nm.
We first note that for a given fiber length, Pout increases linearly when moving from co-pumping (k = 0%) to nearly all counter-pumping (k = 95%) and then drops down for full counter-pumping (k = 100%). For all fiber lengths, the maximum output power is achieved for k = 95%. This behavior is not surprising because counterpumping maximizes the pump power available at the output of the fiber where the amplified signal power is the largest. We next observe that the maximum output power is achieved for a ratio of 95% counter-pumping to 5% co-pumping. This indicates that a small amount of co-pumping provides signal gain that offsets fiber absorption loss. Therefore full counter-pumping is not the most efficient way to pump this fiber.
For k = 50% in Figure 3, the relatively small variation in output signal power Pout with fiber length is consistent with the small variation in gain seen in Figure 2 as a function of fiber length for Ps = +2 dBm. We further note that as the fiber length is decreased from 9 m to 3 m, the output power Pout consistently increases. For very short fibers the difference between co-and counter-pumping will become negligible. However, as we will illustrate later, this comes at the expense of the amplifier operating bandwidth shifting from higher to shorter wavelengths. The amplifier performance illustrated in Figure 3 shows that we may consider three cases for design: k = 0%, k= 50%, and k = 95%. For k = 0%, Pout variation with fiber length is 18%. For k = 50%, it is as much as 10.9%, and for k = 95% it is about 2%. This indicates that a mostly counter-pumped amplifier will be less sensitive to changes in active fiber length than a co-pumped amplifier. Now let's consider the important design consideration of the dependence of saturated output power as a function of active fiber length L and pump power Pp. This behavior is illustrated in Figure 4 for a signal wavelength of 1952 nm, a coupling ratio k = 50%, and 1567 nm pump powers of 0.83 W, 1.7 W and 2.55 W, respectively.
In this plot we see that the maximum saturated output power is obtained for L = 2 m, relatively independent of fiber length and the pump power. It is apparent that above L = 2 m, Pout decreases slightly with increases in fiber length. This behavior is consistent with the simulation in Figure 2, and it illustrates that the optimum Pout for a saturated amplifier is not greatly dependent on L. The curves in Figure 4 lead to the important observation that the output power scales linearly with increases in the pump power. Therefore saturated output powers much higher than the 2.6 W already demonstrated [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF] can be achieved with this type of Thulium-doped fiber up to the stimulated Brillouin scattering (SBS) threshold which is estimated to be 10-20 W for a fiber length of 7 m [START_REF] Sincore | SBS Threshold Dependence on Pulse Duration in a 2053 nm Single-Mode Fiber Amplifier[END_REF].
So far our simulations have been carried out for the signal wavelength of 1952 nm. To more fully study performance of the amplifier, we now look at the amplifier slope efficiency η vs. signal wavelength λs and active fiber length L. Slope efficiency η = ΔPsat/ΔPp is defined as: the ratio of the change in saturated output signal power Psat to a change in pump power, for a given fiber length L and signal wavelength λs. It measures the efficiency of conversion of pump light into signal light and is an important figure of merit for the amplifier. The saturated output power Psat in our experiments and simulations is measured for a high input signal power Ps = +2 dBm.
Figure 5 shows simulations of η over the wavelength region of 1900 nm to 2050 nm, for fiber lengths ranging from 1.5 to 9 meters. Clearly the bandwidth of the amplifier shifts toward longer signal wavelengths for the longer fiber such as 9 meters. Shorter fibers such as 1.5 and 2 meters shift the operating bandwidth region toward shorter wavelengths.
Figure 5 indicates that for short fibers of 1.5 and 2 meters, η is optimum below 1950 nm, then diminishes rapidly with wavelength around 2000 nm and is negligible above 2020 nm. For longer fibers of 5 to 9 meters, η decreases more gradually with increasing wavelength and allows for a modest efficiency (i.e. 35%) up to 2050 nm. The simulated slope efficiencies in Figure 5 give a value at 1952 nm of 73% which is fully consistent with the value of 73% determined from Figure 4. Based on the simulation results, for this single stage configuration we can draw four conclusions. First, the most significant gain occurs in the first couple of meters of the active fiber. Second, the saturated output power scales with pump power and is not significantly affected by the fiber length. Third, the optimum coupling ratio k for a combination of large dynamic range and saturated output power is achieved for medium fiber lengths of 6-8 meters and a k value around 50%. Fourth, the choice of the fiber length affects the operating bandwidth of the TDFA and the slope efficiency η: shorter lengths yield shorter operating wavelengths, while longer lengths give longer wavelength operating regions. This last point will be discussed further in Section 4.
Comparison of Simulation and Experiment
We now turn to comparisons of simulation and experiment for the single stage amplifier of Figure 1. In all these comparisons, the experimental fiber length is 7 meters.
We start by looking at the signal output power Pout as a function of 1952 nm signal input power Ps over a range of -30 dBm to + 2 dBm. Pump powers Pp at 1567 nm range from 0.89 W to 3.09 W and the coupling ratio k is 50%. As illustrated in Figure 6, the simulations (in solid lines) agree well with the experimental data (points) with an average difference between simulation and experiment of 0.6 dB.
For 0 dBm input power, the measured output powers are 1.11 W and 1.86 W for pump powers of 1.93 W and 3.09 W, respectively. This corresponds to optical power conversion efficiencies of 58% and 60%, respectively. Figure 8 shows the dependence of G and NF on coupling ratio k for Ps = -30 dBm at 1952 nm and Pp= 1.70 W at 1567 nm. Experimental data are shown in points and the simulations in solid lines. The optimum operating setpoint for small signal gain is different from the optimum for noise figure, with the largest small signal gain occurring for k = 50% and the lowest noise figures for k = 0%. The noise figure increases slowly at first with k, and then rapidly to 6.1 dB as k reaches 100% which corresponds to counter-pump only. The agreement between simulation and experiment is good, validating the performance of our simulator over the full range of k values. From this graph, we observe that a good balance between optimum gain and optimum noise figure is achieved for a coupling ratio of k = 50%. In Figure 9 we plot the dependence of G and NF on signal wavelength λs over the range of 1900 -2050 nm for a coupling ratio of k = 50%, Pp = 1.12 W, and input signal power Ps = -30 dBm. The highest measured unsaturated gain is achieved at 1910 nm , with a small decrease with λ at 1952 nm and then a steady decrease up to 2050 nm. For G > 30 dB, the amplifier bandwidth is >120 nm. By extending the investigation to values of λ lower than 1900 nm, we can expect even larger bandwidths.
The smallest measured NF of 3.5 dB is at 1952 nm, and NF variation with wavelength is small (i.e.< 1.4 dB).
We observe that the agreement between simulation and experiment is good, and this shows our simulation predicts well the small signal gain G and noise figure NF as a function of λs. We now investigate the slope efficiency η as a function of total pump power Pp for a saturating input power of Ps ≈ + 2 dBm, Figure 10 shows experimental and simulated values for saturated output power as a function of pump power, for four different values of λs across the transmission band with k = 50%. The agreement between simulation (solid lines) and experiment (points) is good, illustrating the accuracy of our simulator over a wide region of λs and over pump powers from 0.3 to 3.2 W. Notice that at λ = 2050 nm the number of data points is limited by the onset of lasing due to the large ASE produced as the pump power increases. The experimental variation in signal output power with pump power is linear in all cases as expected from theory. The maximum measured output power is 2.00 W for λ = 1910 nm and pump power of 3.09 W, corresponding to an optical power conversion efficiency of 65%. In Figure 11 we compare the slope efficiencies measured in Figure 10 with the simulation of Figure 5, with an expanded span for λ of 1760 nm -2060 nm. The experimental slope efficiencies (points) agree reasonably well with the theory and this demonstrates that our simulation is valid over a wide range of values for λs for a saturated amplifier. The maximum measured slope efficiency is 68.2% at 1910 nm. This can be compared with the simulated value at this signal wavelength of 76.0%. Using a slope efficiency of greater than 50% as a criterion, Figure 11 shows that the simulated operating bandwidth BW and center operating wavelength λc of the amplifier vary significantly with fiber length. For a short fiber (3 m) the operating bandwidth BW at 50% slope efficiency is 198 nm as indicated by the horizontal arrows in the figure, and λc is 1896 nm. For the longest fiber simulated (9 m) BW is reduced to 160 nm, and λc is shifted up in wavelength to 1940 nm. Results for all the fiber lengths Table 2. Operating Bandwidth BW and Center Wavelength λc as a Function of Fiber Length L.
studied are summarized in Table 2. It is evident that shorter fiber lengths give greater operating bandwidths and lower center wavelengths. This behavior is consistent with previously reported results [START_REF] Li | Exploiting the short wavelength gain of silicabased thulium-doped fiber amplifiers[END_REF]. We note that Figure 11 and Table 2 are the first detailed comparisons of TDFA simulation and theory, since previous work on spectral performance has been either wholly experimental [START_REF] Li | Diodepumped wideband thulium-doped fiber amplifiers for optical communications in the 1800-2050 nm window[END_REF][START_REF] Li | Exploiting the short wavelength gain of silicabased thulium-doped fiber amplifiers[END_REF] or theoretical [START_REF] Gorjan | Model of the amplified spontaneous emission generation in thulium-doped silica fibers[END_REF][START_REF] Khamis | Theoretical Model of a Thulium-doped Fiber Amplifier Pumped at 1570 nm and 793 nm in the Presence of Cross Relaxation[END_REF].
Comparison of Multistage Amplifier Performance
In Sections 3 and 4, we have shown that the shared pump topology can deliver high performance that is fully in agreement with simulation results. Here we will compare the shared pump amplifier with a two stage-three pump TDFA [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. A summary comparison of the two amplifiers is given below in Table 3. The table reveals that there is no major difference in performance between the two TDFAs. Comparing the maximum saturated output powers, we see that the shared pump TDFA achieves 1.9 W output for 3.2 W available pump, while the 2 stage amplifier achieves 2.6 W for 3.6 W of available pump. The output power performance of the two amplifiers is seen to be comparable when the maximum pump power available is accounted for. NF values for the two amplifiers are similar, as are the operating dynamic ranges (measured over an input power span of -30 dBm to +2 dBm). The two-stage amplifier has a slightly higher small signal gain with 56 dB compared to 51 dB for the shared pump single stage TDFA.
Fiber
The difference in slope efficiencies, with 66% for the 1 stage shared pump configuration and 82% for the 2 stage, 3 pump configuration, can be explained by referring to the architecture of the 3 pump configuration [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF].
Here we recall the definition of slope efficiency η from Section 3: η = ΔPsat/ΔPp. Remembering that Psat is the output power for a highly saturated amplifier, we observe that in the 2 stage TDFA the first stage boosts the input signal power of +2 dBm to an intermediate level of about +20 dBm which is then input to the second fiber stage. This boost in power to +20 dBm increases the conversion efficiency for available pump power in the second stage and so increases η. Indeed the two stage amplifier brings the measured efficiency closer to the simulated value as shown in Figure 11.
In comparing the amplifier bandwidth for the two configurations, we see that the 167 nm simulated bandwidth for the one stage, shared pump amplifier is consistent with the estimated value of >120 nm for the two stage, three pump amplifier. The >120 nm value was obtained by measuring the 10 dB width of the ASE noise background in the saturated output spectrum of the two stage TDFA. We believe that a simulation of the slope efficiency for the two stage amplifier (currently in progress) will result in a more precise value for its bandwidth.
The comparisons in Table 3 illustrate that our single stage shared pump TDFA can match the performance targets of a complex two stage three pump amplifier. The simplicity of the architecture of the shared pump TDFA is a considerable advantage in the design simulation of TDFAs for broadband telecommunications systems.
Discussion of Parameter Optimization for TDFA Architecture
The data reported in Figures 2 -11 illustrate several salient points about the operation of the shared pump TDFA.
From our experimental and theoretical studies, it is evident that input power levels, saturated output power targets, noise figure specifications, small signal gain specifications, and operating signal bandwidths all depend in an interrelated way on the amplifier architecture. Design of an optimized amplifier requires a careful balancing of all these performance targets as a function of fiber length L and coupling ratio k.
For gain amplifiers, Figure 2 shows that it is very important to consider the input signal power when choosing an optimum fiber length. For example, at the coupling ratio of 50%, the fiber gain for -30 dBm input is highest for a fiber length of 14 meters. For -15 dBm input, the optimum gain occurs for lengths of 7-8 meters. Clearly the design specifications of the TDFA must be carefully considered when choosing an optimum fiber length for a preamplifier designed to operate at low signal input powers. For these low input powers the NF value remains close to the quantum limit of 3 dB.
For power amplifiers, maximum simulated output power occurs for a coupling ratio of k = 95% and an optimized fiber length of about 3.5 meters for a signal wavelength of 1952 nm. This optimized fiber length agrees well with the values obtained in Figures 2 and 4, where the optimum length for maximum output power is between 3 and 4 meters for a pump coupling ratio of 50%. We conclude that for maximizing output power at 1952 nm, coupling ratios anywhere between 50 and 95% can be employed.

Figure 4 demonstrates that the saturated output power Pout scales linearly with pump power up to the maximum simulated Pp of 2.55 W. No Brillouin scattering or other nonlinear effects were observed in our experiments. This means that we can improve the output power of the amplifier simply by increasing the pump power, up to the limit where nonlinear effects start to be observed. The threshold for nonlinear effects in our shared pump amplifier is currently under study. For the parameters in the current experiments, the one stage shared pump design yields an attractive power amplifier that is simple to build and has high signal output power.

For generic or multipurpose amplifiers, Figures 5 and 11 illustrate that the operating bandwidth BW and center wavelength λc of the amplifier are strongly dependent on the active fiber length, with maximum long wavelength response above 2000 nm occurring for fiber lengths L of 9 meters and longer. Short wavelength response is maximized for short fiber lengths of 1.5 and 2 meters. The desired operating bandwidth and center wavelength can therefore be selected by choosing an appropriate active fiber length. The noise figure NF as shown in Figure 9 is slowly varying with signal wavelength λs for a coupling ratio of k = 50%, indicating that the noise performance of the multipurpose amplifier is highly tolerant of variations in signal wavelength λs. This is an attractive feature for the many applications of this type of TDFA.
Figure 1. Optical Design of Single Stage Single Pump TDFA with a Shared Pump Arrangement.
Figure 2. Signal Gain (G) as a Function of Fiber Length (L) for Four Different Levels of Ps.
Figure 3. Simulated Output Signal Power (Pout) as a Function of Fiber Length (L) and Pump Coupling Ratio (k).
Figure 4. Simulated Output Power Pout as a Function of Fiber Length L and Pump Power Pp for k = 50%.
Figure 5. Simulated Slope Efficiencies η vs. Signal Wavelength λs and Active Fiber Length L.
Figure 6. Output Signal Power Pout vs. Input Signal Power Ps for k = 50%, for Three Different Total Pump Powers Pp.
Figure 7. Gain G and Noise Figure NF at 1952 nm as a Function of Input Signal Power Ps.
Figure 8. Gain and Noise Figure as a Function of Coupling Ratio k.
Figure 9. Small Signal Gain G and Noise Figure NF as a Function of λs.
Figure 10. Saturated Output Power Pout vs. Total Pump Power Pp as a Function of λs.
Figure 11. (caption not recovered in extraction; per Section 4, the figure compares measured and simulated slope efficiencies vs. λs and fiber length)
Table 1 contrasts the measured and simulated values of slope efficiency η as a function of signal wavelength λs for a fiber length of 7 m.

Table 1. Comparison of Simulated and Measured Slope Efficiency η as a Function of λs.

λs, nm   η, % (Exp.)   η, % (Sim.)
1910     68.2          76.0
1952     65.9          72.9
2004     52.1          55.0
2050     13.5           9.6
Table 3. Comparison of Single Stage, Shared Pump TDFA with Two Stage, Three Pump TDFA (Fiber Length L = 7 m; TDFA Configurations at 1952 nm).

Parameter                      Symbol   Units   1 Stage, Shared Pump   2 Stage, 3 Pumps
Pump Power (1567 nm)           Pp       W       3.2                    3.6
Saturated Output Power         Pout     W       1.9                    2.6
Small Signal Noise Figure      NF       dB      3.4                    3.2
Signal Dynamic Range           Pin      dB      32                     32
Small Signal Gain              G        dB      51                     56
Slope Efficiency (Saturated)   η        %       65.9                   82
Operating Bandwidth            BW       nm      167 (simulated)        > 120 (est. from ASE)
Summary
We have reported the experimental and simulated performance of a single stage TDFA with a shared in-band pump at 1567 nm. In particular we considered the dependence of amplifier performance on pump coupling ratio and signal wavelength. We determined that the optimum fiber length L and optimum coupling ratio k depend strongly on the design performance specifications for the TDFA such as signal wavelength band, saturated output power, noise figure, small signal gain, and dynamic range. Our simulations show that the operating bandwidth of the amplifier can be as high as 198 nm. Due to the broad Thulium emission bandwidth, this amplifier configuration can be tailored to meet a variety of performance needs. We achieved saturated output powers of 2 W, small signal gains as high as 51 dB, noise figures as low as 3.5 dB, and a dynamic range of 32 dB for a noise figure of less than 4.7 dB. In all cases we found good agreement between our simulation tool and the experiments. No Brillouin scattering or other nonlinear effects were observed in any of our measurements. Our experiments and simulations show that the shared pump TDFA can match the performance of more complex multistage, multi-pump TDFAs, and illustrate the simplicity and usefulness of our design. This opens the possibility for new and efficient TDFAs for lightwave transmission systems as preamplifiers, as in-line amplifiers, and as power booster amplifiers.
Acknowledgments
We gratefully acknowledge Eblana Photonics for the single frequency distributed mode 2 µm laser sources, and OFS for the single clad Tm-doped fiber. |
01766661 | en | [
"phys.phys.phys-optics",
"spi.opti"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01766661/file/Tench_Romano_Delavaux_OFT_Manuscript_v5_11272017.pdf | Robert E Tench
email: [email protected]
Clément Romano
Jean-Marc Delavaux
Optimized Design and Performance of a Shared Pump Single Clad 2 µm TDFA
Keywords: Fiber Amplifier, Thulium, 2000 nm, Silica Fiber, Single Clad
We report the design, experimental performance, and simulation of a single stage, co- and counter-pumped Tm-doped fiber amplifier (TDFA) in the 2 μm signal wavelength band with an optimized 1567 nm shared pump source. We investigate the dependence of output power, gain, and efficiency on pump coupling ratio and signal wavelength. Small signal gains of >50 dB, an output power of 2 W, and small signal noise figures of <3.5 dB are demonstrated. Simulations of TDFA performance agree well with the experimental data. We also discuss performance tradeoffs with respect to amplifier topology for this simple and efficient TDFA.
Introduction
Simplicity and optimization of design are critical for the practical realization of wide bandwidth, high power single clad Thulium-doped fiber amplifiers (TDFAs) for 2 µm telecommunications applications. Recent TDFAs [START_REF] Romano | Simulation and design of a multistage 10 W thulium-doped double clad silica fiber amplifier at 2050 nm[END_REF][START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF] have reported 2 µm band amplifiers with output power > 2 W, gain > 55 dB, noise figure < 4 dB, and optical bandwidth greater than 120 nm. While these designs achieve high optical performance, they employ two or more optical stages and multiple pump sources. Therefore, it is desirable to investigate designs using one amplifier stage and one pump source. In this paper we report on the design, simulation, and experimental performance of a one-stage single clad TDFA using an L-band (1567 nm) shared fiber laser pump source, as a function of pump coupling ratio, active fiber length, pump power, and signal wavelength. Our one-stage TDFA data compare well with recently reported performance of multi-stage, multi-pump amplifiers [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. In addition, the simplicity of the single clad shared pump design and its potential for cost reduction offer a broad selection of performance for different applications.
The paper is organized as follows: Section 2 presents our experimental setup, a single stage TDFA with variable coupling in the pump ratio between co-pumping and counter-pumping the active fiber. Section 3 covers the dependence of simulated amplifier performance on active fiber length, pump coupling ratio, slope efficiency, and signal wavelength. Section 4 compares measurement and simulation of the TDFA performance. Section 5 contrasts our simple TDFA design with performance of a two-stage, three-pump amplifier as reported previously in [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. Finally, Section 6 discusses design parameter tradeoffs for different TDFA applications.
Experimental Setup for Shared Pump Amplifier
The optical design of our one-stage single pump TDFA is shown in Figure 1. A single frequency 2 µm DML source (Eblana Photonics) is coupled through attenuator A and into the active fiber F1. Pump light from a multiwatt fiber laser P1 at 1567 nm is split by coupler C1 with a variable coupling ratio k (%). The two pump signals co-pump and counter-pump fiber F1 through wavelength division multiplexers (WDMs), with k = 100% and k = 0% corresponding to all counter-pumping and all co-pumping, respectively. The value of k was changed in the simulations and experiments to optimize amplifier performance. Isolators I1 and I2 ensure unidirectional operation and suppress spurious lasing. Input and output signal powers, and co-pump and counter-pump powers, are respectively referenced to the input and output of Tm-doped fiber F1 (7 meters of OFS TmDF200). The input and output spectra of the TDFA were measured with an optical spectrum analyzer (Yokogawa AQ6375B).
Simulated Amplifier Performance
We begin the design of a high performance optical amplifier by studying the critically important variations of fiber signal gain (G) and output power (Pout) as a function of active fiber length (L) and input signal power (Ps).
The signal gain G is given by the following simple equation:
G(λs) = Pout(λs) / Ps(λs)    (1)
where λs is the signal wavelength, and Ps and Pout are signal powers measured at the input and output of the active Tm-doped fiber, respectively.
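For readers implementing Eq. (1), a minimal Python helper is sketched below. The dB form and the example powers (-30 dBm in, +21 dBm out, i.e. the 51 dB small-signal gain reported later) are our own illustration, not code from the paper.

```python
import math

def gain_db(p_in_w, p_out_w):
    """Eq. (1) expressed in decibels: G = 10*log10(Pout/Ps)."""
    return 10.0 * math.log10(p_out_w / p_in_w)

# e.g. Ps = -30 dBm (1 uW) in, Pout = +21 dBm (126 mW) out -> 51 dB
print(f"G = {gain_db(1e-6, 0.126):.1f} dB")
```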
To study amplifier design we turn to the simulated TDFA performance [START_REF] Romano | Characterization of the 3F4 -3H6 Transition in Thulium-doped Silica Fibres and Simulation of a 2µm Single Clad Amplifier[END_REF][START_REF] Jackson | Theoretical modeling of Tm-doped silica fiber lasers[END_REF] shown in Figure 2, where G is plotted vs. L for four input signal power levels. Here the total (co- + counter-) 1567 nm pump power Pp is 2.5 W, the signal wavelength λs is 1952 nm, and the coupling ratio k = 50%. We note that a similar set of gain curves can be generated for different wavelength bands of the TDFA, and this behavior will be investigated later in the section.
In Figure 2 we show the dependence of G vs. L for a 32 dB input dynamic range in signal power. The different Ps values illustrate the amplifier operating from a linear/unsaturated regime (Ps = -30 dBm) to a highly saturated regime (Ps = +2 dBm). The equally important dependence of noise figure on these parameters will be dealt with later.
The first observation drawn from Figure 2 is that for low input signals (e.g. for Ps = -30 dBm) G is maximized for long fiber lengths of 12 meters or greater, while for saturating input powers (e.g. Ps = +2 dBm) G reaches a maximum value for lengths of about 2 meters. It is also clear that for small signal or unsaturated gain, most of the gain (i.e. more than 80%) is achieved in the first 5 meters of the fiber, while for saturated gain most of the gain occurs within the first 1.5 meters. The second observation is that saturated gain varies only slightly with active fiber length for values greater than 3 meters, indicating that a wide range of fiber lengths can be chosen for design of a power booster amplifier. However, later we will see that the choice of the fiber length affects the useful amplifier bandwidth. The next design study for the shared pump amplifier is to examine the dependence of the saturated output power on active fiber length L and coupling ratio k. To study this issue, we plot the output signal power with pump coupling ratio k for four active fiber lengths (i.e. L = 3, 5, 7 and 9 m) as shown in Figure 3. In this simulation Ps is set to +2 dBm at 1952 nm to saturate the amplifier, with the total pump set at Pp = 2.5 W at 1567 nm.
We first note that for a given fiber length, Pout increases linearly when moving from co-pumping (k = 0%) to nearly all counter-pumping (k = 95%) and then drops down for full counter-pumping (k = 100%). For all fiber lengths, the maximum output power is achieved for k = 95%. This behavior is not surprising because counter-pumping maximizes the pump power available at the output of the fiber where the amplified signal power is the largest. We next observe that the maximum output power is achieved for a ratio of 95% counter-pumping to 5% co-pumping. For full counter-pumping, the pump is attenuated significantly within two meters after being launched, leaving the input end of the active fiber unpumped with no inversion achieved for the input Tm ion population. This indicates that a small amount of co-pumping provides signal gain that offsets fiber absorption loss. Therefore full counter-pumping is not the most efficient way to pump this amplifier.
For k = 50% in Figure 3, the relatively small variation in output signal power Pout with fiber length is consistent with the small variation in gain seen in Figure 2 as a function of fiber length for Ps = +2 dBm. We further note that as the fiber length is decreased from 9 m to 3 m, the output power Pout consistently increases. For very short fibers the difference between co- and counter-pumping will become negligible. However, as we will illustrate later, this comes at the expense of the amplifier operating bandwidth shifting from higher to shorter wavelengths. The amplifier performance illustrated in Figure 3 shows that we may consider three cases for design: k = 0%, k = 50%, and k = 95%. For k = 0%, Pout variation with fiber length is 18%. For k = 50%, it is as much as 10.9%, and for k = 95% it is about 2%. This indicates that a mostly counter-pumped amplifier will be less sensitive to changes in active fiber length than a co-pumped amplifier. Now let us consider another important design question: the dependence of saturated output power on active fiber length L and pump power Pp. This behavior is illustrated in Figure 4 for a signal wavelength of 1952 nm, a coupling ratio k = 50%, and 1567 nm pump powers of 0.83 W, 1.7 W and 2.55 W, respectively.
In this plot we see that the maximum saturated output power is obtained for L ≈ 2 m, largely independent of the pump power. It is apparent that above L = 2 m, Pout decreases slightly with increases in fiber length. This behavior is consistent with the simulation in Figure 2, and it illustrates that the optimum Pout for a saturated amplifier is not greatly dependent on L. The curves in Figure 4 lead to the important observation that the output power scales linearly with increases in the pump power. Therefore saturated output powers much higher than the 2.6 W already demonstrated [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF] can be achieved with this type of Thulium-doped fiber up to the stimulated Brillouin scattering (SBS) threshold which is estimated to be 10-20 W for a fiber length of 7 m [START_REF] Sincore | SBS Threshold Dependence on Pulse Duration in a 2053 nm Single-Mode Fiber Amplifier[END_REF].
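The SBS threshold itself is not computed in the paper. As a hedged aside, the classic Smith-type CW estimate P_th ≈ 21·A_eff/(g_B·L_eff) gives the right order of magnitude; the Brillouin gain g_B and effective area A_eff below are assumed, generic silica-fiber values, not measured parameters of this fiber.

```python
def sbs_threshold_w(a_eff_m2, l_eff_m, g_b_m_per_w=5e-11):
    """Smith-type CW estimate: P_th = 21 * A_eff / (g_B * L_eff)."""
    return 21.0 * a_eff_m2 / (g_b_m_per_w * l_eff_m)

# Assumed, generic values: A_eff ~ 1e-10 m^2, L_eff = 7 m
print(f"P_th ~ {sbs_threshold_w(1e-10, 7.0):.0f} W")  # same order as the 10-20 W quoted above
```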
So far our simulations have been carried out for the signal wavelength of 1952 nm. To more fully study performance of the amplifier, we now look at the amplifier slope efficiency η vs. signal wavelength λs and active fiber length L. Slope efficiency η = ΔPsat/ΔPp is defined as the ratio of the change in saturated output signal power Psat to a change in pump power, for a given fiber length L and signal wavelength λs. It measures the efficiency of conversion of pump light into signal light and is an important figure of merit for the amplifier. The saturated output power Psat in our experiments and simulations is measured for a high input signal power Ps = +2 dBm.
Figure 5 shows simulations of η over the wavelength region of 1900 nm to 2050 nm, for fiber lengths ranging from 1.5 to 9 meters. Clearly the bandwidth of the amplifier shifts toward longer signal wavelengths for the longer fiber such as 9 meters. Shorter fibers such as 1.5 and 2 meters shift the operating bandwidth region toward shorter wavelengths.
Figure 5 indicates that for short fibers of 1.5 and 2 meters, η is optimum below 1950 nm, then diminishes rapidly with wavelength around 2000 nm and is negligible above 2020 nm. For longer fibers of 5 to 9 meters, η decreases more gradually with increasing wavelength and allows for a modest efficiency (i.e. 35%) up to 2050 nm. The simulated slope efficiencies in Figure 5 give a value at 1952 nm of 73% which is fully consistent with the value of 73% determined from Figure 4. Based on the simulation results, for this single stage configuration we can draw four conclusions. First, the most significant gain occurs in the first couple of meters of the active fiber. Second, the saturated output power scales with pump power and is not significantly affected by the fiber length. Third, the optimum coupling ratio k for a combination of large dynamic range and saturated output power is achieved for medium fiber lengths of 6-8 meters and a k value around 50%. Fourth, the choice of the fiber length affects the operating bandwidth of the TDFA and the slope efficiency η: shorter lengths yield shorter operating wavelengths, while longer lengths give longer wavelength operating regions. This last point will be discussed further in Section 4.
Comparison of Simulation and Experiment
We now turn to comparisons of simulation and experiment for the single stage amplifier of Figure 1. In all these comparisons, the experimental fiber length is 7 meters.
We start by looking at the signal output power Pout as a function of 1952 nm signal input power Ps over a range of -30 dBm to + 2 dBm. Pump powers Pp at 1567 nm range from 0.89 W to 3.09 W and the coupling ratio k is 50%. As illustrated in Figure 6, the simulations (in solid lines) agree well with the experimental data (points) with an average difference between simulation and experiment of 0.6 dB.
For 0 dBm input power, the measured output powers are 1.11 W and 1.86 W for pump powers of 1.93 W and 3.09 W, respectively. This corresponds to optical power conversion efficiencies of 58% and 60%, respectively.
In Equations (2) through (4), Δλ is the effective resolution bandwidth of the optical spectrum analyzer in m, and PASE is the measured internal forward spontaneous output power under the signal peak in Watts. h is Planck's constant, and c is the speed of light in vacuum. G(λ) is given by Equation (1). We measured the noise figure with a Δλ of 0.1 nm on the Yokogawa optical spectrum analyzer.
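Equations (2)-(4) themselves did not survive extraction. For orientation only, the snippet below implements one standard single-polarization ASE-based noise-figure estimator built from the quantities the paragraph defines (PASE, Δλ, G, h, c); the PASE value is illustrative, tuned to land near the reported ~3.5 dB, and the exact equations used by the authors may differ in detail.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s

def noise_figure_db(p_ase_w, gain_lin, lam_m, dlam_m):
    """Single-polarization ASE-based NF estimator:
    F = P_ASE / (h * nu * dnu * G) + 1 / G, with the OSA resolution
    bandwidth dlam (m) converted to an optical bandwidth dnu (Hz)."""
    nu = C / lam_m
    dnu = C * dlam_m / lam_m ** 2
    f_lin = p_ase_w / (H * nu * dnu * gain_lin) + 1.0 / gain_lin
    return 10.0 * math.log10(f_lin)

# Illustrative only: 0.1 nm RBW at 1952 nm, G = 51 dB, P_ASE chosen so
# that the result lands near the reported ~3.5 dB
print(f"NF = {noise_figure_db(2.3e-4, 10 ** 5.1, 1952e-9, 0.1e-9):.2f} dB")
```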
Using Equations (1) through (4), we now analyze the performance of the TDFA as shown in Figure 7. A maximum signal gain G (points) of 51 dB is measured at Ps = -30 dBm with an NF < 3.5 dB. Over the full range of input powers studied, the simulated gain values (solid lines) agree with the measured gain values to within 1 dB, validating the performance of our simulator over a wide range of input powers. Experimental values of noise figure are also plotted in points in Figure 7. The minimum measured noise figure is 3.5 dB, and the minimum simulated noise figure is 3.2 dB, close to the 3.0 dB quantum limit. Agreement between experiment and simulation for noise figure is good. The measured dynamic range for the amplifier is 32 dB for a noise figure of 4.7 dB or less.
Figure 8 shows the dependence of G and NF on coupling ratio k for Ps = -30 dBm at 1952 nm and Pp = 1.70 W at 1567 nm. Experimental data are shown in points and the simulations in solid lines. The optimum operating setpoint for small signal gain is different from the optimum for noise figure, with the largest small signal gain occurring for k = 50% and the lowest noise figures for k = 0%. The noise figure increases slowly at first with k, and then rapidly to 6.1 dB as k reaches 100%, which corresponds to counter-pumping only. The agreement between simulation and experiment is good, validating the performance of our simulator over the full range of k values. From this graph, we observe that a good balance between optimum gain and optimum noise figure is achieved for a coupling ratio of k = 50%. In Figure 9 we plot the dependence of G and NF on signal wavelength λs over the range of 1900-2050 nm for a coupling ratio of k = 50%, Pp = 1.12 W, and input signal power Ps = -30 dBm. The highest measured unsaturated gain is achieved at 1910 nm, with a small decrease with λ at 1952 nm and then a steady decrease up to 2050 nm. For G > 30 dB, the amplifier bandwidth is >120 nm. By extending the investigation to values of λ lower than 1900 nm, we can expect even larger bandwidths.
The smallest measured NF of 3.5 dB is at 1952 nm, and NF variation with wavelength is small (i.e., < 1.4 dB).
We observe that the agreement between simulation and experiment is good, and this shows our simulation predicts well the small signal gain G and noise figure NF as a function of λs.

We now investigate the slope efficiency η as a function of total pump power Pp for a saturating input power of Ps ≈ +2 dBm. Figure 10 shows experimental and simulated values for saturated output power as a function of pump power, for four different values of λs across the transmission band with k = 50%. The agreement between simulation (solid lines) and experiment (points) is good, illustrating the accuracy of our simulator over a wide region of λs and over pump powers from 0.3 to 3.2 W. Notice that at λ = 2050 nm the number of data points is limited by the onset of lasing due to the large ASE produced as the pump power increases. The experimental variation in signal output power with pump power is linear in all cases as expected from theory. The maximum measured output power is 2.00 W for λ = 1910 nm and pump power of 3.09 W, corresponding to an optical power conversion efficiency of 65%.

In Figure 11 we compare the slope efficiencies measured in Figure 10 with the simulation of Figure 5, with an expanded span for λ of 1760-2060 nm. The experimental slope efficiencies (points) agree reasonably well with the theory and this demonstrates that our simulation is valid over a wide range of values for λs for a saturated amplifier. The maximum measured slope efficiency is 68.2% at 1910 nm. This can be compared with the simulated value at this signal wavelength of 76.0%. Using a slope efficiency of greater than 50% as a criterion, Figure 11 shows that the simulated operating bandwidth BW and center operating wavelength λc of the amplifier vary significantly with fiber length. For a short fiber (3 m) the operating bandwidth BW at 50% slope efficiency is 198 nm as indicated by the horizontal arrows in the figure, and λc is 1896 nm. For the longest fiber simulated (9 m) BW is reduced to 160 nm, and λc is shifted up in wavelength to 1940 nm. Results for all the fiber lengths studied are summarized in Table 2. It is evident that shorter fiber lengths give greater operating bandwidths and lower center wavelengths. This behavior is consistent with previously reported results [START_REF] Li | Exploiting the short wavelength gain of silicabased thulium-doped fiber amplifiers[END_REF]. We note that Figure 11 and Table 2 are the first detailed comparisons of TDFA simulation and experiment, since previous work on spectral performance has been either wholly experimental [START_REF] Li | Diodepumped wideband thulium-doped fiber amplifiers for optical communications in the 1800-2050 nm window[END_REF][START_REF] Li | Exploiting the short wavelength gain of silicabased thulium-doped fiber amplifiers[END_REF] or theoretical [START_REF] Gorjan | Model of the amplified spontaneous emission generation in thulium-doped silica fibers[END_REF][START_REF] Khamis | Theoretical Model of a Thulium-doped Fiber Amplifier Pumped at 1570 nm and 793 nm in the Presence of Cross Relaxation[END_REF].
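As a small worked example of using these results, the sketch below linearly interpolates the Table 2 values to suggest a fiber length for a desired band center. The interpolation itself is our own convenience, valid only inside the simulated 1896-1940 nm range.

```python
import numpy as np

L = np.array([3.0, 5.0, 7.0, 9.0])                  # fiber length, m (Table 2)
BW = np.array([198.0, 182.0, 167.0, 160.0])         # 50%-efficiency bandwidth, nm
LAM_C = np.array([1896.0, 1918.0, 1932.0, 1940.0])  # band center, nm

def length_for_center(target_nm):
    """Interpolate Table 2 (valid only for 1896-1940 nm) to suggest an
    active fiber length whose operating band centers near the target."""
    return float(np.interp(target_nm, LAM_C, L))

L_pick = length_for_center(1925.0)
print(f"L ~ {L_pick:.1f} m, BW ~ {np.interp(L_pick, L, BW):.0f} nm")
```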
Comparison of Multistage Amplifier Performance
In Sections 3 and 4, we have shown that the shared pump topology can deliver high performance that is fully in agreement with simulation results. Here we compare the shared pump amplifier with a two-stage, three-pump TDFA [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF]. A summary comparison of the two amplifiers is given below in Table 3. The table reveals that there is no major difference in performance between the two TDFAs. Comparing the maximum saturated output powers, we see that the shared pump TDFA achieves 1.9 W output for 3.2 W of available pump, while the two-stage amplifier achieves 2.6 W for 3.6 W of available pump. The output power performance of the two amplifiers is therefore comparable once the maximum available pump power is accounted for. NF values for the two amplifiers are similar, as are the operating dynamic ranges (measured over an input power span of -30 dBm to +2 dBm). The two-stage amplifier has a slightly higher small signal gain of 56 dB, compared with 51 dB for the shared pump single stage TDFA.
The difference in slope efficiencies, with 66% for the 1 stage shared pump configuration and 82% for the 2 stage, 3 pump configuration, can be explained by referring to the architecture of the 3 pump configuration [START_REF] Tench | Broadband 2 W Output Power Tandem Thuliumdoped Single Clad Fibre Amplifier for Optical Transmission at 2µm[END_REF].
Here we recall the definition of slope efficiency η from Section 3: η = ΔPsat/ΔPp. Remembering that Psat is the output power for a highly saturated amplifier, we observe that in the 2 stage TDFA the first stage boosts the input signal power of +2 dBm to an intermediate level of about +20 dBm which is then input to the second fiber stage. This boost in power to +20 dBm increases the conversion efficiency for available pump power in the second stage and so increases η. Indeed the two stage amplifier brings the measured efficiency closer to the simulated value as shown in Figure 11.
In comparing the amplifier bandwidth for the two configurations, we see that the 167 nm simulated bandwidth for the one stage, shared pump amplifier is consistent with the estimated value of >120 nm for the two stage, three pump amplifier. The >120 nm value was obtained by measuring the 10 dB width of the ASE noise background in the saturated output spectrum of the two stage TDFA. We believe that a simulation of the slope efficiency for the two stage amplifier (currently in progress) will result in a more precise value for its bandwidth.
The comparisons in Table 3 illustrate that our single stage shared pump TDFA can match the performance targets of a complex two stage three pump amplifier. The simplicity of the architecture of the shared pump TDFA is a considerable advantage in the design simulation of TDFAs for broadband telecommunications systems.
Discussion of Parameter Optimization for TDFA Architecture
The data reported in Figures 2-11 illustrate several salient points about the operation of the shared pump TDFA.
From our experimental and theoretical studies, it is evident that input power levels, saturated output power targets, noise figure specifications, small signal gain specifications, and operating signal bandwidths all depend in an interrelated way on the amplifier architecture. Design of an optimized amplifier requires a careful balancing of all these performance targets as a function of fiber length L and coupling ratio k.
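A design flow of this kind can be expressed as a brute-force sweep over (L, k). In the sketch below, simulate() is a deliberately crude stand-in whose expressions only mimic the trends of Figures 2, 3 and 8 — it is not the authors' model — and the spec targets are invented for illustration.

```python
import itertools

def simulate(L, k):
    """Crude stand-in for the steady-state amplifier model: returns
    (gain_dB, nf_dB, pout_W). The expressions only mimic the trends
    of Figures 2, 3 and 8 and carry no physical meaning."""
    gain = 51.0 - 0.1 * (L - 14.0) ** 2 - 5.0 * abs(k - 0.5)
    nf = 3.2 + 3.0 * k ** 4
    pout = 1.9 - 0.02 * (L - 3.5) ** 2 + 0.3 * k
    return gain, nf, pout

TARGETS = {"gain_db": 45.0, "nf_db": 4.5, "pout_w": 1.5}  # invented specs
feasible = []
for L, k in itertools.product(range(3, 15), (0.0, 0.25, 0.5, 0.75, 0.95)):
    g, n, p = simulate(L, k)
    if g >= TARGETS["gain_db"] and n <= TARGETS["nf_db"] and p >= TARGETS["pout_w"]:
        feasible.append((L, k))
print(feasible)   # (L, k) pairs meeting all three targets at once
```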
For gain amplifiers, Figure 2 shows that it is very important to consider the input signal power when choosing an optimum fiber length. For example, at the coupling ratio of 50%, the fiber gain for -30 dBm input is highest for a fiber length of 14 meters. For -15 dBm input, the optimum gain occurs for lengths of 7-8 meters. Clearly the design specifications of the TDFA must be carefully considered when choosing an optimum fiber length for a preamplifier designed to operate at low signal input powers. For these low input powers the NF value remains close to the quantum limit of 3 dB.
For power amplifiers, maximum simulated output power occurs for a coupling ratio of k = 95% and an optimized fiber length of about 3.5 meters for a signal wavelength of 1952 nm. This optimized fiber length agrees well with the values obtained in Figures 2 and 4, where the optimum length for maximum output power is between 3 and 4 meters for a pump coupling ratio of 50%. We conclude that for maximizing output power at 1952 nm, coupling ratios anywhere between 50 and 95% can be employed.
Figure 4 demonstrates that the saturated output power Pout scales linearly with pump power up to the maximum simulated Pp of 2.55 W. No Brillouin scattering or other nonlinear effects were observed in our experiments. This means that we can improve the output power of the amplifier simply by increasing the pump power, up to the limit where nonlinear effects start to be observed. The threshold for nonlinear effects in our shared pump amplifier is currently under study. For the parameters in the current experiments, the one stage shared pump design yields an attractive power amplifier that is simple to build and has high signal output power.

For generic or multipurpose amplifiers, Figures 5 and 11 illustrate that the operating bandwidth BW and center wavelength λc of the amplifier are strongly dependent on the active fiber length, with maximum long wavelength response above 2000 nm occurring for fiber lengths L of 9 meters and longer. Short wavelength response is maximized for short fiber lengths of 1.5 and 2 meters. The desired operating bandwidth and center wavelength can therefore be selected by choosing an appropriate active fiber length. The noise figure NF as shown in Figure 9 is slowly varying with signal wavelength λs for a coupling ratio of k = 50%, indicating that the noise performance of the multipurpose amplifier is highly tolerant of variations in signal wavelength λs. This is an attractive feature for the many applications of this type of TDFA.
To conclude, we have shown that an active fiber length L of 7 meters and a coupling ratio k = 50 % provide balanced performance over a wide range of operating parameters for the one stage, shared pump TDFA.
Figure 1. Optical Design of Single Stage Single Pump TDFA with a Shared Pump Arrangement.
Figure 2. Signal Gain (G) as a Function of Fiber Length (L) for Four Different Levels of Ps.
Figure 3. Simulated Output Signal Power (Pout) as a Function of Fiber Length (L) and Pump Coupling Ratio (k).
Figure 4. Simulated Output Power Pout as a Function of Fiber Length L and Pump Power Pp for k = 50%.
Figure 5. Simulated Slope Efficiencies η vs. Signal Wavelength λs and Active Fiber Length L.
Figure 6. Output Signal Power Pout vs. Input Signal Power Ps for k = 50%, for Three Different Total Pump Powers Pp.
Figure 7. Gain G and Noise Figure NF at 1952 nm as a Function of Input Signal Power Ps.
Figure 8. Gain and Noise Figure as a Function of Coupling Ratio k.
Figure 9. Small Signal Gain G and Noise Figure NF as a Function of λs.
Figure 10. Saturated Output Power Pout vs. Total Pump Power Pp as a Function of λs.
Table 1 contrasts the measured and simulated values of slope efficiency η as a function of signal wavelength λs for a fiber length of 7 m.

Table 1. Comparison of Simulated and Measured Slope Efficiency η as a Function of λs.

λs, nm   η, % (Exp.)   η, % (Sim.)
1910     68.2          76.0
1952     65.9          72.9
2004     52.1          55.0
2050     13.5           9.6
Table 2. Operating Bandwidth BW and Center Wavelength λc as a Function of Fiber Length L.

L, m   BW, nm   λc, nm
3      198      1896
5      182      1918
7      167      1932
9      160      1940
Table 3. Comparison of Single Stage, Shared Pump TDFA with Two Stage, Three Pump TDFA (Fiber Length L = 7 m; TDFA Configurations at 1952 nm).

Parameter                      Symbol   Units   1 Stage, Shared Pump   2 Stage, 3 Pumps
Pump Power (1567 nm)           Pp       W       3.2                    3.6
Saturated Output Power         Pout     W       1.9                    2.6
Small Signal Noise Figure      NF       dB      3.4                    3.2
Signal Dynamic Range           Pin      dB      32                     32
Small Signal Gain              G        dB      51                     56
Slope Efficiency (Saturated)   η        %       65.9                   82
Operating Bandwidth            BW       nm      167 (simulated)        > 120 (est. from ASE)
Summary
We have reported the experimental and simulated performance of a single stage TDFA with a shared in-band pump at 1567 nm. In particular we considered the dependence of amplifier performance on pump coupling ratio and signal wavelength. We determined that the optimum fiber length L and optimum coupling ratio k depend strongly on the design performance specifications for the TDFA such as signal wavelength band, saturated output power, noise figure, small signal gain, and dynamic range. Our simulations show that the operating bandwidth of the amplifier can be as high as 198 nm. Due to the broad Thulium emission bandwidth, this amplifier configuration can be tailored to meet a variety of performance needs. We achieved saturated output powers of 2 W, small signal gains as high as 51 dB, noise figures as low as 3.5 dB, and a dynamic range of 32 dB for a noise figure of less than 4.7 dB. In all cases we found good agreement between our simulation tool and the experiments. No Brillouin scattering or other nonlinear effects were observed in any of our measurements. Our experiments and simulations show that the shared pump TDFA can match the performance of more complex multistage, multi-pump TDFAs, and illustrate the simplicity and usefulness of our design. This opens the possibility for new and efficient TDFAs for lightwave transmission systems as preamplifiers, as in-line amplifiers, and as power booster amplifiers.
Acknowledgments
We gratefully acknowledge Eblana Photonics for the single frequency distributed mode 2 µm laser sources, and OFS for the single clad Tm-doped fiber. |
01766662 | en | [
"phys.phys.phys-optics",
"spi.opti"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01766662/file/PTL%20Broadband%202W%20Tandem%20TDFA%20Tench%20Romano%20Delavaux%20REVISION%20v2%201%2001152018.pdf | Keywords: Doped Fiber Amplifiers, Infrared Fiber Optics, Optical Fiber Devices, Thulium, 2 microns
We report experimental and simulated performance of a tandem (dual-stage) Tm-doped silica fiber amplifier with a high signal output power of 2.6 W in the 2 µm band. Combined high dynamic range, high gain, low noise figure, and high OSNR are achieved with our design.
I. INTRODUCTION
The recent progress in transmission experiments at signal wavelengths in the 2 µm band [START_REF] Liu | High-capacity Directly-Modulated Optical Transmitter for 2-µm Spectral Region[END_REF] shows the need for Thulium-doped fiber amplifiers (TDFAs) with a combination of high gain, low noise figure, and large dynamic range. Previous work has demonstrated single stage amplifiers operating from 1900-2050 nm and 1650-1850 nm [START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF]. In this paper we report the experimental and simulated performance of a tandem single clad TDFA employing in-band pumping around 1560 nm and designed for the 1900-2050 nm signal band. A combination of high gain (> 50 dB), output power of 2.6 W, > 30 dB dynamic range, and < 4 dB small signal noise figure is demonstrated with our design. Performance as a function of input power and signal wavelength is presented. The experimental data are in good agreement with steady-state simulations of our single clad tandem TDFA performance.
II. EXPERIMENTAL SETUP
Figure 1 shows the setup for measurements of the tandem TDFA which consists of a preamplifier (Stage 1) and a power booster (Stage 2). Signal light from a single frequency discrete mode laser (DML) source (Eblana Photonics) is coupled into the first TDF F1 through attenuator A. Signal input power is set by varying the attenuator. In stage 1, fiber 1 is co-and counter-pumped using wavelength division multiplexers (WDMs) with 1550 nm grating stabilized DFBs (P1 and P2) which deliver more than 200 mW each into F1.
Manuscript submitted on December 13, 2017. Robert E. Tench and Jean-Marc Delavaux are with Cybel LLC, 1195 Pennsylvania Avenue, Bethlehem, PA 18018 USA (e-mail: [email protected]) (e-mail: [email protected]). Clement Romano is with Cybel LLC, 1195 Pennsylvania Avenue, Bethlehem, PA, 18018 USA, and Institut Telecom/Paris Telecom Tech, 46 Rue Barrault, 75634, Paris, France (e-mail: [email protected]).

The signal output of F1 is then coupled into the second TDF fiber F2, in Stage 2, which is counter-pumped either with a multi-watt 1560 nm fiber laser or a multi-watt 1567 nm fiber laser. Optical isolators I1 and I2 suppress parasitic lasing and ensure unidirectional operation. In our experiments, F1 is a 7 m length of OFS TDF designated TmDF200. Two types of fiber F2 are investigated, the first 5 m of OFS TmDF200 and the second 4.4 m of IXBlue TDF designated IXF-TDF-4-125.
III. EXPERIMENTAL RESULTS AND SIMULATIONS
Figure 2 shows the measured gain (G) and noise figure (NF) for the two amplifier configurations, first the OFS/OFS combination and then the OFS/IXBlue combination. In all of our data, which are displayed as points in the figures, input powers are referenced to the input of F1, and output powers are referenced to the output of F2. Maximum values of G of 54.6 dB and 55.8 dB, for OFS/OFS and OFS/IXBlue, respectively, were measured at a signal wavelength λs of 1952 nm for fiber laser pump powers Pp at 1560 nm of 1.95 W. The corresponding NF was measured over the input power range Pin from -30 dBm to +2 dBm.
The data demonstrate a large dynamic range of over 32 dB for an NF of 5.1 dB or less. For lower fiber laser pump powers (Pp=0.2 W to 0.8 W at 1560 nm), NF values as low as 3.2 dB were measured for the OFS/OFS configuration.
Simulations of these data were performed using fiber parameters measured in our laboratory [START_REF] Romano | Characterization of the 3F4-3H6 Transition in Thulium-doped Silica Fibres and Simulation of a 2µm Single Clad Amplifier[END_REF]. The simulation is based on a three level model of the Thulium ion in silica using the 3H6, 3F4, and 3H4 levels, including ion-ion interactions [START_REF] Romano | Simulation and design of a multistage 10 W thulium-doped double clad silica fiber amplifier at 2050 nm[END_REF]. The parameters of gain coefficient, absorption coefficient, and 3F4 level lifetime were determined for the OFS and IXBlue fibers under test. Figure 3 plots the measured gain and absorption coefficients for the OFS fiber, which has a maximum absorption of 92 dB/m at 1630 nm. Figure 4 shows the gain and absorption coefficients for the IXBlue fiber, which has a maximum absorption of 140 dB/m at 1630 nm. The measured lifetimes are 650 µs for the OFS fiber and 750 µs for the IXBlue fiber. Other relevant parameters were taken from the literature. We note that our measurements of peak gain are lower than the peak absorption. This feature is consistent with some published data but not others [START_REF] Sincore | High Average Power Thulium-Doped Silica Fiber Lasers: Review of Systems and Concepts[END_REF][START_REF] Pisarik | Thulium-doped fibre broadband source for spectral region near 2 micrometers[END_REF][START_REF] Agger | Emission and absorption cross section of thulium doped silica fibers[END_REF][START_REF] Smith | Mode instability thresholds for Tm-doped fiber amplifiers pumped at 790 nm[END_REF].
The set of three level differential population equations [START_REF] Jackson | Theoretical modeling of Tmdoped silica fiber lasers[END_REF] was solved using a stiff solver, while the propagation set of differential equations was solved with a 4th-order Runge-Kutta method. The simulation accounts numerically for the amplified spontaneous emission (ASE) generated in the setup. Two stage simulation was carried out by sequentially applying the results of the single stage calculations.
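For readers unfamiliar with the numerical scheme, a minimal sketch of 4th-order Runge-Kutta power propagation is given below. It drastically simplifies the paper's three-level model to a single saturable-gain ODE pair with placeholder coefficients, so its numbers are illustrative only; the structure — a state vector of pump and signal powers marched along z — is the point.

```python
import numpy as np

def rk4_step(f, z, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dz = f(z, y)."""
    k1 = f(z, y)
    k2 = f(z + h / 2.0, y + h / 2.0 * k1)
    k3 = f(z + h / 2.0, y + h / 2.0 * k2)
    k4 = f(z + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy co-pumped gain medium: all three coefficients are placeholders.
ALPHA_P, G_S, P_SAT = 0.9, 1.1, 0.2   # 1/m, 1/m, W

def rhs(z, y):
    pp, ps = y                          # pump and signal powers, W
    inversion = pp / (pp + P_SAT)       # crude saturation law
    return np.array([-ALPHA_P * pp,     # pump depletion
                     G_S * inversion * ps])   # signal growth

y = np.array([2.5, 1.6e-3])             # Pp = 2.5 W, Ps = +2 dBm
z, h = 0.0, 0.01
while z < 7.0:                           # march along 7 m of fiber
    y = rk4_step(rhs, z, y, h)
    z += h
print(f"toy Pout = {y[1]:.3f} W")
```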
As illustrated by the solid lines in Figure 2, the simulations agree well with the experimental data. Simulations of G are within 1.5 dB of the data for Pin > -25 dBm. Simulations of NF agree with the data to within 2 dB. These results validate the accuracy of our simulations for both high gain and highly saturated operating regimes.
Data illustrating the variation in output power Pout as a function of 1567 nm fiber laser pump power Pp are shown in Figure 5. For these data, Pin was set to between +1.3 and +2.2 dBm to saturate the amplifier and Pp was varied from 0.3 W to 3.2 W. For the OFS/OFS configuration, a maximum slope efficiency of 82% was observed at λs = 2004 and 1952 nm, corresponding to maximum output powers at these wavelengths of 2.60 W. The slope efficiency is defined as ΔPout / ΔPp. A reduced output power of 0.4 W was achieved at 2051 nm because of lower slope efficiency and the onset of lasing. Figure 6 illustrates the long term stability of Pout at 1952 nm and Pp = 2.43 W over a period of 6 hours. The variation in Pout over this time period was less than 4%.
No fiber nonlinear behavior such as Raman or Brillouin scattering was observed in our experiments.
Comparison of the data and simulations shows agreement to better than 0.5 dB for all experimental signal wavelengths as illustrated by the solid lines in Figure 5. These results validate the performance of our simulator as a function of signal wavelength.
Slope efficiency data as a function of λs for the OFS/OFS and OFS/IXBlue setups are shown in Figure 7. Simulated slope efficiencies, given by the solid lines in Figure 7, agree well with the experimental data for all the measured signal wavelengths. The simulations indicate that high slope efficiencies of >70% can be expected from 1900 nm to 2020 nm. The simulations also show that the single clad fiber can deliver significant power at 2051 nm, with reduced efficiency. We attribute this behavior to the presence of lower wavelength ASE and to reabsorption at lower wavelengths. In Figure 8 we contrast experimental output spectra obtained for the two TDFA amplifiers, for saturated input signals of +2.1 dBm at 1952 nm and fiber laser pump power at 1567 nm of 3.2 W. These data are taken under the same conditions and yield optical signal to noise ratios (OSNR) of 57 dB/0.1 nm for both configurations. The spectra observed for both setups exhibit small differences in the wavelength region below 1950 nm. We believe this is caused by the different doping of the two fibers. Nevertheless, the operating wavelength regions and bandwidths for the OFS and IXBlue fibers are largely equivalent. We attribute this similarity to the low concentration of Tm in the two fibers, where the scattering and ion-ion interactions can be neglected. Figure 9 compares the experimental output spectrum for the OFS/IXBlue configuration with the results of our steady-state simulations. We find that the simulations predict the experimental data relatively well. At low wavelengths <1900 nm, we believe the differences between data and simulation are caused by the wavelength dependence in the passive components and the non-monochromatic spectrum of the single frequency laser source.
IV. DISCUSSION
The high measured internal gain of >55 dB represents a significant improvement over results previously reported [START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF] for single stage TDFAs. Such a high small signal gain is promising for preamplifier, repeater, and low noise applications.
The high observed slope efficiency of 82% and output power of 2.6 W also show significant improvement over previously reported performance [START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF]. The experimental SNR of 57 dB/0.1 nm for a saturated amplifier output is important for applications such as booster amplifiers.
The usable operating optical bandwidths of the tandem TDFAs, with the criterion of 10 dB down from the spontaneous emission peak (Figure 8), are estimated to be 122 nm for the OFS/OFS configuration and 130 nm for the OFS/IXBlue configuration. These values agree with previous work [START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF] and are fully consistent with the simulated slope efficiencies in Figure 7.
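The 10-dB-down criterion is easy to automate. The snippet below applies it to a synthetic, roughly TDFA-shaped spectrum; both the helper and the spectrum are our own illustration, not the measured data behind Figure 8.

```python
import numpy as np

def bandwidth_10db(wavelength_nm, psd_db):
    """Width (nm) over which the spectrum stays within 10 dB of its
    peak -- the ASE-based bandwidth criterion used in the text."""
    psd = np.asarray(psd_db, dtype=float)
    above = np.where(psd >= psd.max() - 10.0)[0]
    return wavelength_nm[above[-1]] - wavelength_nm[above[0]]

wl = np.linspace(1850.0, 2100.0, 501)            # synthetic spectrum
spec = -30.0 - 10.0 * ((wl - 1950.0) / 70.0) ** 2
print(f"10 dB bandwidth = {bandwidth_10db(wl, spec):.0f} nm")
```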
Our steady state simulations of tandem TDFA performance agree well with the experimental data over a range of Pin from -30 dBm to +2 dBm, for measurements of G, NF, and Pout. This agreement covers the measured wavelength range of 1910-2051 nm. Future work will extend the studied wavelength range toward lower wavelengths. The good agreement between experiment and theory confirms that our simulator is a useful tool for the design of tandem high gain, high power TDFAs.
Finally, we note that both the OFS and IXBlue configurations of the tandem TDFA exhibit similar performance, both experimentally and in simulation, confirming [START_REF] Liu | High-capacity Directly-Modulated Optical Transmitter for 2-µm Spectral Region[END_REF][START_REF] Li | Thulium-doped fiber amplifier for optical communications at 2 µm[END_REF][START_REF] Jung | Silica-Based Thulium Doped Fiber Amplifiers for Wavelengths beyond the L-band[END_REF] that we can employ multiple commercial sources of Tm-doped fiber in our simulation and design of high performance tandem optical amplifiers.
V. SUMMARY
We have reported the design and experimental performance of a tandem single clad TDFA, in-band pumped around 1560 nm and operating in the 1900 -2050 nm signal band. Small signal gains >55 dB, output powers as high as 2.6 W, and small signal noise figures as low as 3.2 dB were experimentally measured. Slope efficiencies as high as 82% were also observed, and an SNR of 57 dB/0.1 nm was demonstrated with output powers >2 W. Comparison of our data with steady state simulations yielded good agreement, thereby validating our model for high gain and high saturated output powers from the tandem two-stage TDFA over a wavelength range of 1952-2051 nm. Our design is appropriate for high transmit power, preamplifier, and repeater applications in the 2 µm region.
Figure 1. Tandem TDFA configuration.
Figure 2. G and NF as a function of Pin for the two tandem amplifiers at 1952 nm.
Figure 3. Measured gain and absorption coefficients for the OFS Tm-doped fiber.
Figure 4. Measured gain and absorption coefficients for the IXBlue Tm-doped fiber.
Figure 5. Saturated Pout vs. Pp for the OFS/OFS tandem amplifier.
Figure 6. Long term stability of the TDFA output for Pp = 2.43 W.
Figure 7. Slope efficiency as a function of λs for the two tandem configurations.
Figure 8. Saturated output spectra for the two tandem configurations.
Figure 9. Comparison of experimental and simulated output spectra for the OFS/IXBlue configuration.
VI. ACKNOWLEDGEMENTS
We are grateful to OFS and IXBlue for the Thulium-doped silica fibers, and to Eblana Photonics for the single frequency source lasers in the 2000 nm band. |
01766832 | en | [
"sdu.ocean",
"phys.phys.phys-ao-ph"
] | 2024/03/05 22:32:15 | 2015 | https://hal.univ-reunion.fr/hal-01766832/file/ILRC27_portafaix.pdf | Thierry Portafaix
Sophie Godin-Beekmann
Guillaume Payen
Martine De Mazière
Bavo Langerock
Susana Fernandez
Françoise Posny
Jean-Pierre Cammas
Jean-Marc Metzger
Hassan Bencherif
Ozone profiles obtained by DIAL technique at Maïdo Observatory in La Reunion Island: comparisons with ECC ozone-sondes, ground-based FTIR spectrometer and microwave radiometer measurements
T. Portafaix (1)*, S. Godin-Beekmann (3), G. Payen (2), M. de Mazière (4), B. Langerock (4), S. Fernandez, F. Posny (1), J.P. Cammas (2), J.M. Metzger, H. Bencherif

In addition, a microwave radiometer from the University of Bern operated at the site between late 2013 and early 2015.
STRATOSPHERIC DIAL SYSTEM AT REUNION ISLAND
This LIDAR was installed at Reunion Island in 2000 and moved to Maïdo facility in 2013 after instrumental updates.
Like any DIAL system, it requires the use of a pair of emitted wavelengths.
Laser sources are a tripled Nd:Yag laser (Spectra-Physics Lab 150) and a XeCl excimer laser (Lumonics PM 844). The Nd:Yag provides the non-absorbed beam at 355 nm with a pulse rate of 30 Hz and a power of 5 W, and the excimer provides the absorbed beam at 308 nm with a pulse rate of 40 Hz and a power larger than 9 W. An afocal optical system is used to reduce the divergence of the beam to 0.5 mrad.
The receiving telescope is composed of 4 parabolic mirrors (diameter: 500 mm). The backscattered signal is collected by 4 optical fibers located at the focal point of each mirror. The spectrometer used for the separation of the wavelengths is a Jobin Yvon holographic grating (3600 lines mm-1, resolution 3 Å mm-1, efficiency >25%).
The two Rayleigh beams at 308 and 355 nm are separated initially by the holographic grating and separated again at the output of the spectrometer by a lens system in the proportion 8% and 92 %, respectively, in order to adapt the signal to the non-saturation range of the photon-counting system. The optical signals are detected by 6 Hamamatsu non-cooled photomultipliers (PM). A mechanical chopper is used to cadence the laser shots and cut the signal in the lower altitude range where PM are saturated. This chopper consists of a steel blade rotating at 24 000 rpm in primary vacuum.
6 acquisition channels are recorded simultaneously: 2 channels at 355 nm corresponding to the lower and upper parts of the profile, 2 channels at 308 nm (lower and upper parts) and 2 Nitrogen Raman channels at 332 and 387 nm. In addition to the mechanical gating, both upper Rayleigh channels at 355 nm and 308 nm, are equipped with an electronic gating in order to cut the signals for the altitudes below 16 km and prevent signal-induced noise.
The system was moved to Maïdo Observatory by the end of 2012, after the update of the electronic system (now LICEL TR and PR transient recorders) and of the XeCl excimer laser. This new configuration allows us to obtain ozone profiles in the 15-45 km altitude range.
The lidar signals are recorded in 3 min time files but averaged over the whole night of acquisition (2 to 3 h of integration per night) to increase the signal-to-noise ratio.
It is necessary to apply different corrections to the signal. The background signal is estimated and removed using an average or a linear regression in the high altitude range where the useful lidar signal is negligible (over 80 km). Another correction of the photomultiplier saturation for low layers is also required and applied.
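A hedged sketch of these two corrections in Python is given below. The dead-time value, bin width, and non-paralyzable dead-time model are assumptions standing in for the actual acquisition parameters of the photon-counting system, and the demonstration profile is synthetic.

```python
import numpy as np

def correct_profile(counts, alt_km, shots, tau_ns=4.0, bin_s=1e-7):
    """Sketch of the two corrections described above: (1) non-paralyzable
    dead-time desaturation of the photon-counting rate, (2) background
    estimated by a linear fit above 80 km and subtracted."""
    rate = counts / (shots * bin_s)               # count rate per bin, s^-1
    rate = rate / (1.0 - rate * tau_ns * 1e-9)    # dead-time correction
    hi = alt_km > 80.0                            # useful signal negligible
    bg = np.polyfit(alt_km[hi], rate[hi], 1)      # linear background model
    return rate - np.polyval(bg, alt_km)

# Synthetic demonstration profile (decaying signal + flat background)
alt = np.arange(15.0, 100.0, 0.15)
raw = 1e5 * np.exp(-alt / 7.0) + 40.0
clean = correct_profile(raw, alt, shots=72000)
```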
OTHER STRATOSPHERIC OZONE INSTRUMENTS AT MAIDO FACILITY.
A ground-based microwave radiometer (GROMOS-C) designed to measure middle atmospheric ozone profiles was installed at Maïdo Observatory in 2014 and removed in early 2015. It was specifically designed for campaigns and is remotely controlled and operated continuously under all weather conditions. It measures the pressure-broadened ozone line at 110.836 GHz and can also measure the CO line at 115.271 GHz. The vertical profiles are retrieved by the optimal estimation method [START_REF] Fernandez | a novel ground based microwave radiometer for ozone measurement campaigns[END_REF]. FTIR solar absorption measurements at high spectral resolution (from 0.0110 to 0.0035 cm-1 for ozone spectra) are performed by a Bruker 125HR spectrometer installed in 2013. This instrument is dedicated to NDACC measurements in the mid-infrared, covering the spectral range 600 to 6500 cm-1 (1.5 to 16 µm); in particular, the ozone retrievals are performed using the 1000-1005 cm-1 window in the 600-1400 cm-1 spectra (MCT detector, KBr beam-splitter). From the measured absorption spectrum, an inverse method (the optimal estimation method) is used to trace back the vertical abundance profiles of gases present in the atmosphere. For ozone, information on about four independent layers in the atmosphere can be retrieved, roughly one in the troposphere and three in the stratosphere, up to about 45 km [START_REF] Vigouroux | Evaluation of tropospheric and stratospheric ozone trends over Western Europe from ground-based FTIR network observations[END_REF]. This instrument is operated remotely and automatically with an updated version of the BARCOS system [START_REF] Neefs | BARCOS, an automation and remote control system for atmospheric observations with a Bruker interferometer[END_REF]. In addition to the continuous monitoring of the atmospheric chemical composition and transport processes, the intention is also to participate in dedicated observation campaigns.
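Both retrievals mentioned here rest on the same linear optimal-estimation machinery (in the Rodgers formulation); a compact reference implementation is sketched below. The matrices in the usage lines are random placeholders, not instrument covariances.

```python
import numpy as np

def oem_retrieval(y, K, xa, Sa, Se):
    """Linear optimal estimation (Rodgers form):
    x_hat = xa + G (y - K xa), with G = (Sa^-1 + K^T Se^-1 K)^-1 K^T Se^-1.
    Also returns the averaging-kernel matrix A = G K, used to smooth
    high-resolution profiles before comparison (as done in the text)."""
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    G = np.linalg.solve(Sa_inv + K.T @ Se_inv @ K, K.T @ Se_inv)
    A = G @ K
    return xa + G @ (y - K @ xa), A

# Placeholder matrices only -- not instrument covariances
rng = np.random.default_rng(1)
K = rng.normal(size=(8, 4))                    # toy Jacobian
x_hat, A = oem_retrieval(K @ np.ones(4), K, np.zeros(4),
                         np.eye(4), 0.1 * np.eye(8))
```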
In addition, ECC ozone soundings have been performed weekly at Reunion Island since 1998. The ozonesonde currently used is of ECC Z Ensci type with a 0.5% KI buffered solution from Droplet Measurement Technology [DMT]. It is coupled to a meteorological radiosonde M10 from MeteoModem. The effective vertical resolution of the ozone data is between 50 and 100 m [Thompson et al., 2003a, b, 2007]. The ozone measurement accuracy is around ±4% in the stratosphere below the 10 mbar pressure level, and the precision in total ozone column measured by the ECC sonde is around 5%. These ozone measurements are part of the SHADOZ (Thompson et al., 2003a, 2003b) and NDACC networks.
INTER-COMPARISONS
The first comparisons with simultaneous ECC soundings are very encouraging [START_REF] Baray | Maïdo observatory: a new high-altitude station facility at Reunion Island (21° S, 55° E) for long-term atmospheric remote sensing and in situ measurements[END_REF], with differences of less than 10% throughout the profile. Figure 1 presents an example for June 23, 2014. Other comparisons between DIAL and GROMOS-C ozone profiles, after applying the averaging kernel, show very good agreement in the layer between 5 and 20 hPa, with differences of less than 5%. Differences are larger in the lower and upper layers, reaching more than 15% in the 20-100 hPa layer.
These comparisons with the microwave radiometer were made using the DIAL "Rapid Delivery" profiles for the NDACC network, using average parameters for photomultiplier desaturation and background signal removal.
These average parameters can introduce some additional error in the lower or upper parts of the resulting profiles. It will be important for the final version of this paper to make these comparisons from consolidated lidar profiles using refined parameters.
The stratospheric ozone LIDAR is already NDACC qualified. It should be noted however that an inter-comparison campaign of all the NDACC lidar systems (water vapor, temperature, ozone) installed at the Maïdo Observatory with the mobile system of NASA-GSFC [START_REF] Mcgee | Improved stratospheric ozone lidar[END_REF] is planned for May 2015.
Comparisons with FTIR will be performed for the three layers between 15 and 45 km. The FTIR measurements in the ozone spectral range will be intensified during this 2015 intercomparison campaign.
The ozone number density is retrieved from the slope of the signals after derivation [START_REF] Godin-Beekmann | Systematic DIAL ozone measurements at Observatoire de Haute-Provence[END_REF]. The lidar signals are corrected for Rayleigh extinction using a composite pressure-temperature profile computed from the meteorological soundings performed daily at Reunion Airport and from the Arletty model (based on meteorological data from the European Centre). It is also necessary in the DIAL technique to use a low-pass filter. The logarithm of each signal is fitted to a 2nd-order polynomial, and the ozone number density is computed from the difference of the derivatives of the fitted polynomials. Varying the number of points over which the signals are fitted completes the filtering.
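The retrieval just described can be condensed as follows. DSIGMA is only an order-of-magnitude placeholder for the 308/355 nm ozone differential absorption cross-section (the real value must come from a spectroscopic database), and the window half-width plays the role of the low-pass filter.

```python
import numpy as np

DSIGMA = 1.2e-23  # sigma_O3(308 nm) - sigma_O3(355 nm), m^2 -- rough
                  # placeholder; take the real value from spectroscopy

def dial_ozone(z_m, p_on, p_off, half_window=10):
    """DIAL retrieval as described above: fit a 2nd-order polynomial to
    ln(P) of each channel over a sliding window, difference the fitted
    derivatives at the window center, and convert the result to an O3
    number density (m^-3). Widening the window strengthens the filter."""
    n = np.full(len(z_m), np.nan)
    for i in range(half_window, len(z_m) - half_window):
        s = slice(i - half_window, i + half_window + 1)
        d_on = np.polyder(np.polyfit(z_m[s], np.log(p_on[s]), 2))
        d_off = np.polyder(np.polyfit(z_m[s], np.log(p_off[s]), 2))
        n[i] = (np.polyval(d_off, z_m[i]) - np.polyval(d_on, z_m[i])) / (2.0 * DSIGMA)
    return n
```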
Fig 1: Ozone profiles on 24 June 2013 by stratospheric DIAL (black line) and ECC ozonesonde (blue line) at Maïdo Observatory.
ACKNOWLEDGEMENT
The present work is supported by LACy, OSU-Réunion and the FP7 European NORS project. The authors acknowledge the European Community, the Région Réunion, the CNRS, and the University of La Réunion for their support and contribution in the construction phase of the research infrastructure OPAR (Observatoire de Physique de l'Atmosphère à La Réunion). OPAR and LACy are presently funded by CNRS (INSU) and Université de La Réunion, and managed by OSU-R (Observatoire des Sciences de l'Univers à la Réunion, UMS 3365). We acknowledge Anne M. Thompson (NASA/GSFC, USA), the SHADOZ network Principal Investigator, and E. Golubic, P. Hernandez and L. Mottet, who are deeply involved in the routine lidar measurements at the Maïdo facility.
Axon Physiology
Dominique Debanne, Emilie Campanac, Andrzej Bialowas, Edmond Carlier, and Gisèle Alcaraz
I. INTRODUCTION
The axon (from Greek ἄξων, axis) is defined as a long neuronal process that ensures the conduction of information from the cell body to the nerve terminal. Its discovery during the 19th century is generally credited to the German anatomist Otto Friedrich Karl Deiters (147), who first distinguished the axon from the dendrites. But the axon initial segment was originally identified by the Swiss Rüdolf Albert von Kölliker (293) and the German Robert Remak (439) (for a detailed historical review, see Ref. 480). The myelin of axons was discovered by Rudolf Virchow (548), and Louis-Antoine Ranvier (433) first characterized the nodes or gaps that now bear his name. The functional role of the axon as the output structure of the neuron was initially proposed by the Spanish anatomist Santiago Ramón y Cajal (429, 430).
Two distinct types of axons occur in the peripheral and central nervous system (PNS and CNS): unmyelinated and myelinated axons, the latter being covered by a myelin sheath originating from Schwann cells in the PNS or oligodendrocytes in the CNS (Table 1). Myelinated axons can be considered as three compartments: an initial segment where somatic inputs summate and initiate an action potential; a myelinated axon of variable length, which must reliably transmit the information as trains of action potentials; and a final segment, the preterminal axon, beyond which the synaptic terminal expands (Fig. 1). The initial segment of the axon is not only the region of action potential initiation (117, 124, 514) but is also the most reliable neuronal compartment, where full action potentials can be elicited at very high firing frequencies without attenuation (488). Bursts of spikes display minimal attenuation in the AIS compared with the soma (488, 561). The main axon is involved in the secure propagation of action potentials, but it is also able to integrate fluctuations in membrane potential originating from the somatodendritic region to modulate neurotransmitter release (Alle and Geiger; 291, 489). Finally, the axon terminal, which is principally devoted to excitation-release coupling with a high fidelity (159), is also the subject of activity-dependent regulation that may lead to spike broadening (209).
Generally, axons from the CNS are highly ramified and contact several hundreds of target neurons locally or distally. But the function of the axon is not purely limited to the conduction of the action potential from the site of initiation near the cell body to the terminal. Recent experimental findings shed new light on the functional and computational capabilities of single axons, suggesting that several different complex operations are specifically achieved along the axon. Axons integrate subthreshold synaptic potentials and therefore signal both analog and digital events. Drop of conduction or backward propagation (reflection) may occur at specific axonal branch points under certain conditions. Axonal geometry together with the biophysical properties of voltage-gated channels determines the timing of propagation of the output message in different axonal branches. In addition, axons link central neurons through gap junctions that allow ultra-fast network synchrony. Moreover, local shaping of the axonal action potential may subsequently determine synaptic efficacy during repetitive stimulation. These operations have been largely described in in vitro preparations of brain tissue, but evidence for these processes is still scarce in the mammalian brain in vivo. In this paper we review the different ways in which the properties of axons can control the transmission of electrical signals. In particular, we show how the axon determines efficacy and timing of synaptic transmission. We also discuss recent evidence for long-term, activity-dependent plasticity of axonal function that may involve morphological rearrangements of axonal arborization, myelination, regulation of axonal channel expression, and fine adjustment of AIS location. The cellular and molecular biology of the axon is, however, not discussed in depth in this review. The reader will find elsewhere recent reviews on axon-dendrite polarity (Barnes and Polleux), axon-glia interaction (380, 381), myelin formation (Baumann and Pham-Dinh; 483), axonal transport (138, 250, 405), and the synthesis of axonal proteins (210).
II. ORGANIZATION OF THE AXON
A. Complexity of Axonal Arborization: Branch Points and Varicosities
Axonal morphology is highly variable. Some axons extend locally (<1 mm long for inhibitory interneurons), whereas others may be as long as 1 m or more. The diameter of axons varies considerably (553). The largest axons in the mammalian PNS reach a diameter of ~20 µm [(264); but the biggest is the squid giant axon, with a diameter close to 1 mm (575)], whereas the diameter of unmyelinated cortical axons in the mammalian brain varies between 0.08 and 0.4 µm (Berbel and Innocenti; 559). The complexity of axonal arborization is also variable. At one extreme, the cerebellar granule cell axon possesses a single T-shaped branch point that gives rise to the parallel fibers. At the other, many axons in the central nervous system typically form an elaborate and most impressive tree. For instance, the terminal arbor of thalamocortical axons in layer 4 of the cat visual cortex contains 150-275 branch points (Antonini and Stryker). The complexity of axonal arborization is also extensive in cortical pyramidal neurons. Axons of hippocampal CA3 pyramidal cells display at least 100-200 branch points for a total axonal length of 150-300 mm, and a single cell may contact 30,000-60,000 neurons (269, 325, 347). GABAergic interneurons also display complex axons. Hippocampal and cortical inhibitory interneurons emit an axon with a very dense and highly branched arborization (235). One obvious function of axonal divergence is to allow synchronous transmission to a wide population of target neurons within a given brain area. For instance, hippocampal basket cells synchronize the firing of several hundred principal cells through their divergent axon (118).
The second morphological feature of axons is the presence of a large number of varicosities (synaptic boutons) that are commonly distributed in an en passant, "string of beads" manner along thin axon branches. A single axon may contain several thousand boutons (235, 325, 411). Their size ranges between ~1 µm for thin unmyelinated axons (482, 559) and 3-5 µm for large hippocampal mossy-fiber terminals (Blackstad and Kjaerheim; 482). Their density varies among axons, and the spacing of varicosities is comprised between ~4 and ~6 µm in unmyelinated fibers (481, 482).
B. Voltage-Gated Ion Channels in the Axon
Voltage-gated ion channels located in assigned subdomains of the axonal membrane carry out action potential initiation and conduction, and synaptic transmission, by governing the shape and amplitude of the unitary spike, the pattern of repetitive firing, and the release of neurotransmitters (Fig. 2). Recent reviews (310,387,540) have provided a detailed account of the voltage-gated ion channels in neurons, clearly illustrating the view that in the axon, the specific array of these channels in the various neuronal types adds an extra level of plasticity to synaptic outputs.
Channels in the axon initial segment
FIG. 1. Summary of axonal functions. A pyramidal neuron is schematized with its different compartments. Four major functions of the axon are illustrated (i.e., spike initiation, spike propagation, excitation-release coupling, and integration). A spike initiates in the axon initial segment (AIS) and propagates towards the terminal where the neurotransmitter is released. In addition, electrical signals generated in the somatodendritic compartment are integrated along the axon to influence spike duration and neurotransmitter release (green arrow).

A) SODIUM CHANNELS. Variations in potential arising from somato-dendritic integration of multiple inputs culminate
at the axon initial segment (AIS), where a suprathreshold resultant will trigger the action potential. This classical view relies on the presence of a highly excitable region in the initial segment of the axon (Fig. 3). Theoretical studies of action potential initiation have suggested that a 20- to 1,000-fold higher density of sodium (Na+) channels in the axon relative to that found in the soma and dendrites is required to permit the polarity of spike initiation in the axon of the neuron (157, 346, 373, 434). The first evidence for concentration of Na+ channels at the axon hillock and initial segment of retinal ganglion cells was obtained with the use of broad-spectrum Na+ channel antibodies (564). After several fruitless attempts (119, 120), functional confirmation of the high concentration of Na+ channels in the AIS was achieved only recently with the use of Na+ imaging (193, 290) and outside-out patch-clamp recordings from the soma and the axon (259). In these last studies, the largest Na+-dependent fluorescent signals or voltage-gated Na+ currents were obtained in the AIS of cortical pyramidal neurons (Fig. 3, A and B). Na+ current density is 34-fold greater in the AIS than in the soma (259). This estimation has been very recently confirmed for the Nav1.6 subunit detected in CA1 pyramidal neurons by a highly sensitive, quantitative electron microscope immunogold method (SDS-digested freeze-fracture replica-labeling; Ref. 333; Fig. 3C). The density of gold particles linked to Nav1.6 subunits measured by this method (~180/µm²) is fully compatible with a previous estimate in the AIS of L5 neurons, where the density of Na+ current amounts to 2,500 pS/µm² (i.e., ~150 channels/µm² given a 17 pS unitary Na+ channel conductance).
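The consistency check between the two estimates is direct arithmetic: dividing the measured conductance density by the unitary channel conductance gives

$$N = \frac{G}{\gamma} = \frac{2500\ \mathrm{pS}\,\mu\mathrm{m}^{-2}}{17\ \mathrm{pS}} \approx 147\ \mathrm{channels}\,\mu\mathrm{m}^{-2},$$

in reasonable agreement with the ~180 particles/µm² immunogold count.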
Three different isoforms of Na+ channels, which drive the ascending phase of the action potential, are present at the AIS, namely, Nav1.1, Nav1.2, and Nav1.6. Nav1.1 is dominant at the AIS of GABAergic neurons (394), but it is also found in the AIS of retinal ganglion cells (542) and in spinal cord motoneurons (169; see Table 2 for details). With a few exceptions, its expression in interneurons is restricted to the proximal part of the AIS and displays little overlap with Nav1.6, which occupies the distal part (169, 332, 394, 542). Nav1.6 and Nav1.2 are principally associated with the AIS of myelinated and unmyelinated axons, respectively, with Nav1.2 expressed first during development, then being gradually replaced by Nav1.6 concomitantly with myelination (Boiko et al.). Although greatly diminished, the expression of Nav1.2 might persist in the AIS of adult neurons and is maintained in populations of unmyelinated axons. The two isoforms coexist in the AIS of L5 pyramidal neurons with a proximal distribution of Nav1.2 and a distal distribution of Nav1.6 (259). Sodium channels in the distal part of the AIS display the lowest threshold, suggesting that this polarized distribution could explain the unique properties of the AIS, including action potential initiation (principally mediated by Nav1.6) and backpropagation (largely supported by Nav1.2; Refs. 171, 259). A similar conclusion is drawn in CA1 pyramidal neurons, where Nav1.6 sodium channels play a critical role for spike initiation (449).

FIG. 2. Schematic representation of the distribution of sodium (top), potassium (middle), and calcium (bottom) channels in the different compartments of a myelinated axon. The cell body is symbolized by a pyramid shape (left). Channel densities are figured by the density of color. The myelin sheath is symbolized in gray. NoR, node of Ranvier; AIS, axon initial segment. Uncertain localizations are written in gray and accompanied by a question mark.
Nav channels generate three different Na+ currents that can be distinguished by their biophysical properties, namely, 1) the fast-inactivating transient Na+ current (I NaT), 2) the persistent Na+ current (I NaP), and 3) the resurgent Na+ current (I NaR; i.e., a current activated upon repolarization; Ref. 427). The two last currents are activated at subthreshold or near-threshold potentials, and they play a critical role in the control of neuronal excitability and repetitive firing (345). I NaP is responsible for amplification of subthreshold excitatory postsynaptic potentials (EPSPs) and is primarily generated in the proximal axon (Astman et al.; 512). I NaR is thought to facilitate reexcitation during repetitive firing and is generated in the AIS of cortical pyramidal neurons of the perirhinal cortex (Castelli et al.). I NaR might be present all along the axon, since a recent study indicates that this current shapes presynaptic action potentials at the calyx of Held (246).
B) POTASSIUM CHANNELS. Potassium channels are crucial regulators of neuronal excitability, setting resting membrane potentials and firing thresholds, repolarizing action potentials, and limiting excitability. Specific voltage-gated potassium (Kv) conductances are also expressed in the AIS (see Fig. 2). Kv1 channels regulate spike duration in the axon (291, 490; Fig. 4A). Kv1.1 and Kv1.2 are most frequently associated at the initial segment of both excitatory and inhibitory cortical and hippocampal neurons (267, 332), and tend to be located more distally than Nav1.6. The current carried by these channels is indeed 10-fold larger in the distal part of the AIS than that measured in the soma (291). It belongs to the family of low-voltage-activated currents because a sizable fraction of the current is already activated at voltages close to the resting membrane potential (291, 490). These channels are also directly implicated in the high fidelity of action potential amplitude during burst firing (488).
Kv2.2 is present in the AIS of medial nucleus trapezoid neurons, where it promotes interspike hyperpolarization during repeated stimuli, thus favoring the extremely high frequency firing of these neurons (275). Kv7 channels (7.2 and 7.3), that bear the M-current (also called KCNQ channels), are also found in the AIS of many central neurons (154,398,546). These channels are essential to the regulation of AP firing in hippocampal principal cells, where they control the resting membrane potential and action potential threshold (399,473,474,579).
C) CALCIUM CHANNELS. The last players that have recently joined the AIS game are calcium channels (Fig. 2). Using two-photon Ca2+ imaging, Bender and Trussell (46) showed that T- and R-type voltage-gated Ca2+ channels are localized in the AIS of brain stem cartwheel cells. In this study, Ca2+ entry in the AIS of Purkinje cells and neocortical pyramidal neurons was also reported. These channels regulate firing properties such as spike timing, burst firing, and action potential threshold. The downregulation of T-type Ca2+ channels by dopamine receptor activation represents a powerful means to control action potential output (Bender et al.). Using calcium imaging, pharmacological tools, and immunochemistry, a recent study reported the presence of P/Q-type (Cav2.1) and N-type (Cav2.2) Ca2+ channels in the AIS of L5 neocortical pyramidal neurons (577). These channels determine pyramidal cell excitability through activation of calcium-activated BK channels.
Channels in unmyelinated axons
In unmyelinated fibers, action potential conduction is supported by Nav1.2 sodium channels that are thought to be homogeneously distributed (Boiko et al.; 223, 558).
At least five voltage-gated K+ channel subunits are present in unmyelinated fibers (Table 2). Kv1.3 channels have been identified in parallel fiber axons of cerebellar granule cells (305, 543). The excitability of Schaffer collaterals is strongly enhanced by α-dendrotoxin (DTX; a blocker of Kv1.1, Kv1.2, and Kv1.6) or margatoxin (MgTx; a blocker of Kv1.2 and Kv1.3), indicating that Kv1.2 is an important channel subunit for controlling excitability in these fibers (395). Hippocampal mossy fiber axons express Kv3.3 and Kv3.4 channels (105). The Kv7 activator retigabine reduces excitability of C-type nerve fibers of the human sural nerve (315). Kv7 channels determine excitability of pyramidal CA1 cell axons (546).

FIG. 4. K+ channels determine AP duration in the AIS of L5 pyramidal neurons and in hippocampal mossy fiber terminals. A: DTX-sensitive K+ channels determine spike duration in L5 pyramidal axons. Top left: superimposed AP traces recorded from the soma (black) and at the indicated axonal distances from the axon hillock (red). Top right: representative K+ currents evoked by voltage steps from −110 to +45 mV in cell-attached patches from the soma, proximal AIS (5-30 µm), distal AIS (35-55 µm), and axonal sites (up to 400 µm). Bottom: impact of 50-100 nM DTX-I on somatic (left) and axonal (right) APs before (black) and after DTX-I (red). Note the enlargement of the AP in the AIS but not in the soma. [From Kole et al. (291), with permission from Elsevier.] B: DTX-sensitive K+ channels determine spike duration in the mossy-fiber terminal. Left: mossy-fiber bouton approached with a patch pipette. [From Bischofberger et al. (58), with permission from Nature Publishing Group.] Top right: K+ current activated in a mossy fiber bouton outside-out patch by pulses from −70 to +30 mV in the absence (control) and in the presence of 1 µM α-dendrotoxin (α-DTX). Bottom right: comparison of the spike waveform in the soma and mossy fiber terminal (MF terminal) of a hippocampal granule cell. Note the large spike duration in the soma. [Adapted from Geiger and Jonas (209), with permission from Elsevier.]
Channels in the nodes of Ranvier
In myelinated axons, conduction is saltatory and is made possible by the presence of hot spots of sodium channels at the nodes of Ranvier (Fig. 2). Two principal Na+ channel isoforms are found in the nodes of PNS and CNS axons: Nav1.6 and Nav1.1 (Caldwell et al.; 169, 333; see Table 2). In a recent study, Lorincz and Nusser (333) found that the density of the Nav1.6 subunit in the node of Ranvier is nearly twice that observed in the AIS (~350 channels/µm²). Transient and persistent sodium currents have been identified in the node of myelinated axons (Benoit and Dubois; 166).
Saltatory conduction at nodes is secured by the juxtaparanodal expression of Kv1.1 and Kv1.2 channels, and by the nodal expression of Kv3.1b and Kv7.2/Kv7.3, which all concur to reduce reexcitation of the axon (152,154,165,378,435,550,551,584,585). Other calcium-or sodium-activated potassium channels are encountered in the nodal region of myelinated axons (see Table 2).
Channels in the axon terminals
Axonal propagation culminates in the activation of chemical synapses with the opening of the presynaptic Cav2.1 and Cav2.2 calcium channels (Fig. 2). With the use of imaging techniques, the presence of calcium channels has been identified in en passant boutons of cerebellar basket cell axons where they presumably trigger transmitter release (330). Hot spots of calcium influx have also been reported at branch points (330). Although their function is not entirely clear, they may control signal transmission in the axonal arborization. In addition, Cav1.2 (L-type) calcium channels that are sparsely expressed all over hippocampal soma and dendrites are prominently labeled by immunogold electron microscopy in hippocampal axons and in mossy fiber terminals (531).
Functional sodium channels have been identified in presynaptic terminals of the pituitary (2), at the terminal of the calyx of Held (260,320), and in hippocampal mossy fiber terminal (179). While Nav1.2 is probably the sole isoform of sodium channel expressed at terminals (in agreement with its exclusive targeting to unmyelinated portions of axons), terminal Kv channels exhibit a greater diversity (159). Kv1.1/Kv1.2 subunits dominate in many axon terminals (see Table 2 for details). Mossy fiber axons and boutons are enriched in Kv1.4 subunits (126,478,543) which determine the spike duration (Fig. 4B) and regulate transmitter release (209). The other main function of Kv1 channels is preventing the presynaptic terminal from aberrant action potential firing (158).
While Kv1 channels start to activate at low threshold, Kv3 conductances are typical high-voltage-activated currents. They have been identified in terminals of many inhibitory and excitatory neurons (see Table 2). Functionally, Kv3 channels keep action potential brief, thus limiting calcium influx and hence release probability (218).
Kv7 channels are also present in preterminal axons and synaptic terminals (see Table 2 for details). The specific M-channel inhibitor XE991 inhibits synaptic transmission at the Schaffer collateral input, whereas the M-channel opener retigabine has the opposite effect, suggesting the presence of presynaptic Kv7 channels in Schaffer collateral terminals (546). It should be noted that these effects are observed in experimental conditions in which the M-current is activated, i.e., in the presence of a high external concentration of K+.
Other dampening channels such as the hyperpolarization-activated cyclic nucleotide-gated cationic (HCN) channels are expressed in the unmyelinated axon and in axon terminals (see Table 2). H-channels are also encountered at the calyx of Held giant presynaptic terminal (133) and in nonmyelinated peripheral axons of rats and humans (Baginskas et al.; 225). The typical signature of H-channels is also observed in cerebellar mossy fiber boutons recorded in vitro or in vivo (432). The postsynaptic function of H-channels is now well understood, but their precise role in the preterminal axon and axon terminal is less clear. They may stabilize membrane potential in the terminal. For instance, the axons of cerebellar basket cells are particularly short, and any hyperpolarization or depolarization arising from the somatodendritic compartment may significantly change the membrane potential in the terminal and thus alter transmitter release. Thus stabilizing membrane potential in the terminal with a high density of HCN channels may represent a powerful means to prevent voltage shifts.
Besides voltage-gated conductances, axons and axon terminals also contain several ion-activated conductances, including large-conductance, calcium-activated BK potassium channels (also called Maxi-K or Slo1 channels; Refs. 258, 287, 377, 423, 455), small-conductance calcium-activated SK potassium channels (390, 447), and sodium-activated K+ channels (K Na, also called Slack or Slo2.2 channels; Ref. 52) that are activated upon depolarization of the axon by the propagating action potential (Table 2). All these channels will also limit excitability of the nerve terminal by preventing uncontrolled repetitive activity.
G protein-gated inwardly rectifying potassium (GIRK) channels are also present at presynaptic terminals (Table 2). In the cortex and the cerebellum, these channels are functionally activated by GABA B receptors where they are thought to control action potential duration (188, 308).
C. Ligand-Gated Receptors in the Axon
Axons do not contain only voltage- or metabolite-gated ion channels but also express presynaptic vesicular release machinery (586) and many types of ligand-gated receptors, including receptors to fast neurotransmitters and slow neuromodulators. We will focus here only on receptors that alter the excitability of the axon in physiological conditions.
Receptors in the axon initial segment
The axon initial segments of neocortical and hippocampal pyramidal neurons are particularly enriched in axo-axonic inhibitory contacts (499-501). A single axon initial segment receives up to 30 symmetrical synapses from a single axo-axonic (chandelier) GABAergic cell (500). Axon initial segments contain a high concentration of the α2 subunit variant of the GABA A receptor (Brunig et al.). Axo-axonic synapses display a fast and powerful GABAergic current (340). The strategic location of GABAergic synapses on the AIS has generally been thought to endow axo-axonic cells with a powerful inhibitory action on the output of principal cells. However, this view has been recently challenged. Gabor Tamás and colleagues (522) recently discovered that axo-axonic synapses impinging on L2-3 pyramidal neurons may in fact be excitatory in the mature cortex. Importantly, the potassium-chloride cotransporter 2 (KCC2) is very weakly expressed in the AIS, and thus the reversal potential for GABA currents is much more depolarized in the axon than in the cell body (522). Similar conclusions have been drawn in the basolateral amygdala (566) and in hippocampal granule cells with the use of local uncaging of GABA in the different compartments of the neuron (285). However, a recent study using noninvasive techniques concludes that inhibitory postsynaptic potentials (IPSPs) may be hyperpolarizing throughout the entire neuron (211).
Receptors in the axon proper
GABA A receptors are not exclusively located in the AIS, but they have also been demonstrated in myelinated axons of the dorsal column of the spinal cord (456,457) and in axonal branches of brain stem sensory neurons (545). Activation of these receptors modulates the compound action potential conduction and waveform. In some cases, propagation of antidromic spikes can be blocked by electrical stimulation of local interneurons (545). This effect is prevented by bath application of GABA A receptor channel blocker, suggesting that conduction block results from activation of GABA A receptors after the release of endogenous GABA. Similarly, GABA A receptors have been identified in the trunk of peripheral nerves [START_REF] Brown | Axonal GABA-receptors in mammalian peripheral nerve trunks[END_REF]. However, the precise mode of physiological activation of these receptors remains unknown, and there is no clear evidence that GABA is released from oligodendrocytes or Schwann cells (307).
Monoamines regulate axonal properties in neurons from the stomatogastric ganglion of the crab or the lobster (Ballo et al.; Bucher et al.; 213, 366). They also determine axonal properties in mammalian axons. For instance, subtype 3 of the serotonin receptor (5-HT3) modulates excitability of unmyelinated peripheral rat nerve fibers (316).
Nicotinic acetylcholine receptors are encountered on unmyelinated nerve fibers of mammals, where they modulate axonal excitability and conduction velocity (Armett and Ritchie; 314).
Receptors in the periterminal axon and nerve terminals
While the axon initial segment and the axon proper contain essentially GABA A receptors, the preterminal axon and nerve terminals are considerably richer and express many different modulatory and synaptic receptors (180). Only a subset of these receptors affects axonal excitability.
A) GABA A RECEPTORS. Although GABA B receptors are widely expressed on presynaptic excitatory and inhibitory terminals (Bettler et al.; 536), their action on periterminal and axonal excitability is slow and moderate. In contrast, high-conductance GABA A receptors control axonal excitability more accurately. Frank and Fuortes (197) first hypothesized modulation of transmitter release via axo-axonic inhibitory synapses to explain the reduction in monosynaptic transmission in the spinal cord (reviewed in Ref. 450). Based on the temporal correspondence between presynaptic inhibition and the depolarization of the primary afferent terminals, it was suggested that depolarization of the afferent was responsible for the inhibition of synaptic transmission. It was later shown that presynaptic inhibition is caused by a reduction in transmitter release (168, 175). Since this pioneering work, the primary afferent depolarization (PAD) has been demonstrated with axonal recordings and computational tools in many different sensory afferents, including the cutaneous primary afferents of the cat (224), group Ib afferent fibers of the cat spinal cord (309, 312, 313), and sensory afferents of the crayfish (100-102). These studies and others (132, 515) indicate that activation of GABA A receptors produces a decrease in the amplitude of the presynaptic AP, thus decreasing transmitter release. Two mechanisms based on simulation studies have been proposed to account for presynaptic inhibition associated with PADs: a shunting mechanism (469) and inactivation of sodium channels (226). In the crayfish, the reduction in spike amplitude is mainly mediated by a shunting effect, i.e., an increase in membrane conductance due to the opening of GABA A receptors (102). The inactivation of sodium channels may add to the shunting effect for larger PADs.
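To see why a conductance increase alone can reduce the depolarization reaching the release machinery, consider a minimal passive single-compartment sketch. The conductance values below are toy numbers chosen only for illustration, not measurements from the preparations discussed above:

```python
# Passive shunting sketch: an opened GABA-A conductance divides the
# depolarizing current between leak and shunt paths.
g_leak = 10e-9      # resting membrane conductance (S), illustrative
g_gaba = 20e-9      # conductance opened by GABA-A receptors (S), illustrative
i_dep  = 0.5e-9     # depolarizing current reaching the compartment (A)

v_control = i_dep / g_leak               # steady-state depolarization, no shunt
v_shunted = i_dep / (g_leak + g_gaba)    # with the shunt open

attenuation = 1 - v_shunted / v_control  # fraction of depolarization lost
print(f"depolarization reduced by {100 * attenuation:.0f} %")  # -> 67 %
```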
Single action potentials evoked in cerebellar stellate and basket cells induce GABAergic currents measured in the soma, indicating that release of GABA regulates axonal excitability through GABA A autoreceptors (419). Application of the GABA A receptor agonist muscimol in the bath, or locally to the axon, modulates the excitability of hippocampal mossy fibers (452). The sign of the effect may be modulated by changing the intra-axonal Cl− concentration. Direct evidence for GABA A receptors on hippocampal granule cell axons has been provided unambiguously by Alle and Geiger (6) by the use of patch-clamp recordings from single mossy fiber boutons and local application of GABA. In mechanically dissociated CA3 pyramidal neurons from young rats, mossy fiber-derived release is strongly facilitated by stimulation of presynaptic GABA A receptors (273). This facilitation has been extensively studied with direct whole cell recordings from the mossy-fiber bouton. GABA A receptors modulate action potential-dependent Ca2+ transients and facilitate LTP induction (451).
B) GLYCINE RECEPTORS. In a similar way, glycine receptors may also control axonal excitability and transmitter release. At the presynaptic terminal of the calyx of Held, glycine receptors replace GABA A receptors as maturation proceeds (538). Activation of presynaptic glycine receptors produces a weakly depolarizing Cl− current in the nerve terminal and enhances synaptic release (537). The depolarization induces a significant increase in the basal concentration of Ca2+ in the terminal (Awatramani et al.). Similar conclusions are reached in the ventral tegmental area, where presynaptic glycine receptors lead to the facilitation of GABAergic transmission through activation of voltage-gated calcium channels and an increase in the intraterminal concentration of Ca2+ (573).
C) GLUTAMATE RECEPTORS. At least three classes of glutamate receptors are encountered at presynaptic release sites where they regulate synaptic transmission (412). Only a small fraction of these receptors regulates axonal excitability. In the CA1 region of the hippocampus, kainate produces a marked increase in spontaneous IPSCs. This effect might result from the direct depolarization of the axons of GABAergic interneurons (472). In fact, kainate receptors lower the threshold for antidromic action potential generation in CA1 interneurons.
NMDA receptors are encountered in many axons. They determine synaptic strength at inhibitory cerebellar synapses (170, 212), at the granule cell-Purkinje cell synapse (Bidoret et al.; Casado et al.), at L5-L5 excitatory connections (494), and at L2/3 excitatory synapses (127). However, recent studies indicate that axonal NMDA receptors do not produce sufficient depolarization or calcium entry in cerebellar stellate cells (111) or in L5 pyramidal cell axons (112) to significantly affect axonal excitability. In fact, NMDA receptors might modulate presynaptic release simply by the electrotonic transfer of the depolarization from the somatodendritic compartments to the axonal compartment (111, 112; see also sect. VC). However, such a tonic change in the somatodendritic compartment of the presynaptic cell has not been observed in paired recordings when presynaptic NMDA receptors are pharmacologically blocked (494). D) PURINE RECEPTORS. ATP and its degradation products, ADP and adenosine, are considered today as important signaling molecules in the brain (Burnstock). Classically, ATP is coreleased from vesicles with acetylcholine (437) or GABA (274). However, a recent study indicates that ATP can also be released by the axon in a nonvesicular manner through volume-activated anion channels (191). In fact, propagating action potentials cause microscopic swelling and movement of axons that may in turn stimulate volume-activated anion channels to restore normal cell volume through the release of water together with ATP and other anions.
Purinergic receptors are divided into three main families: P1 receptors (G protein-coupled, activated by adenosine and subdivided into A1, A2A, A2B, and A3 receptors), P2X receptors (ligand-gated, activated by nucleotides and subdivided into P2X1-7), and P2Y receptors (G protein-coupled, activated by nucleotides and subdivided into P2Y1-14) (Burnstock). Purine receptors are found on axon terminals where they modulate transmitter release. For instance, activation of the presynaptic A1 receptor powerfully inhibits glutamate, but not GABA, release in the hippocampus (529, 574). In contrast, activation of presynaptic P2X receptors by ATP enhances GABA and glycine release in the spinal cord (263, 442). P2X7 receptors are expressed on developing axons of hippocampal neurons, and their stimulation promotes axonal growth and branching in cultured neurons (155).
III. AXON DEVELOPMENT AND TARGETING OF ION CHANNELS IN THE AXON
Neurons acquire their typical form through a stereotyped sequence of developmental steps. The cell initially establishes several short processes. One of these neurites grows very rapidly compared with the others and becomes the axon (161). The spatial orientation of the growing axon is under the control of many extracellular cues that have been reviewed elsewhere (Barnes and Polleux; 156). This section is therefore focused on the description of the major events underlying development and targeting of ion channels in the three main compartments of the axon.
A. Axon Initial Segments
In addition to its role in action potential initiation involving a high density of ion channels, the AIS might also be defined by the presence of a specialized and complex cellular matrix, specific scaffolding proteins, and cellular adhesion molecules (393). The cellular matrix together with the accumulation of anchored proteins forms a membrane diffusion barrier (375, 563). This diffusion barrier plays an important role in preferentially segregating proteins into the axonal compartment. Recently, a cytoplasmic barrier to protein traffic has been described in the AIS of cultured hippocampal neurons (502). This filter allows entry of molecular motors of the kinesin-1 family (KIF5) that specifically carry synaptic vesicle proteins which must be targeted to the axon. The entry of kinesin-1 into the axon is due to the difference in the nature of microtubules in the soma and the AIS (294). Molecular motors (KIF17) that carry dendrite-targeted postsynaptic receptors cannot cross the axonal filter (Arnold; 502, 567). This barrier develops between 3 and 5 days in vitro (i.e., ~1 day after the initial elongation of the process that becomes an axon).
The scaffolding protein ankyrin G (AnkG) is critical for assembly of the AIS and is frequently used to define this structure in molecular terms (233). The restriction of many AIS proteins within this small axonal region is achieved through their anchoring to the actin cytoskeleton via AnkG (296). AnkG is attached to the actin cytoskeleton via βIV spectrin (Berghs et al.). Sodium channels, Kv7 channels, the cell adhesion molecule neurofascin-186 (NF-186), and neuronal cell adhesion molecules (NrCAM) are specifically targeted to the AIS through interaction with AnkG (154, 206, 245, 398). Furthermore, deletion of AnkG causes axons to acquire characteristics of dendrites, with the appearance of spines and postsynaptic densities (244). While Nav and Kv7 channels are clustered through their interaction with AnkG, clustering of Kv1 channels in the AIS is under the control of the postsynaptic density 93 (PSD-93) protein, a member of the membrane-associated guanylate kinase (MAGUK) family (391). Some of the interactions between channels and AnkG are regulated by protein kinases. For instance, the protein kinase CK2 regulates the interaction between Nav and AnkG (Brechet et al.). But other factors might also control the development and targeting of Na+ channels at the AIS. For instance, the sodium channel β1 subunit determines the development of Nav1.6 at the AIS (Brackenbury et al.). The absence of phosphorylated IκBα, an inhibitor of the nuclear transcription factor κB, at the AIS impairs sodium channel concentration (458).
The AIS may also contain axon specification signals (222). Cutting the axon of cultured hippocampal neurons is followed by axonal regeneration at the same site if the cut is >35 µm from the soma (i.e., the AIS is still connected to the cell body). In contrast, regeneration occurs from a dendrite if the AIS has been removed (222).
B. Nodes of Ranvier
During development, Nav1.2 channels appear first at immature nodes of Ranvier (NoR) and are eventually replaced by Nav1.6 (Boiko et al.). Later, Kv3.1b channels appear at the juxtaparanodal region, just before Kv1.2 channels (152). While targeting of ion channels at the AIS largely depends on intrinsic neuronal mechanisms, the molecular organization of the NoR and its juxtaparanodal region is mainly controlled by interactions between proteins from the axon and the myelinating glia (310, 393, 413). For instance, in mutants that display abnormal myelin formation, Nav1.6 channels are dispersed or only weakly clustered in CNS axons (Boiko et al.; 276). In PNS axons, nodes are initiated by interactions between secreted gliomedin, a component of the Schwann cell extracellular matrix, and the axonal adhesion molecule neurofascin-186. But once the node is initiated, targeting of ion channels at the NoR resembles that at the AIS. Accumulation of Nav channels at the NoR also depends on AnkG (173). However, Kv1 clustering at the juxtaparanodal region of PNS axons depends on the cell adhesion molecules Caspr2 and TAG-1, which partly originate from the glia, but not on MAGUKs (257, 413, 414).
C. Axon Terminals
In contrast to the AIS and the NoR, much less is known about the precise events underlying development and targeting of ion channels in axon terminals. However, the trafficking of N- and P/Q-type Ca2+ channels to the axon terminal and that of GABA B receptors illustrate the presence of specific targeting motifs on axonal terminal proteins. The COOH-terminal region of the N-type Ca2+ channel (Cav2.2) contains an amino acid sequence that constitutes a specific binding motif to the presynaptic protein scaffold, allowing their anchoring to the presynaptic terminal (356, 357). Furthermore, direct interactions have been identified between the t-SNARE protein syntaxin and N-type Ca2+ channels (323, 479). Deletion of the synaptic protein interaction (synprint) site in the intracellular loop connecting domains II and III of P/Q-type Ca2+ channels (Cav2.1) not only reduces exocytosis but also inhibits their localization to axon terminals (370).
One of the two subtypes of the GABA B receptor (GABA B1a) is specifically targeted to the axon (547). The GABA B1a subunit carries two NH2-terminal interaction motifs, the "sushi domains", that are potent axonal targeting signals. Indeed, mutations in these domains prevent protein interactions and preclude localization of GABA B1a subunits to the axon, while fusion of the wild-type GABA B1a to mGluR1a preferentially redirects this somatodendritic protein to axons and their terminals (Biermann et al.).
In the pinceau terminal of cerebellar basket cells, HCN1 channels develop during the end of the second postnatal week (334). This terminal is particularly enriched in Kv1 channels (319), but the precise role of molecular partners and scaffolding proteins in clustering these channels remains unknown (392).
IV. INITIATION AND CONDUCTION OF ACTION POTENTIALS
A. Action Potential Initiation
Determining the spike initiation zone is particularly important in neuron physiology. The action potential classically represents the final step in the integration of synaptic messages at the scale of the neuron (Bean; 514). In addition, most neurons in the mammalian central nervous system encode and transmit information via action potentials. For instance, action potential timing conveys significant information for sensory or motor functions (491). In addition, action potential initiation is also subject to many forms of activity-dependent plasticity in central neurons (493). Thus information processing in the neuronal circuits greatly depends on how, when, and where spikes are generated in the neuron.
A brief historical overview
Pioneering work in spinal motoneurons in the 1950s indicated that action potentials were generated in the AIS or possibly the first NoR (124, 187, 202). Microelectrode recordings from motoneurons revealed that the action potential consisted of two main components: an "initial segment" (IS) component was found to precede the full action potential originating in the soma [i.e., the somatodendritic (or SD) component]. These two components could be isolated whatever the mode of action potential generation (i.e., antidromic stimulation, direct current injection, or synaptic stimulation), but the best resolution was obtained with the first derivative of the voltage. The IS component is extremely robust and can be isolated from the SD component by antidromic stimulation of the axon in a double-shock paradigm (124). For very short interstimulus intervals, the SD component fails but not the IS component. With simultaneous recordings at multiple axonal and somatic sites of the lobster stretch receptor neuron, Edwards and Ottoson (176) also reported that the electrical impulse originated in the axon, at a certain distance from the cell body.
This classical view was challenged in the 1980s and 1990s with the observation that, under very specific conditions, action potentials may be initiated in the dendrites (438). The development in the 1990s of approaches using simultaneous patch-pipette recordings from different locations on the same neuron was particularly precious to address the question of the site of action potential initiation (514, 516). In fact, several independent studies converged on the view that dendrites were capable of generating regenerative spikes mediated by voltage-gated sodium and/or calcium channels (220, 331, 462, 513, 565). The initiation of spikes in the dendrites (i.e., preceding somatic action potentials) has been reported in neocortical (513), hippocampal (220), and cerebellar neurons (431) upon strong stimulation of dendritic inputs. However, in many different neuronal types, threshold stimulations preferentially induce sodium spikes in the neuronal compartment that is directly connected to the axon hillock (Bischofberger and Jonas; 242, 318, 354, 506, 511, 513, 516). Thus the current rule is that the axon is indeed a low-threshold initiation zone for sodium spike generation. But the initiation site was precisely located only recently by direct recording from the axon.
Initiation in the axon
The recent development of techniques allowing loose-patch (Atherton et al.; Boudkkazi et al.; 116, 362, 422) or whole cell recording (291, 355, 463, 489, 561) from single axons of mammalian neurons, together with the use of voltage-sensitive dyes (196, 396, 397) or sodium imaging (Bender and Trussell; 193, 290), provides useful means to precisely determine the spike initiation zone. These recordings have revealed that sodium spikes usually occur in the axon before those in the soma (Fig. 5, A and B). More specifically, the initiation zone can be estimated as the axonal region where the advance of the axonal spike relative to the somatic spike is maximal (Fig. 5C). In addition, bursts of action potentials are generally better identified in the axon than in the cell body (355, 561).
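The latency-based localization just described reduces to a simple computation. A minimal sketch, with hypothetical names and illustrative (not measured) latency values:

```python
import numpy as np

def initiation_site(distances_um, t_axon, t_soma):
    """Locate the spike initiation zone as the axonal recording site where
    the axonal spike leads the somatic spike by the largest margin.

    distances_um : recording distances from the axon hillock (µm)
    t_axon       : spike times at each axonal site (ms)
    t_soma       : spike time at the soma (ms)
    """
    advance = t_soma - np.asarray(t_axon)   # > 0 where the axon fires first
    i = int(np.argmax(advance))
    return distances_um[i], advance[i]

# Illustrative values only: the lead shrinks again beyond the initiation
# zone as the spike propagates away from it in both directions.
d = [10, 20, 30, 40, 60, 100]
t_ax = [0.08, 0.04, 0.01, -0.02, 0.01, 0.10]   # ms, relative spike times
print(initiation_site(d, t_ax, t_soma=0.10))    # -> (40, 0.12)
```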
In myelinated axons, action potentials are initiated at the AIS (Atherton et al.; 196, 283, 284, 396, 397, 488, 578). Depending on the cell type, the initiation zone varies, being located between 15 and 40 µm from the soma. In layer 5 pyramidal neurons, the action potential initiation zone is located in the distal part of the AIS, i.e., at 35-40 µm from the axon hillock (Boudkkazi et al.; 397, 578). Similar estimates have been obtained in subicular pyramidal neurons, with an AP initiation zone located at ~40 µm from the soma, beyond the AIS (119). The precise reason why the locus of axonal spike generation in myelinated fibers varies between the AIS and the first NoR is not known, but it may result from the heterogeneous distribution of Nav and Kv channels as well as the existence of ectopic zones for spike initiation (410, 580). In cerebellar Purkinje cell axons, the question was debated until recently. On the basis of latency differences between simultaneous whole cell somatic and cell-attached axonal recordings, the action potential was found to be generated at the first NoR (at a distance of ~75 µm; Ref. 116). However, in another study, it was concluded that spike initiation was located at the AIS (i.e., 15-20 µm from the soma; Ref. 284). Here, the authors found that the AIS, but not the first NoR, was highly sensitive to focal application of a low concentration of TTX. Initiation in the AIS has recently been confirmed by the use of noninvasive recording techniques (196, 396). The origin of the discrepancy between the first two studies has been elucidated. In fact, cell-attached recordings from the axon initial segment are not appropriate because the capacitive and ionic currents overlap, preventing identification of the spike onset.
In unmyelinated axons, the initiation zone has been identified at 20-40 µm from the axon hillock. In CA3 pyramidal neurons, the AP initiation zone is located at 35-40 µm from the soma (363). A much shorter distance has been reported in hippocampal granule cell axons, where the site of initiation has been estimated at 20 µm from the axon hillock (463). This proximal location of the spike initiation zone is corroborated by the labeling of sodium channels and ankyrin-G within the first 20 µm of the axon (299). A possible explanation for this very proximal location might be that the very small diameter of granule cell axons (~0.3 µm, Ref. 208) increases the electrotonic distance between the soma and proximal axonal compartments, thus isolating the site of initiation from the soma.
Threshold of action potential initiation
An essential feature of the all-or-none property of the action potential is the notion of a threshold for eliciting a spike. Converging evidence points to the fact that neuronal firing threshold may not be defined by a single value. The first studies of Lapicque (317) were designed to describe the role of depolarization time on the threshold current: the threshold current was reduced when its duration increased. Based on Hodgkin-Huxley membrane equations, Noble and Stein (384,385) defined the spike threshold as the voltage where the summed inward membrane current exceeds the outward current.
In contrast with current threshold, voltage threshold could not be assessed in neurons until intracellular records were obtained from individual neurons (Brock et al.). Given the complex geometry of the neuron, a major question was raised in the 1950s: is action potential threshold uniform over the neuron? Since the spike is initiated in the axon, it was postulated that the voltage threshold was 10-20 mV lower (more hyperpolarized) in the AIS than in the cell body (124). Because direct recording from the axon was not accessible for a long time, there was little evidence for or against this notion. In an elegant study, Maarten Kole and Greg Stuart recently solved this question with direct patch-clamp recordings from the AIS (292). They showed that the current threshold to elicit an action potential is clearly lower in the AIS (Fig. 6A). However, the voltage threshold, defined as the membrane potential at which the rate of change of voltage (i.e., the first derivative) crosses a certain value (generally 10-50 V/s; Anderson et al.; 201, 471), appeared surprisingly to be highest in the axon (Fig. 6A). This counterintuitive observation is due to the fact that Na+ channels in the AIS drive a local depolarizing ramp just before action potential initiation that attenuates over very short distances as it propagates to the soma or the axon proper, thus giving the impression that the voltage threshold is higher (Fig. 6B). When this local potential is abolished by focal application of TTX to the AIS, then the voltage threshold is effectively lower in the AIS (292). In other words, the spike threshold measured out of the AIS is that of a propagating spike, and the correct measure in this compartment is the threshold of the SD component. This subtlety may also be at the origin of unconventional proposals for Na+ channel gating during action potential initiation (Baranauskas and Martina; 236, 359, 379, 578). Indeed, the onset of the action potential appears faster in the soma than expected from Hodgkin-Huxley modelling.
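In practice, the voltage threshold discussed above is extracted from recorded traces with a dV/dt criterion. A minimal sketch is shown below; the 20 V/s value is one choice within the 10-50 V/s range quoted above, and the function and variable names are ours, not from any published analysis code:

```python
import numpy as np

def spike_threshold(t, v, dvdt_crit=20.0):
    """Voltage threshold as the membrane potential where dV/dt first
    crosses a fixed criterion (here 20 V/s).

    t : time (s), v : membrane potential (mV)
    """
    dvdt = np.gradient(v, t) / 1000.0    # mV/s -> V/s
    above = np.where(dvdt >= dvdt_crit)[0]
    if above.size == 0:
        return None                      # no spike in this trace
    return v[above[0]]                   # Vm at the criterion crossing
```

Applied to simultaneous somatic and axonal traces, the same criterion yields the compartment-dependent thresholds compared in Fig. 6A.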
The spike threshold is not a fixed point but rather corresponds to a range of voltages. For instance, intrasomatic recordings from neocortical neurons in vivo reveal that spike threshold is highly variable (Azouz and Gray; 248). The first explanation usually given to account for this behavior involves channel noise. In fact, the generation of an AP near the threshold follows probability laws because the opening of voltage-gated channels that underlie the sodium spike is a stochastic process (465, 560). The number of voltage-gated channels is not large enough to allow the contribution of channel noise to be neglected.
However, this view is now challenged by recent findings indicating that the large spike threshold variability measured in the soma results from back-propagation of the AP from the AIS to the soma when the neuron is excited by trains of noisy inputs (578). In fact, at the point of spike initiation (i.e., the AIS), the spike is generated with relatively low variance in membrane potential threshold but as it back-propagates towards the soma, variability increases. This behavior is independent of channel noise since it can be reproduced by a deterministic Hodgkin-Huxley model ( 578). The apparent increase in spike threshold variance results in fact from the rearrangement of the timing relationship between spikes and the frequency component of subthreshold waveform during propagation.
Timing of action potential initiation
Synchronous population activity is critical for a wide range of functions across many different brain regions, including sensory processing (491), spatial navigation (388), and synaptic plasticity (Bi and Poo, 142, 143). Whereas the temporal organization of network activity clearly relies on timing both at the synapse (Boudkkazi et al.) and elsewhere within the network (441), the mechanisms governing precise spike timing in individual neurons are determined at the AIS.
Recently, the rules governing the temporal precision of spike timing have started to emerge.
Outward voltage-gated currents with biophysical properties that sharpen physiological depolarizations, such as EPSPs, reduce the time window during which an action potential can be triggered and thus enhance spike precision (Axmacher and Miles, 200, 207). In contrast, outward currents that reduce the rate of depolarization leading to the generation of a spike decrease spike-time precision (131, 503). Here, high spike jitter may result from the fact that channel noise near threshold becomes dominant during slow voltage trajectories. With the recent development of axonal recordings, it will be important to determine how these currents shape voltage in the AIS.
Plasticity of action potential initiation
The probability of action potential initiation in response to a given stimulus is not absolutely fixed during the life of a neuron but is subject to activity-dependent regulation. In their original description of LTP, Bliss and Lømo (63) noticed that the observed increase in the population spike amplitude, which reflects the number of postsynaptic neurons firing in response to a given synaptic stimulation, was actually greater than expected from the LTP-evoked increase in the population EPSP (63). This phenomenon was termed EPSP-spike or E-S potentiation. The intracellular signature of E-S potentiation is an increased probability of firing in response to a given synaptic input. This plasticity appears to be of fundamental importance because it directly affects the input-output function of the neuron. Originally described in the dentate gyrus of the hippocampus, E-S potentiation was also found at the Schaffer collateral-CA1 cell synapse when the afferent fibers were tetanized (Abraham et al.; Andersen et al., 136) and may be induced associatively with coincident activation of synaptic input and a back-propagated action potential (Campanac and Debanne). Although dendritic conductances such as A-type K+ (199) or h-type currents (Campanac et al.) are implicated in its expression, regulation of axonal channels cannot be totally excluded. Indeed, hyperpolarization of the spike threshold has been encountered in many forms of long-lasting increase in excitability in cerebellar and hippocampal neurons (Aizenman and Linden, 568). Furthermore, activation of the fast transient Na+ current is regulated following LTP induction in CA1 pyramidal neurons (568).
B. Conduction of Action Potentials Along the Axon
A brief overview of the principle of conduction in unmyelinated axons
Conduction of the action potential has been primarily studied and characterized in invertebrate axons. According to a regenerative scheme, propagation along unmyelinated axons depends on the passive spread of current ahead of the active region to depolarize the next segment of membrane to threshold. The nature of the current flow involved in spike propagation is generally illustrated by an instantaneous picture of the action potential plotted spatially along the axon. Near the leading edge of the action potential there is a rapid influx of sodium ions that depolarizes a new segment of membrane towards threshold. At the trailing edge of the action potential, current flows outward because potassium channels are open, thus restoring the membrane potential towards its resting value. Because of both the inactivation of voltage-gated Na+ channels and the high conductance state of hyperpolarizing K+ channels, the piece of axonal membrane that has just been excited is not immediately reexcitable. Thus the action potential cannot propagate backward, and conduction is therefore generally unidirectional. As the action potential leaves the activated region, Na+ channels recover from inactivation, the K+ conductance declines, and the membrane becomes excitable again.
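Why refractoriness enforces unidirectional conduction can be illustrated with a deliberately non-biophysical sketch, a chain of abstract excitable segments with arbitrary parameters:

```python
import numpy as np

REST, ACTIVE, REFRACTORY = 0, 1, 2

def propagate(n_seg=20, n_steps=24, refractory_steps=3):
    """Chain of abstract excitable segments: an active segment excites
    resting neighbors; refractoriness forbids immediate re-excitation,
    so the wave runs in one direction only."""
    state = np.full(n_seg, REST)
    timer = np.zeros(n_seg, dtype=int)
    state[0] = ACTIVE                       # stimulate the left end
    history = [state.copy()]
    for _ in range(n_steps):
        nxt = state.copy()
        for i in range(n_seg):
            neighbor_active = any(
                0 <= j < n_seg and state[j] == ACTIVE for j in (i - 1, i + 1)
            )
            if state[i] == REST and neighbor_active:
                nxt[i] = ACTIVE             # depolarized past threshold
            elif state[i] == ACTIVE:
                nxt[i] = REFRACTORY         # Na+ channels inactivated
                timer[i] = refractory_steps
            elif state[i] == REFRACTORY:
                timer[i] -= 1
                if timer[i] == 0:
                    nxt[i] = REST           # recovered, excitable again
        state = nxt
        history.append(state.copy())
    return np.array(history)

print(propagate())  # the active site (1) marches steadily to the right
```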
Conduction in myelinated axons
In myelinated (or medullated) axons, conduction is saltatory (from the Latin saltare, to jump). Myelin is formed by wrapped sheaths of membrane from Schwann cells in peripheral nerves and from oligodendrocytes in central axons. The number of wrappings varies between 10 and 160 (Arbuthnott et al.). The presence of the myelin sheath has a critical impact on the physiology of the axon. The effective membrane resistance of the axon is locally increased by several orders of magnitude (up to 300 times), and the membrane capacitance is reduced by a similar factor. The myelin sheath is interrupted periodically by NoR, exposing patches of axonal membrane to the external medium. The internodal distance is usually 100 times the external diameter of the axon, ranging between 200 μm and 2 mm (264, 453). The electrical isolation of the axon by myelin restricts current flow to the node, as ions cannot flow into or out of the high-resistance internodal region. Thus only the restricted region of the axon at the node is involved in impulse propagation. The impulse therefore jumps from node to node, greatly increasing conduction velocity. Another physiologically interesting consequence of myelination is that less metabolic energy is required to maintain the gradients of sodium and potassium, since the flow of these ions is restricted to the nodes. However, a recent study from the group of Geiger (Alle et al.) indicates that, because of matched properties of Na+ and K+ channels, energy consumption is minimal in unmyelinated axons of the hippocampus.
The principle of saltatory conduction was first suggested by Lillie in 1925 (326) and later confirmed by direct experimental evidence (Bostock and Sears, 265, 407, 527). In their seminal paper, Huxley and Stämpfli (265) measured currents in electrically isolated compartments of a single axon, containing either a node or an internode, during the passage of an impulse. They noticed that when the compartment contained a NoR, stimulation of the nerve resulted in a large inward current. In contrast, no inward current was recorded when the chamber contained an internode, indicating that there is no regenerative activity there. The discontinuous nature of saltatory conduction should not be overemphasized, however, because as many as 30 consecutive NoR can participate simultaneously in some phases of the action potential.
Conduction velocity
Conduction velocity in unmyelinated axons depends on several biophysical factors such as the number of available Na+ channels, membrane capacitance, internal impedance, and temperature (122, 148, 251, 252, 277). Conduction velocity can be diminished by reducing the external Na+ concentration (277) or by partially blocking Na+ channels with a low concentration of TTX (122). In fact, the larger the sodium current, the steeper the rate of rise of the action potential. As a consequence, the spatial voltage gradient along the fiber is steeper, excitation of adjacent axonal regions is faster, and conduction velocity is increased.
The second major determinant of conduction velocity is membrane capacitance, which sets the amount of charge stored on the membrane per unit area. The time necessary to reach threshold is therefore shorter when the capacitance is small.
The third major parameter for conduction velocity is the resistance of the axoplasm (i.e., the intra-axonal medium). For instance, in the giant squid axon, the insertion of a low-impedance wire cable into the axon considerably increases the rate of conduction (148). This property explains why conduction velocity in unmyelinated axons is proportional to the square root of the axon diameter (251). In fact, current flow is facilitated in large-diameter axons because of the high intracellular ion mobility.
Temperature has large effects on the rate of increase of the Na+ channel conductance and on the action potential waveform (253). Channels open and close more slowly at lower temperature, and conduction velocity is consequently reduced (106, 198).
In myelinated axons, conduction velocity displays a linear dependence on fiber diameter (Arbuthnott et al., 264, 444, 453). A simple rule is that every micrometer of outer diameter adds 6 m/s to the conduction velocity at 37°C. One particularly fascinating point addressed by the theoretical work of Rushton (453) is the notion of invariance in the conduction properties and morphological parameters of myelinated axons. In fact, the geometry of myelinated axons seems to be tuned by evolution to give the highest conduction velocity.
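The two scaling rules, linear in diameter for myelinated fibers and square-root for unmyelinated ones, can be put side by side in a short sketch. The 6 m/s per micrometer slope is the rule of thumb quoted above; the unmyelinated prefactor is an assumed calibration chosen only for illustration.

```python
import math

K_MYELINATED = 6.0     # m/s per um of outer diameter, at 37 degrees C (text)
K_UNMYELINATED = 0.5   # m/s per sqrt(um); assumed calibration constant

def conduction_velocity(diameter_um, myelinated):
    """Linear scaling for myelinated fibers, square-root scaling for
    unmyelinated ones (rules of thumb quoted in the text)."""
    if myelinated:
        return K_MYELINATED * diameter_um
    return K_UNMYELINATED * math.sqrt(diameter_um)

for d in (0.5, 1.0, 5.0):
    print(f"d = {d:>3} um: unmyelinated ~ {conduction_velocity(d, False):.2f} m/s,"
          f" myelinated ~ {conduction_velocity(d, True):.1f} m/s")
```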
Conduction velocity in mammalian axons has traditionally been evaluated by antidromic latency measurements or field measurements of axon volleys (300, 521). More direct measurements of conduction velocity have been obtained recently with the development of axonal patch-clamp recordings in brain tissue. In unmyelinated axons, conduction velocity is generally slow. It has been estimated to be close to 0.25 m/s at Schaffer collaterals (Andersen et al.) or at the mossy-fiber axon (299, 463), and it reaches 0.38 m/s in the axon of CA3 pyramidal neurons (363). In contrast, conduction becomes faster in myelinated axons, but it largely depends on the axon diameter. In fact, myelination pays off in terms of conduction velocity when the axon diameter exceeds 1-2 μm (453). In the thin Purkinje cell axon (~0.5-1 μm), conduction velocity indeed remains relatively slow (0.77 m/s; Ref. 116). Similarly, in the myelinated axon of L5 neocortical pyramidal neurons of the rat (diameter ~1-1.5 μm; Ref. 290), conduction velocity has been estimated to be 2.9 m/s (291). Conduction velocity along the small axons of neurons from the subthalamic nucleus is also relatively modest (4.9 m/s; diameter ~0.5 μm; Atherton and Bevan). In contrast, in large-diameter axons such as cat brain stem motoneuron fibers (~5 μm), conduction velocity reaches 70-80 m/s (214). Similarly, in group I afferents of the cat spinal cord, conduction velocity has been estimated to vary between 70 and 90 m/s (309). The fastest impulse conduction in the animal kingdom has been reported in myelinated axons of the shrimp, which are able to conduct impulses at speeds faster than 200 m/s (569). These axons possess two unique structures (a microtubular sheath and a submyelinic space) that contribute to speeding up propagation. In particular, the submyelinic space constitutes a low-impedance axial path that acts in a similar way to the wire in the experiment of del Castillo and Moore (148).
Modulation of conduction velocity
Conduction velocity along myelinated axons has also been shown to depend on neuron-glia interactions (123, 190, 526, 570). Importantly, depolarization of a single oligodendrocyte was found to increase the action potential conduction velocity of the axons it myelinates by ~10% (570). Although the precise mechanism has not yet been fully elucidated, it may result from ephaptic interaction between the myelin depolarization and the axon (280; see also sect. VIIIC). This finding may have important functional consequences. Mature oligodendrocytes in the rat hippocampus are depolarized by theta-burst stimulation of axons. Thus myelin may also dynamically regulate impulse transmission through axons and promote synchrony among the multiple axons under the domain of an individual oligodendrocyte (570). In a recent study, the conduction velocity in small myelinated axons was found to depend on tight junctions between myelin lamellae (153). The absence of these tight junctions in Claudin 11-null mice does not perturb myelin formation but significantly decreases conduction velocity in small, but not in large, myelinated axons. In fact, tight junctions in myelin potentiate the insulation of small axons, which possess only a relatively limited number of myelin loops, by increasing their internodal resistance.
In auditory thalamocortical axons, nicotine enhances conduction velocity and decreases axonal conduction variability (282). Although the precise mechanism remains to be clarified, this process may lower the threshold for auditory perception by acting on the thalamocortical flow of information.
V. FUNCTIONAL COMPUTATION IN THE AXON
A. Activity-Dependent Shaping of the Presynaptic Action Potential
The shape of the presynaptic action potential is of fundamental importance in determining the strength of synapses by modulating transmitter release. The waveform of the depolarization dictates the calcium signal available to trigger vesicle fusion by controlling the opening of voltage-gated calcium channels and the driving force for calcium influx. Two types of modification of the presynaptic action potential have been reported experimentally: modifications of action potential width and/or modifications of action potential amplitude.
Activity-dependent broadening of presynaptic action potential
The duration of the presynaptic spike is not fixed, and activity-dependent short-term broadening of the spike has been observed in en passant mossy fiber boutons (209). The mossy fiber-CA3 pyramidal cell synapse displays fast synchronized transmitter release from several active zones and also shows dynamic changes in synaptic strength over a more than 10-fold range. This exceptionally large synaptic facilitation is in clear contrast to the weak facilitation (~150% of control) generally observed at most central synapses. Granule cell axons express several voltage-gated potassium channels, including Kv1.1 (443), Kv1.2 (477), and two A-type potassium channels, Kv1.4 (126, 478, 543) and Kv3.4 (543). Geiger and Jonas (209) have shown that the action potential at the mossy fiber terminal is half as wide as that at the soma. During repetitive stimulation, the action potential gets broader in the axon terminal but not in the soma (209) (Fig. 7). More interestingly, using simultaneous recordings from the granule cell terminal and the corresponding postsynaptic apical dendrite of a CA3 neuron, Geiger and Jonas (209) showed that action potential broadening enhanced presynaptic calcium influx and doubled the EPSC amplitude (Fig. 7). This broadening results from the inactivation of A-type K+ channels located in the membrane of the terminal. Consequently, the pronounced short-term facilitation probably results from the conjugated action of spike widening and the classical accumulation of residual calcium in the presynaptic terminal. Because ultrastructural analysis reveals A-type channel immunoreactivity in the terminal but also in the axonal membrane (126), activity-dependent spike broadening might also occur in the axon.
Activity-dependent reduction of presynaptic action potential
Reduction of the amplitude of the presynaptic action potential has been reported following repetitive stimulation of invertebrate (230) and mammalian axons (209, 552). This decline results from sodium channel inactivation and can be amplified by low concentrations of TTX (Brody and Yue, 343). The consequences of sodium channel inactivation on synaptic transmission have been studied at various central synapses. Interestingly, reduction of the sodium current by application of TTX in the nanomolar range decreases glutamatergic transmission and enhances short-term depression (Brody and Yue, 243, 421). In addition, depolarization of the presynaptic terminal by raising the external potassium concentration increases paired-pulse synaptic depression at autaptic contacts of cultured hippocampal cells (243) and decreases paired-pulse synaptic facilitation at Schaffer collateral-CA1 synapses stimulated extracellularly (364). In this case, the depolarization of the presynaptic axons is likely to enhance presynaptic spike attenuation. Importantly, inactivation of sodium channels by high external potassium increases the proportion of conduction failures during repetitive extracellular stimulation of Schaffer collateral axons (364). However, these results must be interpreted carefully because apparent changes in the paired-pulse ratio may simply be the result of stimulation failures produced by the reduction in presynaptic axon excitability.
Interestingly, the manipulations of the sodium current mentioned above have little or no effect on GABAergic axons (243, 364, 421). Riluzole, TTX, and external potassium affect neither GABAergic synaptic transmission nor short-term GABAergic plasticity. This difference between glutamatergic and GABAergic axons might result from several factors. Sodium currents in interneurons are less sensitive to inactivation, and a slow recovery from inactivation has been observed in pyramidal cells but not in inhibitory interneurons (353). Moreover, the density of the sodium current is higher in interneurons than in pyramidal neurons (354). Thus the axons of GABAergic interneurons could be better cables for propagation than those of pyramidal cells (194, 525). This unusual property could be important functionally: safe propagation along inhibitory axons could protect the brain from sporadic hyperactivity and prevent the development of epileptiform activity.
B. Signal Amplification Along the Axon
Signal amplification is classically considered to be achieved by the dendritic membrane, the cell body, or the proximal part of the axon (Astman et al., 512). Whereas action potential propagation along the axon is clearly an active process that depends on a high density of sodium channels, the process of action potential invasion into presynaptic terminals was, until recently, less well understood. This question is of primary importance because the geometrical perturbation introduced by the presynaptic terminal decreases the safety factor for action potential propagation and may affect the conduction time (see sect. VIII). The invasion of the spike is active at the amphibian neuromuscular junction (278) but passive at the neuromuscular junction of the mouse (Brigant and Mallart, 163) and the lizard (329). This question has been reconsidered at hippocampal mossy fiber boutons (179). In this study, Engel and Jonas (179) showed that sodium channel density is very high at the presynaptic terminal (2,000 channels per mossy fiber bouton). In addition, sodium channels in mossy fiber boutons activate and inactivate with submillisecond kinetics. A realistic computer simulation indicates that the density of sodium channels found in the mossy fiber bouton not only amplifies the action potential but also slightly increases the conduction speed along the axon (179). Similarly, presynaptic sodium channels control the resting membrane potential at the presynaptic terminal of the calyx of Held (260), and hence may determine transmitter release at this synapse.

FIG. 7. Shaping of the action potential in the axon. A: a mossy fiber bouton (mfb, blue) is recorded in the whole cell configuration and activated at a frequency of 50 Hz. B: during repetitive stimulation of the axon, the action potential becomes wider. The 10th and 50th action potentials are compared with the 1st action potential in the train. C: action potential broadening potentiates transmitter release. A mossy fiber terminal (red) and the corresponding CA3 cell (blue) were recorded simultaneously. Action potential waveforms were imposed at the presynaptic terminal. The increased duration of the waveform incremented the amplitude of the synaptic current. [Adapted from Geiger and Jonas (209), with permission from Elsevier.]
Another mechanism of activity-dependent signal amplification has been reported at the hippocampal mossy fiber (376). In immature hippocampus, repetitive stimulation of the mossy fiber pathway not only facilitates synaptic transmission but also increases the amplitude of the presynaptic volley, the extracellularly recorded electrophysiological signature of the presynaptic action potential in the axon. This axonal facilitation is not observed in mature hippocampus. It is associated with depolarization of the mossy fibers and is fully inhibited by GABA-A receptor antagonists, indicating that GABA released from interneurons depolarizes the axon and increases its excitability. Because the presynaptic axon was not directly recorded in this study, further investigations will be necessary to determine whether GABA-A receptor-mediated depolarization limits conduction failures or interacts with sodium channel amplification.
C. Axonal Integration (Analog Signaling)
Classically, the somatodendritic compartment is considered the locus of neuronal integration, where subthreshold electrical signals originating from active synapses are temporally summated to control the production of an output message, the action potential. According to this view, the axon initial segment is the final site of synaptic integration, and the axon remains purely devoted to action potential conduction in a digital way; synaptic strength can then be modulated only by the frequency of presynaptic action potential firing. Today, this view is challenged by accumulating evidence in invertebrate and vertebrate neurons showing that the axon is also able to integrate electrical signals arising from the somato-dendritic compartment of the neuron (for reviews, see Refs. 4, 115, 351, 410). In fact, the axon now appears to be a hybrid device that transmits neuronal information both through action potentials, in a digital way, and through subthreshold voltages, in an analog mode.
Changes in presynaptic voltage affect synaptic efficacy
The story started with classical observations reported at the neuromuscular junction of the rat (261, 262) and at the giant synapse of the squid (237, 369, 524), where the membrane potential of the presynaptic axon was found to control the efficacy of action potential-triggered synaptic transmission. Synaptic transmission was gradually enhanced when the membrane potential of the presynaptic element was continuously hyperpolarized to different levels. Thus the membrane potential of the presynaptic element determines, in an analog manner, the efficacy of the digital output message (the action potential). This facilitation was associated with a reduction in the paired-pulse ratio (369), indicating that it results from enhanced presynaptic transmitter release. Although the mechanisms underlying this behavior have not been clearly identified, it should be noted that graded presynaptic hyperpolarization increased the presynaptic spike amplitude in a graded manner (237, 369, 524). The importance of the amplitude of the presynaptic action potential is also demonstrated by the reduction of the evoked EPSP upon intracellular injection of increasing concentrations of TTX into the presynaptic axon (279). Thus a possible scheme here would be that hyperpolarization of the presynaptic element induces Na+ channel recovery from inactivation and subsequently enhances presynaptic spike and EPSP amplitudes. A similar phenomenon has recently been observed at autaptic contacts in cultured hippocampal neurons (528).
A totally different scenario has been observed in Aplysia (475, 486, 487) and the leech (382). In these studies on connected pairs of neurons, the authors reported that constant or transient depolarization of the membrane potential in the soma of the presynaptic neuron facilitates, in a graded manner, synaptic transmission evoked by single action potentials (Fig. 8A). The underlying mechanism in Aplysia neurons involves the activation of steady-state Ca2+ currents (475) and the inactivation of a 4-aminopyridine-sensitive K+ current (484, 485), which overcome propagation failures in a weakly excitable region of the neuron (184). Thus the possible scenario in Aplysia neurons is that somatic depolarization inactivates voltage-gated K+ currents located in the axon that control the propagation, and subsequently the amplitude and duration, of the action potential.
It is also important to mention that many types of invertebrate neurons release neurotransmitter as a graded function of presynaptic membrane potential (Angstadt and Calabrese; Burrows and Siegler, 227).
In these examples, synaptic transmission does not strictly depend on spiking but rather on variations of the presynaptic membrane potential, further supporting the idea that membrane potential alone is capable of controlling neuronal communication in an analog manner.
Space constant in axons
In the experiments reported in Aplysia, facilitation was induced by changing the membrane potential at the soma, indicating that the presynaptic terminal and the cell body are not electrically isolated. Thus the biophysical characteristics of electrical transfer along the axon appear to be the critical parameter determining axonal integration of somatic signals.
For biophysicists, the axon is viewed as a cylinder that can be subdivided into unit lengths. Each unit length is a parallel circuit with its own membrane resistance (r_m) and capacitance (c_m). All the circuits are connected by resistors (r_i), which represent the axial resistance of the intracellular cytoplasm, and a short circuit, which represents the extracellular fluid (Fig. 8B). The voltage response in such a passive cable decays exponentially with distance because of electrotonic conduction (253a). The space (or length) constant, λ, of an axon is defined as the distance over which a voltage change imposed at one site drops to 1/e (37%) of its initial value (Fig. 8C). In fact, the depolarization at a given distance x from the site of injection x = 0 is given by V_x = V_0 e^(-x/λ), where e is Euler's number and λ is the space (or length) constant. The length constant is expressed as λ = (r_m/r_i)^(1/2). For a cable with diameter d, it is therefore expressed as λ = [(d/4)(R_M/R_A)]^(1/2), where R_A is the axial resistivity and R_M is the specific membrane resistance (425). Thus the length constant of the axon depends on three main parameters. In myelinated axons, R_M may reach very high values because of the myelin sheath; space constants in myelinated axons are therefore very long. For instance, in cat brain stem neurons, the space constant amounts to 1.7 mm (214). EPSPs generated in the soma are thus detectable at long distances in the axon. In thin unmyelinated axons, the situation was thought to be radically different because R_M is relatively low and the diameter can be very small. Space constants below 200 μm were considered in models of nonmyelinated axons (295, 313). The recent use of whole-cell recordings from unmyelinated axons (Bischofberger et al.) profoundly changed this view. In hippocampal granule cell axons, the membrane space constant for an EPSP generated in the somato-dendritic compartment is ~450 μm (5; see also sect. VC3). Similarly, the axonal space constant in L5 pyramidal neurons is also ~450 μm (489). However, these values might be underestimated because the EPSP is a transient event and the space constant is inversely related to the frequency content of the signal (468, 489). For instance, the axonal space constant for slow signals (duration ≥200 ms) may reach ~1,000 μm in L5 pyramidal cell axons (112).
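To get a feel for these formulas, the sketch below evaluates λ and the steady-state attenuation e^(-x/λ). The values of d, R_M, and R_A are assumptions chosen for illustration, not parameters from the studies cited above.

```python
import math

def space_constant_um(d_um, r_m_ohm_cm2, r_a_ohm_cm):
    """lambda = sqrt((d/4) * (R_M / R_A)); inputs in um, Ohm.cm^2, Ohm.cm."""
    d_cm = d_um * 1e-4
    lam_cm = math.sqrt((d_cm / 4.0) * (r_m_ohm_cm2 / r_a_ohm_cm))
    return lam_cm * 1e4                      # convert back to micrometers

def steady_state_attenuation(x_um, lam_um):
    """V(x)/V0 = exp(-x/lambda) for a passive, semi-infinite cable."""
    return math.exp(-x_um / lam_um)

# Assumed values: a 0.5-um unmyelinated axon, R_M = 20,000 Ohm.cm^2,
# R_A = 150 Ohm.cm -> lambda in the few-hundred-micrometer range.
lam = space_constant_um(d_um=0.5, r_m_ohm_cm2=20_000, r_a_ohm_cm=150)
print(f"lambda ~ {lam:.0f} um")                      # ~408 um with these values
print(f"V(450 um)/V0 ~ {steady_state_attenuation(450, lam):.2f}")
```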
Axonal integration in mammalian neurons
Axonal integration is not peculiar to invertebrate neurons, and synaptic facilitation produced by depolarization of the presynaptic soma has been reported in at least three central synapses of the mammalian brain. First, at synapses established between pairs of CA3-CA3 pyramidal cells, steady-state depolarization of the presynaptic neuron from −60 to −50 mV enhances synaptic transmission (460). More recently, an elegant study published by Alle and Geiger (5) showed, using direct patch-clamp recordings from presynaptic hippocampal mossy fiber boutons, that granule cell axons transmit analog signals (the membrane potential at the cell body) in addition to action potentials. Surprisingly, excitatory synaptic potentials evoked by local stimulation of the molecular layer in the dentate gyrus could be detected in mossy-fiber boutons located several hundred micrometers from the cell body (Fig. 9). Excitatory presynaptic potentials (EPreSP) recorded in the mossy-fiber bouton represent forward-propagated EPSPs from the granule cell dendrite. They were not generated locally in the CA3 region, because applying AMPA receptor or sodium channel blockers locally to CA3 has no effect on the amplitude of the EPreSP (Alle and Geiger).

FIG. 8. Axonal integration. A: graded control of synaptic efficacy by the membrane potential in a pair of connected Aplysia neurons. The hyperpolarization of the presynaptic neuron gradually reduces the amplitude of the synaptic potential. [Adapted from Shimahara and Tauc (487).] B: electrical model of a passive axon. Top: the axon is viewed as a cylinder that is subdivided into unit lengths. Bottom: each unit length is considered as a parallel circuit with its membrane resistance (r_m) and capacitance (c_m). All circuits are connected intracellularly by resistors (r_i). C: space constant of the axon. Top: schematic representation of a pyramidal cell with its axon. Bottom: plot of the voltage along the axon. A depolarization to V_0 is applied at the cell body (origin in the plot). The potential decays exponentially along the axon according to V_x = V_0 e^(-x/λ). The color represents the membrane potential (red is depolarized and blue is the resting potential). The space constant is defined as the distance at which V is 37% of V_0 (dashed horizontal line on the plot).
As expected from cable theory, this signal is attenuated, and the EPSP waveform is much slower in the terminal than in the soma of granule cells. The salient finding here is that the space constant of the axon is much larger (~450 μm) than initially expected. Consistent with propagation of electrical signals over very long distances, the analog facilitation of synaptic transmission has a slow time constant (Ahern et al.; Aizenman and Linden; Alle and Geiger; Refs. 291, 382, 489). The functional consequence is that slow depolarizations of the membrane in somatic and dendritic regions are transmitted to the axon terminals and can influence the release of transmitter at the mossy fiber-CA3 cell synapse. Similar observations have been reported in the axon of L5 cortical pyramidal neurons recorded in whole-cell configuration at distances ranging between 90 and 400 μm (489; Fig. 10A). In this case, whole-cell recording from the axon is possible because sectioning the axon produces a small enlargement of its diameter that allows positioning of a patch pipette. Here again, incoming synaptic activity in the presynaptic neuron propagates down the axon and can modulate the efficacy of synaptic transmission. The modulation of synaptic efficacy by somatic potential is blocked at L5-L5 connections (489) or reduced by Ca2+ chelators at the mossy fiber input (5; but see Ref. 468), and may therefore result from the control of background calcium levels at the presynaptic terminal (Awatramani et al.).
At least one mechanism controlling the voltage-dependent broadening of the axonal action potential has recently been identified in L5 pyramidal neurons (291, 490). Kv1 channels are expressed at high densities in the AIS (267), but they are also present in the axon proper. With cell-attached recordings from the axon at different distances from the cell body, Kole et al. (291) elegantly showed that Kv1 channel density increases 10-fold over the first 50 μm of the AIS but remains at very high values in the axon proper (~5-fold the somatic density). The axonal current mediated by Kv1 channels inactivates with a time constant in the second range (291, 489). Pharmacological blockade or voltage inactivation of Kv1 channels produces a distance-dependent broadening of the axonal spike, as well as an increase in synaptic strength at proximal axonal terminals (291). For instance, when the membrane potential is shifted from −80 to −50 mV, the D-type current is reduced by half (291). Subsequently, the axonal spike is enlarged and transmitter release is enhanced (Fig. 10B). Thus Kv1 channels occupy a strategic position to integrate slow subthreshold signals generated in the dendrosomatic region and to control the presynaptic action potential waveform, finely tuning synaptic coupling in local cortical circuits.
Axonal speeding
The axonal membrane compartment also plays a critical role in synaptic integration. The group of Alain Marty (365) showed, by cutting the axon of cerebellar interneurons with two-photon illumination, that the axonal membrane speeds up the decay of synaptic potentials recorded in the somatic compartment of these neurons. This effect results from the passive membrane properties of the axonal compartment. In fact, the axonal compartment acts as a sink for fast synaptic currents: part of the capacitive charge carried by the synaptic current is distributed onto the axonal membrane, thus accelerating the EPSP decay beyond the speed defined by the membrane time constant of the neuron (usually 20 ms). Functionally, axonal speeding has important consequences. EPSP decay is faster, and consequently axonal speeding increases the temporal precision of EPSP-spike coupling by reducing the time window in which an action potential can be elicited (200, 418).

FIG. 9. Integration of subthreshold synaptic potential in the axon of hippocampal granule cells. Electrically evoked synaptic inputs in the dendrites of a granule cell can be detected in the mossy fiber terminal (EPreSP). Bottom panel: synaptic transmission at the mossy fiber synapse was facilitated when the simulated EPreSP ("EPreSP") was associated with a presynaptic action potential (AP + "EPreSP"). [Adapted from Alle and Geiger (5), with permission from the American Association for the Advancement of Science.]
Backward axonal integration
Voltage changes in the somatic compartment modify release properties at the nerve terminal, and the effect is reciprocal: small physiological voltage changes at the nerve terminal affect action potential initiation (400). In their recent study, Paradiso and Wu (400) showed that small subthreshold depolarizations (<20 mV) of the calyx of Held, produced by current injection or by the afterdepolarization (ADP) of a preceding action potential, decreased the threshold for an action potential generated by local stimulation 400-800 μm from the nerve terminal. Conversely, a small hyperpolarization of the nerve terminal (<15 mV), produced either by current injection or by the AHP, increased the threshold for spike initiation. This elegant study thus showed for the first time that the axonal membrane, like dendrites, can backpropagate signals generated in the nerve terminal. Presynaptic GABA-A currents originating in the axon have recently been identified in the cell body of cerebellar interneurons (535). Thus axonal GABAergic activity can probably influence somatic excitability in these neurons, further supporting the fact that the axonal and somatodendritic compartments are not electrically isolated.
The functional importance of axonal integration is clear, but many questions remain open. The three examples in which hybrid (analog-digital) signaling in the axon has been observed all concern glutamatergic neurons [CA3 pyramidal neurons (460), granule cells (5), and L5 pyramidal neurons (291, 489)]. Do axons of GABAergic interneurons also support hybrid axonal signaling? A study indicates that this is not the case at synapses established by parvalbumin-positive fast-spiking cells displaying delayed firing onto pyramidal neurons in cortical layers 2-3 (117, 217). However, the equilibrium between excitation and inhibition probably needs to be preserved in cortical circuits, and one cannot exclude that hybrid axonal signaling exists in other subclasses of cortical or hippocampal GABAergic interneurons. In cerebellar interneurons, GABA release is facilitated by subthreshold depolarization of the presynaptic soma (110). Can inhibitory postsynaptic potentials spread down the axon, and if so, how do they influence synaptic release? In dendrites, voltage-gated channels amplify or attenuate subthreshold EPSPs. Do axonal voltage-gated channels also influence the propagation of subthreshold potentials? Now that the axons of mammalian neurons are finally becoming accessible to direct electrophysiological recording, we can expect answers to all these questions.
VI. PROPAGATION FAILURES
One of the more unusual operations achieved by axons is selective conduction failure. When the action potential fails to propagate along the axon, no signal can reach the output of the cell. Conduction failure therefore represents a powerful process that filters communication with postsynaptic neurons (549). Propagation failures have been observed experimentally in various axons, including vertebrate spinal axons (Barron and Matthews, 301), spiny lobster or crayfish motoneurons (230, 231, 241, 401, 496), leech mechanosensory neurons (Baccus and colleagues, 234, 541, 572), thalamocortical axons (151), rabbit nodose ganglion neurons (167), rat dorsal root ganglion neurons (335, 336), neurohypophysial axons (Bielefeldt and Jackson, 172), and hippocampal pyramidal cell axons (144, 364, 498). In contrast, some axons in the auditory pathways are capable of sustaining remarkably high firing rates, with perfect entrainment at frequencies of up to 1 kHz (467). Several factors determine whether propagation along an axon fails or succeeds.
A. Geometrical Factors: Branch Points and Swellings
Although the possibility that propagation may fail at branch points was already discussed by Krnjevic and Miledi (301), the first clear indication that propagation is perturbed by axonal branch points came from early studies on spiny lobster, crayfish, and leech axons (230, 231, 401, 496, 497, 541, 572). The large size of invertebrate axons allowed multielectrode recordings upstream and downstream of the branch point. For example, in lobster axons, conduction across the branch point was found to fail at frequencies above 30 Hz (Fig. 11A; Ref. 230). The block of conduction occurred specifically at the branch point, because the parent axon and one of the daughter branches continued to conduct action potentials. Failures appeared first in the thicker daughter branch, but they could also be observed in the thin branch at higher stimulus frequencies. In the leech, conduction block occurs at central branch points where fine axons from the periphery meet thicker axons (572). Branch point failures have been observed, or suspected to occur, in a number of mammalian neurons (144, 151, 167).
Propagation failures also occur when the action potential enters a zone with an abrupt change in diameter. This occurs at en passant boutons (Bourque, 272, 581) but also when impulses propagating along the axon enter the soma (Antic et al., 185, 336). For instance, in the metacerebral cell of the snail, propagation failures have been observed when a spike enters the cell body (Fig. 11B; Ref. 12).
These failures arise because the electrical load on the arriving action potential is significantly higher, and the current generated by the parent axon is not sufficient to support propagation (reviewed in Ref. 470). Simulations show that at geometrical irregularities the propagating action potential is usually distorted in amplitude and width, and the local conduction velocity can change. For instance, an abrupt increase in axon diameter causes a decrease in both velocity and peak amplitude of the action potential, whereas a step decrease in diameter has the opposite local effects on these two parameters (221, 226, 272, 337, 338, 348, 349, 403). In fact, the interplay between the total longitudinal current produced by the action potential and the input impedance of the axon segments ahead of it determines the fate of the propagating action potential. The case of the branch point has been studied in detail (219, 221, 583). The so-called 3/2 power law developed by Rall describes an ideal relationship between the geometry of mother and daughter branches (221, 424, 426). A geometrical parameter, the geometrical ratio (GR), has been defined as GR = (d_daughter1^(3/2) + d_daughter2^(3/2))/d_mother^(3/2), where d_daughter1 and d_daughter2 are the diameters of the daughter branches and d_mother is the diameter of the parent axon.
For GR = 1, impedances match perfectly and spikes propagate into both branches. If GR > 1, the combined electrical load of the daughter branches exceeds the load of the main branch; in other words, the active membrane of the mother branch may not be able to provide enough current to activate both branches. If GR > 10, conduction block occurs in all daughter branches (404). For 1 < GR < 10, by far the most common situation, propagation past the branch point occurs with some delay. All these conclusions hold only if the membrane characteristics are identical, and any change in ion channel density may positively or negatively change the safety factor at a given branch point. The amplification of the propagating action potential by sodium channels in the mossy fiber bouton is able to counteract the geometrical effects and speeds up propagation along the axon (179). Details on the experimental evaluation of GR at axon branch points have been reviewed elsewhere (139).
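The GR classification is easy to encode. The sketch below restates the categories given in the text and, like them, assumes identical membrane characteristics on both sides of the branch point.

```python
def geometric_ratio(d_daughters_um, d_mother_um):
    """Rall's GR = sum(d_daughter^(3/2)) / d_mother^(3/2)."""
    return sum(d ** 1.5 for d in d_daughters_um) / d_mother_um ** 1.5

def branch_point_outcome(gr):
    # Categories as stated in the text (uniform channel densities assumed).
    if gr == 1:
        return "perfect impedance match: spikes invade both branches"
    if gr > 10:
        return "conduction block in all daughter branches"
    if gr > 1:
        return "propagation succeeds, but with some delay"
    return "GR < 1: daughter load is smaller than the mother branch load"

gr = geometric_ratio([1.0, 1.0], 1.5)   # two 1-um daughters, 1.5-um mother
print(f"GR = {gr:.2f} -> {branch_point_outcome(gr)}")
```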
B. Frequency-Dependent Propagation Failures
Depending on the axon type, conduction failures are encountered following moderate or high-frequency (200-300 Hz) stimulation of the axon. For instance, a frequency of 20-30 Hz is sufficient to produce conduction failures at the neuromuscular terminal arborization (301) or at the branch point of spiny lobster motoneurons (230). These failures are often seen as partial spikes or spikelets, which are the electrotonic residues of full action potentials. The functional consequences of conduction failures might be important in vivo. For example, in the leech, propagation failures produce an effect similar to that of sensory adaptation: they represent a nonsynaptic mechanism that temporarily disconnects the neuron from one defined set of postsynaptic neurons and specifically routes sensory information in the ganglion (234, 339, 541, 572).
What are the mechanisms of frequency-dependent conduction failure? As mentioned above, the presence of a point of low conduction safety, such as a branch point, a bottleneck (i.e., an axon entering the soma), or an axonal swelling, determines the success or failure of conduction. However, these geometrical constraints are not sufficient to fully account for all conduction failures, and additional factors must be considered. The mechanisms of propagation failure can be grouped into two main categories.
First, propagation may fail during repetitive axon stimulation as a result of a slight depolarization of the membrane. In spiny lobster axons, propagation failures were associated with a 10-15% reduction of the action potential amplitude in the main axon and a membrane depolarization of 1-3 mV (230). These observations are consistent with potassium efflux into the peri-axonal space induced by repetitive activation. In most cases, the membrane depolarization produced by external accumulation of potassium ions around the axon probably contributes to sodium channel inactivation. In fact, hyperpolarization of the axonal membrane, or local application of physiological saline with a low potassium concentration in the vicinity of a block, can restore propagation in crayfish axons (496). Elevation of the extracellular potassium concentration produced conduction block in spiny lobster axons (231). However, this manipulation did not reproduce the differential block induced by repetitive stimulation, as failures occurred simultaneously in both branches (230). Interestingly, conduction could also be restored by elevating the intracellular calcium concentration. Failures were also induced with a lower threshold when the electrogenic Na+/K+ pump was blocked with ouabain. Differential conduction block could thus be explained as follows. During high-frequency activation, potassium initially accumulates at the same rate around the parent axon and the daughter branches. Sodium and calcium accumulate more rapidly in the thin branch than in the thick branch because of its higher surface-to-volume ratio (illustrated in the sketch below). Thus the Na+/K+ pump is activated and extracellular potassium is lowered more effectively around the thin branch (231). Accumulation of extracellular potassium has also been observed in the olfactory nerve (178) and in hippocampal axons (416), and could similarly be at the origin of unreliable conduction.
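The surface-to-volume argument is purely geometric: for a cylindrical segment, membrane area per unit cytoplasmic volume scales as 4/d, so a branch one-tenth the diameter experiences roughly ten times the concentration change per impulse.

```python
def surface_to_volume_per_unit_length(d_um):
    """Cylinder of diameter d: (pi*d) / (pi*d^2/4) = 4/d per unit length."""
    return 4.0 / d_um

# A 0.5-um branch has 10x the membrane area per unit volume of a 5-um one,
# so each impulse shifts its intracellular Na+ and Ca2+ ten times more.
for d in (0.5, 5.0):
    print(f"d = {d} um -> S/V = {surface_to_volume_per_unit_length(d):.1f} per um")
```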
Propagation failures have also been reported in the axon of Purkinje neurons under high regimes of stimulation (283, 372; Fig. 12A). In this case, the cell body was recorded in whole-cell configuration, whereas the signal in the axon was detected in cell-attached mode at distances of up to 800 μm from the cell body. Propagation was found to be highly reliable for single spikes at frequencies below 200 Hz (failures were observed above 250 Hz). In physiological conditions, Purkinje cells typically fire simple spikes well below 200 Hz, and these failures are unlikely to be physiologically relevant (196). However, Purkinje cells also fire complex spikes (bursts) following stimulation of the climbing fiber. The instantaneous frequency during these bursts may reach 800 Hz (283, 372). Interestingly, complex spikes did not propagate reliably in Purkinje cell axons: generally, only the first and the last spikes of the burst propagate. The failure rate of the complex spike is very sensitive to membrane potential, and systematic failures occur when the cell body is depolarized (372). The limits of conduction have not yet been fully explored in glutamatergic cell axons, but conduction failures have been reported when a CA3 pyramidal neuron fires at 30-40 Hz during a long plateau potential (362). Thus conduction seems much more robust in inhibitory cells than in glutamatergic neurons. However, this study was based on extracellular recordings, and the apparent conduction failures may result from detection problems; in fact, very few failures were observed with whole-cell recordings in neocortical pyramidal neurons (489). The robustness of spike propagation along axons of inhibitory neurons will therefore require further study.
Propagation failures induced by repetitive stimulation may also result from hyperpolarization of the axon. Hyperpolarization-induced conduction block has been observed in leech (339, 541, 572), locust (247), and mammalian axons (Bielefeldt and Jackson, 167). In this case, axonal hyperpolarization opposes spike generation. Activity-dependent hyperpolarization of the axon usually results from activation of the Na+-K+-ATPase and/or activation of calcium-dependent potassium channels. Unmyelinated axons in the PNS, for example vagal C-fibers, hyperpolarize in response to repeated action potentials (445, 446) as a result of the intracellular accumulation of Na+ and the subsequent activation of the electrogenic Na+/K+ pump (Beaumont et al., 445, 446). In crayfish axons, this hyperpolarization may amount to 5-10 mV (Beaumont et al.). Blockade of the Na+-K+-ATPase with ouabain results in axon depolarization, probably as a consequence of posttetanic changes in the extracellular potassium concentration. In the leech, hyperpolarization-dependent conduction block occurs at central branch points in all three types of mechanosensory neurons in the ganglion: touch (T), pressure (P), and nociceptive (N) neurons. In these neurons, hyperpolarization is induced by the Na+-K+-ATPase and by cumulative activation of a calcium-activated potassium conductance. It is interesting to note that the conduction state can be changed by neuromodulatory processes: 5-HT decreases the probability of conduction block in P and T cells, probably by reducing the hyperpolarization (350).
Hyperpolarization-dependent failures have also been reported in axons of hypothalamic neurons (from the paraventricular and supraoptic nuclei) that run into the neurohypophysis. The morphology of their boutons is unusual in that their diameter varies between 5 and 15 μm (581). In single axons, propagation failures are observed at stimulation rates greater than 12 Hz and are concomitant with a hyperpolarization of 4 mV (Bielefeldt and Jackson). Here, the induced hyperpolarization of the neuron results from activation of calcium-dependent BK potassium channels.
Several recent studies indicate that the hyperpolarization produced by repetitive stimulation can be dampened by the hyperpolarization-activated cationic current (I_h) (Beaumont et al., 498). This inward current is active at the resting membrane potential and produces a tonic depolarization of the axonal membrane (Beaumont et al.). Thus reduction of this current induces a hyperpolarization and perturbs propagation. Pharmacological blockade of I_h by ZD-7288 or by external cesium can in fact produce more failures in Schaffer collateral axons (498). The peculiar biophysical properties of I_h indicate that it may limit large hyperpolarizations or depolarizations produced by external and internal accumulation of ions. In fact, hyperpolarization of the axon activates I_h, which in turn produces an inward current that counteracts the hyperpolarization (Beaumont et al.). Reciprocally, this compensatory mechanism is also valid for depolarization, through the removal of the basal activation of I_h. In addition, activity-induced hyperpolarization of the axonal membrane may modulate the biophysical state of other channels that control propagation.
C. Frequency-Independent Propagation Failures
Action potential propagation in some axon collaterals of cultured CA3 pyramidal neurons can be gated by activation of a presynaptic A-type K+ current, independently of the frequency of stimulation (144, 295). Synaptic transmission between monosynaptically coupled pairs of CA3-CA3 or CA3-CA1 pyramidal cells in hippocampal slice cultures can be blocked if a brief hyperpolarizing current pulse is applied a few milliseconds before the induction of the action potential in the presynaptic neuron (Fig. 12B). This regulation is observed at synaptic connections that have no transmission failures, indicating that the lack of a postsynaptic response is the consequence of a conduction failure along the presynaptic axon. In contrast to axonal integration, where transmitter release can be gradually regulated by the presynaptic membrane potential, transmission here is all or none. Interestingly, failures can also be induced when the presynaptic hyperpolarizing current pulse is replaced by a somatic IPSP (144, 295). When presynaptic cells are recorded with a microelectrode containing 4-aminopyridine (4-AP), a blocker of I_A-like conductances, failures are abolished, indicating that I_A gates action potential propagation (see also Ref. 389). Because A-channels are partly inactivated at the resting membrane potential, their contribution during an action potential elicited from rest is minimal, and the action potential propagates successfully from the cell body to the nerve terminal. In contrast, A-channels recover from inactivation upon a transient hyperpolarization and then impede successful propagation to the terminal.
Propagation failures could be induced in only 30% of cases (144), showing that propagation is generally reliable in hippocampal axons (341, 342, 422). In particular, I_A-dependent conduction failures have been found to occur at some axon collaterals but not at others (144). Using a theoretical approach, it has been shown that failures occur at branch points when A-type K+ channels are distributed in clusters near the bifurcation (295). Perhaps because these conditions are not fulfilled in layer II/III neocortical neurons (128, 289) and in dissociated hippocampal neurons (341), this form of gating has not been reported in these cell types. It would be interesting to explore the actual distribution of K+ channel clusters near branch points using immunofluorescence methods.
Functionally, this form of gating may determine part of the short-term synaptic facilitation that is observed during repetitive presynaptic stimulation. Apparent paired-pulse facilitation could be observed because the first action potential fails to propagate but not the second spike, as a result of inactivation of A-type K+ current (145). A recent study suggests that repetitive burst-induced inactivation of A-type K+ channels in the axons of cortical cells projecting onto the accumbens nucleus leads to short-term synaptic potentiation through an increased reliability of spike propagation (Casassus et al.).
VII. REFLECTION OF ACTION POTENTIAL PROPAGATION
Branch points are usually considered as frequency filters, allowing separate branches of an axon to activate their synapses at different frequencies. But another way that a neuron's branching pattern can affect impulse propagation is by reflecting the impulse (221, 402, 428). Reflection (or reverse propagation) occurs when an action potential is near failure (221). This form of axonal computation has been well described in leech mechanosensory neurons (Fig. 13A; Refs. 28 and Baccus et al.), in which an unexpected event occurs when conduction is nearly blocked: the action potential that has nearly failed to invade the thick branch of the principal axon sets up a local potential that propagates backwards. Reflection occurs because impulses are sufficiently delayed as they travel through the branch point. Thus, when the delay exceeds the refractory period of the afferent axon, the impulse will propagate backwards as well as forwards, creating a reflection. This phenomenon can be identified electrophysiologically at the cell body of the P neuron because action potentials that reflect have a longer initial rising phase (or "foot"), indicating a delay in traveling through the branch point. This fast double firing in the thin branch of mechanosensory neurons has important functional consequences. It facilitates synaptic transmission at synapses formed by this axon and postsynaptic neurons by a mechanism of paired-pulse facilitation involving the orthodromic spike and the antidromic action potential that reflected at the branch point (Fig. 13A). Reflection is not limited to P cells but also concerns T cells (28). Interestingly, the facilitation of synaptic transmission also affects the chemical synapse between the P cell and the S neuron, a neuron that plays an essential role in sensitization, a nonassociative form of learning (Baccus et al.).
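In its simplest form, the reflection condition described above reduces to a one-line rule: a near-failing impulse that is delayed at the branch point by more than the refractory period of the afferent axon travels backwards as well as forwards. A hedged sketch (the millisecond values are arbitrary illustrative choices):

def conduction_outcome(branch_delay_ms, refractory_ms):
    # Qualitative rule for a near-failing impulse at a branch point:
    # if the branch-point delay exceeds the refractory period of the
    # afferent axon, the impulse is reflected as well as conducted.
    if branch_delay_ms > refractory_ms:
        return "reflection (forward + backward impulse)"
    return "forward conduction only"

print(conduction_outcome(branch_delay_ms=1.0, refractory_ms=3.0))
print(conduction_outcome(branch_delay_ms=4.5, refractory_ms=3.0))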
FIG. 13. Reflection of action potentials. A: reflection and conduction block produce multilevel synaptic transmission in mechanosensory neurons of the leech. Left column: an action potential initiated by anterior minor field stimulation invades the whole axonal arborization (red) and evokes an EPSP in all postsynaptic cells. Middle column: following repetitive stimulation, the cell body is slightly hyperpolarized (orange) and the same stimulation induces a reflected action potential at the branch point between the left branch and the principal axon. The reflected action potential (pink arrow 2) stimulates the presynaptic terminal on postsynaptic cell 1 twice, thus enhancing synaptic transmission (arrow). Right column: when the cell body is further hyperpolarized (blue), the stimulation of the minor field now produces an action potential that fails to propagate at the branch point. The failed spike is seen as a spikelet at the cell body (upward arrow). No postsynaptic response is evoked in postsynaptic cell 2 (downward arrow). [Adapted from Baccus et al. (28) and Baccus et al.] B: reflection of action potential propagation in the presynaptic dendrite of the mitral cell. The dendritic and somatic compartments are recorded simultaneously. An action potential (1) initiated in the dendrite (d) fails to propagate towards the soma (s, dotted trace), is then regenerated at the soma (2), and propagates back to the dendrite, thus producing a double dendritic spike (thick trace in the inset). The asterisk marks the failing dendro-somatic spike. [Adapted from Chen et al. (108).]

Reflected propagation is not restricted to mechanosensory neurons of the leech but has also been noted in the axon of an identified snail neuron (12). Reflection has not yet been definitively reported in mammalian axons (270), but it has been demonstrated in dendrites. In mitral cells of the mammalian olfactory bulb, both conduction failures (107) and reflection (108) have been observed for impulses that are initiated in dendrites (Fig. 13B). Propagation in dendrites of mitral cells is rather unusual compared with classical dendrites. Like axons, it is highly active, and no decrement in the amplitude of the action potential is observed between the soma and the dendrite (Bischofberger and Jonas). In addition, mitral cell dendrites are both pre- and postsynaptic elements. "Ping-pong" propagation has been observed following near failure of dendritic action potentials evoked in distal primary dendrites (108). Forward dendritic propagation of an action potential can be evoked by an EPSP elicited by a strong stimulation of the glomerulus. This particular form of propagation may fail near the cell body when the soma is slightly hyperpolarized. For an intermediate range of membrane potential, the action potential invades the soma and may trigger a back-propagating action potential, which is seen as a dendritic double spike in the primary dendrite.
The function of reflected propagation is not yet definitively established, but when axonal output is shut down by somatic inhibition, the primary dendrite of the mitral cell may function as a local interneuron affecting its immediate environment. Reflection of fast action potentials has also been observed in dendrites of retinal ganglion cells (544).
VIII. SPIKE TIMING IN THE AXON
A. Delay Imposed by Axonal Length
Axonal conduction introduces a delay in the propagation of neuronal output, and axonal arborization might transform a temporal pattern of activity in the main axon into spatial patterns in the terminals (113). Axonal delay initially depends on the velocity of the action potential in axons (generally between 0.1 m/s in unmyelinated axons and 100 m/s in large myelinated axons), which directly results from the diameter of the axon and the presence of a myelin sheath. Axonal delays may have crucial functional consequences in the integration of sensory information. In the first relay of the auditory system of the barn owl, differences in the axonal conduction delay from each ear, which in this case depend on the differences in axonal length, produce sharp temporal tuning of the binaural information that is essential for acute sound localization (Fig. 14A; Carr and Konishi; 358).
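The delay-line computation can be made concrete with a toy Jeffress-type model, in which each coincidence detector receives one input per ear and the axonal delay is simply path length divided by conduction velocity. All numbers below (path lengths, velocity, interaural time difference) are invented for illustration and do not describe the owl circuit quantitatively:

# Toy delay-line model of interaural time difference (ITD) detection.
velocity_m_per_s = 10.0           # assumed conduction velocity
# Assumed axonal path lengths (mm) from each ear to 5 coincidence detectors:
left_paths_mm = [1.0, 2.0, 3.0, 4.0, 5.0]
right_paths_mm = [5.0, 4.0, 3.0, 2.0, 1.0]

def delay_ms(length_mm):
    # Conduction delay = path length / velocity, expressed in milliseconds.
    return length_mm * 1e-3 / velocity_m_per_s * 1e3

def best_detector(itd_ms):
    # Detector whose internal delay difference cancels the external ITD
    # (sign convention is arbitrary in this toy model).
    mismatch = [abs((delay_ms(l) - delay_ms(r)) + itd_ms)
                for l, r in zip(left_paths_mm, right_paths_mm)]
    return mismatch.index(min(mismatch))

print(best_detector(itd_ms=0.0))   # central detector for a frontal source
print(best_detector(itd_ms=0.2))   # detector shifts when the sound is lateralized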
FIG. 14. Axonal propagation and spike timing. A: delay lines in the auditory system of the barn owl. Each neuron from the nucleus laminaris receives an input from each ear. Note the difference in axonal length from each side. [Adapted from Carr and Konishi (96).] B: comparison of the delay of propagation introduced by a branch point with GR > 1 (dashed traces) versus a branch point with perfect impedance matching (GR = 1, continuous traces). Top: schematic drawing of a branched axon with 3 points of recording. At the branch point with GR = 8, the shape of the action potential is distorted and the propagation displays a short latency (Δt). [Adapted from Manor et al. (349).] C: propagation failures in hippocampal cell axons are associated with conduction delays. The presynaptic neuron was slightly hyperpolarized with constant current to remove inactivation of the A-current (I_A). A presynaptic action potential induced with a short delay after onset of the depolarizing pulse did not elicit an EPSC in the postsynaptic cell because of the large activation of I_A. Increasing the delay permitted action potential propagation because I_A was reduced during the action potential. For complete inactivation of I_A (bottom pair of traces), latency decreased. [Adapted from Debanne et al. (144), with permission from Nature Publishing Group.]

What is the functional role of axonal delay in network behavior? Theoretical work shows that synchronization of cortical columns and network resonance both depend on axonal delay (Bush and Sejnowski; 344). A recent theoretical study emphasizes the importance of axonal delay in the emergence of polychronization in neural networks (271). In most computational studies of storage capacity, axonal delay is totally ignored, but in fact, the interplay between axonal delays and synaptic plasticity based on timing (spike-timing-dependent plasticity, STDP) generates the emergence of polychronous groups (i.e., strongly interconnected groups of neurons that fire with millisecond precision). Most importantly, the number of groups of neurons that fire synchronously exceeds the number of neurons in the network, resulting in a system with massive memory capacity (271).
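The core idea of polychronization can be illustrated in a few lines: a polychronous group is a set of presynaptic neurons whose firing times, offset by their individual axonal delays, make their spikes arrive at a common target within a narrow coincidence window. The delays and the window below are assumed values:

axonal_delay_ms = {"A": 5.0, "B": 1.0, "C": 3.0}   # assumed delays to one target

def firing_times_for_synchrony(arrival_ms=10.0):
    # Firing times that make all spikes reach the target at the same moment.
    return {cell: arrival_ms - d for cell, d in axonal_delay_ms.items()}

def target_fires(firing_ms, window_ms=1.0):
    # The target fires only if all arrivals fall within the coincidence window.
    arrivals = [t + axonal_delay_ms[c] for c, t in firing_ms.items()]
    return max(arrivals) - min(arrivals) <= window_ms

pattern = firing_times_for_synchrony()      # {'A': 5.0, 'B': 9.0, 'C': 7.0}
print(target_fires(pattern))                # True: staggered firing, synchronous arrival
print(target_fires({c: 0.0 for c in axonal_delay_ms}))  # False: synchronous firing fails

The point made in the text follows directly: because a given neuron can belong to many such delay-defined groups, the number of reliable firing patterns can exceed the number of neurons.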
However, differences in axonal delay may be erased to ensure synchronous activity. A particularly illustrative example is given by the climbing fiber inputs to cerebellar Purkinje cells. Despite significant differences in the length of individual olivocerebellar axons, the conduction time is nearly constant because long axons are generally thicker. Thus this compensatory mechanism allows synchronous activation of Purkinje cells with millisecond precision (518). Similarly, the eccentricity of X-class retinal ganglion cells within the retina is compensated by their conduction velocity to produce a nearly constant conduction time (507). Thus, regardless of the geometrical constraints imposed by retinal topography, a precise spatiotemporal representation of the retinal image can be maintained in the visual relay.
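A back-of-the-envelope calculation shows how diameter can compensate length. If one assumes the textbook rule of thumb that conduction velocity in myelinated fibers scales roughly linearly with diameter (about 6 m/s per μm; the constant is an assumption, not a measured olivocerebellar value), equal conduction times simply require the diameter to grow in proportion to path length:

def diameter_for_target_time(length_mm, target_time_ms, k=6.0):
    # Diameter (um) giving conduction time target_time_ms over length_mm,
    # assuming v [m/s] ~ k * diameter [um]. All numbers are illustrative.
    velocity_needed = (length_mm * 1e-3) / (target_time_ms * 1e-3)  # m/s
    return velocity_needed / k

for L in (10, 20, 40):   # hypothetical paths of different lengths (mm)
    print(f"{L} mm path ->", round(diameter_for_target_time(L, 4.0), 2),
          "um diameter for a constant 4 ms conduction time")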
B. Delays Imposed by Axonal Irregularities and Ion Channels
In addition to this axonal delay, local changes in the geometry of the axon produce an extra delay. The presence of axonal irregularities such as varicosities and branch points reduces conduction velocity (Fig. 14B). This reduction in conduction velocity occurs as a result of a high geometrical ratio (GR; see sect. VIA). The degree of temporal dispersion has been simulated in the case of an axon from the somatosensory cortex of the cat (349). High-GR branch points could account for a delay of 0.5-1 ms (349). But this extra delay appears rather small compared with the delay imposed by conduction in axon branches with variable lengths (in the range of 2-4 ms).
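Since section VIA is not reproduced here, it is worth recalling that the geometric ratio is, in Rall's standard formulation (assumed here), the sum of the daughter diameters each raised to the 3/2 power, divided by the parent diameter raised to the 3/2 power. Impedance matching corresponds to GR = 1, and the further GR rises above 1, the larger the local delay and the higher the risk of failure:

def geometric_ratio(parent_diameter, daughter_diameters):
    # Rall's geometric ratio at a branch point (dimensionless).
    return sum(d ** 1.5 for d in daughter_diameters) / parent_diameter ** 1.5

# Perfect impedance matching: two daughters of ~0.63x the parent give GR ~ 1.
print(round(geometric_ratio(1.0, [0.63, 0.63]), 2))   # ~1.0
# Unfavourable branch point comparable to the GR = 8 case of Fig. 14B:
print(round(geometric_ratio(1.0, [2.52, 2.52]), 2))   # ~8.0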
A third category of delay in conduction can be introduced during repetitive stimulation or during the activation of specific ion channels. Thus the magnitude of this delay is usually variable. It has been measured in a few cases. In lobster axons, the conduction velocity of the axon was lowered by ~30% following repetitive stimulation (231). In dorsal root ganglion neurons, the latency of conducted spikes was found to be enhanced by ~1 ms following antidromic paired-pulse stimulation of the axon (336). Computational studies indicate that this delay may also result from a local distortion of the action potential shape. Activity-dependent delays may have significant consequences on synaptic transmission. For instance, the synaptic delay was found to increase by 1-2 ms during repetitive stimulation of crayfish motor neurons (241). Monosynaptic connections to motoneurons show an increase in synaptic latency concomitant with the synaptic depression induced by repetitive stimulation at 5-10 Hz, which induced near-propagation failures (510). Similarly, a longer synaptic delay has been measured between connected hippocampal cells when conduction nearly fails, due to reactivation of A-type potassium channels (Fig. 14C; Ref. 144). Thus axonal conduction may introduce some noise into the temporal pattern of action potentials produced at the initial segment. At the scale of a nerve, delays in individual axons introduce a temporal dispersion of conduction, suggesting a stuttering model of propagation (374).
Synaptic timing at L5-L5 or CA3-CA3 unitary connections is largely determined by presynaptic release probability (Boudkkazi et al.). Synaptic latency is inversely correlated with the amplitude of the postsynaptic current, and changes in synaptic delay in the range of 1-2 ms are observed during paired-pulse and long-term plasticity involving regulation of presynaptic release (Boudkkazi et al.). Probability of transmitter release is not the only determinant of synaptic timing, however. The waveform of the axonal spike also plays a critical role. The enlargement of the axonal spike by a Kv channel blocker significantly prolongs synaptic latency at L5-L5 synapses (Boudkkazi et al.). The underlying mechanism results from the shift in the presynaptic calcium current. Because the presynaptic action potential overshoots at approximately +50 mV, the calcium current develops essentially during the repolarizing phase of the presynaptic spike (Augustine et al.; Bischofberger et al.; 279, 328, 454). Thus the spike broadening produced by 4-AP delays the calcium current and subsequently shifts glutamate release towards longer latencies. Physiologically, spike broadening in the axon may occur when Kv channels are inactivated during repetitive axonal stimulation and may participate in the stabilization of synaptic delay (Boudkkazi et al.).
The probabilistic nature of voltage-gated ion channels (i.e., channel noise) may also affect conduction time along fibers below 0.5 μm in diameter. A simulation study indicates that four distinct effects may corrupt propagating spike trains in thin axons: spikes being added, deleted, jittered, or split into subgroups (186). The local variation in the number of Na+ channels may cause microsaltatory conduction.
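The origin of this channel noise is easy to see numerically: the membrane area of an axonal segment, and hence its channel count, shrinks in proportion to the diameter, and the relative binomial fluctuation in the number of open channels grows as one over the square root of that count. The density and open probability below are generic assumed values, not measurements:

import math

CHANNEL_DENSITY_PER_UM2 = 100.0   # assumed Na+ channel density
P_OPEN = 0.3                      # assumed open probability near threshold

def channel_noise(diameter_um, segment_um=10.0):
    # Channel count and relative binomial fluctuation (std/mean of the
    # open-channel count) in one cylindrical axonal segment.
    area_um2 = math.pi * diameter_um * segment_um
    n = area_um2 * CHANNEL_DENSITY_PER_UM2
    cv = math.sqrt((1.0 - P_OPEN) / (n * P_OPEN))
    return int(n), cv

for d in (0.2, 0.5, 2.0):
    n, cv = channel_noise(d)
    print(f"d = {d} um: ~{n} channels, {cv:.1%} fluctuation in the open fraction")

Under these assumptions, a 0.2 μm fiber fluctuates several times more than a 2 μm fiber, which is why spike addition, deletion, and jitter become appreciable only in very thin axons.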
C. Ephaptic Interactions and Axonal Spike Synchronization
Interactions between neighboring axons were first studied by Katz and Schmitt (280, 281) in crab. The passage of an impulse in one axonal fiber produced a subthreshold change in excitability in the adjacent fiber. As the action potential approaches in the active axon, the excitability of the resting fiber was first reduced, and then quickly enhanced (280, 288). This effect results from the depolarization of the resting axon by the active axon, because it generates locally an extracellular potential of a few millivolts. Interactions of this type are called ephaptic (from the Greek for "touching onto," Ref. 20). They are primarily observed when the extracellular conductance is reduced (37, 280). This condition is fulfilled, for instance, in bundles of unmyelinated axons where the periaxonal space is minimal, as in olfactory nerves (Blinder et al.; Bokil et al.). Ephaptic interactions between axons have also been observed in frog sciatic nerve (288) and in demyelinated spinal axons of dystrophic mice (436).
One of the most interesting features of ephaptic interaction between adjacent axons is that the conduction velocity in neighboring fibers might be unified, thus synchronizing activity in a bundle of axons. If one action potential precedes the other by a few milliseconds, it accelerates the conduction rate of the lagging action potential in the other axon (37, 280; Fig. 15). This phenomenon occurs because the ephaptic potential created in the adjacent fiber is asymmetrical. When the delay between the two spikes is small (~1-2 ms; Ref. 37), the depolarizing phase of the ephaptic potential facilitates spike generation and increases conduction velocity. However, perfectly synchronized action potentials decrease the conduction velocity in both branches because of the initial hyperpolarizing phase of the ephaptic potentials. Synchronization can only occur if the individual velocities differ only slightly and are significant for a sufficient axonal length (280). Does such synchronization also occur in mammalian axons? There is no evidence for this yet, but modeling studies indicate that the relative location of nodes of Ranvier on two adjacent myelinated axons might also determine the degree of temporal synchrony between fibers (Binczak et al.; 440). On small unmyelinated axons, ephaptic interaction between axons is predicted to be very small (254), but future research in this direction might reveal a powerful means to thoroughly synchronize neuronal activity downstream of the site of action potential initiation.
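The resynchronizing effect can be captured by a two-fiber toy model in which, at each propagation step, the ephaptic potential of the leading spike advances the lagging one by an amount proportional to their time difference. The linear coupling constant is arbitrary, and the sketch is only meant for the small (1-2 ms) lags for which the depolarizing phase dominates:

def resynchronize(lag_ms, steps=10, coupling=0.3):
    # Iteratively shrink the spike-time difference between two fibers.
    # At each propagation step the lagging spike is advanced by
    # coupling * lag (toy linear model of the ephaptic interaction).
    history = [lag_ms]
    for _ in range(steps):
        lag_ms *= (1.0 - coupling)
        history.append(lag_ms)
    return history

print([round(x, 3) for x in resynchronize(1.5)])
# 1.5 -> 1.05 -> 0.735 -> ... : the initial delay decays towards zero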
FIG. 15. Ephaptic interaction in axons. A: local circuit diagram in a pair of adjacent axons. The red area indicates the "active region." The action currents produced by the action potential penetrate the inactive axon. B: schematic representation of resynchronization of action potentials in a pair of adjacent axons. While the spikes propagate along the axons, the initial delay between them becomes reduced. [Adapted from Barr and Plonsey (37) and Katz and Schmitt (280).]

D. Electric Coupling in Axons and Fast Synchronization

Fast communication between neurons is not only ensured by chemical synapses, but electrical coupling has been reported in a large number of cell types including inhibitory cortical interneurons (249). In the hippocampus, one type of high-frequency oscillation (100-200 Hz) called "ripple" arises from the high-frequency firing of inhibitory interneurons and phase-locked firing of many CA1 neurons (533). Some of the properties of ripple oscillation are, however, difficult to explain. First, the oscillations are so fast (near 200 Hz) that synchrony across many cells would be difficult to achieve through chemical synaptic transmission. In addition, ripples persist during pharmacological blockade of chemical transmission in vitro (162). While some inhibitory interneurons may synchronize a large number of pyramidal cells during the ripple (286), a significant part of the synchronous activity could be mediated by axo-axonal electrical synaptic contacts through gap junctions (464). Antidromic stimulation of a neighboring axon elicits a small action potential, a spikelet with a fast rate of rise (near 180 mV/ms). Spikelets can be evoked at the rate of a ripple (200 Hz), and they are blocked by TTX or by the gap junction blocker carbenoxolone. Simultaneous recording from the axon and cell body showed that the spikelet first traversed the axon prior to invading the soma and the dendrites. Finally, the labeling of pyramidal neurons with rhodamine, a small fluorescent molecule, showed dye coupling in adjacent neurons that was initiated through the axon (464). Thus the function of the axon is not limited to the conduction of impulses to the terminal, and information may pass between adjacent pyramidal neurons through electrical synapses located close to their axon hillocks.
A similar mechanism of electrical coupling between proximal axons of Purkinje cells is supposed to account for very fast oscillations (>75 Hz) in the cerebellum. Very fast cerebellar oscillations recorded in cerebellar slices are indeed sensitive to gap junction blockers (368). In addition, spikelets and fast prepotentials eliciting full spikes are observed during these episodes. In fact, the simulation of a cerebellar network where Purkinje cells are sparsely linked through axonal gap junctions replicates the experimental observations (534).
Cell-cell communication through axons of CA1 pyramidal neurons has recently been suggested in vivo (181). Using the newly developed technique of in vivo whole cell recording in freely moving rats (321, 322, 352), the group of Michael Brecht found that most records from CA1 cells (~60%) display all-or-none events, with electrophysiological characteristics similar to spikelets resulting from electrical coupling in the axon. These events have a fast rise time (<1 ms) and a biphasic decay time. They occur during ripples as bursts of three to six events (181).
IX. ACTIVITY-DEPENDENT PLASTICITY OF AXON MORPHOLOGY AND FUNCTION
A. Morphological Plasticity
The recent development of long-term time-lapse imaging in vitro and in vivo (255) has revealed that axon morphology is highly dynamic. Whereas the large-scale organization of the axonal arborization remains fairly stable over time in adult central neurons, a subset of axonal branchlets can undergo impressive structural rearrangements in the range of a few tens of micrometers (review in Ref. 256). These rearrangements affect both the number and size of en passant boutons as well as the complexity of the axonal arborization. For instance, the hippocampal mossy fiber terminals are subject to dramatic changes in their size and complexity during in vitro development and in the adult in vivo following exposure to an enriched environment (203, 204, 216). The turnover of presynaptic boutons in well-identified Schaffer collateral axons is increased following induction of LTD in vitro (Becker et al.). Finally, in an in vitro model of traumatic epilepsy, transection between the CA3 and CA1 regions induces axonal sprouting associated with an increase in the density of boutons per unit length (360).
Axonal reorganization has also been reported in vivo. In the visual cortex, a subset of geniculo-cortical axonal branches can undergo structural rearrangements during development (Antonini and Stryker) and in the adult (508), or following activity deprivation (239, 240, 554). Similar observations have been reported in the barrel cortex during development (417) and in adult mice (137). However, one should note that the magnitude of axonal rearrangements is much larger during the critical period of development.
In the adult mouse cerebellum, transverse, but not ascending, branches of climbing fibers are dynamic, showing rapid elongation and retraction (383). The motility of axonal branches is clearly demonstrated in all these studies, and it certainly reflects dynamic rewiring and functional changes in cortical circuits. Neuronal activity seems to play a critical role in the motility of the axon, but the precise mechanisms are not clearly understood. For instance, stimulation of the axon freezes dynamic changes in cerebellar climbing fibers in vivo (383). Similarly, the fast motility of axonal growth cones of hippocampal neurons in vitro is reduced by stimulation of GluR6 kainate receptors or electrical stimulation and depends on axonal calcium concentration (266). In contrast, the slow remodeling of local terminal arborization complexes of the mossy fiber axon is reduced when Na+ channel activity is blocked with TTX (204).
Electrical activity not only determines axon morphology but also controls the induction of myelination in developing central and peripheral axons. For instance, blockade of Na+ channel activity with TTX reduces the number of myelinated segments and the number of myelinating oligodendrocytes, whereas increasing neuronal excitability has the opposite effect (149). In contrast, electrical stimulation of dorsal root ganglion neurons delays myelin formation (509). In this case, ATP released by active axons is subsequently hydrolyzed to adenosine, which stimulates adenosine receptors in Schwann cells and freezes their differentiation. Neuronal activity is also thought to determine the maintenance of the myelin sheath in adult axons. In the hindlimb unloading model, myelin thickness is tightly controlled by motor activity (Canu et al.). Myelin is thinner in axons controlling inactive muscles but thicker in hyperactive axons.
B. Functional Plasticity
Beyond morphological rearrangements, the axon is also able to express many forms of functional plasticity (520, 557). In fact, several lines of evidence suggest that ion channel activity is highly regulated by synaptic or neuronal activity (reviews in Refs. 135, 493, 582). Therefore, some of the axonal operations described in this review could be modulated by network activity. Axonal plasticity can be categorized into Hebbian and homeostatic forms of functional plasticity according to the effects of the induced changes in neuronal circuits. Hebbian plasticity usually serves to store relevant information and to some extent destabilizes neuron ensembles, whereas homeostatic plasticity is compensatory and stabilizes network activity within physiological bounds (420, 539).
Hebbian plasticity of axonal function
There are now experimental facts suggesting that Hebbian functional plasticity exists in the axon. For instance, the repetitive stimulation of Schaffer collateral axons at 2 Hz leads to a long-lasting lowering of the antidromic activation threshold (361). Although the precise expression mechanisms have not been characterized here, this study suggests that axonal excitability is persistently enhanced if the axon is strongly stimulated. Furthermore, LTP and LTD are respectively associated with increased and decreased intrinsic excitability of the presynaptic neuron (205, 324). These changes imply retrograde messengers that target the presynaptic neuron. Although these changes are detected in the cell body, the possibility that ion channels located in the axon are also regulated cannot be excluded. Two parallel studies have recently reported a novel form of activity-dependent plasticity in a subclass of inhibitory interneurons of the cortex and hippocampus containing neuropeptide Y (476, 523). Stimulation of the interneuron at 20-40 Hz leads to an increase in action potential firing lasting several minutes. In both studies, the persistent firing is consistent with the development of an ectopic spike initiation zone in the distal region of the axon.
Homeostatic axonal plasticity
The expression of axonal channels might be regulated by chronic manipulation of neuronal activity according to the homeostatic scheme of functional plasticity. For instance, blocking neuronal activity by TTX enhances both the amplitude of the transient Na+ current (150) and the expression of Na+ channels in hippocampal neurons (Aptowicz et al.). Although the subcellular distribution of Na+ channels was not precisely determined in these studies, they might be upregulated in the axon. Indeed, axon regions that become silent because of acute demyelination express a higher density of Na+ channels, which eventually allows recovery of active spike propagation (Bostock and Sears; 195, 555). Activity deprivation not only enhances intrinsic excitation but also reduces the intrinsic neuronal brake provided by voltage-gated K+ channels (131, 141, 150). Chronic inactivation of neuronal activity with TTX or synaptic blockers inhibits the expression of Kv1.1, Kv1.2, and Kv1.4 potassium channels in the cell body and axon of cultured hippocampal neurons (229). Although the functional consequences were not analyzed here, this study suggests that downregulation of Kv1 channels would enhance neuronal excitability and enlarge axonal spike width.
The position of the AIS relative to the cell body is also subject to profound activity-dependent reorganization (Fig. 16A). In a recent study, Grubb and Burrone (232, 233) showed that brief network-wide manipulation of electrical activity determines the position of the AIS in hippocampal cultured neurons. The AIS, identified by its specific proteins ankyrin-G and βIV-spectrin, is moved up to 17 μm distally (i.e., without any change in the AIS length) when activity is increased by high external potassium or by illumination of neurons transfected with channelrhodopsin-2 during 48 h (232; Fig. 16B). The relocation of the AIS is reversible and depends on T- and L-type calcium channels, suggesting that intra-axonal calcium may control the dynamics of the AIS protein scaffold. This bidirectional plasticity might be a powerful means to adjust the excitability of neurons according to the homeostatic rule of plasticity (539). In fact, neurons with a proximal AIS are generally more excitable than those with a distal AIS, suggesting that shifting the location of the AIS distally elevates the current threshold for action potential generation (232, 298). Thus these data indicate that AIS location is a mechanism for homeostatic regulation of neuronal excitability.
Homeostatic AIS plasticity might be a general rule and may account for the characteristic frequency-dependent distribution of sodium channels along the axon of chick auditory neurons (302, 303). In neurons that preferentially analyze high auditory frequencies (~2 kHz), sodium channels are clustered at 20-50 μm from the soma, whereas they are located in the proximal part of the axon in neurons that detect lower auditory frequencies (~600 Hz; Ref. 302). A recent study from Kuba and coworkers (304) directly demonstrates the importance of afferent activity in AIS position in chick auditory neurons. Removing the cochlea in young chicks produces an elongation of the AIS in nucleus magnocellularis neurons without affecting its distance from the cell body (Fig. 16C; Ref. 304). This regulation is associated with a compatible increase in the whole cell Na+ currents.
Axonal excitability is also homeostatically tuned on short-term scales. Sodium channel activity is downregulated by many neuromodulators and neurotransmitters, including glutamate, which classically enhances neuronal activity (Cantrell and Catterall; Carlier et al.). Although further studies will be required to precisely determine the location of the regulated Na+ channels, it is nevertheless tempting to speculate that AIS excitability might be finely tuned.
X. PATHOLOGIES OF AXONAL FUNCTION
Beyond Wallerian degeneration that may be caused by axon sectioning, deficits in axonal transport (121,138), or demyelination (381), the axon is directly involved in at least two large families of neurological disorders. Neurological channelopathies such as epilepsies, ataxia, pain, myotonia, and periodic paralysis usually result from dysfunction in ion channel properties or targeting (130,306,409,461). The major consequences of these alterations are dysfunctions of neuronal excitability and/or axonal conduction (297). In addition, some forms of Charcot-Marie-Tooth disease affect primarily the axon (297,367,519). They mainly lead to deficits in axonal propagation (297, 519).
A. Axonal Diseases Involving Ion Channels
Epilepsies
Many ion channels expressed in the axons of cortical neurons are mutated in human epilepsies, and dysfunction of the AIS is often at the origin of epileptic phenotypes (562). For instance, mutations of the gene SCN1A encoding Nav1.1 cause several epileptic phenotypes, including generalized epilepsy with febrile seizures plus (GEFS+) and severe myoclonic epilepsy of infancy (SMEI) (Baulac et al.; 114, 182; Fig. 17). Some of these mutations do not produce a gain of function (i.e., hyperexcitability) as expected in the case of epilepsy, but rather a loss of function (505). Since Nav1.1 channels are highly expressed in the axons of GABAergic neurons (394), a decrease in excitability in inhibitory neurons will enhance excitability of principal neurons, which become less inhibited. Mice lacking SCN1A display spontaneous seizures because the sodium current is reduced in inhibitory interneurons but not in pyramidal cells (576). Similarly, deletions or mutations in Kv1.1 channels produce epilepsy (495) and episodic ataxia type 1 (EA1), characterized by cerebellar incoordination and spontaneous motor-unit activity (Browne et al.). Mutations in KCNQ2/3 (Kv7.2/Kv7.3) channels produce several forms of epilepsy such as benign familial neonatal convulsions (BFNC; Refs. 406, 466, 492; Fig. 17). Some mutations may also target ion channels located in axon terminals. For instance, a missense mutation in the KCNMA1 gene encoding BK channels is associated with epilepsy and paroxysmal dyskinesia (164; Fig. 17).

FIG. 17. Axonal channelopathies in cortical circuits. The possible roles of axonal ion channels implicated in epilepsy are illustrated schematically. Mutations in Nav1.1 from axons of GABAergic interneurons produce a loss of Na-channel function (i.e., reduced excitability of inhibitory interneurons but increased network activity) that might underlie generalized epilepsy with febrile seizures plus (GEFS+) or severe myoclonic epilepsy of infancy (SMEI). Mutations in Kv7.2/7.3 channels lead to a loss of function (i.e., an increase in excitability of principal neurons) and may result in benign familial neonatal convulsions (BFNC). Deletions or mutations in Kv1.1 increase neuronal excitability and produce episodic ataxia type 1.
Epilepsies may also be acquired following an initial seizure. For instance, many epileptic patients display graduated increases in the frequency and strength of their crises, indicating that epilepsy might be acquired or memorized by neuronal tissue. The cellular substrate for this enhanced excitability is thought to be long-lasting potentiation of excitatory synaptic transmission (Bains et al.; 146), but enhanced neuronal excitability might also be critical (Beck and Yaari; Bernard et al.; Blumenfeld et al.; 517). These changes in excitability are generally proepileptic, but additional work will be required to determine whether axonal channels specifically contribute to acquired epilepsy phenotypes.
In addition to epilepsy, mutations in the SCN1A or CACNA1A gene can also lead to cases of familial hemiplegic migraine; these mutations have mixed effects when studied in expression systems, which could explain how they contribute to cortical spreading depression (103, 408).
Axonal channelopathies in the PNS
Mutations in axonal channels may be involved in several diseases that affect the PNS. For instance, pain disorders are often associated with mutations of the SCN9A gene encoding the alpha subunit of Nav1.7, which cause either allodynia (i.e., burning pain; Refs. 189, 571) or analgesia (129). Pain is usually associated with a gain of function of Nav1.7 (i.e., lower activation threshold or reduced inactivation; Refs. 189, 238).
B. Axonal Diseases Involving Myelin
Multiple sclerosis
Multiple sclerosis (MS) is characterized by multiple attacks on CNS myelin that may lead to sensory (principally visual) and/or motor deficits (532, 555). MS is generally diagnosed in young adults (before 40), and the progression of the disease often alternates between phases of progression and remission, where the patient recovers because compensatory processes occur, such as Na+ channel proliferation in the demyelinated region (555). Although the etiology of MS is multiple, with hereditary, infectious, and environmental factors, the most important determinant of MS is dysregulation of the immune system, including autoimmune diseases directed against myelin proteins. The main consequence is a partial or total loss of myelin that prevents axonal conduction in axons of the optic nerves or corticospinal tracts.
Charcot-Marie-Tooth disease
Charcot-Marie-Tooth (CMT) disease affects myelin of PNS axons and constitutes a highly heterogeneous group of genetic diseases. These diseases generally invalidate molecular interactions between axonal and glial proteins that stabilize myelin produced by Schwann cells. The most frequent forms, CMT1A, CMT1B, and CMT1X, are caused by mutations in genes which encode three components of the myelin sheath, peripheral myelin protein-22 (PMP22), myelin protein zero (MPZ), and connexin 32, respectively (519).
Hereditary neuropathy with liability to pressure palsies
Hereditary neuropathy with liability to pressure palsies (HNPP) is a genetic disease that results from a deficiency in the gene coding for PMP22 (104). HNPP is characterized by focal episodes of weakness and sensory loss and is associated with abnormal myelin formation leading to conduction blocks (Bai et al.).
XI. CONCLUDING REMARKS
A. Increased Computational Capabilities
Axons achieve several fundamental operations that go far beyond classical propagation. Like active dendrites, axons amplify and integrate subthreshold and suprathreshold electrical signals (Alle and Geiger; 144, 179, 291, 489). In addition, the output message can be routed in selective axonal pathways at a defined regime of activity. The consequences of this are not yet well understood in mammalian axons, but branch point failures may participate in the elaboration of sensory processing in invertebrate neurons (234). Axonal propagation may also bounce back at a branch point or at the cell body, but at present, there are only a handful of examples showing reflected propagation (12, 28; Baccus et al.; 108). Reflected impulses may limit the spread of the neuronal message and enhance synaptic transmission. Theoretical and experimental studies indicate that reflection of action potentials could occur in axons that display large swellings or a branch point with high GR. Moreover, axonal delay is important to set network resonance (344) and increase storage capacity in neuronal networks (271). Finally, axonal coupling through ephaptic interactions or gap junctions may precisely synchronize network activity (448, 464). All these operations increase the computational capabilities of axons and affect the dynamics of synaptic coupling. Many pieces of the puzzle are, however, still missing.
The computational capabilities of axons might be further extended by another unexpected and important feature: their capacity to express both morphological and functional plasticity. There is now evidence for Hebbian and homeostatic long-term axonal plasticities that might further enhance the computational capacity of the circuits (232,233,304). Thus activity-dependent plasticity is not restricted to the input side of the neuron (i.e., its dendrites and postsynaptic differentiation), but it may also directly involve axonal function.
B. Future Directions and Missing Pieces
In the recent past, most (if not all) of our knowledge about axonal computation capabilities was derived from experiments on invertebrate neurons or from computer simulations (470). The use of paired-recording techniques (140, 144) and the recent spread of direct patch-clamp recordings from the presynaptic terminal (Alle and Geiger; Bischofberger et al.; 179, 432) or from the axon (259, 291, 292, 488-490) suggest that the thin mammalian axon will yield up all its secrets in the near future. There are good reasons to believe that, combined with the development of high-resolution imaging techniques like multiphoton confocal microscopy (128, 193, 194, 289), second-harmonic generation microscopy (160), and voltage-sensitive dyes (12; Bradley et al.; 196, 215, 228, 327, 396, 397, 580), these techniques will be a powerful tool to dissect the function of axons. Development of nanoelectronic recording devices will also probably offer promising solutions to solve the problem of intracellular recording from small-diameter axons (530).
Axonal morphology and the subcellular localization of ion channels play crucial roles in conduction properties, and propagation failures or reflected propagation may result from the presence of axonal irregularities such as varicosities and branch points. However, detailed quantitative analysis of the morphometry of single axons, combined with quantitative immunostaining of sodium channels as used recently by Lorincz and Nusser (333), will be needed. The use of recently developed molecular tools to target defined channel subunits towards specific axonal compartments could be of great help in determining their role in axonal propagation.
Fine temporal tuning can be achieved by axons. Differences in axonal length in the terminal axonal tuft introduce delays of several milliseconds. Is temporal scaling of action potential propagation in the axonal arborization relevant to the coding of neuronal information? Differential conduction delays in axonal branches participate in precise temporal coding in the barn owl auditory system (Carr and Konishi; 358). But the role of axonal delays has only been studied in artificial neural networks (Bush and Sejnowski; 271, 344) or in vitro neuronal circuits (Bakkum et al.), and additional work will have to be done to describe its implication in hybrid (i.e., neuron-computer) or in in vivo networks. Furthermore, understanding the conflict faced by cortical axons between space (the requirement to connect many different postsynaptic neurons) and time (the conduction delay that must be minimized) will require further studies (Budd et al.).
Local axonal interactions like ephaptic coupling and gap-junction coupling allow very fast synchronization of activity in neighboring neurons. Surprisingly, little experimental effort has been devoted to ephaptic interactions between axons. This mechanism represents a powerful means to precisely synchronize output messages of neighboring neurons. Perhaps ephaptic interactions between parallel axons could compensate the "stuttering conduction" that is introduced by axonal varicosities and branch points (374). The implications of these mechanisms in synchronized activity will have to be determined in axons that display favorable geometrical arrangement for ephaptic coupling (i.e., fasciculation over a sufficient axonal length). Callosal axons, mossy fibers, and Schaffer collaterals are possible candidates.
In conclusion, we report here evidence that beyond classical propagation many complex operations are achieved by the axon. The axon displays a high level of functional flexibility that was not expected initially. Thus it may allow a fine tuning of synaptic strength and timing in neuronal microcircuits. There are good reasons to believe that after the decade of the dendrites in the 1990s, a new era of axon physiology is now beginning.
FIG. 3. High concentration of functional sodium channels at the AIS of cortical pyramidal neurons. A: changes in intracellular Na+ during action potentials are largest in the AIS. A L5 pyramidal neuron was filled with the Na+-sensitive dye SBFI and the variations in fluorescence measured at different distances from the axon hillock. The signal is larger in the AIS (25 μm) and rapidly declines along the axon (55 μm) or at proximal locations (5 μm or soma). [Adapted from Kole et al. (290), with permission from Nature Publishing Group.] B: Na+ channel density is highest at the AIS. Top: Na+ currents evoked by step depolarizations (30 ms) from a holding potential of -100 to +20 mV in outside-out patches excised from the soma (black), AIS (orange, 39 μm), and axon (red, 265 μm). Bottom: average amplitude of peak Na+ current obtained from different compartments. [From Hu et al. (259), with permission from Nature Publishing Group.] C: high-resolution immunogold localization of the Nav1.6 subunit in the AIS of a CA1 pyramidal neuron. Gold particles labeling the Nav1.6 subunits are found at high density on the protoplasmic face of an AIS. Note the lack of immunogold particles in the postsynaptic density (PSD) of an axo-axonic synapse. [From Lorincz and Nusser (333), with permission from the American Association for the Advancement of Science.]
FIG. 5. Spike initiation in the AIS. A: confocal images of two L5 pyramidal neurons labeled with biocytin (A. Bialowas, P. Giraud, and D. Debanne, unpublished data). Note the characteristic bulbous end of the severed axon ("bleb"). B: dual soma-axonal bleb recording in whole-cell configuration from a L5 pyramidal neuron. Left: scheme of the recording configuration. Right: action potentials measured in the soma (black) and in the axon (red). C: determination of the spike initiation zone. Scheme of the time difference between axonal and somatic spikes as a function of the axonal distance (origin: soma). The maximal advance of the axonal spike is obtained at the AIS (i.e., the spike initiation zone). The slope of the linear segment of the plot gives an estimate of the conduction velocity along the axon.
FIG. 6. Spike threshold is lowest in the AIS. A: lower current threshold but higher voltage threshold in the AIS of L5 pyramidal neurons. Left: overlaid voltage responses during current injection into the AIS (blue) or soma (black) at the action potential threshold. Note the depolarized voltage threshold in the AIS compared with the soma. Right: average amplitude of injected current versus action potential probability for action potentials evoked by current injection in the AIS (open circles) or soma (solid circles). Note the lower current threshold in the AIS. B: slow depolarizing ramp mediated by Na+ channels in the AIS but not in the soma. Left: action potentials generated by simulated EPSC injection at the soma and recorded simultaneously at the soma (black) and AIS (blue). Middle: same recording in the presence of TTX (1 μM). Right: voltage difference (AIS minus soma) in control (gray) and TTX (red) reveals a depolarizing ramp in the AIS before spike initiation. [Adapted from Kole and Stuart (292), with permission from Nature Publishing Group.]
FIG. 10. Depolarization of the presynaptic soma facilitates synaptic transmission through axonal integration. A: facilitation of synaptic transmission in connected L5-L5 pyramidal neurons. Left: experimental design. Synaptic transmission is assessed when presynaptic action potentials are elicited either from rest (-62 mV) or from a depolarized potential (-48 mV). Right: averaged EPSP amplitude at two presynaptic somatic membrane potentials. Note the facilitation when the presynaptic potential is depolarized. [Adapted from Shu et al. (489), with permission from Nature Publishing Group.] B: mechanism of presynaptic voltage-dependent facilitation of synaptic transmission. Top: the cell body and the axon of a cortical pyramidal neuron are schematized. When an action potential is elicited from the resting membrane potential (RMP, -65 mV), the spike in the axon is identical in the proximal and distal parts of the axon. Postsynaptic inward currents are shown below. Bottom: an action potential elicited from a steady-state depolarized value of -50 mV is larger in the proximal part of the axon (because I_D is inactivated) but unchanged in the distal part (because I_D is not inactivated by the somatic depolarization). As a result, synaptic efficacy is enhanced for the proximal synapse (red inward current) but not for the distal synapse (blue inward current).
FIG. 11. Propagation failures in invertebrate neurons. A: propagation failure at a branch point in a lobster axon. The main axon and the medial and lateral branches are recorded simultaneously. The repetitive stimulation of the axon (red arrow) at a frequency of 133 Hz produces a burst of full-amplitude spikes in the axon and in the lateral branch but not in the medial branch. Note the electrotonic spikelet in response to the third stimulation. [Adapted from Grossman et al. (230), with permission from Wiley-Blackwell.] B: propagation failure at the junction between an axonal branch and the soma of a snail neuron (metacerebral cell). The neuron was labeled with the voltage-sensitive styryl dye JPW1114. The propagation in the axonal arborization was analyzed by the local fluorescence transients due to the action potential. The recording region is indicated by an outline of a subset of individual detectors, superimposed over the fluorescence image of the neuron in situ. When the action potential was evoked by direct stimulation of the soma, it propagated actively in all axonal branches (red traces). In contrast, when the action potential was evoked by the synaptic stimulation (EPSP) of the right axonal branch (Br1), the amplitude of the fluorescent transient declined when approaching the cell body, indicating a propagation failure (black traces). [Adapted from Antic et al. (12), with permission from John Wiley & Sons.]
FIG. 12. Propagation failures in mammalian axons. A: propagation failures in a Purkinje cell axon. Top: fluorescent image of a Purkinje cell filled with the fluorescent dye Alexa 488. The locations of the somatic and axonal recordings are indicated schematically. [Adapted from Monsivais et al. (372).] B: gating of action potential propagation by the potassium current I_A. Left: at resting membrane potential, presynaptic I_A was inactivated and the action potential evoked in the presynaptic cell propagated and elicited an EPSP in the postsynaptic cell. Right: following a brief hyperpolarizing prepulse, presynaptic I_A recovered from inactivation and blocked propagation. Consequently, no EPSP was evoked by the presynaptic action potential. [Adapted from Debanne et al. (144), with permission from Nature Publishing Group.]
FIG. 16. Activity-dependent plasticity of the AIS. A: scheme of the homeostatic regulation of AIS location in cultured hippocampal neurons (left) and in brain stem auditory neurons (right). The AIS is moved distally following chronic elevation of activity by high external K+ or photostimulation of neurons expressing the light-activated cation channel channelrhodopsin-2 (ChR2) (left). AIS length is augmented in chick auditory neurons following cochlea removal (right). B: ankyrin-G label in control neurons and in neurons treated with 15 mM K+ during 48 h (scale bar: 20 μm). [From Grubb and Burrone (232), with permission from Nature Publishing Group.] C: AIS plasticity in chick auditory neurons. Sodium channels have been immunolabeled with a pan-Na channel antibody. Neurons from the deprived auditory pathway display a longer AIS (right) than control neurons (left). [From Kuba et al. (304), with permission from Nature Publishing Group.]
ACKNOWLEDGMENTS
We thank M. Seagar for constant support, P. Giraud for providing confocal images of labeled neurons, and S. Binczak for helpful discussion. We thank J. J. Garrido, A. Marty, M. Seagar, and F. Tell for helpful comments on the manuscript and the members of D. Debanne's lab for positive feedback.
Address for reprint requests and other correspondence: D. Debanne, Université de la Méditerranée, Faculté de Médecine Secteur Nord, IFR 11, Marseille, F-13916 France (e-mail: [email protected]).
GRANTS
This work was supported by Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Ministry of Research (doctoral grants to E. Campanac and A. Bialowas), Fondation pour la Recherche Médicale (to E. Campanac), and Agence Nationale de la Recherche (to D. Debanne and G. Alcaraz).
DISCLOSURES
No conflicts of interest, financial or otherwise, are declared by the authors.

01766868 | en | ["math"] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01766868/file/INDRUM2018_Vandebrouck-Bechir%20reviewed.pdf
email: [email protected]
Fabrice Vandebrouck²
Teaching and learning continuity with technologies
Keywords: teaching and learning of analysis and calculus, novel approaches to teaching, continuity, digital technologies
We developed a digital tool aiming at introducing the concept oflocal -continuity together with its formal definition for Tunisian students at the end of secondary school. Our approach is a socioconstructivist one, mixing conceptualisation in the sense of Vergnaud together with Vygotski's concepts of mediation and ZPD. In the paper, we focus on the design of the tool and we give some flashes about students' productions with the tool and teachers' discourses in order to foster students' understanding of the continuity.
The definition of continuity of functions at a given point, together with the concept of continuity itself, remains a major difficulty in the teaching and learning of analysis. There is a dialectic between the definition and the concept which makes it necessary to introduce the two aspects together.
The definition of continuity brings FUG aspects in the sense of [START_REF] Rabardel | L'acquisition de la notion de convergence des suites dans l'enseignement supérieur[END_REF]. This means first that it makes it possible to formalize (F) the concept of continuity. It also unifies (U) several different images (or situations) of continuity encountered by students: in [START_REF] Tall | Concept image and concept definition in mathematics, with special reference to limits and continuity[END_REF], several emblematic situations of continuity are established (see below), and the definition aims at unifying all these different kinds of continuity. Moreover, the definition of continuity allows generalisations (G) to all other numerical functions, not yet encountered and not necessarily given with graphical representations, or to more general functions in other spaces of functions. As [START_REF] Rabardel | L'acquisition de la notion de convergence des suites dans l'enseignement supérieur[END_REF] stresses for the definition of the limit of sequences, notions which bring FUG aspects must be introduced with specific attention to mediations, and especially to the role of the teacher.
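For reference, the formal definition at stake can be written as follows (our formulation, using the α/β notation that reappears in the tool and in the students' commentaries quoted later in this paper):

$$f \text{ is continuous at } x_0 \iff \forall \beta > 0,\ \exists \alpha > 0 \text{ such that } f\big(\,]x_0-\alpha,\, x_0+\alpha[\,\big) \subseteq\ ]f(x_0)-\beta,\, f(x_0)+\beta[$$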
Our ambition is then to design a technological tool which, on the one hand, supports students' activities concerning the two aspects of continuity and, on the other hand, allows the teacher to introduce the concept of continuity with its formal definition by referring to the activities developed with the tool. As was noticed at the first INDRUM conference, papers about the introduction of technologies in the teaching of analysis remain scarce.
We first come back to well-known concept images and concept definitions of continuity. Then we explain our theoretical frame about conceptualisation and mathematical activities. This theoretical frame leads us to the design of the technological tool, which embodies most of the aspects we consider important for the conceptualisation of continuity. Due to space constraints, the results of the paper are mostly in terms of the design itself and of the way the tool encompasses our theoretical frame and our hypotheses about conceptualisation (with tasks, activities and opportunities for mediations). We then give some insights into students' activities with the software, and into teachers' discourses introducing the definition of continuity on the basis of students' mathematical activities with the software.
CONCEPT IMAGES AND CONCEPT DEFINITIONS OF CONTINUITY
No one can speak about continuity without referring to Tall and Vinner's paper about concept images and concept definitions in mathematics, whose particular focus is on limits and continuity [START_REF] Tall | Concept image and concept definition in mathematics, with special reference to limits and continuity[END_REF]. Tall considers that the concept definition is one part of the total concept image that exists in our mind. Additionally, it is understood that learners enter their acquisition process of a newly introduced concept with preexisting concept images. [START_REF] Sierpinska | On understanding the notion of function[END_REF] used the notion of epistemological obstacles regarding some properties of functions and especially the concept of limit. Epistemological obstacles for continuity are very close to those observed for the concept of limit, and they can be directly related to students' concept images, as a specific origin of these conceptions (El Bouazzaoui, 1988). One of these obstacles can be associated with what we call a primitive concept image: a geometrical and very intuitive conception of continuity, related to the aspect of the curve. With this concept image, continuity and derivability are often mixed, and continuity mainly means that the curve is smooth and has no angles. Historically, this primitive conception led Euler to introduce a definition of continuity based on algebraic representations of functions. This leads to a second epistemological obstacle: a continuous function is given by only one algebraic expression, which can be called the algebraic concept image of continuity. This conception in turn became an obstacle with the beginning of Fourier analysis, making a clear definition necessary. This definition came with Cauchy and Weierstrass, and it is close to our current formal definition.
We also refer to [START_REF] Bkouche | Points de vue sur l'enseignement de l'analyse : des limites et de la continuité dans l'enseignement[END_REF], who identifies three points of view about the continuity of functions, more or less connected to the epistemological obstacles we have highlighted. The first one is a cinematic point of view: with this dynamic concept image, Bkouche says that the variable pulls the function. The second one is an approximation point of view: the desired degree of approximation of the function pulls the variable. This point of view is more static and leads easily to the formal definition of continuity. These two points of view are also introduced by [START_REF] Rabardel | L'acquisition de la notion de convergence des suites dans l'enseignement supérieur[END_REF] when she studies the introduction of the formal definition of limit (for sequences). A third point of view, also identified by Bkouche, is the algebraic one, which is about algebraic rules without any idea of the meaning of these rules.
Finally, we refer to more recent papers, specifically that of Hanke and Schafer (2017) about continuity at the last CERME congress. Their review of central papers on students' concept images of continuity leads to a classification of the eight possible mental images reported in the literature:
I: Look of the graph of the function: "A graph of a continuous function must be connected"
II: Limits and approximation: "The left hand side and right hand side limit at each point must be equal"
III: Controlled wiggling: "If you wiggle a bit in x, the values will only wiggle a bit, too"
IV: Connection to differentiability: "Each continuous function is differentiable"
V: General properties of functions: "A continuous function is given by one term and not defined piecewise"
VI: Everyday language: "The function continues at each point and does not stop"
VII: Reference to a formal definition: "I have to check whether the definition of continuity applies at each point"
VIII: Miscellaneous
We can recognize some of the previous categories, even if some refinements are brought. Mainly, concept images I, II, IV and VI can be close to the primitive concept image whereas VII refers to the formal definition and V seems to refer to the algebraic approach of continuity.
CONCEPTUALISATION OF CONTINUITY
We base our research work on these possible concept images and concept definitions of continuity. However, we are more interested in conceptualisation, as the process which describes the development of students' mathematical knowledge. Conceptualisation in our sense was mainly introduced by [START_REF] Vergnaud | La théorie des champs conceptuels[END_REF] and has been extended within an activity-theoretical frame developed in French didactics of mathematics. These developments articulate two epistemological approaches: that of mathematics didactics and that of developmental cognitive psychology, as discussed and developed in [START_REF] Vandebrouck | Activity Theory in French Didactic Research[END_REF].
Broadly, conceptualisation means that the developmental process occurs within students' actions over a class of mathematical situations characteristic of the concept involved. This class of situations brings technical tasks (direct application of the concept involved) as well as tasks with adaptations of this concept. A list of such adaptations can be found in [START_REF] Horoks | Tasks Designed to Highlight Task-Activity Relationships[END_REF]: for instance, mixes between the concept and other knowledge, conversions between several registers of representation [START_REF] Duval | Sémiosis et pensée humaine: registres sémiotiques et apprentissages intellectuels[END_REF], the use of different points of view, etc. Tasks that require these adaptations of knowledge or concepts are called complex tasks. These encourage conceptualisation, because students become able to develop high-level activities allowing availability and flexibility around the relevant concept.
A level of conceptualisation refers to such a class of situations, in a more modest sense and with explicit reference to school curricula. In this paper, the level of conceptualisation refers to the end of scientific secondary school in Tunisia or the beginning of scientific university in France. It presupposes enough activities to permit the teacher to introduce the formal definition of continuity together with the sense of the continuity concept. The aim is not to obtain from students a high level of technicality with the definition itself (students are not supposed to establish or manipulate the negation of the definition, for instance). However, this level of conceptualisation does require students to access the FUG aspects of the definition of continuity.
Of course, we also build on the instrumental approach and on instrumentation as a sub-process of conceptualisation [START_REF] Rabardel | L'acquisition de la notion de convergence des suites dans l'enseignement supérieur[END_REF]. Students' cognitive construction of knowledge (specific schemes) arises during the complex process of instrumental genesis, in which they transform the artifact into an instrument that they integrate within their activities. [START_REF] Artigue | Learning mathematics in a cas environment: The genesis of a reflection about instrumentation and the dialectics between technical and conceptual work[END_REF] says that it is necessary to identify the new potentials offered by instrumented work, but she also highlights the importance of identifying the constraints induced by the instrument and the instrumental distance between instrumented activities and traditional activities (in the paper and pencil environment). Instrumentation theory also deals with the complexity of instrumental genesis.
We also refer to Duval's idea of visualisation as a contribution to the conceptualisation process (even if Duval and Vergnaud have not clearly discussed this point within their frames). However, the technological tool brings new dynamic representations, which differ from the classical static figures of the paper and pencil environment. These new representations enrich students' activities (mostly in terms of recognition), bringing specific visualization processes. Duval argues that visualization is linked to visual perception and can be produced in any register of representation. He introduces two types of visualization, namely the iconic and the non-iconic, saying that in mathematical activities, visualization does not work with iconic representations [START_REF] Duval | Representation, vision and visualisation: cognitive functions in mathematical thinking. Basic issues for learning[END_REF].
Finally, we draw on Vygotsky (1986), who stresses the importance of mediations within a student's zone of proximal development (ZPD) for learning (scientific concepts). Here, we also draw on the double approach of teaching practices as a part of French activity theory, coming from [START_REF] Robert | A cross-analysis of the mathematics teacher's activity. An example in a French 10th-grade class[END_REF]. The role of the teacher's mediations is specifically important in the conceptualisation process, especially because of the FUG aspects of the definition of continuity (as recalled above).
First of all, we refine the notion of mediation by adding a distinction between procedural and constructive mediations in the context of the dual regulation of activity. Procedural mediations are object-oriented (oriented towards the resolution of the tasks), while constructive mediations are more subject-oriented. We also distinguish between individual mediations (to pairs of students) and collective mediations (to the whole class).
Secondly, we use the notion of proximities [START_REF] Bridoux | Les moments d'exposition des connaissances : analyses et exemples[END_REF], which are discourse elements that can foster students' understanding and hence conceptualisation, according to their ZPD and their activities in progress. In this sense, our approach is close to that of Bartolini Bussi [START_REF] Bartolini Bussi | Semiotic Mediation in the Mathematics Classroom: Artifacts and Signs after a Vygotskian Perspective[END_REF] with their Theory of Semiotic Mediation. However, we do not refer explicitly to this theory at this stage, as it supposes a focus on signs and a more complex methodology than ours. According to us, proximities characterize the attempts at alignment that the teacher makes between students' activities (what has been done in class) and the concept at stake. We therefore study the way the teacher organizes the movements between general knowledge and its contextualized uses: we call ascending proximities those comments which make explicit the transition from a particular case to a general theorem/property; descending proximities go the other way round; horizontal proximities consist in repeating the same idea in another way or in illustrating it.
DESIGN OF THE TECHNOLOGICAL TOOL
The technological tool, called "TIC-Analyse", is designed to grasp most of the aspects highlighted above. First of all, it is designed to foster students' activities about continuity in the first two points of view identified by Bkouche: several functions are manipulated (continuous or not) and, for each of them, two windows are in correspondence. In one window, the cinematic-dynamic point of view is highlighted (figure 1), whereas in the second window the approximation-static point of view is highlighted (figure 2). The correspondence between the two points of view is in coherence with Tall's idea of incorporating the formal definition into students' pre-existing concept images. It is also in coherence with the importance for students of dealing with several points of view for the conceptualisation of continuity (adaptations). Second, the functions at stake in the software are extracted from the categories of [START_REF] Tall | Concept image and concept definition in mathematics, with special reference to limits and continuity[END_REF]. For instance, we have chosen a continuous function which is defined by two different algebraic expressions, to avoid the algebraic concept image of continuity and the amalgam between continuity and derivability. We also have two kinds of discontinuity, smooth and with an angle.
The emphasis is not only on algebraic representations of functions, in order to avoid algebraic conceptions of functions: three registers of representation of functions (numerical, graphical and algebraic) are coordinated to promote students' activities about conversions between registers (adaptations). The design of the software is coherent with the instrumental approach, mostly in the sense that the instrumental distance between the technological environment, the given tasks and the traditional paper and pencil environment is reduced. However, the software produces new dynamic representations (a moving point on the curve associated with a numerical table of values in the dynamic window; two static intervals, one being included or not in the other, in the static window), giving rise to non-iconic visualisations which intervene in the conceptualisation process. The software promotes students' actions and activities on the given tasks: in the dynamic window, they are supposed to command the dynamic point on the given curve (corresponding to the given algebraic expression). They can observe the numerical values of the coordinates corresponding to several discrete positions of the point, and they must write a commentary in free words about the continuity aspects of the function at the given point (figures 1, 3). In the static window, they must fill the given array with values of α, the β being given by the software (figures 2, 4). Then they have to write a commentary which begins differently according to the situation (continuity or not) and the α they have found (figures 4, 5).
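To make the static-window task concrete, the following minimal sketch (ours, not the actual TIC-Analyse code; the function name and the numerical sampling are assumptions) shows the inclusion test that the window materialises: given β, the students look for an α such that the image of ]x0-α, x0+α[ falls inside ]f(x0)-β, f(x0)+β[. A sampled test can only support or refute the inclusion at the tested points, which is precisely why the software targets conjectures rather than proofs.

def image_included(f, x0, alpha, beta, samples=10_000):
    """Numerically test whether f(]x0-alpha, x0+alpha[) lies in ]f(x0)-beta, f(x0)+beta[."""
    y0 = f(x0)
    for k in range(1, samples):
        x = (x0 - alpha) + 2 * alpha * k / samples  # interior points of ]x0-alpha, x0+alpha[
        if not (y0 - beta < f(x) < y0 + beta):
            return False  # a sampled point escapes the target band: this alpha fails
    return True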
As we have mentioned in our theoretical frame, students are not supposed to reach the formal definition by themselves through these tasks and activities. However, they are supposed to have developed enough knowledge in their ZPD so that the teacher can introduce the definition together with the sense and the FUG aspects of continuity.
STUDENTS ACTIVITIES AND TEACHER'S PROXIMITIES
The students work in pairs on the tool. The session lasts one hour; four secondary schools with four teachers are involved. Students have some concept images of continuity, but nothing has been taught about the formal definition. The teacher is supposed to mediate students' activities on the given tasks: students are not supposed to work in total autonomy during the session, in line with our socio-constructivist approach. We have collected video screen shots, videos of the session (for each school) and recordings of students' exchanges in some pairs. Students' activities on each task are identified according to the tasks' complexity (mostly the kinds of adaptations), the students' actions and interactions with computers and paper (written notes), the mediations they receive (procedural or constructive, individual or collective, from the tool, the pair or the teacher) and the discourse elements seen as "potential" proximities proposed by the teacher.
It appears that the teacher mostly gives collective procedural mediations to introduce the given tasks, to ensure an average progression of the students and to take care of the instrumental process. Some individual mediations are only technical ("you can click on this button"). Other collective mediations are more constructive, such as "now, we are going to see a formal approach. We are going to see again the four activities (i.e. tasks) but with a new approach which we are going to call the formal approach...". These constructive mediations are not task-oriented: they aim at helping students to organize their new knowledge, and they contribute to the aimed conceptualisation according to our theoretical approach.
As examples of students' written notes (as traces of activities), we can draw on figures 3 and 4. A pair of students explains the dynamic non-continuity in their own words: "when x takes values more and more close to 2 then f(x) takes values close to -2.5 and -2. It depends whether it's lower or higher" (figure 3), which is in coherence with the primitive concept image of continuity. The same pair explains the non-continuity in relation to what they can observe on the screen: "there exists β positive, for all α positive" (this beginning being already proposed by the tool in the case of non-continuity) "such that f(i) not completely in j… f is not continuous". We can note that the students use "completely" to verbalize that f(I) is not entirely included in J. However, the inclusion of one interval in another is not expected as formalized knowledge at this level of conceptualisation, so their commentary is acceptable. The students are expressing what they have experimented several times: for several values of β (β = 0.3 in figure 4), even with α very small (α = 0.01 in figure 4), the image of the interval ]2-α, 2+α[ is not included in ]-2.5-β, -2.5+β[. Concerning a case of continuity, the students are also able to write an acceptable commentary (figure 5): "for all β positive, there exists α positive" (this beginning being already proposed by the tool in the case of continuity) "such that f(i) is included in j."
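The students' non-continuity case can be replayed on the image_included sketch above with a hypothetical piecewise function: the exact formula used by the tool is not given in the paper, only the jump from values near -2.5 to values near -2 at x0 = 2, so the function below is an assumption consistent with those readings.

def f(x):
    # hypothetical jump at x = 2, consistent with the values the students read
    return -2.5 if x <= 2 else -2.0

print(image_included(f, x0=2, alpha=0.01, beta=0.3))  # False: beta = 0.3 is smaller than the jump
print(image_included(f, x0=2, alpha=0.01, beta=0.6))  # True: the target band is now wide enough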
Students' activities on the given tasks are supposed to help the teacher to develop proximities with the formal definition. It is indeed observed that some students are able to interact spontaneously with the teacher when he wants to write the formal definition on the blackboard. This is interpreted as a sign that the teacher's discourse encounters these students' ZPD. The observed proximities seem to be horizontal ones: the teacher reformulates the students' propositions several times, in a way which leads gradually to the expected formal definition, for instance "so, we are going to reformulate, for all β positive, there exists α positive, such that if x belongs to a neighbourhood of x0 … we can note it x0-α, x0+α…."
Of course, this is insufficient to prove the effectiveness of our experimentation. The conceptualisation of continuity is a long, ongoing process which is only initiated by our teaching process. However, we want to highlight here the important role of the teacher and, more generally, the importance of mediations in the conceptualisation process of such complex concepts. We have only presented the beginning of our experimentation. It is completed by new tasks on the tool, designed to come back to similar activities and to continue the conceptualisation process.
Figure 1: two windows for a function, the dynamic point of view about continuity
Figure 2: two windows for a function, the static points of view about continuity
Figure 3: example of commentary given by a pair of students in the dynamic window
Figure 4: example of commentary given by a pair of students in the static window
Figure 5: example of commentary given by a pair of students in the static window |
01766869 | en | [
"math"
] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01766869/file/IL_Vandebrouck%20new.pdf | Fabrice Vandebrouck
ACTIVITY THEORY IN FRENCH DIDACTIC RESEARCH
Keywords: Mathematics, Tasks, Activity, Mediations, Technologies
The theoretical and methodological tools provided by the first generation of Activity Theory have been expanded in recent decades by the French community of cognitive ergonomists, followed by a sub-community of researchers working in didactics of mathematics. The main features are, first, the distinction between tasks and activity and, second, the dialectic between the subject of the activity and the situation within which this activity takes place. The core of the theory is the twofold regulatory loop, which reflects both the codetermination of the activity by the subject and by the situation, and the developmental dimension of the subject's activity. This individual and cognitive understanding of Activity Theory mixes aspects of Piaget's and Vygotsky's frameworks. It is first explored in this paper, together with a methodology for analysing students' mathematical activities. We then present findings that help to understand the complexity of students' mathematical activities when working with technology.
Introduction
Activity Theory is a cross-disciplinary theory that has been adopted to study various human activities, including teaching and learning in ordinary classrooms, where individual and social levels are interlinked. These activities are seen as developmental processes mediated by various contextual elements; here we consider the teacher, the pair and the artefact (Vandebrouck et al., 2012: 13). Activity is always motivated by an object; a characteristic that distinguishes one activity from another. Transforming the object into an outcome is another key feature of activity. Subject and object form a dialectic unit: the subject transforms the object and at the same time is him/herself transformed. This framework can be adapted to describe the actions and interactions that emerge in the teaching/learning environment, and that relate to the subjects, the objects, the artefacts and the outcomes of the activity [START_REF] Wertsch | The concept of activity in soviet psychology: an introduction[END_REF].
Activity Theory was originally developed by, among others, [START_REF] Leontiev | Activity, consciousness and personality[END_REF]. A well-known extension is the systemic model proposed by Engeström et al. (1999), called the third generation of Activity Theory. It expresses the complex relationships between the elements that mediate activity in an activity system. In this paper, we take a more cognitive and individual perspective. This school of thought has been expanded over the course of the past four decades by French researchers working in the domain of occupational psychology and cognitive ergonomics, and has since been adapted to the didactics of mathematics. The focus is on the individual as a cognitive subject and an actor in the activity, rather than on the overall system, even if individual activity is seen as embedded in a collective system and cannot be analysed outside the context in which it occurs.
An example of this adaptation is already well-established internationally. Specifically, it refers to the distinction between the artefact and the instrument, which is used to understand the complex integration of technologies into the classroom. The notion of instrumental genesis (or instrumental approach) was first introduced by [START_REF] Rabardel | Les hommes et les technologies, approche cognitive des instruments contemporains[END_REF] in the context of cognitive ergonomics, then extended to didactics of mathematics by [START_REF] Artigue | Learning mathematics in a cas environment: The genesis of a reflection about instrumentation and the dialectics between technical and conceptual work[END_REF] and it is concerned with the subject-artefact dialectic of turning an artefact into an instrument. In this paper, we draw upon and try to encompass this instrumental approach.
First, we describe how Activity Theory has been developed in the French context. These developments are both general and focused on students' mathematical activity. Next, we present a general methodology for analysing students' mathematical activity when working with technology. Then we develop an example of application, and we describe our findings. Finally, we present some conclusions.
Activity theory in French context
The first notable feature of Activity Theory in the French context is the distinction between tasks and activity [START_REF] Rogalski | Theory of Activity and Developmental Frameworks for an Analysis of Teachers' Practices and Students' Learning[END_REF]. Activity relates to the subject, while tasks relate to the object. Activity refers to what the subject engages in to complete the task: external actions but also inferences, hypotheses, thoughts and actions he/she decides to take or not. It also concerns elements that are specific to the subject, such as time management, workload, fatigue, stress, enjoyment, and interactions with others. As for the task (as described by [START_REF] Leontiev | Activity, consciousness and personality[END_REF] and extended in cognitive ergonomics), this refers to a goal to be attained under certain conditions [START_REF] Leplat | Regards sur l'activité en situation de travail[END_REF].
Activity Theory draws upon two key concepts: the subject and the situation. The subject refers to an individual person, who has intentions and competencies (potential resources and constraints). The situation provides the task and the context for the task. Together, situation (notably task demands) and subject codetermine activity. The dynamic of the activity produces feedback in the form of a twofold regulatory loop (Figure 1) that reflects the developmental dimension of Activity Theory [START_REF] Leplat | Regards sur l'activité en situation de travail[END_REF]. The concept of twofold regulation reflects the fact that the activity modifies both the situation and the subject. On the one hand (upper loop), the situation is modified, giving rise to new conditions for the activity (e.g. a new task). On the other hand (lower loop), the subject's own knowledge is modified (e.g. by the difference between expectations, acceptable outcomes and the results of actions).
More recently, the dialectic between the upper and lower regulatory loops (shown in Figure 1) has been expanded through a distinction between the productive and constructive dimensions of activity [START_REF] Pastré | Apprendre des situations[END_REF][START_REF] Samurcay | Modèles pour l'analyse de l'activité et des compétences: propositions[END_REF]. Productive activity is object-oriented (motivated by task completion), while constructive activity is subject-oriented (the subject aims to develop his or her knowledge). In teaching/learning situations, especially those that involve technologies, the constructive dimension of the students' activity is key. The teacher wants the students to develop constructive activity. However, especially with computers, students are mostly engaged in producing results, and the motivation of their activity can be oriented only towards the productive dimension. The effects of their activity on students' knowledge, as stipulated by the dual regulatory loop, are then mostly indirect, with few or no constructive aspects.
The last important point to note is that French Activity Theory mixes the Piagetian approach of epistemological genetics with Vygotsky's socio-historical framework to specify the developmental dimension of activity. As Jaworski (in Vandebrouck, 2013) writes, "the focus on the individual subject - as a person-subject rather than a didactic subject - is perhaps somewhat more surprising, especially since it leads the authors to consider a Piagetian approach of epistemological genetics alongside Vygotsky's sociohistorical framework". Rogalski (op. cit.) responds with "the Piagetian theory looks from the student's side at epistemological analyses of mathematical objects in play while the Vygotskian theory takes into account the didactic intervention of the teacher, mediating between knowledge and student in support of the students' activity".
The dual regulation of activity is consistent with the constructivist theories of Piaget and Vygotsky.
The first author [START_REF] Piaget | The equilibration of cognitive structures: The central problem of intellectual development[END_REF] provides tools to identify the links between activities and development, through epistemological analyses. [START_REF] Vergnaud | Cognitive and developmental psychology and research in mathematics education: some theoretical and methodological issues[END_REF][START_REF] Vergnaud | La théorie des champs conceptuels[END_REF] expands the Piagetian theoretical framework regarding conceptualisation and conceptual fields by highlighting classes of situations relative to a knowledge domain. We therefore define students' learning (and development) with reference to Vergnaud's conceptualisation.
On the other hand, Vygotsky (1986) stresses the importance of mediation within the student's zone of proximal development (ZPD) for learning (scientific concepts). Here, we refine the notion of mediation by adding a distinction between procedural and constructive mediations in the context of the dual regulation of activity. Procedural mediations are object-oriented (oriented towards the resolution of the task), while constructive mediations are more subject-oriented. This distinction can be seen as an extension of what Robert ([START_REF] Robert | Why and how to understand what is at stake in a mathematics class?[END_REF]) calls procedural and constructive teacher's aids. A more detailed exploration of the complementarity between Piaget and Vygotski can be found in [START_REF] Cole | Beyond the individual-social antinomy in discussions of Piaget and Vygotski[END_REF].
General methodology for analysing students' mathematical activities
Following Activity Theory, we postulate that students' learning depends directly on their activity, even though other elements can play a part, and even if activity is partially inaccessible to us and differs from one student to another. Students' activity is developed through the actions that are carried out to complete tasks. Through their actions, subjects aim to achieve goals, and their actions are driven by the motivation for the activity. Here, we draw upon the three levels originally introduced by [START_REF] Leontiev | Activity, consciousness and personality[END_REF]: activity associated with a motive; actions associated with goals; and operations associated with conditions. Activity takes place in a specific situation, such as the classroom, at home, or during a practical session. Actions, occasioned by the precise tasks proposed, can be external (i.e. spoken, written or performed) or internal (e.g. hypotheses or decisions), and are partially converted into operations. As [START_REF] Galperine | Essai sur la formation par étapes des actions et des concepts[END_REF] and [START_REF] Wells | Reevaluating the IRF sequence: A proposal for the articulation of theories of activity and discourse for the analysis of teaching and learning in the classroom[END_REF] note, the three levels are relative and, for instance, operations can be considered as actions that have been routinized.
Here, we use the generic term mathematical activities (rather than activity) to refer to students' activity on a specific mathematical task in a given context. Mathematical activities refer to everything that surrounds actions and operations (including non-actions, for instance). They are a function of a number of factors (including task complexity, but extending to the characteristics of the context and all mediations that occur as tasks are performed) that contribute to regulation and to the intended development in terms of mathematical knowledge.
Two methodological levels can be derived from the dynamic of activity within the twofold regulatory loop. First, regulations can be considered at a local level, as short-term adjustments of activities to previous actions and as procedural learning (also called functional regulations; upper loop in Figure 1). Second, at a global level, regulations are mostly constructive ones (also called structural regulations) and correspond to the long-term development of the subject (in link with conceptualisation).
2.a The local level
At the local level, the analysis focuses on students' activities in the situation, in the form of tasks, their context, and their completion by students with or without direct help from the teacher. The initial step is an a priori analysis of the tasks given to students (by the teacher, the computer…), which is closely linked to the situational context (e.g. the students' academic level and age). We use the categorisation of [START_REF] Robert | Outil d'analyse des contenus mathématiques à enseigner au lycée et à l'université[END_REF] to characterise these tasks.
First, we identify the mathematical knowledge to be used for a given task: the representation(s) of a concept, theorem(s), definition(s), method(s), formula(s), types of proof, etc. The analysis aims to answer several crucial questions: does the mathematical knowledge to be used already exist for students, or is it new? Do students have to find the knowledge to be used by themselves? Does the task only require the direct application of this knowledge without any adjustment (technical task), or does it require adaptations and/or carrying out subtasks? A list of such adaptations can be found in Horoks and [START_REF] Robert | Tasks Designed to Highlight Task-Activity Relationships[END_REF]: mix of knowledge, the use of intermediaries, change of register [START_REF] Duval | Sémiosis et pensée humaine: registres sémiotiques et apprentissages intellectuels[END_REF], change of mathematical domain or setting [START_REF] Douady | Jeux de cadre et dialectique outil-objet[END_REF], introduction of steps, choices, use of different points of view, etc. Tasks that require the adaptation of knowledge are referred to as complex tasks and encourage conceptualisation, as students become able to more readily and flexibly access the relevant knowledge, depending however on the implementation in the classroom.
The a priori analysis of tasks leads us to describe what we have called the intended students' activities associated with the tasks. Here we draw upon the functions of operations of [START_REF] Galperine | Essai sur la formation par étapes des actions et des concepts[END_REF], and adapt them to mathematical activities. Galperine distinguishes three functions: orientation, execution and control. Next, we use three "critical" mathematical activities that are characteristic of complex tasks [START_REF] Vandebrouck | Proximités en acte mises en jeu en classe par les enseignants du secondaire et ZPD des élèves : analyses de séances sur des tâches complexes[END_REF].
First, recognizing activities refer mainly to orientation and control. They occur when students have to recognize mathematical concepts as objects or tools that can be used to solve the tasks they are given.
Students may also be asked to recognize modalities of application or adaptation of these tools.
Second, organizing activities refer mainly to orientation: students have to identify the logical and temporal steps in their mathematical reasoning, together with any intermediaries.
Third, treatment activities refer to all of the mathematical activities associated with execution on mathematical objects. Students may be asked to draw a figure, compute, substitute, transform expressions (with or without giving the steps), change registers, change mathematical domains, etc.
Following Vygotsky, we supplement our local analysis of intended students' activities by developing ways to analyse classroom teaching (a posteriori), and to approach effective students' activities as functions of the different mediations that occur. For this, we use videos and observations in the classroom. We also record students' discussions, teacher's discourses and writings, and capture students' computer screens to identify observable activities. The data that is collected concerns how long students spend working on tasks, the format of their work (the whole class, in small groups, by pairs of students etc.), its nature (copying, reading, calculation, investigation, written or oral, graded or not, etc.) and all elements of the context that may modify intended activities. This highlights, at least partially, the autonomy given to students, the nature of mediations, and opportunities for students to show initiative, in relation to the adaptation and availability of knowledge.
Multiple aspects of mediations are analysed with respect to their assumed influence on student activities. Some relate to their format (interactions with students, between students, with teacher, with computers, etc.), while others concern the specific ways of taking into account the mathematical content (mathematical aids, assessment, reminders, explanations, corrections and evaluations, presentation of knowledge, direct mathematical content, etc.).
Two types of mediations have already been introduced, depending on whether they modify intended activities or whether they add to activities (effective, or at least observed). The first are object-oriented; here we use the term procedural mediations. These mediations modify intended activities and correspond to instructions given by the teacher, the screen or other students, directly or indirectly, before or during task completion. They are often seen in open-ended questions from the teacher, such as 'What theorem can you use?'. They can also come from the computer, through feedback which transforms the task to be performed, or through limitations of the provided tools which indirectly indicate to students how to achieve the task. These procedural mediations may lead to the subdivision of a complex task into subtasks. They usually change the knowledge adaptations required by complex tasks and simplify the intended activities in such a way that the tasks become more like technical ones (for instance, students having to apply a contextualized method).
The second type of mediations is more subject-oriented; here we use the term constructive mediations. They are designed to add something to the students' activities and to the knowledge that can emerge from these activities. They can take the form of a simple summary of what has been developed by students, an explanation of choices, a partial decontextualization or generalisation, assessments and feedback, a discussion of results, etc. On some computers, the way a geometrical figure has been constructed by a student can be replayed, reminding the student of the order in which instructions were given, without the erroneous attempts.
It should be noted here that our framework leads to the hypothesis that there is an internal transformation of the subject in the learning process: constructive mediations aim to contribute to this process. However, mediations can be constructive for some students and remain procedural for others. Conversely, some procedural mediations can become constructive for some students, for instance if they extract a generalisation from a local indication by themselves. Moreover, some constructive mediations (but also perhaps productive ones) can belong to some students' ZPD in the sense of Vygotski, or they can remain outside the ZPD. When they belong to the ZPD, they can be identified so as to appreciate the explicit links between the expression of the general concepts to be learned and their precise applications in contextualised tasks, according to the necessary dynamic between them. Distinguishing between the kinds of mediations, and whether or not they belong to some students' ZPD, can be very difficult.
2.b The global level
The local level can be extended to a global level that takes into account the set of mathematical activities, the link with the intended conceptualisation (long-term constructive loops), and teaching practices in the long term. We link students' mathematical activities to the intended conceptualisation of the relevant mathematical notion, establishing a "relief map" of this mathematical notion. This relief map is developed from an epistemological and mathematical analysis of the notion, the study of the curricula, and didactical analyses (e.g. students' common difficulties). This global analysis focuses on the similarity between students' activities (intended, observed, effective) and the set of activities that characterise the intended conceptualisation of the relevant notion.
However, the didactical analysis of one teaching session is insufficient. It is necessary to take into account, on a day-to-day basis, all of the tasks students are asked to complete, and teachers' interventions. We use the term scenario to describe a sequence of lessons and exercises on a given topic. The global scenario could be understood as a long term "cognitive road" [START_REF] Robert | A cross-analysis of the mathematics teacher's activity. An example in a French 10th-grade class[END_REF].
Example of application: the 'shop sign' situation
To illustrate the utilisation of our Activity Theory, this section presents an example of a situation that aims to contribute to students' conceptualisations of the notion of function. Then we outline some limitations of the methodology at the global level.
The example relates in fact to the GeoGebra 'shop sign' family for learning functions. This family refers to mathematical situations that lie at the interface between two mathematical domains: geometry and functions.
There are many examples of shop sign situations, but they share the idea that a coloured area is the lit area of a shop sign [START_REF] Artigue | The challenge of developing a European course for supporting teachers' use ICT[END_REF], which depends on some moving variables in the sign. The task is set for grade 10 students (15 years old). One solution is to identify DE as an independent variable x. Then f(x), the sum of the two areas, is equal to x² (for the square) plus 4(4-x)/2 (for the triangle), equivalent to x² - 2x + 8. In the French curriculum at grade 10, the derivative is not yet known, and students must compute and understand the canonical form (x-1)² + 7 as a way to identify the minimum 7, reached for the distance DE = 1 (which is the position shown in the figure).
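Spelling out the expected computation (our reconstruction of the intended paper and pencil work, with DE = x):

$$f(x) = x^2 + \frac{4(4-x)}{2} = x^2 - 2x + 8 = (x-1)^2 + 7, \qquad x \in [0,4],$$

so f(x) ≥ 7 for every position of E, with equality exactly when x = 1: the minimal total area is 7, reached for DE = 1.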
Students are working in pairs on computers. They have already worked with functions in the traditional pencil and paper context, and they also have manipulated GeoGebra for geometrical tasks that do not refer to functions. In this new situation, GeoGebra helps them to begin the task by making conjectures about the minimum. Students can also trace the graph of the function, as shown in Figure 6. Then, in the algebraic register, they can find the canonical form of the function f(x) and the characteristics of the minimum.
We first identify the relief map on the notion of function and the intended conceptualisation. Then we give the a priori analysis of the task and the intended students' activities. We finish with the observation to two pairs of students to identify observable and effective activities.
3.a The global level: relief map on the notion of function and intended conceptualisation
The function is a central concept in mathematics, linking mathematics to other scientific fields and real-life situations. It both formalises and unifies [START_REF] Robert | Why and how to understand what is at stake in a mathematics class?[END_REF] a diversity of objects and situations that students encounter in secondary school: proportionality, geometrical transformations, linear and polynomial growth, etc. A diversity of systems of representations (numerical, graphical, algebraic, formal, etc.) and a diversity of perspectives (pointwise, local and global) are frequently combined when working with functions [START_REF] Duval | Sémiosis et pensée humaine: registres sémiotiques et apprentissages intellectuels[END_REF][START_REF] Maschietto | Graphic Calculators and Micro Straightness: Analysis of a Didactic Engineering[END_REF][START_REF] Vandebrouck | Points de vue et domaines de travail en analyse[END_REF]. As summarized by [START_REF] Artigue | Mathematics thinking and learning at post-secondary level[END_REF], the processes of teaching and learning functions entail various intertwining difficulties that reinforce one another in complex ways.
Educational research [START_REF] Bergen | A theory of mathematical growth through embodiment, symbolism and proof[END_REF][START_REF] Gueudet | Investigating the secondary-tertiary transition[END_REF](Hitt and Gonzalez-Martin, 2016) shows that an efficient conceptualisation of the notion requires a rich experience that illustrates the diversity described above, and the diversity of settings in which functions are used [START_REF] Douady | Jeux de cadre et dialectique outil-objet[END_REF]. It also means that functions are available as tools for solving tasks, and can be flexibly linked with other concepts. There must be a progression from embodied conceptualisations (where functions are highly dependent on physical experience) to proceptual conceptualisations (where they are considered dialectically and work both as processes and objects), paving the way for more formal conceptualisations [START_REF] Tall | Thinking through three worlds of mathematics[END_REF][START_REF] Bergen | A theory of mathematical growth through embodiment, symbolism and proof[END_REF].
At grade 10, the intended conceptualisation can be characterized by a set of tasks in which functions are used as tools and objects. They can be combined and used to link different settings (including geometrical and functional), numerical, algebraic and graphical representations, and the dialectic between pointwise and global perspectives. The shop sign task is useful in this respect, as students have to engage in such mathematical activities. A priori, optimisation tasks in geometrical modelling help to build the intended functional experience, and link geometrical and functional settings.
Technology provides a new support for physical experience, as the modelling process provides new systems of representation and helps to identify the dynamic connections between them. It also offers a new way to approach and connect pointwise and global perspectives on functional objects, and supports the building of rich functional experiences. A famous contribution is that of [START_REF] Arzarello | Approaching functions through motion experiments[END_REF], who use sensors to introduce students to the functional domain. The framework is already an activity-theoretical one, together with more semiotic approaches, but it is not in a context of dynamic geometry. Many experiments exist about learning functions through dynamic geometry situations. For instance, [START_REF] Falcade | Approaching functions: Cabri tools as instruments of semiotic mediation[END_REF] study the potential of a didactical engineering with Cabri-Géomètre. The authors take a Vygotskian perspective on semiotic mediations which is more precise than our adaptation of Vygotsky inside Activity Theory, but which is also more restrictive, in the sense that they don't consider deep connections between given tasks and mathematical activities. Moreover, it doesn't concern ordinary classrooms. More recently, [START_REF] Minh | Connected functional working spaces: a framework for the teaching and learning of functions at upper secondary level[END_REF] analyse students' activities on functions using Casyopée. This software is built directly for the learning of functions, and the authors adopt the model of Mathematical Working Spaces [START_REF] Kuzniak | Mathematical Working Spaces in Schooling[END_REF]. They build on three important challenges for students in the learning of functions: to consider functional dependencies, to understand the idea of independent variable, and finally to make sense of functional symbolism. The aim of the 'shop sign' family is consistent with such a progression, which is close to Tall's one introduced above.
3.b The local level: a priori analysis of the task and intended students'activities
The task is to identify the position of E on [DC] such that the sum of the areas DFGE and AGB is minimal (Figure 2). It requires prior knowledge about geometrical figures and functions. Moreover, it assumes that the notion of function is available, i.e. students have to identify the need for a function by themselves.
In a traditional pencil and paper environment, students first draw a generic figure. They can try to estimate, by geometrical measurements, some values of the areas for different positions of E. They can draw up a table of values, but this kind of procedure is usually not enough to obtain a good conjecture of the minimum value. Moreover, such a procedure can reinforce the pointwise perspective, because it doesn't bring out the continuous aspects of the function at stake. Usually, the teacher quickly asks students to produce algebraic expressions of the areas. Students try by themselves to introduce an algebraic variable (DE = x), or the teacher gives them procedural aids.
In the example given here, the teacher provided students with a sheet of paper showing a figure similar to the one given in Figure 2, and the instructions as summarized in Figure 3. Figure 3 shows that the overall task is divided into three subtasks. Organizing activities are directed by procedural mediations (functional regulation), which is a way to ensure that most students can engage in productive activity.
A priori analysis of the first subtask: the construction of the figure
In the geometrical subtask, students have to identify the fixed points (A, B, C, D), the free point (E) on [DC], and the dependent points (F and G). The order of construction is crucial to the robustness of the final figure, but it is not important in the paper and pencil environment. Consequently, organizing activities (the order of instructions) are more important in the GeoGebra environment.
The subtask also requires students to make choices. It is possible to draw either G or F first, and the sequence of instructions is not the same. Moreover, there are other choices that have no equivalent in the paper and pencil environment: whether to define the polygons (the square and triangle) with the polygon instruction, or by the length of their sides; whether to use analytic coordinates of fixed points or a geometrical construction; whether to use a cursor to define E; etc. These choices refer not just to mathematical knowledge, but also to instrumental knowledge (following the instrumental genesis approach). This means that treatment activities include instrumental knowledge and are more complex than in the traditional environment. Once the construction is in place, students can verify its robustness; a treatment that is also specific to the dynamic environment.
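The dependency structure at stake can be modelled as follows (a sketch of ours, not GeoGebra code; the coordinates D = (0, 4), C = (4, 4), A = (0, 0), B = (4, 0) are assumptions consistent with the values reported later, namely F = (0, 3) and E = (1, 4) when DE = 1). A robust figure is one in which moving the free point E recomputes the dependent points F and G:

# fixed points, to be constructed first
D, C, A, B = (0, 4), (4, 4), (0, 0), (4, 0)

def dependent_points(x):
    """E free on [DC] at distance x from D; F and G complete the square DFGE."""
    E = (x, 4)      # free point, driven by the cursor, 0 <= x <= 4
    F = (0, 4 - x)  # dependent on E
    G = (x, 4 - x)  # dependent on E; also the apex of the triangle ABG
    return E, F, G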
A priori analysis of the second subtask: the conjecture
There is no task really equivalent to this subtask in the paper and pencil environment. This again leads to specific treatment activities. These are engaged with the feedback provided by the software, which displays the numerical values of the areas DFGE and AGB according to the position of E. However, students are required to redefine DFGE and AGB as polygons if they have not already used this instruction to complete subtask 1 (Figure 5). They also have to create, in the GeoGebra environment, a new numerical value that is the sum of the two areas, in order to refine their conjecture. It is not clear to what extent these specific treatment activities refer to mathematical knowledge, and we will return to this point later.
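A sketch of what this numerical exploration yields (the poly1/poly2/poly3 names mimic GeoGebra's numerical window; the sampled positions, and the area formulas taken from the geometry above, are ours):

for x in (0, 0.5, 1, 1.5, 2, 3, 4):
    poly1 = x * x             # area of the square DFGE
    poly2 = 4 * (4 - x) / 2   # area of the triangle ABG (base 4, height 4 - x)
    poly3 = poly1 + poly2
    print(f"DE = {x}: poly1 = {poly1}, poly2 = {poly2}, poly3 = {poly3}")
# poly3 equals 8 at x = 0 and x = 2 but drops to its minimum 7 at x = 1,
# which may explain the "it's always 8" conjecture reported below.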
A priori analysis of the third subtask: the algebraic proof
This subtask appears similar to its equivalent in the paper and pencil environment. However, as students already know the value of the minimum, the motivation for the activity is different and relates only to the proof itself. The most important step is the introduction of x, as a way to pass from the geometrical setting to the functional setting. This step, triggered by a procedural mediation (the instructions given on the sheet), brings recognizing activities (students must recognize that the functional setting is needed).
Students have to determine the algebraic expression of the function. Existing knowledge about the areas of polygons must be available. They also have to recognize a second-order polynomial function, associated with specific treatments. Treatment activities remain in order to obtain the canonical form (as students have not been taught about derivatives, they must be helped in this by the teacher). Finally, recognizing the canonical form as a way to obtain the minimum of the area, and the position of E corresponding to this minimum, relates to the importance of the dialectic between pointwise and global perspectives on functions.
3.c A posteriori analysis: observable and effective activities
Students worked in pairs. The teacher only intervened at the beginning of the session (to ensure that all students were working), and at the end (to summarise the session). Students mostly worked autonomously, although the teacher helped individual pairs of students. The following observations are based on two pairs of students: Aurélien and Arnaud, and Lolita and Farah.
Analysis of the first pair of students'activities: Aurélien and Arnaud
This pair took a long time to construct their figure (more than 20 minutes). They began with A, B, C, D, in sequence, using coordinates, and then drew lines between pairs of points. This approach is the closest to the paper and pencil situation; while it is time-consuming, it is not crucial for the global reasoning. They then introduced a cursor (a numerical variable j taking values between 0 and 4) in order to position E on [D, C]. However, the positioning of F (at (0, 3)) was achieved without the cursor, which led to a wrong square (Figure 4). G was drawn correctly. After they had completed their construction, they moved the cursor in order to verify that their figure was robust; an operation which revealed that it was not (Figure 4). This mediation from the screen is supposed to be a constructive one: it does not change the nature of the task, and it is supposed to permit a constructive regulation of the students' activities (lower loop, in reference to Figure 1). However, the mediation doesn't reach the students' ZPD and is insufficient for them to regulate their activity on their own. The mediation in fact supposes new recognizing activities, specific to dynamic geometry on computers, which these students are not able to develop.
In this case, the teacher makes a procedural mediation and helps the students to rebuild their figure ("You use the polygon instruction to make DFGE […] then again to make the polygon ABG"). Once the two polygons have been correctly drawn, the values of their areas appear in the numerical window of GeoGebra (called poly1 and poly2, shown on the left-hand side of the screens presented in Figure 5). In the conjecture phase (second subtask, 8 minutes), the students made the conjecture that the sum is always 8 ("Look, it's always 8…"), by computing poly1+poly2 in their heads. The numerical window of GeoGebra now shows 18 different pieces of information, including the areas of DFGE (poly1) and ABG (poly2). Students must introduce another numerical variable (e.g. poly3) that is equal to the sum poly1+poly2. However, this requires new organizing activities that GeoGebra does not help with. In fact, there is already too much information in the numerical window. Here again, the teacher provides direct procedural assistance ("introduce poly3=poly1+poly2").
In the algebraic phase (third subtask, 20 minutes), the students are unable to express the areas DFGE and ABG as functions of x. Analyses reveal that, again, new recognizing activities are required to switch from the computer environment to the paper and pencil environment. These new recognizing activities are not self-evident for the students. They presuppose both mathematical knowledge and instrumental knowledge about the potentialities of the software and about the mathematical way of proving the existence and the value of the minimum. Then, the students' attempt to implement DE=x in the input bar leads to feedback from GeoGebra (in the form of a syntax error), which informs them that their procedure is wrong but does not provide any guidance about what to do instead.
It is difficult to know whether to categorise this kind of mediation as procedural or constructive as it does not add any mathematical knowledge.
The teacher asks the students to try to find a solution with pencil and paper (procedural assistance). However, the introduction of x, which is linked to the change of mathematical setting (adaptation of knowledge), seems very artificial. The students start working on their algebraic formula by looking at their static figure, with E positioned at (1, 4). The base of the triangle measures 4 and its height is 3. One of the pair suggests that "it depends on x" means that each algebraic expression ends in x, as the following dialogue between the two students shows:
"This is 4x At this point, the teacher provides another direct procedural assistance. This once again shows that although the mediation of GeoGebra help students to discuss and progress, it is insufficient for them to correctly regulate their activity. Without procedural assistance from the teacher, they are unable to find the formula for the area of triangle. In the end, the students don't have enough time to finish the task by themselves.
At the end of the session, the teacher gives a procedural explanation to the whole class of how to find the canonical form (as "x²-2x+8 = (x-_)²+_"). Although Aurélien and Arnaud write it down, they do not make the link between it and their classroom work. Consequently, they do not understand the motivation for the activity and cannot make sense of the explanation of the canonical transformation given by the teacher.
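Completing the square fills in the teacher's blanks as follows (elementary algebra, so the identity is certain even though the blanks were left open on the board):

$$x^{2} - 2x + 8 = (x - 1)^{2} + 7,$$

so the sum of the two areas reaches its minimum value, 7, at x = 1; this is precisely the link with the numerical and graphical conjectures that the students failed to make.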
Then the teacher gives a constructive explanation about the meaning of the coefficients in the canonical form and the way they give the minimum and the corresponding value of x. But given Aurélien and Arnaud's activities, it comes too early and they do not make the link with their numerical conjecture. In other words, the collective mediation of the teacher seems too far from the students' ZPD and it is not at all constructive for this pair of students.
Analysis of the second pair of students' activities: Lolita and Farah
Lolita and Farah are better students and quickly draw their robust figure. Their numerical conjecture is correct and the teacher gives them another subtask: to find a graphical confirmation of their conjecture. The procedural instruction is to find a new point, M, whose abscissa is the same as E and whose ordinate is the value poly1+poly2. However, Lolita and Farah do not recognize what the resulting trace represents. One says "this is not a curve" and then "the minima, we have seen this for functions but here…".
They only recognize the trace as a part of a parabola (geometrical setting) and associate its lowest point with the value of the minimum area.
The graphical observation confirms to Lolita and Farah that their numerical conjecture was correct. However, for them this already constitutes a proof, and they do not understand the motivation of the third subtask, which does not make sense to them. Although they succeed in defining the algebraic expression of the function and in finding the canonical expression, they do not make the link with their graphical observation.
Here again, the teacher's summary of how to obtain the canonical form of the function, the value of the minimum and the corresponding value of x is not useful for this pair, as it does not address the problem they encountered.
A constructive intervention about the motivation for the third subtask and how the canonical form was linked to the conjecture would have been a mediation closer to their ZPD.
What does this tell us about students' mathematical activities?
The main result concerns complex activity involving technology: here the complexity is introduced by mathematical activities that require either mathematical or instrumental knowledge, particularly knowledge about the real potentialities of technologies in contrast with what is supposed to be solved within the paper and pencil environment. This also leads to new treatment activities (e.g. in the construction and conjecture subtasks) and new recognizing activities. New onscreen representations appear, typically dynamic ones, and students must recognize them as mathematical objects (or not). The example of Aurélien and Arnaud shows how difficult it was for them to recognize a robust figure, and dynamic and numerical representations of variations in areas. Similarly, it was difficult for Lolita and Farah to recognize the trace of M as a special part of the graph of a function.
The second main result concerns the increase in recognizing activities and the new balance between the three types of critical activities. While in a traditional session the teacher can point out the mathematical objects to use, the screen presents far more information to students, meaning that they have to recognize what is most important in their treatment activities. Organizing activities also increase, both before treatment activities related to construction, and during conjecture. For instance, Aurélien and Arnaud failed in the conjecture task because they were not able to introduce a third numerical variable by themselves. Classroom observation [START_REF] Vandebrouck | Proximités en acte mises en jeu en classe par les enseignants du secondaire et ZPD des élèves : analyses de séances sur des tâches complexes[END_REF] has led to the idea that most of the students' effective activities are treatment activities, as the teacher must make productive interventions before most students can begin the task.
Recognizing and organizing activities are mostly activities for the best students. These students often have an idea of how to begin the resolution of the task, they are able to adapt their knowledge quickly, and they develop all three types of critical mathematical activities, whereas weaker students find it difficult to engage in the task, waiting for procedural assistance from the teacher. In classroom sessions that use technology, students are confronted alone with all of these critical activities, which may help to explain the difficulty of weaker students.
A further finding concerns mediations. In such sessions, the teacher's mediations are mostly procedural and clearly aim to foster productive activity. Onscreen mediation leads to specific, new recognizing activities (dynamism) but is insufficient for students (not only the weaker ones) to regulate their own activity. It appears that most of the time this mediation is neither procedural nor constructive enough, leading to more teacher intervention. Moreover, it seems that onscreen mediation is always associated with treatment activities and does not help students in their recognizing or organizing activities.
The last point concerns constructive mediation and the heterogeneity of the students' knowledge (and ZPD).
Student activities in classroom sessions that use technology are difficult for the teacher to evaluate. Even if he/she tries to manage the best "average" constructive mediations for all students, our examples show that this is very challenging. This raises the question of the real impact of such sessions with respect to the intended conceptualisation. The availability and recognition of functions as tools to complete such tasks was not really investigated, in the sense that the independent variable x was given to students (on paper), and none of them returned to the geometrical setting as in the traditional modelling cycle, in reference to Kaiser and Blum [START_REF] Maas | What are modelling competencies?[END_REF]. Moreover, Aurélien and Arnaud did not explore the dynamic numerical-graphical-algebraic flexibility, which was one of the aims of the session; on the other hand Lolita and Farah did, but lacked the constructive mediations needed to complete the cycle.
Conclusion
We have presented Activity Theory in the context of French didactics, notably the dual regulation found in the activity model, which was first developed in ergonomic psychology and then adapted to didactics of mathematics, for studying students' activities. Other works, which we have not discussed here, have looked at teachers' practices in different ways [START_REF] Robert | A didactical framework for studying students' and teachers' activities when learning and teaching mathematics[END_REF][START_REF] Robert | A cross-analysis of the mathematics teacher's activity. An example in a French 10th-grade class[END_REF]. An important component of this model is the impact of activity on subjects, which represents the developmental dimension of students' activity. This focus highlights the commonalities and complementarities of the constructivist theories of Piaget (extended to Vergnaud's conceptual fields) and Vygotsky. The connection between Activity Theory, the work of Piaget and Vygotsky, and didactics of mathematics, provides a theoretical foundation for a dual approach to students' activity from the viewpoint of mathematics (the didactical approach) and subjects (the cognitive approach).
Our analysis does not provide a model of students' activity (or teachers' practices). However, it leads to the identification of similarities and differences in terms of the relations between subtasks, students' ways of working, mediations and mathematical activities, and compares this complex task with the traditional paper and pencil environment. One specificity of our approach is the deep connection between the analysis of students' activities and the a priori analysis of tasks, including mathematical content. But we do not look for the teacher's own intention, unlike what is done in some English research (for instance, [START_REF] Jaworski | Bridging the macro-and micro-divide: using an activity theory model to capture sociocultural complexity in mathematics teaching and its development[END_REF]). Moreover, we do not attempt to capture the global dynamic between individual and collective interactions and learning. We should now take a threefold approach to the investigation of students' practices: didactical, cognitive and socio-cultural. As [START_REF] Radford | The epistemic, the cognitive, the human: a commentary on the mathematical working space approach[END_REF] argues, with respect to Mathematical Working Space [START_REF] Kuzniak | Mathematical Working Spaces in Schooling[END_REF], the individual-collective dynamic remains difficult to understand in both our Activity Theory and MWS, which are discussed together. This represents a new opportunity to better investigate the socio-cultural dimension of Activity Theory, especially the one developed by Engeström, and to integrate it into our didactical and cognitive approach.
Figure 1: Codetermination of activity and twofold regulatory loop
Figure 2: Shop sign
Figure 3: Main instructions given to students
Figure 4: Exploring the robustness of the shop sign
Figure 5: Exploration of varying areas by moving the point E on [DC]
Figure 6: The shop sign task showing part of the graph of the function
Thierry Buffeteau
Delphine Pitrat
Nicolas Daugey
Nathalie Calin
Marion Jean
Nicolas Vanthuyne
Laurent Ducasse
Frank Wien
Thierry Brotin
Chiroptical properties of cryptophane-111 †
The two enantiomers of cryptophane-111 (1), which possesses the most simplified chemical structure of cryptophane derivatives and exhibits the highest binding constant for xenon encapsulation in organic solution, were separated by HPLC using chiral stationary phases. The chiroptical properties of [CD(+) 254 ]-1 and [CD(−) 254 ]-1 were determined in CH 2 Cl 2 and CHCl 3 solutions by polarimetry, electronic circular dichroism (ECD), vibrational circular dichroism (VCD), and Raman optical activity (ROA) experiments, and were compared to those of the cryptophane-222 derivative (2). Synchrotron Radiation Circular Dichroism (SRCD) spectra were also recorded for the two enantiomers of 1 to investigate low-lying excited states in the 1 B b region. Time-dependent density functional theory (TDDFT) calculations of the ECD and SRCD, as well as DFT calculations of the VCD and ROA, allowed the [CD(−) 254 ]-PP-1 and [CD(+) 254 ]-MM-1 absolute configurations to be assigned for 1 in CH 2 Cl 2 and CHCl 3 solutions. Similar configurations
were found in the solid state from X-ray crystals of the two enantiomers, but the chemical structures are significantly different from the one calculated in solution. In addition, the chiroptical properties of the two enantiomers of 1 were independent of the nature of the solvent, which is significantly different from what is observed for the cryptophane-222 compound. The lack of a solvent molecule (CH 2 Cl 2 or CHCl 3 ) within the cavity of 1 can explain this different behaviour between 1 and 2. Finally, we show in this article that the encapsulation of xenon by 1 can be evidenced by ROA by following the symmetric breathing mode of the cryptophane-111 skeleton at 150 cm−1. † Electronic supplementary information (ESI) available: Synthesis of (rac)-1.
Separation of the two enantiomers of 1 by HPLC using chiral stationary phase. 1 H and 13 C NMR spectra of the two enantiomers of 1 in CD 2 Cl 2 solution. Crystallographic data and pictures of the X-ray crystals of the two enantiomers of 1. UV-vis, ECD, SRCD, IR, VCD, Raman and ROA spectra of the two enantiomers of 1 in various solvents. ROA spectrum calculated at the B3PW91/6-31G** level (IEFPCM = CHCl 3 ) for conformer A of MM-1. Experimental SOR values measured at several wavelengths in various solvents and SOR values calculated at the B3PW91/6-31G** level (IEFPCM = CHCl 3 and CH 2 Cl 2 ) for conformer A of MM-1. CCDC 1537585 and 1537591.
Introduction
The cryptophane backbone displays a very simple and easily recognizable chemical structure, which is composed of six aromatic rings positioned into a rigid molecular frame. 1,2 The six aromatic rings are assembled into two independent cyclotribenzylene (CTB) sub-units connected together by three linkers, whose length and nature can be varied. This structure generates a lipophilic cavity that can accommodate a large variety of guest molecules, such as halogenomethanes and ammonium salts, or atoms in organic or aqueous solutions. 2 The cryptophane-111 skeleton (compound 1 in Scheme 1) appears as the most simplified structure of cryptophane derivatives and its synthesis was reported for the first time in 2007. 3 This compound exhibits the highest binding constant (10 4 M−1 at 293 K) for xenon encapsulation in organic solvent but it does not bind halogenomethanes due to its small internal cavity. 3,4 In 2010, Rousseau and co-workers published a high-yielding scalable synthesis of this derivative by optimizing the cyclotriphenolene unit dimerization, 5 whereas Holman and co-workers reported the X-ray structure of the racemate of 1 and the first water-soluble cryptophane-111 with Ru complexes. 6 Later, in 2011, Rousseau and co-workers published the synthesis of a metal-free water-soluble cryptophane-111. 7 Finally, Holman and co-workers reported the first rim-functionalized derivatives of cryptophane-111 ((MeO) 3 -111 and Br 3 -111), which limit the range of achievable conformations of the cryptophane-111 skeleton, 8 and they also showed the very high thermal stability (up to about 300 °C) of the Xe@1 complex in the solid state. 9 Besides their interesting binding properties, most of the cryptophane derivatives exhibit an inherently chiral structure due to the anti arrangement of the linkers or to the presence of two different CTB caps. Thus, the anti arrangement of the methylenedioxy linkers makes 1 a chiral molecule. During the past decade, we have thoroughly investigated enantiopure cryptophanes using several techniques such as polarimetry, electronic circular dichroism (ECD), vibrational circular dichroism (VCD), and Raman optical activity (ROA), because the chiroptical properties of these derivatives are extremely sensitive to the encapsulation of guest molecules. [10][11][12][13][14][15][16][17] For instance, water-soluble cryptophanes display unique chiroptical properties depending on the nature of the guest (neutral or charged species) present within the cavity of the host. [10][11][12][13][14] In addition, cryptophane-222 (compound 2 in Scheme 1) possesses unusual chiroptical properties in organic solvents never observed before with cryptophane derivatives. 17 Indeed, a very different behaviour of the specific optical rotation (SOR) values was observed in the nonresonance region above 365 nm in CHCl 3 and CH 2 Cl 2 solutions.

Scheme 1 Chemical structures of PP-1 and PP-2. [START_REF]IUPAC Tentative Rules for the Nomenclature of Organic Chemistry. Section E. Fundamental Stereochemistry[END_REF]19
This feature was related to conformational changes of the three ethylenedioxy linkers upon encapsulation of the two solvent molecules by 2. This explanation could be confirmed by investigating the chiroptical properties of the new derivative 1. Indeed, 1 differs from 2 only by the length of the three linkers connecting the two CTB units, leading to a smaller size of the cavity. Moreover, the three portals of 1 are too small to allow any solvent molecules to enter the cavity of the host. Even CH 2 Cl 2 (V vdW = 52 Å 3 ) is too large to cross the portals of 1, leaving the cavity accessible only to smaller guests such as methane or xenon. 4 In addition, the replacement of ethylenedioxy by methylenedioxy linkers presents the advantage of decreasing the number of conformations of the three bridges. Thus, we believe that the two enantiomers of 1 are important compounds for understanding the role of the solvent in the chiroptical properties of cryptophane derivatives in general. A change of the chiroptical properties of 1 with the nature of the solvent would tend to demonstrate that the bulk solvent plays an important role. In contrast, a lack of modification of the chiroptical properties of 1 would show that only the solvent molecule present within the cavity of the cryptophanes (as is the case for 2) has an effect on their chiroptical properties.
In this article we focus our attention on the chiroptical properties of 1 since they have never been reported in the literature, probably due to the difficulties encountered for the optical resolution of 1 into its two enantiomers (+)-1 and (−)-1.
In addition, the simplified chemical structure of 1 (87 atoms) allows more sophisticated theoretical calculations (better basis set) for the prediction of the VCD, ROA and ECD spectra by using density functional theory (DFT and time-dependent DFT) methods.
We report in this article the separation of the two enantiomers of 1 by high-performance liquid chromatography (HPLC) using chiral stationary phases and the detailed study of their chiroptical properties in CHCl 3 and CH 2 Cl 2 solutions by polarimetry, ECD, VCD, and ROA spectroscopy. Synchrotron Radiation Circular Dichroism (SRCD) spectra of the two enantiomers of 1 were also recorded in the two solvents to investigate the low-lying excited states in the 1 B b region (190-220 nm). The chiroptical properties of 1 were compared to those recently published for 2. 17 DFT and TD-DFT calculations were performed to predict SOR values as well as the ECD, VCD, and ROA spectra for several geometries of 1. The X-ray structures of these two enantiomers were also reported and compared to the optimized geometries of 1 calculated by DFT. Finally, the xenon encapsulation by 1 was followed by VCD and ROA spectroscopy.
Experimental
X-ray crystallography

X-ray structures of the two enantiomers of 1 were obtained from crystals mounted on a Kappa geometry diffractometer (Cu radiation) and using the experimental procedure previously published. 16 CCDC 1537591 and 1537585 contain the crystallographic data of [CD(+) 254 ]-1 and [CD(−) 254 ]-1, respectively. †
Polarimetric, UV-vis and ECD measurements
Optical rotations of the two enantiomers of 1 were measured in two solvents (CHCl 3 , CH 2 Cl 2 ) at several wavelengths (589, 577, 546, 436, and 365 nm) using a polarimeter with a 10 cm cell thermostated at 25 °C. Concentrations used for the polarimetric measurements were typically in the range 0.22-0.27 g/100 mL. ECD spectra of the two enantiomers of 1 were recorded in four solvents (CHCl 3 , CH 2 Cl 2 , tetrahydrofuran (THF) and CH 3 CN) at 20 °C with a 0.2 cm path length quartz cell (concentrations were in the range 5 × 10−5 to 1 × 10−4 M). Spectra were recorded in the wavelength ranges of 210-400 nm (THF and CH 3 CN) or 230-400 nm (CH 2 Cl 2 and CHCl 3 ) with a 0.5 nm increment and a 1 s integration time. Spectra were processed with standard spectrometer software, baseline corrected and slightly smoothed by using a third order least square polynomial fit. UV-vis spectra of the two enantiomers of 1 were recorded in CH 2 Cl 2 (230-400 nm) and THF (210-400 nm) at 20 °C with 0.5 and 0.2 cm path length quartz cells, respectively.
SRCD measurements
Synchrotron Radiation Circular Dichroism (SRCD) measurements were carried out at the DISCO beam-line, SOLEIL synchrotron. 20,21 Samples of the two enantiomers of 1 were dissolved in CH 2 Cl 2 and CHCl 3 . Serial dilutions of the concentrations, in view of data collection in three spectral regions, were chosen from 100 g L−1 and 10 g L−1 down to 2.5 g L−1. Accurate concentrations were reassessed by absorption measurements, allowing the scaling of spectral regions to each other. Samples were loaded in circular demountable CaF 2 cells of 3.5 mm path lengths, using 2-4 mL. 22 Two consecutive scans for each spectral region of the corresponding dilution were carried out for consistency and repeatability. CD-spectral acquisitions of 1 nm steps and 1 nm bandwidth, between 320-255 nm, 260-216 nm and 232-170 nm, were performed at 1.2 s integration time per step for the samples. Corresponding averaged baselines, collected three times, were then subtracted from the averaged spectra. The temperature was set to 20 °C with a Peltier controlled sample holder. Beforehand, (+)-camphor-10-sulfonic acid was used to calibrate amplitudes and wavelength positions of the SRCD experiment. Data treatment, including averaging, baseline subtraction, smoothing, scaling and standardisation, was carried out with CDtool. 23
IR and VCD measurements
The IR and VCD spectra were recorded on an FTIR spectrometer equipped with a VCD optical bench, 24 following the experimental procedure previously published. 16 Samples were held in a 250 μm path length cell with BaF 2 windows. IR and VCD spectra of the two enantiomers of 1 were measured in CDCl 3 and CD 2 Cl 2 solvents at a concentration of 0.015 M. Additional spectra were measured in CDCl 3 in the presence of xenon.
ROA measurements
Raman and ROA spectra were recorded on a ChiralRAMAN spectrometer, following the experimental procedure previously published. 15 The two enantiomers of 1 were dissolved in CDCl 3 and CD 2 Cl 2 solvents at a concentration of 0.1 M and filled into a fused silica microcell (4 × 3 × 10 mm). The laser power was 200 mW (~80 mW at the sample). The presented spectra in CDCl 3 (CD 2 Cl 2 ) are an average over about 32 (52) h. Additional experiments were performed in the two solvents in the presence of xenon.
Theoretical calculations
All DFT and TDDFT calculations were carried out with Gaussian 09. [START_REF] Frisch | Gaussian 09[END_REF] A preliminary conformer distribution search of 1 was performed at the molecular mechanics level of theory, employing MMFF94 force fields incorporated in the ComputeVOA software package. Twenty-one conformers were found within roughly 8 kcal mol−1 of the lowest energy conformer. Their geometries were optimized at the DFT level using the B3PW91 functional [START_REF] Perdew | [END_REF] and 6-31G** basis set, 27 leading to ten different conformers within an energy window of 7.5 kcal mol−1. Finally, only the three lowest-energy geometries were kept and reoptimized with the use of the IEFPCM model of solvent (CH 2 Cl 2 and CHCl 3 ). 28,29 Vibrational frequencies, IR and VCD intensities, and ROA intensity tensors (excitation at 532 nm) were calculated at the same level of theory. For comparison to experiment, the calculated frequencies were scaled by 0.968 and the calculated intensities were converted to Lorentzian bands with a full-width at half-maximum (FWHM) of 9 cm−1. Optical rotation calculations were carried out at several standard wavelengths (365, 436, 532 and 589 nm) by means of DFT methods (B3PW91/6-31G**) for the three conformers reoptimized with the use of the PCM solvent model.
ECD spectra were calculated at the time-dependent density functional theory (TDDFT) level using the MPW1K functional 30 and the 6-31+G* basis set. Calculations were performed for the three conformers reoptimized with the use of PCM solvent model (IEFPCM = CH 2 Cl 2 ), considering 120 excited states. For comparison to experiment, the rotational strengths were converted to Gaussian bands with a FWHM of 0.1 eV.
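To make the band-shape step concrete, the sketch below shows how such stick-to-spectrum convolutions are typically implemented; it is our illustration, not the authors' code, and the stick positions and intensities are placeholder values (only the 0.968 scaling factor and the 9 cm−1 / 0.1 eV FWHM values come from the text).

```python
import numpy as np

def broaden(positions, intensities, grid, fwhm, shape="lorentzian"):
    """Sum one band per computed transition onto a regular grid."""
    spectrum = np.zeros_like(grid)
    for x0, a in zip(positions, intensities):
        if shape == "lorentzian":               # VCD/ROA bands, FWHM 9 cm-1
            hwhm = fwhm / 2.0
            spectrum += a * hwhm**2 / ((grid - x0)**2 + hwhm**2)
        else:                                   # Gaussian ECD bands, FWHM 0.1 eV
            sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            spectrum += a * np.exp(-((grid - x0)**2) / (2.0 * sigma**2))
    return spectrum

# placeholder stick data (wavenumbers in cm-1, signed rotational strengths)
freqs = np.array([1050.0, 1230.0, 1480.0]) * 0.968   # harmonic scaling factor
rot = np.array([1.0, -0.6, 0.4])
grid = np.linspace(900.0, 1700.0, 2000)
vcd = broaden(freqs, rot, grid, fwhm=9.0)
```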
Results
Synthesis and HPLC separation of the two enantiomers of 1
The racemic mixture of 1, (rac)-1, was prepared according to a known procedure (Fig. S1 in the ESI †). 3 Compound 1 does not possess any substituent that could be exploited for separating the two enantiomers of 1 by the formation of diastereomeric derivatives. Consequently, the two enantiomers of 1 were separated using a chiral HPLC column (Chiralpak ID, eluent: heptane/EtOH/CHCl 3 50/30/20, 1 mL min−1), which allowed the efficient separation of the two enantiomers of 1 with an excellent resolution factor (R s = 3.24), as shown in Fig. 1. A circular dichroism detector provided the sign of each enantiomer at 254 nm. It was observed that enantiomer [CD(−) 254 ]-1 was first eluted at t = 6. […] (ESI †). Thus, from 350 mg of racemic material, 160 mg of each enantiomer were obtained with an excellent enantiomeric excess (ee > 99% for [CD(−) 254 ]-1 and ee > 99.5% for [CD(+) 254 ]-1). In order to improve the chemical purity of the two compounds, an additional purification step was conducted on both enantiomers. Thus, compounds [CD(+) 254 ]-1 and [CD(−) 254 ]-1 were purified on silica gel (eluent: CH 2 Cl 2 /acetone 90/10) and then recrystallized in a mixture of CH 2 Cl 2 /EtOH. These additional purification steps provide both enantiomers with high chemical purity. The 1 H and 13 C NMR spectra (Fig. S3-S6 in the ESI †) are identical to those reported for (rac)-1.

X-ray crystallographic structures of the two enantiomers of 1

X-ray crystals of [CD(+) 254 ]-1 and [CD(−) 254 ]-1 were obtained in a CH 2 Cl 2 /EtOH mixture and in pyridine, respectively (Fig. S7a and b in the ESI †). The crystallographic data of the two X-ray crystal structures are reported in the ESI † (Table S1).
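For reference, the resolution factor quoted above is commonly defined (one standard chromatographic convention; the symbols below are generic, not values from this work) as:

$$R_s = \frac{2\,(t_2 - t_1)}{w_1 + w_2},$$

where t1 and t2 are the retention times of the two enantiomers and w1, w2 their baseline peak widths.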
Compounds [CD(−) 254 ]-1 and [CD(+) 254 ]-1 crystallize in the P2₁2₁2₁ and P2₁ space groups, respectively. No disorder was observed in the two X-ray structures and the cavities do not contain any substrate (solvent or gas molecules). The two enantiomers adopt a contracted conformation of the bridges that minimizes the internal cavity volume. Using a probe radius of 1.4 Å, the estimated cavity volumes of [CD(+) 254 ]-1 and [CD(−) 254 ]-1 were 30 and 32 Å 3 , respectively. It is noteworthy that these two X-ray structures are significantly different from the one reported for the racemate. 6 Indeed, the X-ray structures of the two enantiopure derivatives adopt a more flattened shape with respect to the X-ray structure of the racemate, characterized by a large twist angle of 55.3° between the two CTB caps. [START_REF]average dihedral angles between the arene ring centroids of OCH 2 O-connected arenes with respect to the C 3 axis of the host[END_REF] For the racemate, a twist angle of 18.1° was found between the two CTB caps, associated with a cavity volume of 72 Å 3 . 6 Interestingly, these structures are also less symmetrical than the one observed for the racemate, and a top view of these two structures reveals that the six benzene rings are totally eclipsed (Fig. S8 in the ESI †). In contrast, the X-ray structure of the racemic derivative shows a strong overlapping of the phenyl rings.
Polarimetry and electronic circular dichroism
The two enantiomers of 1 are well soluble in CH 2 Cl 2 and CHCl 3 but unfortunately they show poor solubility in other organic solvents. Thus, polarimetric experiments were performed only in CH 2 Cl 2 and CHCl 3 solutions. The specific optical rotation (SOR) values of the two enantiomers of 1 are reported in the ESI † (Table S2) and the wavelength dependence for [CD(+) 254 ]-1 is shown in Fig. 2. In CH 2 Cl 2 , the SOR values of [CD(+) 254 ]-1 are slightly positive at 589 and 577 nm, close to zero at 546 nm and significantly negative at 436 and 365 nm. Nevertheless, despite the low values measured for this compound at 589, 577 and 546 nm, SOR values with opposite sign are obtained for the two enantiomers of 1 (Table S2, ESI †). In CHCl 3 , the wavelength dependence of the SOR values evolves similarly, with slightly higher values. This result contrasts with the measurements performed with compound 2, which exhibited significant differences in CH 2 Cl 2 and CHCl 3 solutions. Finally, as previously observed with compound 2, 17 a change of the SOR sign is observed in the nonresonant region (around 546 nm in CH 2 Cl 2 and 475 nm in CHCl 3 ).
UV-Vis and ECD experiments require lower solute concentrations, which allowed us to extend the range of solvents. Thus, UV-Vis and ECD spectra of [CD(+) 254 ]-1 and [CD(−) 254 ]-1 were successfully recorded in CH 2 Cl 2 , CHCl 3 , THF, and CH 3 CN solvents. The UV-Vis spectra measured in THF and CH 2 Cl 2 solvents are reported in Fig. S9 in the ESI. † These spectra are very similar to those published for compound 2. 17 The ECD spectra of the two enantiomers are reported in Fig. S10 in the ESI † for the four solvents. A perfect mirror image is observed in all solvents for the two enantiomers, as expected for enantiomers with high enantiomeric excess. For CH 2 Cl 2 and CHCl 3 solutions, the ECD spectra give access only to the bands corresponding to the two forbidden 1 L a and 1 L b transitions in the UV-visible region (230-300 nm). For THF and CH 3 CN solutions, the spectral range can be extended up to 210 nm. This gives access to another spectral region corresponding to the allowed 1 B b transition. This spectral region usually gives rise to intense ECD signals in organic solution. It was observed that the ECD spectra of [CD(+) 254 ]-1 and [CD(−) 254 ]-1 are very similar in shape and intensity, regardless of the nature of the solvent used in these experiments. For instance, in CH 2 Cl 2 the ECD spectrum of the [CD(+) 254 ]-1 enantiomer shows four bands, as shown in Fig. 3. Three ECD bands (two negative and one slightly positive, from high to low wavelengths) are observed in the spectral region related to the 1 L b transition (260-300 nm). At shorter wavelengths, only a single positive ECD band was observed between 230 and 255 nm ( 1 L a transition). Interestingly, it can be noticed that the ECD spectra of [CD(+) 254 ]-1 and [CD(−) 254 ]-2 show many similarities, even though some significant spectral differences are observed, especially in the 1 L a region. Indeed, the bisignate ECD signal usually observed in the 1 L a region for cryptophane derivatives, and observed for [CD(−) 254 ]-2, is no longer present in the ECD spectrum of [CD(+) 254 ]-1. In the past, the sign of this bisignate ECD signal was exploited to determine the absolute configuration (AC) of the cryptophane-A molecule. [START_REF] Canceill | [END_REF] Then, we confirmed that this approach could be used to assign the AC of other cryptophane derivatives in organic solution. This study shows that the approach cannot be used for cryptophane-111.
Synchrotron radiation circular dichroism experiments were also performed to obtain additional information at lower wavelengths, in the 1 B b region (180-230 nm). The SRCD spectra of the two enantiomers of 1 recorded in CH 2 Cl 2 and CHCl 3 are reported in Fig. S11 in the ESI. † For wavelengths higher than 230 nm, the SRCD spectra of [CD(+) 254 ]-1 and [CD(−) 254 ]-1 are identical in shape and intensities to the ECD spectra described above. For wavelengths lower than 230 nm, the SRCD spectra reveal two additional bands with opposite sign. For instance, the [CD(+) 254 ]-1 enantiomer exhibits in CH 2 Cl 2 a positive-negative bisignate pattern, from short to long wavelengths, related to the 1 B b transition. It is noteworthy that similar SRCD spectra (in shape and in intensities) were recorded in CHCl 3 solution, in contrast to what was observed for compound 2. 17
VCD and ROA spectroscopy
The chiroptical properties of enantiopure cryptophane 1 have also been investigated by VCD in CDCl 3 and CD 2 Cl 2 solutions. The IR spectra of the [CD(+) 254 ]-1 enantiomer are similar in the 1700-1000 cm−1 region for the two solutions (Fig. S12 in ESI †). In addition, the presence of xenon in the CDCl 3 solution does not modify the IR spectrum in this spectral range. The VCD spectra of the two enantiomers of 1 measured in CDCl 3 and CD 2 Cl 2 solvents are reported in Fig. S13 in ESI, † whereas the comparison of the experimental VCD spectra of [CD(+) 254 ]-1 in the two solvents is presented in Fig. 4. As shown in Fig. 4, the VCD spectra of 1 seem independent of the nature of the solvent, even though slight spectral differences are observed in the 1050-1010 cm−1 region. In addition, a slightly lower intensity of the VCD bands is observed in CD 2 Cl 2 solution, which can be related to the lower molar absorptivities measured in CD 2 Cl 2 with respect to CDCl 3 solution. Finally, the presence of xenon in the CDCl 3 solution does not modify the VCD spectrum of [CD(+) 254 ]-1 (Fig. S14 in ESI †).
The ROA spectra of the two enantiomers of 1 measured in CDCl 3 solution (0.1 M), in the presence or not of xenon, are shown in Fig. S15 in ESI. † These ROA spectra are nearly perfect mirror images (Fig. S15a and b in ESI †), as expected for enantiopure materials. The ROA spectra measured in CD 2 Cl 2 solution were similar (Fig. S15c and d in ESI †), indicating that the ROA spectra of 1 are independent of the solvent, as already mentioned for the ECD and VCD experiments. On the other hand, the ROA spectra of [CD(+) 254 ]-1 in CD 2 Cl 2 solution in the presence or not of xenon reveal a clear spectral difference at wavenumbers lower than 200 cm−1, as shown in Fig. 5. Indeed, in the presence of xenon, we observe a marked decrease of the intensity of the band at 150 cm−1. The same effect is observed on ROA spectra measured in CDCl 3 solution (Fig. S16 in ESI †). The vibrational assignment of this mode was made by visual inspection of the modes represented and animated using the Agui program. All the displacement vectors of the carbon atoms point towards the centre of the cavity, indicating that this mode corresponds to the symmetric breathing mode of the cryptophane-111 skeleton. This result clearly indicates that the presence of a guest inside the cavity of a cryptophane derivative modifies the intensity of its symmetric breathing mode. The examination of this mode could be used in the future to reveal the complexation of guest molecules by cryptophane derivatives. However, it is noteworthy that it would not be possible to observe this effect for compound 2 in CHCl 3 or CH 2 Cl 2 solutions, since these two solvent molecules can enter the cavity of 2 and would therefore be strong competitors for xenon.
Discussion
AC and conformational analysis of 1
As is now recommended, different techniques have been used to unambiguously assign the absolute configuration (AC) of the two enantiomers of 1. [33][34][35] Thanks to the determination of the Flack and Hooft parameters, X-ray crystallography analysis provides an easy way to determine the AC of the two [CD(+) 254 ]-1 and [CD(−) 254 ]-1 enantiomers. Thus, based on the analysis of the two X-ray structures, the assignment [CD(+) 254 ]-MM-1 and [CD(−) 254 ]-PP-1 has been found for the two enantiomers of 1. Consequently, considering the specific optical rotation measured at 589 nm, the AC descriptors become (+) 589 -MM-1 and (−) 589 -PP-1. It is noteworthy that these last descriptors are identical to those determined for compound 2, as suggested by the similarity observed in their experimental ECD spectra (Fig. 3).
To confirm the AC of the two enantiomers of 1, determined by X-ray crystallography, we have used VCD and ROA spectroscopy associated with DFT calculations, which are known to be a valuable approach to assign the AC of organic compounds. A conformer distribution search was performed at the molecular mechanics level of theory for the MM-1 configuration, starting from the more symmetrical structure obtained from the X-ray analysis of the racemic compound. 6 Twenty-one conformers within roughly 8 kcal mol−1 of the lowest energy conformer were found and their geometries optimized at the DFT level (B3PW91/6-31G**), leading to ten different conformers. The electronic and Gibbs energies as well as the twist angle between the two CTB caps for the three most stable conformers are reported in Table 1 and compared to those calculated from the optimized geometries of the enantiomer crystals. Conformer A leads to the lowest Gibbs free energy and represents more than 99% of the Boltzmann population of conformers at 298 K. This conformer exhibits the most symmetrical structure, with an average value of the twist angle between the two CTB caps of 19.1° (dihedral angles [START_REF]average dihedral angles between the arene ring centroids of OCH 2 O-connected arenes with respect to the C 3 axis of the host[END_REF] of 19.0°, 19.1° and 19.1°). Conformers B and C present higher twist angles, with average values of 23.6° and 28.7°, respectively. It is noteworthy that their structures are less symmetrical than that of conformer A, with one dihedral angle which differs from the two others (21.1°, 21.5° and 28.3° for conformer B and 23.8°, 31.1° and 31.2° for conformer C).
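The Boltzmann populations quoted in Table 1 can be checked directly from the relative Gibbs energies; the short sketch below is ours (not the authors' code) and assumes the ΔG values of Table 1 and T = 298 K.

```python
import math

R = 1.987204e-3          # gas constant, kcal mol-1 K-1
T = 298.0                # temperature, K
dG = [0.0, 3.38, 5.26]   # relative Gibbs energies of conformers A, B, C (Table 1)

weights = [math.exp(-g / (R * T)) for g in dG]
total = sum(weights)
print([round(100.0 * w / total, 1) for w in weights])   # [99.7, 0.3, 0.0]
```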
As above mentioned, the bisignate pattern observed in the 1 L a region (230-260 nm) of the ECD spectra of cryptophane derivatives can be another way to determine the AC of these derivatives in organic solvents. 16,17 Using the Khun-Kirkwood excitonic model, Gottarelli and co-workers have shown that this bisignate resulted from different excited states (one A 2 and two degenerate E components) for cryptophane possessing a D 3 -symmetry. [START_REF] Canceill | [END_REF] For the 1 L a transition, the A 2 component is always located at lower energy and the two E components show opposite rotational strengths. This model leads to a positive/ negative bisignate pattern (from short to long wavelength) for the MM configuration of cryptophane-A derivatives. For cryptophane-111, this bisignate pattern is not observed and this rule does not apply. Indeed, in the case of compound 1, TD-DFT calculations show that the A 2 component located at high wavelength possesses a lower negative rotational strength (R = À0.38 cgs) than the one measured for compound 2 (R = À0.70 cgs). Thus, the contribution of the A 2 component in the experimental ECD spectrum is embedded in the two E components exhibiting a larger rotational strength (R = 1.05 cgs), leading to a broader positive band in the 1 L a region. The strong decrease of the negative A 2 component of the 1 L a transition suggests that the classical excitonic coupling model can not be used to determine the AC of cryptophane-111 molecule and that other contributions should be involved in the interpretation of the ECD spectrum. For instance, as it has been reported by Pescitelli and co-workers in some cases, 36,37 the coupling between the electric and magnetic transition moments (mm term) can contribute significantly to the overall rotational strength for a given excited state. This contribution, which is usually neglected in the case of the classical excitonic coupling model, can dominate the electric-electric coupling (mm term). Nevertheless, the bisignate pattern observed in the 1 B b region (190-230 nm) of the SRCD spectra can be used to determine the AC of 1. Indeed, the positivenegative sequence from short to long wavelength observed for [CD(+) 254 ]-1 was associated with the MM configuration by TD-DFT calculations (Fig. S18 in ESI †).
Comparison between the chiroptical properties of 1 and 2
In a recent article, different behaviours of the chiroptical properties (in particular, polarimetric properties) were observed for 2 in CHCl 3 and CH 2 Cl 2 solutions. 17 These modifications were interpreted by a subtle conformational equilibrium change of the ethylenedioxy linkers upon encapsulation of CHCl 3 and CH 2 Cl 2 molecules. A preferential G À conformation of the linkers was found in CH 2 Cl 2 solution, in order to decrease the cavity size and to favour hostguest interactions. In contrast, a higher proportion of G + conformation of the linkers was found in CHCl 3 solution, increasing the size of the cavity suitable for the complexation of chloroform molecule. The comparison of the chiroptical properties of 1 and 2 is very interesting because these two compounds possess identical CTB units and differ only by the nature of the alkyl linkers. The conformational equilibrium observed for compound 2 due to the possibility of trans and gauche (G + and G À ) conformations of the three ethylenedioxy linkers is not possible for compound 1 which possess methylenedioxy linkers. In addition, it has been shown that (rac)-1 does not bind halogenomethane molecules so that neither CH 2 Cl 2 nor CHCl 3 can enter its cavity. 3,4 Thus, no spectral modification in the ECD (or SCRD) and VCD (or ROA) spectra is expected for 1 in these two solvents. This assumption is confirmed by our experiments, as shown in the result section.
Our results reveal also that the SOR values of 1 behave similarly in the two solvents. We observe a change of the sign of the SOR values in the nonresonant region, as shown in Fig. 3. This surprising effect has been previously reported with compound 2 for experiments in chloroform, acetone and dimethylformamide. Calculations of the SOR at the B3PW91/6-31G** level (IEFPCM = CHCl 3 ) reproduce perfectly the experimental data measured in CHCl 3 solution (Fig. S19 in ESI †).
Conclusions
This article reports a thorough study of the chiroptical properties of the two enantiomers of cryptophane-111 (1) by X-ray crystallography, polarimetry, ECD (SRCD), VCD, and ROA spectroscopy. The absolute configuration of the two enantiomers has been determined based on X-ray crystallographic data. Thus, the (+) 589 -MM-1 ((−) 589 -PP-1) AC has been assigned. This result has been confirmed by the combined analysis of the VCD (ROA) spectra and DFT calculations. In the second part of this article, the chiroptical properties of 1 have been compared to those of cryptophane-222 (2). Despite the similarity of the two structures, derivatives 1 and 2 exhibit different behaviours of their chiroptical properties with respect to CH 2 Cl 2 and CHCl 3 solvents. In these two solvents, polarimetric measurements and SRCD spectra are clearly different for compound 2, whereas they remain almost unchanged for 1. This different behaviour can be explained by the incapacity of compound 1 to encapsulate a solvent molecule within its cavity, regardless of the nature of the solvent. Consequently, the nature of the solvent has almost no influence on the conformation of the methylenedioxy linkers. This result confirms our previous assumption that the different chiroptical properties observed for 2 in chloroform and dichloromethane solutions are certainly due to the change of the conformational equilibrium of the ethylenedioxy linkers upon encapsulation of CH 2 Cl 2 or CHCl 3 molecules.
Thus, the comparison of the chiroptical properties of cryptophanes 1 and 2 sheds light on the importance of the solvent present within the cavity for understanding the chiroptical properties of cryptophane derivatives in general. In addition, our results show that the bulk solvent has no significant effect on the chiroptical properties of 1.
Fig. 1 Separation of the two enantiomers of 1 using an analytical chiral HPLC column.
Fig. 2 Specific optical rotation values (10−1 deg cm 2 g−1) of [CD(+) 254 ]-1 recorded at several wavelengths (365, 436, 546, 577 and 589 nm) in chloroform (c = 0.22) and dichloromethane (c = 0.27) solvents.
Fig. 3 Comparison of experimental ECD spectra of [CD(+) 254 ]-1 (black spectrum) and [CD(−) 254 ]-2 (red spectrum) in CH 2 Cl 2 solution.
Fig. 4 Comparison of experimental VCD spectra of [CD(+) 254 ]-1 in CDCl 3 (black spectrum) and in CD 2 Cl 2 (red spectrum) solutions.
Fig. 5 Comparison of experimental ROA spectra of [CD(+) 254 ]-1 in CD 2 Cl 2 solution in the presence (red spectrum) or not (black spectrum) of xenon.
Fig. 6 Comparison of the experimental VCD spectrum of [CD(+) 254 ]-1 recorded in CDCl 3 solution with the calculated spectrum at the B3PW91/6-31G** level (IEFPCM = CHCl 3 ) for conformer A of MM-1.
Table 1 Conformations, twist angles and energies of the three conformers of MM-1 calculated from the crystal of (rac)-1, and of the one calculated from the crystal of MM-1

Conformer      Twist angle (°)   Electronic energy (hartrees)   Gibbs energy (hartrees)   ΔG (kcal mol−1)   %
A              19.1              −2187.01467627                 −2186.367759              0                 99.7
B              23.6              −2187.00811518                 −2186.362379              3.38              0.3
C              28.7              −2187.00412196                 −2186.359201              5.26              0.0
Crystal MM-1   55.3              −2187.00057700                 −2186.355790              7.51              0.0
Acknowledgements
Support from the French Ministry of Research (project ANR-12-BSV5-0003 MAX4US) is acknowledged. The authors are indebted to the CNRS (Chemistry Department) and to the Région Aquitaine for financial support for the VCD and ROA equipment. They also acknowledge the computational facilities provided by the MCIA (Mésocentre de Calcul Intensif Aquitain) of the Université de Bordeaux and of the Université de Pau et des Pays de l'Adour, financed by the "Conseil Régional d'Aquitaine" and the French Ministry of Research and Technology. The GDR 3712 Chirafun is acknowledged for allowing a collaborative network between the partners of this project.
Dominique Massiot
email: [email protected]
Franck Fayon
Valérie Montouillout
Nadia Pellerin
Julien Hiet
Claire Roiland
Pierre Florian
Jean-Pierre P Coutures
Laurent Cormier
Daniel R Neuville
Structure and dynamics of Oxide Melts
Introduction
Oxide glasses have been known and used for thousands of years, and the tuning of properties such as colour, durability or viscosity of the molten state was largely mastered by glass makers. Despite this millenary knowledge, the range of glass-forming systems of interest is still expanding and many points remain unelucidated in the understanding of the structure and properties of glasses and melts. The aim of this contribution is to underline, from the experimental point of view provided by Nuclear Magnetic Resonance, the relations existing between the structure and dynamics of high temperature molten oxide systems and the short and medium range order of their related glasses.
The strength of Nuclear Magnetic Resonance for describing the structure and dynamics of amorphous or disordered systems like oxide glasses or melts comes firstly from its ability to selectively observe the environment of the different constitutive atoms (provided that they bear a nuclear spin) and secondly from its sensitivity to small variations in the first and second coordination spheres of the observed nucleus. This often provides spectral separation of the different types of environment [START_REF] Mackenzie | MultiNuclear Solid State NMR of Inorganic Materials[END_REF]. The information derived from NMR experiments is thus complementary to that obtained by other means: optical spectroscopies, IR or Raman, X-ray absorption, X-ray or neutron elastic or inelastic scattering, etc. It is important to remark that NMR has much slower characteristic time scales (ranging from Hz to MHz) than most of the above mentioned methods, leading to fundamental differences in the signatures of the viscous high temperature molten states.
One dimensional NMR experiments
In the liquid state in general, and in the high temperature molten state in the case of oxide glass forming systems, the mobility is such that only the isotropic traces of the anisotropic interactions are expressed in the NMR spectra. Fluctuations of these interactions lead to relaxation mechanisms that allow discussion of the characteristic times of rearrangement and of the overall mobility of the system. In solid state materials and in glasses, the anisotropy of the different interactions is fully expressed in the static NMR spectra, giving broad and often featureless line shapes accounting for all the different orientations of the individual structural motifs of the glass. Although these broad spectra contain much information on the conformation of the structural motifs (Chemical Shift Anisotropy, CSA), spatial proximity between spins (homo- and hetero-nuclear dipolar interactions), chemical bonds (indirect J coupling), and the electric field gradient at the nucleus position (quadrupolar interaction for I>1/2 nuclei), this information is often difficult to evidence. Magic Angle Spinning is the unique tool that solid state NMR has at hand to average out all (or most) of the anisotropic part of the interactions, leaving only their traces and thus mimicking (or giving a coarse approximation of) the Brownian reorientation of the liquid phase. Under rapid Magic Angle Spinning, the chemical shift is averaged to its isotropic value, whose distribution is directly given by the line position and width in the case of a dipolar (I=1/2) spin, while the traceless dipolar interaction is averaged out, and the scalar (or isotropic) part of the J coupling is usually small enough to be completely masked in a 1D spectrum, even in crystalline phases.
Phosphate, silicate, alumino-silicate or aluminate oxide glass structures are mostly based on tetrahedral species whose polymerization is characterized by their number of bridging oxygens (Q n : Q=P, Si, Al and n the number of bridging oxygens). Figure 1 presents the 31 P MAS NMR 1D spectrum of a (60% PbO-40% P 2 O 5 ) glass. It shows two broad but resolved resonances in a 1/1 ratio that can be unambiguously ascribed to end-chain group (Q 1 , 750 Hz or 6.2 ppm width) and middle-chain group (Q 2 , 1100 Hz or 9 ppm width) environments.
Both these lines are considerably broader than those of the corresponding crystalline sample (Pb 3 P 4 O 13 , linewidth < 1 ppm) due to the disorder in the glass structure and the loss of long range order. In the case of simple binary phosphate or silicate glasses, the broad lines corresponding to the various Q n tetrahedral sites are often resolved enough to allow quantification of their respective abundances and evaluation of the disproportionation equilibrium constants (K n : 2Q n <-> Q n-1 + Q n+1 ) [START_REF] Stebbins | [END_REF]. Figure 2 reports these quantitative results for the PbO-SiO 2 [3] and PbO-P 2 O 5 [4] binary systems. In lead-phosphate glasses the K n values remain very small, which corresponds to a binary distribution and indicates that only two types of Q n environments can co-exist at a given composition, while in lead-silicate glasses the equilibrium constants are much higher, close to those of a randomly constructed network, with a competition between lead-based and silicon-based covalent networks. 207 Pb NMR and L III -EXAFS experiments confirmed this interpretation by showing that the coordination number of Pb in silicates is 3 to 4 oxygens, with short covalent bonds and a very asymmetric environment (pyramid with lead at the top), while it is more than 6 in phosphate glasses, with a more symmetric environment, behaving more as a network modifier [3][4][5].
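Written out explicitly, the disproportionation equilibrium and the concentration-based constant usually adopted in the Q n speciation literature (our rendering of the notation used above) read:

$$2\,Q^{n} \rightleftharpoons Q^{n-1} + Q^{n+1}, \qquad K_n = \frac{[Q^{n-1}]\,[Q^{n+1}]}{[Q^{n}]^{2}},$$

so that K_n close to 0 corresponds to the binary distribution found in lead-phosphate glasses, while larger K_n values approach the random-network limit seen in lead-silicate glasses.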
Polyatomic molecular motifs
Although this information already gives important details on the structure of these binary phosphate or silicate glasses, it would be of great interest to obtain a larger scale image of the polyatomic molecular motifs constituting these glasses and especially to evaluate the length of the phosphate chains possibly present in the glass, which makes the difference between the long range ordered crystalline phase and the amorphous phase. That type of information can be obtained by implementing multidimensional NMR experiments that allow dipolar [4] or J-coupling [6,7,8] interactions to be evidenced and further used to build correlation experiments separating the different contributions of well defined molecular motifs. Figure 1 gives a general picture of the possibilities offered by the J-coupling mediated experiments that allow the P-O-P bonds bridging phosphate units to be directly evidenced through the 2 J P-O-P interaction. Let us consider the example of the 60% PbO-40% P 2 O 5 glass already introduced above. Its 1D spectrum (fig 1a) shows partly resolved Q 1 and Q 2 lines with strong broadening (750 and 1100 Hz) signing the glass disorder that remains to be understood. Because the Q 1 and Q 2 line width is essentially inhomogeneous, due to a distribution of frequencies for the individual motifs, this broadening can be refocused in an echo which is modulated by the small (and unresolved) isotropic 2 J P-O-P coupling [7]. Figure 1b shows the J-resolved spectrum of the glass, which reveals the J coupling patterns consisting of a doublet for Q 1 and a triplet for Q 2 , thus justifying the spectral attribution previously made on the basis of the 31 P isotropic chemical shift. It is also important to notice that this experiment clearly shows that the isotropic J-coupling does vary across the 1D lines, typically increasing with decreasing chemical shift. This new type of information is likely to bear important information on the covalent bond hybridisation state and geometry. Because this isotropic J-coupling can be measured, it can also be used to reveal (or to spectrally edit) different polyatomic molecular units in the glass. Figures 1c and 1d respectively show the two-dimensional correlation spectra that enable the identification of through-bond connectivity between two linked PO 4 tetrahedra (fig. 1c) [6] and between three linked PO 4 tetrahedra (fig. 1d) [8]. These experiments, fully described in the referenced papers, allow spectral separation of dimers, end-chain groups, and middle-chain groups when selecting Q-Q pairs (fig. 1c) and trimers, end-chain triplets or centre-chain triplets when selecting Q-Q-Q triplets (fig. 1d). […] composition [11]. They showed that while the two Q 3 and Q 4 contributions can be resolved from their different chemical shift anisotropies or from their isotropic chemical shifts below the glass temperature, they begin to exchange just above the glass transition, with characteristic times of the order of seconds [12], and finally end up merging into a single line in the high temperature molten state. This experiment underlines two important points: first, although silicate glasses can be regarded as SiO 2 -based polymers, the melting of silicate glasses implies rapid reconfiguration of the structural motifs through a mechanism that was proposed to involve a higher SiO 5 coordination state of silicon with oxygen; second, the characteristic time scales of NMR spectroscopy allow a large range of the time scales involved in this mechanism to be explored.
We can remark that this has recently been extended to the below-T g structural reorganisation of BO 3 and BO 4 configurations in borosilicate glasses [START_REF] Sen | XI International Conference on Physics on Non Crystalline Solids[END_REF]. The existence of a higher (and previously unexpected) SiO 5 coordination state of silicon was proved experimentally by acquiring high quality 29 Si NMR spectra [START_REF] Stebbins | [END_REF], with clear effects of quench rates and pressure stabilizing these highly coordinated silicon environments.
The high temperature NMR setup developed in our laboratory, combining CO 2 laser heating and aerodynamic levitation, allows acquisition of 27 Al resolved NMR spectra in molten oxides at high temperature with a good sensitivity [15,16]. Figure 3a shows the experimental setting and an example of a 27 Al spectrum acquired in one scan for a molten CaAl 2 O 4 sample at ~2000°C [17]. The sensitivity of this experiment is such that one can follow in a time-resolved manner the evolution of the 27 Al signal when cooling the sample from high temperature, until disappearance of the signal when the liquid becomes too viscous.
As in the case of the high temperature molten silicates discussed above, we only have a single sharp line giving the average chemical shift signature of the rapidly exchanging chemical species. This latter point is confirmed by independent T 1 (spin-lattice) and T 2 (spin-spin) relaxation measurements giving similar values, reliably measured in the 1D spectrum from the linewidth. This relaxation time can be modelled using a simple model of quadrupolar relaxation, which requires knowledge of the instantaneous quadrupolar coupling that can be estimated from the 27 Al MAS NMR spectrum of the corresponding glass at room temperature. The obtained correlation times, corresponding to the characteristic time of the rearrangement of aluminium-bearing structural units, can be directly compared to the characteristic times of the macroscopic viscosity, with a convincing correspondence in the case of aluminate melts [18] (Figure 3b&c).
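As an illustration of this modelling step, the sketch below inverts the standard extreme-narrowing expression for quadrupolar relaxation to extract a correlation time from a measured relaxation time; the numerical values of Cq, η and T 1 are assumed for illustration and are not the published fits of ref. [18].

```python
import numpy as np

# Extreme-narrowing quadrupolar relaxation (standard textbook expression):
#   1/T1 = (3*pi^2/10) * (2I+3)/(I^2*(2I-1)) * (1 + eta^2/3) * Cq^2 * tau_c
# I: nuclear spin, Cq: quadrupolar coupling (Hz), tau_c: correlation time.
I = 2.5        # spin of 27Al
Cq = 6.0e6     # instantaneous quadrupolar coupling in Hz (assumed value)
eta = 0.5      # asymmetry parameter (assumed value)
T1 = 5.0e-3    # measured relaxation time in s (assumed value)

spin_factor = (2 * I + 3) / (I ** 2 * (2 * I - 1))
rate_per_tau = (3 * np.pi ** 2 / 10) * spin_factor * (1 + eta ** 2 / 3) * Cq ** 2
tau_c = 1 / (T1 * rate_per_tau)
print(f"correlation time ~ {tau_c:.1e} s")  # a few picoseconds for these inputs
```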
Structure and dynamics of alumino-silicates
In alumino-silicate glasses of more complex composition, aluminium is able to substitute for silicon in tetrahedral network-forming positions, provided that charge compensation is ensured by a neighbouring cation. In such a case, the NMR signature of 29 Si spectra is much more complex and difficult to interpret [19]. Because the 29 Si Q n species isotropic chemical shifts depend upon Al substitution in neighbouring tetrahedra, 29 Si spectra are usually broad Gaussian lines covering the full range of possible environments. Similarly, 27 Al spectra are broadened by the combination of a distribution of chemical shifts and a distribution of second order quadrupolar interactions [21] and give only average pictures of the structure, with possible resolution of different coordination states but no resolution of the different Al-based Q n species, except in the case of binary CaO-Al 2 O 3 glasses, in which NMR and XANES both show proof of the depolymerization of the AlO 4 based network [20]. In alumino-silicate glasses, aluminium species with higher coordination were evidenced [22] and quantified [21] using a detailed modelling of the 27 Al MAS and MQMAS NMR spectra obtained at high principal fields. One can also remark that no SiO 5 environments have ever been evidenced in alumino-silicate compositions. Going further and examining the whole SiO 2 -Al 2 O 3 -CaO phase diagram [23], we showed that these AlO 5 environments are not confined to the charge compensation line or to the hyper-aluminous region of the ternary diagram, where there exists a deficit of charge compensators, but that AlO 5 species are present, at a level of ~5%, for any alumino-silicate composition of this ternary diagram, including those presenting the smallest fraction of alumina but excluding the calcium aluminates of the CaO-Al 2 O 3 join, which nearly exclusively show aluminium in the AlO 4 coordination state. For the C3A composition, XANES unambiguously shows that Al occupies Q 2 environments both in the crystal and in the glass [20,23].
This finding that there exist no or very few AlO 5 in compositions close to the CaO-Al 2 O 3 join is somehow in contradiction with our previous interpretation of the chemical shift temperature dependence with a negative slope [17]. At that time we proposed to consider that there could exist significant amounts of five-fold coordinated aluminium in the high temperature molten state, based on the thermal dependence of the chemical shift and on state-of-the-art MD computations. A more detailed study shows that, across the CaO-Al 2 O 3 join, the slope of the temperature dependence of the average chemical shift in the high temperature molten state drastically changes from a positive value for Al 2 O 3 to very negative values (~-4 to -5 ppm) for compositions around CaAl 2 O 4 . Indeed, we can even remark that all compositions able to vitrify in aerodynamic levitation contactless conditions have a slope smaller than -2 ppm/1000°C (Figure 4a). Stebbins and coworkers recently studied the 17 O NMR signature of similar compositions [24]. They evidenced a significant amount of non-bridging oxygen atoms and discussed the possibility of a seldom observed µ 3 tricluster oxygen linking three tetrahedral Al sites, which exists in the closely related CA 2 (CaAl 4 O 7 -Grossite) crystalline phase. Thanks to the development of new methods of hetero-nuclear correlation between quadrupolar nuclei through J-coupling at high principal field (750 MHz) [9], we could reexamine this question and show that a { 17 O} 27 Al experiment carried out on a CaAl 2 O 4 glass was able to clearly evidence the signature of ~5% µ 3 tricluster oxygens linked to aluminium, with the chemical shift decreased by 5 ppm per linked tricluster (Figure 4b). It thus appears that molecular motifs of type µ 3 [AlO 3 ] 3 can be quenched in the glass and do exist in the molten state, while AlO 5 remains negligible, raising a new interpretation of the thermal dependence of the 27 Al isotropic chemical shift in CaO-Al 2 O 3 melts.
Conclusion
From the above discussed experimental results we can draw several important points about the relations between the structure and properties of oxide glasses and their related molten states, which appear to be closely related. It first clearly appears that in many cases, even if most of the structure of the glasses, and consequently of their related high temperature molten states, is built around a network of µ 2 connected tetrahedra (P, Si, Al…), there exist unexpected environments showing up as minor contributions in the glass structures (~5% or less) but significantly present and relevant to molecular motifs that can be identified. This is the case of SiO 5 species in binary alkali silicates [START_REF] Stebbins | [END_REF], AlO 5 (AlO 6 ) [21][22][23] in alumino-silicates, violations of the Al avoidance principle [25] or tricluster µ 3 oxygens [9]. This implies that modelling of these complex materials in their solid or molten state will often be difficult using limited box sizes. Just consider that 5% of aluminium species in a glass containing 5% Al 2 O 3 in a calcium silicate only represent 1 to 2 atoms over 1000, or that 5% µ 3 oxygens in a CaAl 2 O 4 composition represent less than 3 occurrences in a box of 100 atoms.
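The back-of-envelope count behind the first of these numbers can be checked as follows (a sketch assuming CaSiO 3 as the silicate formula unit, not a simulation recipe):

```python
# 5 mol% Al2O3 in a calcium silicate (taken here as CaSiO3), with 5% of the
# aluminium in AlO5 coordination: how many AlO5 units per 1000 atoms?
x_al2o3, x_casio3 = 0.05, 0.95
atoms_per_mix = x_al2o3 * 5 + x_casio3 * 5   # both formula units contain 5 atoms
al_per_mix = x_al2o3 * 2                     # 2 Al per Al2O3
alo5_per_1000_atoms = 1000 * 0.05 * al_per_mix / atoms_per_mix
print(f"~{alo5_per_1000_atoms:.0f} AlO5 unit(s) per 1000 atoms")  # ~1
```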
Furthermore, Charpentier and coworkers [26] recently showed that a proper rendering of NMR parameters from all-electron ab initio computations in glasses requires a combination of classical and ab initio MD simulations. Going further, we also emphasize that an important part of what we qualify with the general term of disorder can be described in terms of a distribution of poly-atomic molecular motifs extending over a much larger length scale than the usual concept of coordination.
Figure Captions:

Figure 1: Summary of NMR experiments on a 60%PbO-40%P 2 O 5 glass evidencing polyatomic molecular motifs with: (a) the 1D spectrum, (b) the J-resolved spectrum showing a doublet for Q 1 and a triplet for Q 2 [7], (c) the INADEQUATE experiment evidencing pairs of phosphates (Q-Q) [6], and (d) the 3Quantum spectrum evidencing triplets of phosphates (Q-Q-Q) [8].

Figure 2: Quantitative interpretation of 29 Si and 31 P 1D spectra allowing the measurement of disproportionation constants for (a) lead silicate [3] and (b) lead phosphate glasses [4].

Figure 3: (a) High temperature aerodynamic levitation NMR setup and a characteristic one-shot spectrum, (b) temperature dependence of the chemical shift and (c) viscosity and NMR correlation times [adapted from ref. 17].

Figure 4: (a) Slope of the thermal dependence of the average chemical shift at high temperature versus composition for the CaO-Al 2 O 3 join. (b) { 17 O} 27 Al HMQC experiment on a CaO-Al 2 O 3 glass at 750 MHz showing a clear signature of µ 3 tricluster oxygens [adapted from ref. 9].
Acknowledgements
We acknowledge financial support from CNRS UPR4212, FR2950, Région Centre, MIIAT-BP and ANR contract RMN-HRHC. |
01694219 | en | [
"spi.nano"
] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01694219/file/FULLTEXT01.pdf | Hatim Alnoor
email: [email protected]
Adrien Savoyant
Xianjie Liu
Galia Pozina
Magnus Willander
Omer Nur
An effective low-temperature solution synthesis of Co-doped [0001]-oriented ZnO nanorods
Keywords: Low-temperature aqueous chemical synthesis, ZnO NRs, Co-doping, EPR, intrinsic point defects
We demonstrate an efficient route to synthesize vertically aligned pure zinc oxide (ZnO) and Co-doped ZnO nanorods (NRs) using the low-temperature aqueous chemical synthesis (90 ºC). Two different mixing methods of the synthesis solutions were investigated for the Co-doped samples. The synthesized samples were compared to pure ZnO NRs regarding the Co incorporation and crystal quality. Electron paramagnetic resonance (EPR) measurements confirmed the substitution of Co 2+ inside the ZnO NRs, giving a highly anisotropic magnetic Co 2+ signal. The substitution of Zn 2+ by Co 2+ was observed to be combined with a drastic reduction in the core-defect (CD) signal (g ~ 1.956) which is seen in pure ZnO NRs. As revealed by cathodoluminescence (CL), the incorporation of Co causes a slight red-shift of the UV peak position combined with an enhancement in the intensity of the defect-related yellow-orange emission compared to pure ZnO NRs. Furthermore, the EPR and CL measurements suggest a possible model of the defect configuration in the samples. It is proposed that the as-synthesized pure ZnO NRs likely contain Zn interstitials (Zni + ) as CDs and oxygen vacancies (VO) or oxygen interstitials (Oi) as surface defects. As a result, Co was found to likely occupy the Zni + sites, leading to the observed CD reduction and hence enhancing the crystal quality. These results open the possibility of synthesis of high-crystal-quality ZnO NR-based diluted magnetic semiconductors (DMSs) using the low-temperature aqueous chemical method.
1-Introduction
Zinc oxide (ZnO) is a direct wide band gap (3.4 eV at room temperature) semiconductor with a relatively large exciton binding energy of 60 meV and possesses a significant luminescence covering the whole visible region. [1][2][3][4] Moreover, ZnO can be easily synthesized in a diversity of one-dimensional (1D) nanostructure morphologies on any substrate, crystalline or amorphous. [1][2][3][4][5][6][7] In particular, 1D ZnO nanostructures such as nanowires (NWs) and nanorods (NRs) have recently attracted considerable research interest due to their potential for the development of many optoelectronic devices, such as light-emitting diodes (LEDs), ultraviolet (UV) photodetectors and solar cells. [2][3][4][8][9][10] Also, ZnO NR-based diluted magnetic semiconductors (DMSs), where a low concentration of magnetic elements (such as manganese (Mn) and cobalt (Co)) is diluted in the ZnO crystal lattice, show great promise for the development of spintronic and magneto-optical devices. [11][12][13][14] Among the different synthesis techniques utilized for ZnO NRs, the low-temperature solution-based methods are promising due to many advantages: low cost, the possibility of large-scale production, and the fact that the properties of the final product can be varied by tuning the synthesis parameters. [5][6][7] However, synthesizing ZnO NRs with optimized morphology, orientation, and electronic and optical properties by low-temperature solution-based methods remains a challenge. The potential of ZnO NRs in all the above-mentioned applications would require the synthesis of high crystal quality ZnO NRs with controlled optical and electronic properties. [2][3][4]15 It is known that the optical and electronic properties of ZnO NRs are mostly affected by the presence of native (intrinsic) and impurity-related (extrinsic) defects. [1][2][3][4] Therefore, understanding the nature of these intrinsic and extrinsic defects and their spatial distribution is critical for optimizing the optical and electronic properties of ZnO NRs. [1][2][3][4][16][17][18] However, identifying the origin of such defects is a complex matter, especially in nanostructures, where the information on anisotropy is usually lost due to the lack of coherent orientation. Recently, we have shown that by optimizing synthesis parameters such as the stirring times and the seed layer properties, the concentration of intrinsic point defects (i.e. vacancies and interstitial defects) along the NRs and at the interface between the NRs and the substrate can be significantly tuned. 8,19,20 Thus, the ability to tune such point defects along the NRs could further enable the incorporation of Co ions, where these ions could occupy such vacancies through substitutional or interstitial doping; e.g., a Co ion can replace a Zn atom or be incorporated into interstitial sites in the lattice. 21 Here, by developing these synthesis methods, we obtained well-oriented ZnO NRs, and by studying them at low temperature, we can access the magnetic anisotropy of such defects. Furthermore, by incorporating a relatively low amount of diluted Co into ZnO NRs, the crystal structure of the as-synthesized well-oriented ZnO NRs can be significantly improved. The well-oriented pure ZnO and Co-doped ZnO NRs were synthesized by the low-temperature aqueous chemical synthesis (90 ºC).
The structural, optical, electronic, and magnetic properties of the as-synthesized well-oriented NRs have been systematically investigated by means of field-emission scanning electron microscopy (SEM), X-ray powder diffraction (XRD), electron paramagnetic resonance (EPR), cathodoluminescence (CL) and X-ray photoelectron spectroscopy (XPS).
2-Experimental
The pure ZnO and Co-doped ZnO NRs were synthesized by the low-temperature aqueous chemical synthesis at 90 ºC on sapphire substrates. For pure ZnO NRs, a 0.075 M synthesis solution was prepared by dissolving hexamethylenetetramine (HMTA) and zinc nitrate hexahydrate in deionized (DI) water and then stirring for three hours at room temperature (this sample is later denoted M0). After that, sapphire substrates precoated with a ZnO seed layer 8,19,20 were submerged horizontally inside the above-mixed solutions and kept in a preheated oven at 90 °C for 5 hours.
Afterward, the samples were rinsed with DI water to remove any residuals and finally dried using blowing nitrogen. The synthesis process of the pure ZnO NRs is described in more detail in Refs. 8,19,20 The Co-doped ZnO NRs were grown under similar conditions, with two different approaches used to prepare the synthesis solution. The first synthesis solution was prepared by mixing 0.075 M concentrations of HMTA and zinc nitrate and stirring for 15 hours. Then a diluted solution of cobalt(II) nitrate hexahydrate with an atomic concentration of 7% was added dropwise to the above solution and stirred for an extra 3 hours (later denoted as M1). The second synthesis solution was prepared by mixing a 7% diluted solution of cobalt(II) nitrate hexahydrate with 0.075 M HMTA and stirring for 15 hours; then a 0.075 M solution of zinc nitrate hexahydrate was added dropwise to the above-mentioned solution and stirred for an extra 3 hours (later denoted as M2).
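For orientation, the sketch below converts the stated concentrations into reagent masses; the 0.5 L volume is an assumption for illustration (the text does not state the solution volume), and the Co concentration is read here as 7 at.% relative to the Zn concentration, which is an interpretation.

```python
# Reagent masses for the synthesis solutions described above (illustrative).
volume_L = 0.5                       # assumed solution volume
molar_mass = {                       # g/mol, standard values
    "Zn(NO3)2.6H2O": 297.49,
    "HMTA (C6H12N4)": 140.19,
    "Co(NO3)2.6H2O": 291.03,
}
conc = {                             # mol/L
    "Zn(NO3)2.6H2O": 0.075,
    "HMTA (C6H12N4)": 0.075,
    "Co(NO3)2.6H2O": 0.07 * 0.075,   # 7 at.% of the Zn concentration (assumed reading)
}
for reagent, M in molar_mass.items():
    print(f"{reagent}: {conc[reagent] * M * volume_L:.2f} g per {volume_L} L")
```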
The morphology of the as-synthesized pure ZnO and Co-doped ZnO NRs was characterized using field-emission scanning electron microscopy (FE-SEM, Gemini LEO 1550). The crystalline and electronic structure were investigated by XRD using a Philips PW1729 diffractometer equipped with Cu-Kα radiation (λ = 1.5418 Å) and EPR, respectively. The EPR measurements were performed using a conventional Bruker ELEXSYS continuous wave spectrometer operating at X-band (ν = 9.38 GHz) equipped with a standard TE102 mode cavity. The angle between the static magnetic field and the NRs axis, denoted by θ, was monitored by a manual goniometer. The optical properties were examined by cathodoluminescence (CL) using Gatan MonoCL4 system combined with Gemini LEO 1550 FE-SEM. The CL measurements were performed on aggregated nanorods using an acceleration voltage of 5 kV. The chemical composition was analyzed by XPS measurements recorded by Scienta ESCA-200 spectrometer using monochromator Al Kα X-ray source (1486.6 eV). All the measurements were carried out at room temperature (RT) except the EPR measurements which were performed at 6 K.
3-Results and discussion
Fig. 1 shows the top-view FE-SEM images of the as-synthesized pure ZnO (M0) and Co-doped ZnO NRs (M1 and M2). The SEM images reveal that all the as-synthesized NRs were vertically aligned with a hexagonal shape. The average diameter of the NRs was found to be ~160, ~400 and ~200 nm for M0, M1, and M2, respectively. The significant (M1) and slight (M2) increases in the average NR diameter compared to M0 are likely due to Co doping. 22,23
Fig. 1: SEM images of pure ZnO (M0) and Co-doped ZnO NRs as-synthesized using approaches M1 and M2, respectively.
The structural quality of the as-synthesized pure ZnO and Co-doped ZnO NRs has been confirmed by the XRD measurements, as illustrated in Fig. 2. The XRD patterns showed that all the as-synthesized samples have a wurtzite structure and possess a good crystal quality with a preferred growth orientation along the c-axis, as demonstrated by the intensity of the (002) peak. 15,[23][24][25] Also, it should be noted that no secondary phase related to Co was observed in the XRD patterns of any of the three NR samples. As shown in the inset of Fig. 2, the position of the (002) peak is slightly shifted toward lower 2θ angle in M1 and toward higher 2θ angle in M2 as compared to M0. A peak position shift toward lower or higher 2θ angle is reported to be a confirmation of the successful incorporation of Co into the ZnO crystal lattice. 15,23,26 The peak position shift is also reported to be due to the variation of oxygen vacancies (Vo) and zinc interstitials (Zni) caused by Co doping. 27,28 In this study, the Co concentration in the synthesis solution is the same (7 %) for both M1 and M2. Thus, the observed shifts in the peak position could be attributed either to Co incorporation or to the variation of the defect concentration, e.g. vacancies and interstitials induced by Co doping. These results show that the way of preparing the synthesis solution has a significant influence on the Co incorporation in the synthesized ZnO NRs.
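The link between a (002) peak shift and the c lattice parameter follows directly from Bragg's law; the sketch below uses the Cu-Kα wavelength given above and an illustrative 2θ value (not a fit to the measured patterns).

```python
import math

wavelength = 1.5418   # angstrom, Cu K-alpha (as given in the experimental section)
two_theta = 34.42     # degrees, illustrative value near the wurtzite ZnO (002) peak
d_002 = wavelength / (2 * math.sin(math.radians(two_theta / 2)))  # Bragg's law
c = 2 * d_002         # for the (002) reflection, c = 2 * d(002)
print(f"d(002) = {d_002:.4f} A  ->  c = {c:.4f} A")  # a 2-theta shift maps to a c change
```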
Further, to confirm the crystal quality and the incorporation of Co into the ZnO crystal lattice as suggested by the XRD results, EPR spectra were recorded at 6 K, and the results are shown in Fig. 3 (a)-(b). The EPR spectrum of pure ZnO NRs (M0) is characterized by the well-known defect signal from ZnO apparent at 350 mT (g ~1.956) [29][30][31][32][33][34] as shown in Fig. 3 (a), commonly attributed to core-defects (CDs) arising from ZnO nanostructures rather than shell defects. 32,33 However, the identification of the exact nature of this CD (1.956) signal is controversial, 35 and to date, no experiment has given a concrete answer. Previously, many defect signals close to this value (1.956) have been reported, and Zn interstitials (Zni + ) and the so-called D * center were proposed. 31,34 Indeed, the angle-dependent spectra of the CD signal shown in Fig. 3(a) display a slight easy-axis magnetic anisotropy and are composed of two overlapped lines. This observed anisotropy is compatible with a Zni + defect (easy axis) but not with the D* defect (easy plane), so that Zni + appears to be the most probable defect. 31,34 In our previous study, these CDs were characterized by three lines, which supports our hypothesis that these lines are likely variations of the same defect, i.e. the same defect with slightly different parameters. 36 The successful substitution of Co 2+ was confirmed by the observed Co-related signal characterized by an eight-line structure at g ~ 2.239 (θ = 0º) and a broad asymmetric signal at g ~ 4.517 (θ = 90º), 21,36,37 as shown in Fig. 3 (b). The observed magnetic anisotropy of the Co 2+ signal is a clear indication that the as-synthesized NRs are single crystalline, well-aligned and that Co is highly diluted along the NRs. 36 Interestingly, the substitution of Co 2+ caused a drastic reduction of the CD signal (g ~1.956), as indicated by the dashed line (Fig. 3(b)), compared to that in the pure ZnO NRs (M0) (as shown in Fig. 3 (a)), as previously observed in similar samples. 36 In fact, this could suggest that a certain amount of the incorporated Co is involved in the CD neutralization. This neutralization could be due to substitutional doping, where a Zn atom is replaced by a Co atom (Cozn), or to interstitial doping, where a Co atom is incorporated into interstitial sites in the lattice (Coi). 21 As shown in Fig. 3(b), the intensity of the Co 2+ signal of M2 at θ = 90º and θ = 0º is significantly higher than that of M1. Moreover, the line width of the Co 2+ signal at θ = 0º for M2 is found to be slightly smaller (4 G) than that of M1 (5 G). As the Co concentration in the synthesis solution is the same (7 %) for both M1 and M2, and assuming uniform doping and the same coverage of the NRs, these results clearly show that the way of preparing the synthesis solution has a significant influence on the Co incorporation in the synthesized ZnO NRs, in agreement with the XRD results shown in Fig. 2. It should be noted that the hyperfine constant (the spacing between two hyperfine lines) is ~15.3 G in both samples, which is the same value as for bulk Co-doped ZnO. 36 Thus, we can deduce that the observed EPR signal comes from substitutional Co 2+ inside the NRs, and not from ions on the surface. However, this observation does not exclude the presence of Co on the surface of the as-synthesized Co-doped ZnO NRs.
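The quoted g values translate into resonance fields through the usual EPR condition hν = gμ B B; a small sketch using the X-band frequency from the experimental section:

```python
h = 6.62607e-34      # Planck constant, J*s
mu_B = 9.27401e-24   # Bohr magneton, J/T
nu = 9.38e9          # microwave frequency in Hz (X band, as stated above)

def resonance_field(g):
    """Resonance field in tesla for a given g factor: B = h*nu / (g*mu_B)."""
    return h * nu / (g * mu_B)

for g in (1.956, 2.239, 4.517):   # g values quoted in the text
    print(f"g = {g}: B ~ {1e3 * resonance_field(g):.0f} mT")
# g = 1.956 falls at ~343 mT, i.e. in the ~350 mT region of the CD signal.
```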
In the solution-based synthesis method, it is possible that Co 2+ can be incorporated in the core of ZnO nanostructures or adsorbed at their surface. 21 Furthermore, in order to get more information on the defects in the as-synthesized pure ZnO and Co-doped ZnO NRs, CL spectra were recorded at room temperature, and the results are shown in Fig. 4. The emission spectra of all samples were dominated by a UV emission peak centered at ~382 nm (3.24 eV) due to near-band-edge (NBE) emission and a strong broad yellow-orange emission centered at ~610 nm (2.03 eV) associated with deep-level defect-related emission in ZnO. [1][2][3][4][38][39][40] Apparently, the CL spectra of Co-doped NRs exhibited a small red-shift of the UV peak position from 382 nm to 384 nm (as shown in the inset of Fig. 4) as compared to pure ZnO NRs, which is likely due to the change in the energy of the band structure as a result of doping. 22,41 It is important to note that the CL defect-related yellow-orange emission intensity decreases from M1 to M2 (Fig. 4) while the Co EPR signal increases from M1 to M2 (Fig. 3 (b)). This observation suggests that the way of preparing the synthesis solution has a significant influence on the Co incorporation and defect formation in the as-synthesized ZnO NRs, in agreement with the XRD results shown in Fig. 2.
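The wavelength-to-energy conversions quoted above follow from E = hc/λ; a one-line check, consistent with the quoted values within rounding:

```python
def nm_to_ev(wavelength_nm):
    """Photon energy in eV from wavelength in nm (E = hc/lambda ~ 1239.84/lambda)."""
    return 1239.84 / wavelength_nm

for nm in (382, 384, 610):   # peak positions quoted in the text
    print(f"{nm} nm -> {nm_to_ev(nm):.2f} eV")   # ~3.25, 3.23 and 2.03 eV
```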
The physical origin of the intrinsic-defect-related yellow-orange emission is controversial, and it has been proposed to be associated with Vo, Oi and Zni. 23,[38][39][40] Recently, it was proposed that the defect-related orange emission likely originates from the Zni in the core of ZnO NRs. 39 In this study, we believe that the defect-related yellow-orange emission likely originates from the Zni in the core of the ZnO NRs or from the Vo and Oi on the surface of the ZnO NRs. As a consequence, the intensity of the defect-related yellow-orange emission is significantly enhanced by the Co doping (Fig. 4), which is probably due to an increase in Vo and Oi in the NRs or to Co-related defects. 15,23,41 Moreover, this suggests that the above-observed red-shift of the UV peak could be attributed to the variation of the Zni + concentration in the Co-doped samples (M1 and M2) compared with the pure ZnO NRs (M0). In fact, these results indicate that the bulk quality of the ZnO NRs is improved by the substitution of Co, while the doping has an adverse effect on the surface-defect-related emission, in agreement with previous results. 15,41
Fig. 4: CL spectra of the as-synthesized pure ZnO and Co-doped ZnO NRs synthesized using different synthesis preparation approaches as indicated. The inset shows the red-shift in the UV peak. For clarity, the spectra are normalized to the near band edge intensity.
In view of the EPR and the CL results, a defect distribution model for the ZnO NRs is shown in Fig. 5, 32 which proposes that the Co incorporated during the synthesis process could probably occupy Zni + sites through substitutional or interstitial doping and, subsequently, enhance the crystal quality. The other possibility is that a substitutional Co 2+ very close to a Zni + interstitial may form a non-magnetic complex, which is then no longer EPR detectable. Also, the incorporation of Co was found to lead to an increased concentration of surface defects such as VO and Oi. Further experimental study combined with detailed theoretical calculations is necessary to fully understand the observed phenomena.
To elaborate further on the surface-related defect concentration, XPS spectra of all samples have been investigated. Figure 6 (a) shows the Zn 2p core level spectra of all samples, which are composed of two peaks centered at ~1022.2 and 1045.0 eV corresponding to the Zn 2p3/2 and Zn 2p1/2 binding energy lines, respectively, with a spin-orbit splitting of 23.1 eV, suggesting that Zn is present as Zn 2+ . 22 A Co signal in the ZnO NRs was not detected by XPS; this could be attributed to the surface sensitivity of the XPS technique, with the Co 2+ located in the inner core of the ZnO NRs, as indicated in Fig. 5, and also to the low Co concentration, as suggested by the EPR measurements in Fig. 3 (b). The O 1s core level peak for all samples exhibits an asymmetric profile, which can be decomposed into three Gaussian peaks, denoted OI, OII, and OIII, respectively, as shown in Fig. 6 (b). The OI peak at low binding energy (~530.9 eV) is attributed to the Zn-O bond within the ZnO crystal lattice.
The OII peak centered at ~532.2 eV is commonly assigned to oxygen deficiency in the ZnO crystal lattice. 16,42 Finally, the OIII peak centered at ~533.1 eV is related to oxygen adsorbed on the ZnO surface, e.g. H2O, O2. 16,42
Fig. 5: Schematic illustration of the cross-sectional view of the as-synthesized pure ZnO and Co-doped ZnO NRs containing Zni + as core defects and oxygen vacancies/interstitials as surface defects, respectively.
The relative concentration of oxygen vacancies is estimated from the OII/OI intensity ratios using the integrated XPS peak areas and the element sensitivities of O and Zn. 42 The OII/OI ratios were found to be 0.54, 0.52 and 0.49 for M0, M1, and M2, respectively, suggesting that M2 has a lower concentration of oxygen vacancies than M1 and M0. However, there is no obvious relationship between the samples' defect compositions estimated from the CL and XPS measurements. For instance, M0 shows a lower CL defect emission intensity and a higher OII/OI ratio.
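A minimal sketch of the deconvolution procedure described above, assuming three Gaussian components with the peak centres quoted in the text; `energy` and `counts` stand for the measured O 1s binding-energy axis and intensities, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, area, center, sigma):
    # Area-normalized Gaussian component.
    return area * np.exp(-(x - center) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def o1s_model(x, a1, a2, a3, s1, s2, s3):
    # Three components with centres fixed at the values quoted above (eV).
    return (gaussian(x, a1, 530.9, s1)
            + gaussian(x, a2, 532.2, s2)
            + gaussian(x, a3, 533.1, s3))

def oii_oi_ratio(energy, counts):
    """Fit the O 1s region and return the O_II/O_I integrated-area ratio."""
    p0 = [counts.max(), counts.max() / 2, counts.max() / 4, 0.8, 0.8, 0.8]
    popt, _ = curve_fit(o1s_model, energy, counts, p0=p0)
    return popt[1] / popt[0]
```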
4-Conclusions
The optical properties of ZnO NRs are commonly dominated by the presence of native intrinsic point defects, and identifying these defects is a difficult matter, especially in nanostructures, where the information on anisotropy is usually lost due to the lack of coherent orientation. Here, by studying well-oriented ZnO NRs at low temperature, we were able to access the magnetic anisotropy of these defects. Furthermore, by incorporating a relatively low amount of diluted Co inside the ZnO NRs, the crystal structure of the as-synthesized well-oriented ZnO NRs is significantly improved. Pure ZnO and Co-doped ZnO NRs were synthesized by the low-temperature aqueous chemical method, where the crystal structure, orientation, and incorporation of the Co ions are tuned by the preparation procedure of the synthesis solution. The SEM and XRD measurements showed that the as-synthesized pure ZnO and Co-doped ZnO NRs are vertically aligned along the c-axis and have a wurtzite crystal structure of high quality, as demonstrated by the intensity of the (002) diffraction peak. Moreover, the (002) peak position was observed to be shifted to lower or higher 2θ angle depending on the synthesis solution mixing procedure used. This is probably attributed either to Co incorporation or to the variation of the defect concentration in the samples, e.g. vacancies and interstitials induced by Co doping. EPR measurements have confirmed the substitution of Co 2+ inside the ZnO NRs, giving a highly anisotropic magnetic Co 2+ signal characterized by eight lines, indicating that the as-synthesized NRs are single crystalline, well-aligned, and that the Co is homogeneously distributed along the NRs. Also, the substitution of Co 2+ was observed to be accompanied by a drastic reduction in the CD signal (g ~ 1.956) found in pure ZnO NRs. As revealed by CL, the incorporation of Co causes a red shift in the UV peak position with an observed enhancement in the intensity of the defect-related emission as compared to pure ZnO NRs. In view of the different results from these complementary measurements, we propose that the as-synthesized pure ZnO NRs likely contain Zn interstitials (Zni + ) as CDs and oxygen vacancies (VO) or interstitials (Oi) as surface defects. These results open the possibility of synthesizing ZnO-based DMSs of high crystal quality using the low-temperature aqueous chemical method.
Fig. 2: XRD patterns of the as-synthesized pure ZnO (M0) and Co-doped ZnO NRs (M1 and M2). The inset shows the normalized XRD data for the (002) peaks, indicating peak shifts.
Fig. 3: (a) EPR spectra showing the anisotropy of the CD signal in the pure ZnO sample (M0). (b) EPR spectra of Co-doped ZnO NRs (M1 and M2) for parallel (θ = 0º) and perpendicular (θ = 90º) orientations of the magnetic field, recorded at T = 5 K. The upper axis gives the corresponding g factor values.
Fig. 6: XPS core level spectra of the (a) Zn 2p peak and (b) O 1s peak of the as-synthesized pure and Co-doped ZnO NRs as indicated.
Acknowledgement:
This work was supported by the NATO project Science for Peace (SfP) 984735, Novel magnetic nanostructures. |
01767018 | en | [
"sdv.bid.spt",
"sdu.stu.pg"
] | 2024/03/05 22:32:15 | 2016 | https://hal.science/hal-01767018/file/Croitor%26Cojocaru2016_author.pdf | Roman Croitor
email: [email protected]
Ion Cojocaru
An Antlered Skull of a Subfossil Red Deer, Cervus elaphus L., 1758 (Mammalia: Cervidae), from Eastern Romania
Keywords: Carpathian red deer, Cervus elaphus maral, morphology, systematics, taxonomy, Romania
A subfossil antlered braincase of red deer discovered in the Holocene gravel deposits of Eastern Romania is described. The morphology of antlers suggests that the studied specimen is related to the Caucasian and Caspian stags and belongs to the oriental subspecies Cervus elaphus maral OGILBY, 1840. An overview and discussion of taxonomical issues regarding modern red deer from South-eastern Europe and some fossil forms of the region are proposed. The so-called Pannonian red deer (Cervus elaphus pannoniensis BANWELL, 1997) is considered a junior synonym of Cervus elaphus maral OGILBY, 1840. Cervus elaphus aretinus AZZAROLI, 1961 from the last interglacial stage of Italy seems to be very close to Cervus elaphus maral.
Introduction
The subspecies status and systematic position of the red deer from the Carpathian Mts. is still a matter of discussion. The comparatively larger Carpathian red deer has massive antlers with less developed crown tines as compared to the red deer subspecies from Western Europe. It was assigned to two subspecies, C. vulgaris montanus BOTEZAT, 1903 (the "mountain common deer") and C. vulgaris campestris BOTEZAT, 1903 (the "lowland common deer"). [START_REF] Botezat E | Gestaltung und Klassifikation der Geweihe des Edelhirsches, nebst einem Anhange über die Stärke der Karpathenhirsche und die zwei Rassen derselben[END_REF] proposed for the red deer species the name Cervus vulgaris, since, in his opinion, the Linnaean Greek-Latin name Cervus elaphus is tautological. [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] and [START_REF] Grubb P | Valid and invalid nomenclature of living and fossil deer, Cervidae[END_REF] considered the name C. vulgaris as a junior synonym of C. elaphus. LYDEKKER (1898) included the Eastern Carpathians in the geographical range of the Caspian red deer Cervus elaphus maral OGILBY. Nonetheless, in his later publication, [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] generally accepted BOTEZAT's viewpoint on the taxonomical distinctiveness between the two Carpathian forms of red deer. However, LYDEKKER (1915) indicated that C. vulgaris campestris is preoccupied since it has been used as Cervus campestris CUVIER, 1817 (a junior synonym of Odocoileus virginianus). Therefore, [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] considered the red deer from the typical locality Marmoros and Bukovina districts of the Hungarian and Galician Carpathians as Cervus elaphus ssp. According to [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF], this deer may be to some degree intermediate between Cervus elaphus germanicus from Central Europe and Cervus elaphus maral from Northern Iran and the Caucasus. With some doubts, [START_REF] Lydekker | Artiodactyla, Families Cervidae (Deer), Tragulidae (Chevrotains), Camelidae (Camels and Llamas), Suidae (Pigs and Peccaries), and Hippopotamidae (Hippopotamuses)[END_REF] included C. vulgaris montanus in the synonymy of Cervus elaphus maral and suggested that both Carpathian red deer forms described by BOTEZAT may represent recently immigrated dwarfed forms of C. elaphus maral. [START_REF] Heptner | Deer of the USSR (Systematics and Zoogeography). -Transactions on Study of Fauna and Flora of the USSR[END_REF] also rejected BOTEZAT's subspecies name campestris as preoccupied; however, they recognized the validity of Cervus elaphus montanus BOTEZAT with type locality in Bukovina (Romania) and a vast area of distribution that included the entire Carpathian-Balkan region.
This subspecies is characterised by underdeveloped neck mane, the missing black stripe bordering the rump patch (or caudal disk), generally grayish colour of pelage, poorly developed distal crown in antlers, and comparatively larger body size [START_REF] Heptner | Deer of the USSR (Systematics and Zoogeography). -Transactions on Study of Fauna and Flora of the USSR[END_REF]. [START_REF] Flerov | Musk deer and deer. The Fauna of USSR[END_REF] and [START_REF] Sokolov | Hoofed animals (Orders Perissodactyla and Artiodactyla). Fauna of the USSR[END_REF] placed the Carpathian red deer in the nominotypical subspecies Cervus elaphus elaphus Linnaeus since the diagnostic characters of antler morphology, pelage colour as well as body size used for the description of the Carpathian red deer are not constant characters and, therefore, are not suitable for subspecies designation. According to [START_REF] Flerov | Musk deer and deer. The Fauna of USSR[END_REF], the morphological peculiarities of the Carpathian and Crimean red deer are insignificant and do not permit to place those populations in any separate subspecies. ALMAŞAN et al. (1977) referred the Carpathian red deer to the Central European subspecies Cervus elaphus hippelaphus ERXLEBEN, 1777. According to [START_REF] Danilkin | Deer (Cervidae). (Series: Mammals of Russia and adjacent regions)[END_REF], the "Carpathian race" montanus is a transitional form between the Western European C. elaphus elaphus and the Caucasian C. elaphus maral. [START_REF] Tatarinov | Mammals of Western Regions of Ukraine[END_REF] applied a new subspecies name Cervus elaphus carpathicus for the red deer from the Ukrainian part of the Carpathian Mts. [START_REF] Heptner | Artiodactyla and Perissodactyla[END_REF] regarded TATARINOV's subspecies as a junior synonym of campestris and montanus and considered it as a nomen nudum. [START_REF] Grubb P | Valid and invalid nomenclature of living and fossil deer, Cervidae[END_REF] considered C. vulgaris campestris BOTEZAT and C. vulgaris montanus BOTEZAT as homonyms of Cervus campestris CUVIER, 1817 and Cervus montanus CATON, 1881, respectively, and, therefore, both names were suggested to be invalid. [START_REF] Banwell | The Pannonians -Cervus elaphus pannoniensis -a race apart[END_REF] proposed another new subspecies name, Cervus elaphus pannoniensis, for red deer from Hungary, Romania and the Balkan Peninsula. [START_REF] Banwell | The Pannonians -Cervus elaphus pannoniensis -a race apart[END_REF][START_REF] Banwell | Identification of the Pannonian, or Danubian, Red Deer[END_REF] described a set of specific morphological characters that distinguish the so-called "maraloid" Pannonian red deer from the Western European red deer. However, BANWELL did not provide the diagnostic characters distinguishing Cervus elaphus pannoniensis from Cervus elaphus maral. Nonetheless, BANWELL 's subspecies C. elaphus pannoniensis was accepted by several authors (GROVES & GRUBB 2011; MARKOV 2014) and even its taxonomic status was raised to the species level [START_REF] Groves C | Ungulate Taxonomy[END_REF]. [START_REF] Zachos | Species inflation and taxonomic artefacts -A critical comment on recent trends in mammalian classification[END_REF] regard the full-species status for the Pannonian red deer as an objectionable "taxonomic inflation". [START_REF] Geist | Deer of the World: Their Evolution, Behaviour and Ecology[END_REF], in his comprehensive publication on evolution, biology and systematics of red deer and wapiti (C. 
elaphus canadensis ERXLEBEN, 1777, or Cervus canadensis according to the latest genetic studies, see e.g. [START_REF] Polziehn | A Phylogenetic Comparison of Red Deer and Wapiti Using Mitochondrial DNA[END_REF], did not explicitly indicate the systematic position of the Carpathian red deer. However, he supported BOTEZAT's idea on the presence of two forms of red deer in the Carpathian region. According to GEIST (1998), European west (C. elaphus elaphus) and east (C. elaphus maral) types of red deer meet in the Balkans. Within this context, GEIST (1998) also discussed the so-called "cave stag", Strongyloceros spelaeus OWEN, 1846 from Western Europe, a Glacial Age wapiti that rivalled the size of the giant deer Megaloceros giganteus BLUMENBACH, 1799. GEIST (1998), taking into consideration PHILIPOWICZ's (1961) description of the Carpathian red deer, presumed that the largest European red deer […]. [The subspecies name hippelaphus should be attributed to ERXLEBEN (1777), who first applied this name for the red deer from Germany and Ardennes and gave its scientific description supplemented with synonymy and detailed bibliographic references. Later, KERR (1792) applied the species and subspecies name Cervus elaphus hippelaphus ("maned stag") with a reference to ERXLEBEN's (1777) work.]
The recently published results on genetic analysis of red deer populations from Western Eurasia bring new views on systematic position and taxonomical status of red deer from the Carpathian region. According to [START_REF] Ludt Ch | Mitochondrial DNA phylogeography of red deer (Cervus elaphus)[END_REF], the analysis of mtDNA cytochrome b sequence could not distinguish the red deer from the Balkan-Carpathian region from the red deer forms of Central and Western Europe. However, the study of [START_REF] Ludt Ch | Mitochondrial DNA phylogeography of red deer (Cervus elaphus)[END_REF] confirmed the subspecies status of C. elaphus barbarus from North Africa, C. elaphus maral from the Caspian Region, and C. elaphus bactrianus and C. elaphus yarkandensis from Central Asia. All the mentioned subspecies and forms of red deer are included in the so-called Western group of red deer. KUZNETZOVA et al. (2007) confirmed that the molecular-genetic analysis of red deer from Eastern Europe did not support the validity of red deer subspecies C. elaphus montanus from the Balkan-Carpathian area and C. elaphus brauneri from Crimea as well as C. elaphus maral from North Caucasus. The genetic integrity of the Carpathian populations of red deer was confirmed through the haplotype distribution, private alleles and genetic distances [START_REF] Feulner | Mitochondrial DNA and microsatellite analyses of the genetic status of the presumed subspecies Cervus elaphus montanus (Carpathian red deer)[END_REF]. Therefore, the complicated ancestral pattern for Carpathian red deer suggested by [START_REF] Geist | Deer of the World: Their Evolution, Behaviour and Ecology[END_REF] was not supported. [START_REF] Skog | Phylogeography of red deer (Cervus elaphus) in Europe[END_REF] and [START_REF] Zachos | Phylogeography, population genetics and conservation of the European red deer Cervus elaphus[END_REF] suggested that the modern Carpathian red deer had originated from the Balkan Late Glacial refugium. [START_REF] Skog | Phylogeography of red deer (Cervus elaphus) in Europe[END_REF] also assumed that the Balkan Late Glacial refugium could extend further to the south-east (Turkey and Middle East). [START_REF] Sommer | Late Quaternary distribution dynamics and phylogeography of the red deer (Cervus elaphus) in Europe[END_REF] regarded Moldova (East Carpathian foothills) as a part of the East European Late Glacial refugium.
However, a certain caution is needed with the results of the genetic analysis. [START_REF] Micu | Ungulates and their management in Romania[END_REF] reported that the Austrian red deer with multi-tine crowns were introduced to Romania in the 19th and early 20th centuries in order to "improve" the quality of antlers of the local red deer race. Therefore, although the level of genetic introgression may be low, the modern populations of Carpathian red deer are not truly natural anymore (ZACHOS & HARTL 2011).
The taxonomic status and systematic position of the Carpathian red deer are complicated further by the fact that the previously published data on the morphology of Cervus elaphus from the Carpathian Region are poor and quite superficial (ALMAŞAN et al. 1977;[START_REF] Saraiman | Prezenţa speciilor Bos primigenius Boj. şi Cervus elaphus L., în terasa de 8-10 m a Siretului. -Analele ştiinţifice ale Universităţii[END_REF]).
In the context of the above-mentioned controversies, the new subfossil material of red deer from the Carpathian Region is of special interest and may elucidate the systematic position of the aboriginal red deer forms. In the present work, we propose a morphological description of the well-preserved antlered braincase from Holocene gravel deposits in Eastern Romania and a discussion of the systematic position of the original red deer from the Eastern Carpathian area.
Material and Methods
The studied specimen represents an antlered braincase with an almost complete left antler and the proximal part of the right antler. The specimen was discovered in a gravel pit located in the area of Răchiteni Village, Iaşi County, north of the town of Roman (Fig. 1). Most likely, the gravel deposits from Răchiteni are of Post-Glacial (Holocene) age (Paul TIBULEAC, personal communication). The cranial measurements are taken according to von den DRIESCH (1976). The antler measurements are taken following [START_REF] Heintz E | Les cervidés villafranchiens de France et d'Espagne. -Mémoires du Muséum[END_REF]. The terminology of antler morphology follows LISTER (1996).
Results
Systematics
Genus Cervus LINNAEUS, 1758
Cervus elaphus LINNAEUS, 1758
Cervus elaphus maral OGILBY, 1840
Description
The antlered skull of red deer from Răchiteni belongs to a mature but not old male individual: its pedicles are rather short and robust (their height is significantly smaller than their diameter; Table 1, Fig. 2), and the bone sutures of the neurocranium are still visible but in some places (the area between the pedicles) are completely obliterated, therefore indicating fully mature age (MYSTKOWSKA 1966). We assume, therefore, that the antlers of the red deer from Răchiteni most probably attained their maximal development.
The cranial measurements of the specimen suggest that the individual from Răchiteni was rather large, exceeding the body size of modern red deer from the Bialowieza Forest and the Caucasus. The greatest breadth of the skull across the orbits in males of Cervus elaphus hippelaphus from the Bialowieza Forest (three individuals) ranges 165-181 mm; the breadth of the occipital condyles ranges 72-76 mm [START_REF] Heptner | Deer of the USSR (Systematics and Zoogeography). -Transactions on Study of Fauna and Flora of the USSR[END_REF]. The analogous measurements of males of Cervus elaphus maral from the Caucasus (nine individuals) range 145-187 mm, and from 67 mm to 80 mm, respectively [START_REF] Heptner | Deer of the USSR (Systematics and Zoogeography). -Transactions on Study of Fauna and Flora of the USSR[END_REF]. The corresponding measurements of the skull from Răchiteni were greater than the measurements of the largest Caucasian stag reported by HEPTNER & ZALKIN (1947) by ca. 1 cm (the greatest breadth across the orbits and the breadth of the occipital condyles were 198.0 mm and 87.8 mm, respectively).
The antlers from Răchiteni were characterized by a comparatively long curved brow (first) tine situated at a short distance from the burr, the absence of the bez (second) tine, and a rather long and strong trez (third) tine, which is, however, shorter than the brow tine (Table 2). The antler beam was somewhat bent toward the posterior at the level of the trez tine insertion and then slightly arched, acquiring an upright orientation in lateral view. The distal portion of the antler formed a crown that consisted of six tines (Fig. 3). Therefore, the total number of antler tines amounted to eight. The crown of the antler was formed by two transversely oriented forks, an additional prong and the apical tine (broken). The antler beam was curved towards the posterior in the area of the distal crown and formed the pointed posterior axis of the crown, recalling the morphological pattern typical of the Caucasian and Caspian red deer C. elaphus maral (LYDEKKER 1915: 127, fig. 23). The antler surface was covered with a characteristic "pearling" specific to the so-called Western group of red deer [START_REF] Geist | Deer of the World: Their Evolution, Behaviour and Ecology[END_REF].
Discussion
According to LYDEKKER (1898), the number of tines of Cervus elaphus maral seldom exceeded eight. GEIST (1998) described the antlers of Carpathian stags as large and heavy but poorly branched as compared to the Western European red deer. LYDEKKER (1898) also reported a frequently poor development of the bez tine in Cervus elaphus maral. According to LYDEKKER (1898), the bez tine was often much shorter than the brow tine or might even be absent in the Carpathian red deer, as can be seen in the case of the specimen from Răchiteni.
The antlers of red deer from Prăjeşti (Siret Valley) described by [START_REF] Saraiman | Prezenţa speciilor Bos primigenius Boj. şi Cervus elaphus L., în terasa de 8-10 m a Siretului. -Analele ştiinţifice ale Universităţii[END_REF] also show a rather weak bez tine, which is less developed than the brow tine and much shorter than the trez tine. The distal crown in the two better preserved larger antlers from Prăjeşti (SARAIMAN & ŢARĂLUNGĂ 1978: Pl. V, figs. 1, 2) is rather weak. It consists of four tines, of which the first crown tine is clearly distinct in the crown, as in the modern Caspian deer (see the description in LYDEKKER 1898). Therefore, the crown shape of the red deer from Prăjeşti resembles the typical morphological condition seen in the Caucasian and Caspian red deer. The remains of red deer from Prăjeşti have been found together with a fragment of a skull of Bos primigenius. [START_REF] Saraiman | Prezenţa speciilor Bos primigenius Boj. şi Cervus elaphus L., în terasa de 8-10 m a Siretului. -Analele ştiinţifice ale Universităţii[END_REF] have suggested a Würmian age for the osteological remains from Prăjeşti. [START_REF] Spassov | The Remains of Wild and Domestic Animals from the Late Chalcolithic Tell Settlement of Hotnitsa (Northern Bulgaria)[END_REF] described from the Late Chalcolithic (4100-4500 BC) of North Bulgaria remains of a very large form of red deer that rivalled the size of the Siberian maral Cervus canadensis. Besides the larger size, the subfossil red deer from Bulgaria was characterised by massive antler beams, a simplified antler crown and a relatively limited number of tines. This brief description generally corresponds to the characteristics of the Caucasian and Caspian red deer Cervus elaphus maral (see LYDEKKER 1898) and suggests its close resemblance to the Romanian subfossil red deer. The larger size of the subfossil red deer, as compared to the modern forms from the same area, is explained by the long tradition of trophy hunting that has likely led to dwarfing of the populations of game species [START_REF] Spassov | The Remains of Wild and Domestic Animals from the Late Chalcolithic Tell Settlement of Hotnitsa (Northern Bulgaria)[END_REF].
Understanding the significance of the observed peculiarities of antler morphology of the fossil and subfossil red deer from Eastern Romania and neighbouring countries, and their resemblance to the modern Caucasian and Caspian red deer, requires a discussion of already described taxa of red deer from South-eastern Europe. A conspicuously weak bez tine may also be noticed in the modern Crimean deer, which is often regarded as a true subspecies: Cervus elaphus brauneri [START_REF] Charlemagne N | Les Mammiferes de l'Oukraine. Court manuel de determination, collection et observation des mammiferes de l'Oukraine[END_REF]. DANILKIN (1999: fig. 122-2) presented an antlered skull of Crimean red deer from the collection of the Zoological Museum of the Moscow State University that shows a very weak bez tine on the left antler and a missing bez tine on the right antler, while its distal crown recalls the morphological condition of Cervus elaphus maral.
The origin of the modern Crimean population is not clear and its taxonomic status is controversial. FLEROV (1952: 162) placed the Crimean stag in an informal group together with the Balkan and Carpathian red deer within the European subspecies Cervus elaphus elaphus, since, in his opinion, the morphological peculiarities of the above-mentioned populations are not taxonomically significant. SOKOLOV (1959: 219) also considered that the separation of the Crimean subspecies brauneri is not justified. Nonetheless, HEPTNER et al. (1988) believed that the Crimean deer represented a taxonomically independent form that occupied an intermediate position between the Carpathian and Caucasian red deer. [START_REF] Danilkin | Deer (Cervidae). (Series: Mammals of Russia and adjacent regions)[END_REF] regarded the Crimean population of red deer as a small-sized "insular" form of the North-Caucasian red deer that was introduced in Crimea in the early 20th century. Finally, VOLOKH (2012) reported multiple and uncontrolled introductions of red deer individuals to Crimea at least from the times of the Crimean Khanate until very recent times. Therefore, the debates on the taxonomic status of the modern Crimean red deer become pointless. [START_REF] Ludt Ch | Mitochondrial DNA phylogeography of red deer (Cervus elaphus)[END_REF] discovered that the modern red deer from Crimea belongs to the haplogroup of Western European red deer. However, this conclusion was based only on two modern specimens from Crimea. Obviously, adequate results of genetic analysis could be obtained only from subfossil and archaeozoological remains. [START_REF] Stankovic | First ancient DNA sequences of the Late Pleistocene red deer (Cervus elaphus) from the Crimea, Ukraine[END_REF] analysed the ancient DNA sequences of Late Pleistocene red deer from Crimea and revealed a very interesting fact: the Crimean Peninsula was colonized several times by various forms of red deer of different zoogeographic origin. The youngest red deer from Crimea (two specimens dated 33,100 ± 400 BP and 42,000 ± 1200 BP) are genetically close to C. elaphus songaricus from China, while the older specimen (>47,000 BP) is close to the Balkan populations of red deer. The origin of the indigenous Holocene Crimean population of red deer still remains unclear. It is necessary to mention that the subfossil red deer from Crimea (early Iron Age, settlement of Uch-Bash, Sevastopol) is characterised by a peculiarly high frequency of the primitive unmolarised lower fourth premolar (P4), which distinguishes this population from Cervus elaphus of Western Europe (CROITOR, 2012).
The recently established new subspecies
Cervus elaphus pannoniensis BANWELL, 1997 from the Middle Danube area also requires a special discussion here. Although [START_REF] Banwell | The Pannonians -Cervus elaphus pannoniensis -a race apart[END_REF][START_REF] Banwell | Identification of the Pannonian, or Danubian, Red Deer[END_REF] had the opportunity to see the red deer from Anatolia and the Balkan Peninsula, the description of his new subspecies was based only on morphological differences between the so-called Pannonian red deer and the Western European ("Atlantic") Cervus elaphus hippelaphus, while a differential diagnosis between Cervus elaphus pannoniensis and Cervus elaphus maral and a comparison of these two subspecies were not provided. The antlered skull from Southern Hungary (displayed in the Chateau Chambord) presented by [START_REF] Banwell | The Pannonians -Cervus elaphus pannoniensis -a race apart[END_REF] should be considered as the type specimen (lectotype according to GROVES & GRUBB 2011). Its extremely large antlers bear additional long tines on the beams and crowns and well-developed brow and bez tines, and apparently represent an exceptional hunting trophy. BANWELL (1998) provides a good and very detailed morphological description of the Pannonian red deer, which is distinguished from the Western European forms, according to the description, by the larger size and elongated Roman-nosed face (obviously, these two characters are correlated), a poorly developed mane, an underdeveloped caudal disk, and large antlers with a poorly developed distal crown. Finally, as BANWELL (1997,1998,2002) reasonably noticed, the Pannonian red deer belongs to the Oriental "maraloid" type. The area of distribution of the new Pannonian subspecies includes, according to BANWELL (1998), Hungary, Romania, the Western Balkan states and Bulgaria, and may extend to Crimea, Eastern Turkey and Iran. One can notice that the assumed area of distribution of BANWELL's subspecies broadly overlaps with the known area of distribution of Cervus elaphus maral. Although [START_REF] Groves C | Ungulate Taxonomy[END_REF] affirm that BANWELL has provided a set of characters (colour, spotting, mane and antlers) distinguishing Cervus elaphus pannoniensis from Cervus elaphus maral, such data are not available. The latter subspecies was ignored in BANWELL's (1994,1997,1998,2002) publications. Therefore, taking into consideration the absence of distinguishing diagnostic characters and the overlapping of the claimed areas of distribution, we regard Cervus elaphus pannoniensis BANWELL as a junior synonym of Cervus elaphus maral OGILBY.
Most probably, the studied fossil and sub-fossil Carpathian red deer are also closely related to Cervus elaphus aretinus AZZAROLI, 1961 from the last interglacial phase of Val di Chiana (Central Italy). The Italian fossil red deer is characterised by the presence of only one basal tine (the brow tine) and a massive distal crown, which, however, still resembles the maral type (Fig. 4). It is necessary to mention here the slight distal palmation in the so-called Pannonian red deer observed by [START_REF] Banwell | Identification of the Pannonian, or Danubian, Red Deer[END_REF][START_REF] Banwell | In defence of the Pannonian Cervus elaphus pannoniensis[END_REF]; in our opinion, this also makes it similar to Cervus elaphus aretinus. One of the authors of the present study [START_REF] Croitor R | Functional morphology of small-sized deer from the early and middle Pleistocene of Italy: implication to the paleolandscape reconstruction[END_REF][START_REF] Croitor R | Early Pleistocene small-sized deer of Europe[END_REF] assumed in his previous publications that Cervus elaphus aretinus (or Cervus aretinus) represents a local archaic specialized form. However, the morphological resemblance between the fossil form Cervus elaphus aretinus and the modern Cervus elaphus maral is, in our opinion, obvious, and one cannot exclude that these two subspecies are even synonymous. Another antlered skull fragment strongly reminiscent of the morphology of Cervus elaphus maral is reported from the Late Pleistocene of Liguria (Le Prince, Italy; BARRAL & SIMONE 1968: 87, fig. 14-1).
Apparently, the origin of the indigenous Carpathian red deer is linked to the Balkan-Anatolian-Caucasian glacial refugium [START_REF] Sommer | Late Quaternary distribution dynamics and phylogeography of the red deer (Cervus elaphus) in Europe[END_REF][START_REF] Skog | Phylogeography of red deer (Cervus elaphus) in Europe[END_REF][START_REF] Meiri | Late-glacial recolonization and phylogeography of European red deer[END_REF]. The Italian Cervus elaphus aretinus could also be very close to the red deer form from the glacial refugium in Eastern Europe. The placement of the postglacial Carpathian red deer in the subspecies Cervus elaphus maral is, in our opinion, supported by the antler morphology reported in the present study. Nonetheless, the history of the red deer from the Carpathian-Balkan area and the adjacent regions requires more complex and extensive interdisciplinary research combining zoological, archaeozoological, palaeontological and genetic data in the future.
Fig. 1. Geographical location of the Răchiteni site, Iaşi County, Romania
Fig. 2. Cervus elaphus maral OGILBY from Răchiteni: A, lateral view of the braincase; B, occipital view of the braincase; C, basal view of the braincase.
Fig. 3. Cervus elaphus maral OGILBY from Răchiteni: A, frontal view; B, lateral view; C, medial view of left antler.
Fig. 4. Cervus elaphus aretinus AZZAROLI from the last interglacial phase of Val di Chiana, Italy (adapted from AZZAROLI 1961): A, frontal view of antlered frontlet; B, lateral view of antler crown.
… European west (C. elaphus elaphus) and east (C. elaphus maral) types of red deer meet in the Balkans. Within this context, GEIST (1998) also discussed the so-called "cave stag", Strongyloceros spelaeus OWEN, 1846 from Western Europe, a Glacial Age wapiti that rivalled the size of the giant deer Megaloceros giganteus BLUMENBACH, 1799. GEIST (1998), taking into consideration PHILIPOWICZ's (1961) description of the Carpathian red deer, presumed that the largest European red deer with somewhat simplified smooth antlers (not pearled as in West European red deer) from the Carpathian alpine meadows is a descendant of the giant Glacial Age wapiti. Later, [START_REF] Geist V | Defining subspecies, invalid taxonomic tools, and the fate of the woodland caribou[END_REF] placed the Carpathian red deer in the Central European subspecies Cervus elaphus hippelaphus KERR, 1792 [sic! The authorship of the subspecies Cervus elaphus hippelaphus belongs to ERXLEBEN (
Table 1. Measurements of the skull of Cervus elaphus maral OGILBY from Răchiteni (measurements are numbered according to von den Driesch 1976: fig. 11).

| Measurement | mm | Notes |
|---|---|---|
| Dorsal view | | |
| (10) Median frontal length | 198.0 | incompletely preserved |
| (11) Lambda – Nasion | 152.0 | incompletely preserved |
| (31) Least frontal breadth | 178.0 | orbits incompletely preserved |
| (32) Greatest breadth across the orbits | 198.0 | orbits incompletely preserved |
| (41) Distal circumference of the burr | 211.0 | in both antlers |
| Distance between antler burrs | 79.8 | |
| Distance between pedicles and nuchal crest | 113.0 | |
| Lateral view | | |
| (38) Basion – highest point of the superior nuchal crest | 97.0 | |
| (40) Proximal circumference of the burr | 190.0 | |
| Basal view | | |
| (6) Basicranial axis | 130.0 | basicranium length |
| (6) Basicranial axis, partial | 91.0 | taken from the visible suture to the posterior edge |
| (26) Greatest breadth of occipital condyles | 87.8 | |
| (28) Greatest breadth of the foramen magnum | 35.4 | |
| (27) Greatest breadth at the bases of the paraoccipital processes | 158.0 | incompletely preserved |
Table 2. Measurements of the antlers of Cervus elaphus maral OGILBY from Răchiteni.
Acknowledgements:
We thank Adrian LISTER, Nikolai SPASSOV and Stefano MATTIOLI for their kindness while providing missing bibliographical sources used in this research. |
00475834 | en | [
"phys.cond.cm-ms"
] | 2024/03/05 22:32:15 | 2010 | https://hal.science/hal-00475834/file/Cu-Co%20CHATAIN.pdf | I Egry
D M Herlach
L Ratke
M Kolbe
D Chatain
S Curiotto
L Battezzati
E Johnson
N Pryds
Interfacial properties of immiscible Co-Cu alloys
Keywords: miscibility gap, interfacial tension, surface tension, levitation, oscillating drop, microgravity
Using electromagnetic levitation under microgravity conditions, the interfacial properties of an Cu 75 Co 25 alloy have been investigated in the liquid phase. This alloy exhibits a metastable liquid miscibility gap and can be prepared and levitated in a configuration consisting of a liquid cobalt-rich core surrounded by a liquid copper-rich shell. Exciting drop oscillations and analysing the frequency spectrum, both surface and (liquid-liquid) interfacial tension can be derived from the observed oscillation frequencies. This paper briefly reviews the theoretical background and reports on a recent experiment carried out on board the TEXUS 44 sounding rocket.
Introduction
Alloys with a metastable miscibility gap are fascinating systems due to the interplay between phase separation and solidification. In contrast to systems with a stable miscibility gap, the demixed microstructure can be frozen in by rapid solidification from the undercooled melt. Electromagnetic levitation offers the possibility to study compound drops consisting of a liquid core encapsulated by a second liquid phase. The oscillation spectrum of such a compound drop contains information about both the surface and the interface tension. The binary monotectic alloy CuCo is an ideal model system for such investigations. Its phase diagram is shown in Figure 1.
In order to study this system, including potential industrial applications, the European Space Agency ESA funded a European project, COOLCOP [1]. In the past years, this team devoted a lot of effort to understanding the behaviour of such systems, starting from phase diagram calculations [2], drop dynamics [3], modelling of interfacial properties [4], and extending to solidification theories and experiments [5]. The investigations laid the ground for microgravity experiments. First results for a Co25Cu75 alloy on board a sounding rocket are reported here.
As the temperature of a homogeneous melt of the alloy is lowered below the binodal temperature, demixing sets in and small droplets of one liquid, L1, in the matrix of the other liquid, L2, are formed. These two immiscible liquids do not consist of the pure components, but have concentrations according to the phase boundary of the miscibility gap; therefore, L1 is rich in component 1, while L2 is rich in component 2. Initially, depending on the nucleation kinetics, a large number of liquid droplets is created. This initial phase is energetically very unfavourable, due to the high interface area created between the different drops. In the next stage, Ostwald ripening sets in [START_REF] Ratke | Immiscible Liquid Metals and Organics[END_REF]. This diffusive mechanism leads to the growth of large drops at the expense of the small ones, thereby coarsening the structure of the dispersion and finally leading to two separated liquid phases. For a levitated drop, without contact to a substrate, the liquid with the lower surface tension - in the present case the copper-rich liquid - encapsulates the liquid (cobalt-rich) core. Terrestrial levitation experiments suffer from the detrimental side effects of the levitation field, in particular from electromagnetic stirring effects which destroy the separated two-phase configuration. Therefore, it was decided to perform such an experiment under microgravity conditions on board a TEXUS sounding rocket. As will be discussed below, the drawback of this carrier is the short available experiment time of about 160 s. Due to a specific preparation of the sample, it was nevertheless possible to conduct three melting cycles during this short time.
Drop Dynamics
Generally speaking, the interfacial tension between two liquids is difficult to measure, and only few data exist [START_REF] Merkwitz | [END_REF]. The oscillating drop technique [8] is a non-contact measurement technique for surface tension measurements of levitated liquid drops. In its original form, it assumes a homogeneous non-viscous drop, free of external forces. In this ideal case, the frequency of surface oscillations is simply related to the surface tension σ₀ by Rayleigh's formula [9]:

$$\omega_0^2 = \frac{8\,\sigma_0}{\rho_0\,R_0^3} \qquad (1)$$
where ρ₀ is the density of the drop and R₀ its radius. By substituting ρ₀R₀³ = 3M/(4π), the apparent density dependence of the frequency disappears, which makes this equation particularly easy to use.
The oscillating drop technique can be extended to the measurement of the interfacial tension between two immiscible liquids [10]. Based on the theory of Saffren et al. [START_REF] Saffren | Proceedings of the 2 nd International Colloquium on Drops and Bubbles[END_REF], the theory was worked out for force-free, concentric spherical drops. The geometry considered is summarized in Figure 2. Due to the presence of the interface between liquids L1 and L2, this system possesses two fundamental frequencies, driven by the surface tension σ₀ and the interfacial tension σ₁₂.
Adopting the nomenclature of ref [START_REF] Saffren | Proceedings of the 2 nd International Colloquium on Drops and Bubbles[END_REF], the normal mode frequencies ω± of a concentric, force-free, inviscid compound drop read:

$$\omega_\pm^2 = \frac{W}{J}\,K_\pm \qquad (2)$$
K± and J are dimensionless, while W is a frequency squared. W/J is given by:

$$\frac{W}{J} = \omega_0^2\;\frac{1 + \Delta\rho_i\,\tau^{-8}}{1 + \tfrac{2}{3}\,\Delta\rho_i\,\tau^{-10}} \qquad (3)$$
Here, a number of symbols has been introduced which are defined as follows: ω₀ is the unperturbed Rayleigh frequency (eqn (1)) of a simple drop with density ρ₀, radius R₀ and surface tension σ₀ (see also Figure 2 for the definition of the symbols).

$$\tau = \left(\frac{R_0}{R_i}\right)^{1/2} \qquad (4)$$
is the square root of the ratio between outer and inner radius,
$$\sigma = \left(\frac{\sigma_0}{\sigma_{12}}\right)^{1/2} \qquad (5)$$
is the square root of the ratio of the surface tension and the interface tension, and
$$\Delta\rho_i = \frac{3}{5}\;\frac{\rho_i - \rho_0}{\rho_0} \qquad (6)$$
is the weighted relative density difference between liquid L1 and liquid L2. It remains to write down the expression for K±, which is given by:

$$K_\pm = 1 + \frac{3}{4\tau^3}\left(\frac{m_0}{\sigma} + m_i\,\sigma\right) \pm \sqrt{\left[\frac{3}{4\tau^3}\left(\frac{m_0}{\sigma} - m_i\,\sigma\right)\right]^2 + 1} \qquad (7)$$
where two additional symbols have been introduced, namely:
$$m_0 = \frac{3 + 2\,\tau^{-5}}{5} \qquad (8)$$

and

$$m_i = 1 + \Delta\rho_i\left(1 - \tau^{-5}\right) \qquad (9)$$
For large σ and small Δρᵢ, approximate equations can be derived for the two frequencies ω₊ and ω₋:

$$\omega_+^2 = \omega_0^2\left(1 + \frac{1}{\sigma^2\,\tau^4}\right) \qquad (10)$$

$$\omega_-^2 = \omega_0^2\;\frac{3\,\tau^6}{5\,\sigma^2}\left(1 - \tau^{-10}\right) \qquad (11)$$
From an experimental point of view, it is interesting to discuss the frequencies as a function of the initial, homogeneous composition of the drop. To this end, we introduce the relative mass fraction of component 2, i.e. the component with the lower surface tension, which will eventually constitute the outer liquid shell. It is given by:

$$m_{rel} = \frac{m_{shell}}{m} = \left[1 + \frac{\rho_i}{\rho_0}\,\frac{R_i^3}{R_0^3 - R_i^3}\right]^{-1} \qquad (12)$$
In Figure 3, the frequency spectrum is shown as a function of m rel for parameters corresponding to the Cu-Co system.
Figure 3. The normalized normal mode frequencies ω±/ω₀ as a function of the relative mass fraction m_rel. For the figure, the following parameters were chosen: σ₀ = 1.3 N/m, σ₁₂ = 0.5 N/m, ρ₀ = 7.75 g/cm³, ρᵢ = 7.86 g/cm³.
Although the oscillations of the inner radius, Rᵢ, cannot be observed optically for nontransparent liquid metals, both eigenfrequencies can be determined from the oscillations of the outer radius, R₀, alone. This is due to the coupling of the two oscillators via the common velocity field in the melt. The relative amplitudes of the oscillations of the outer and inner surface are shown in Figure 4 for both oscillatory modes. The larger the value of |dR₀/dRᵢ|, the better the detectability. Consequently, the optimal choice to detect both modes lies in the range 0.7 < m_rel < 0.8.
Results
The experiments were carried out using the TEXUS-EML module during the TEXUS 44 campaign. Two experiments, one on demixing of CuCo, described here, and one on calorimetry and undercooling of an Al-Ni alloy were accommodated. The allotted time span of microgravity for the present experiment was 160 s.
As this time is much too short for undercooling and complete phase separation, it was decided to perform the experiment on a sample which was prepared ex-situ as a two-phase compound drop using a DTA furnace and a melt flux technique which allows deep undercooling and subsequent phase separation of the Cu-Co sample [START_REF] Willnecker | [END_REF]. Of course, such a system is not in equilibrium when it is remelted, but it takes some time to destroy the interface between the two liquids L1 and L2, and this time is sufficient to excite and observe oscillations of the (unstable) interface.
The experiment consisted of three heating cycles:
• one cycle with a completely phase-separated sample
• one cycle with a homogenised sample
• one cycle for maximum undercooling
The experiment was conducted with one sample of the composition Cu 75 Co 25 with a pre-separated microstructure. Careful heating should melt the Cu shell first and then, at higher temperature, the Co core. The aim of the experiment was to observe the oscillations of this separated microstructure in the liquid and to experimentally determine the interface energy of Cu-Co. In a second heating cycle, the microstructure was homogenized and two pulses were applied to observe oscillations of a homogeneous sample for comparison. A third heating cycle was used to investigate growth of a droplet dispersion starting after undercooling at the binodal.
The most important parameter in the preparation of the experiment was the choice of the maximum temperature in the first heating cycle. It had to be chosen such that both the outer copper shell and the inner cobalt core are fully molten, but not intermixed. Microstructure analysis of the samples from previous parabolic flights showed that a maximum temperature of 1800 °C in the heating cycle was too high, as the pre-separated microstructure had been destroyed. On the other hand, a minimum temperature of about 1500 °C is required to melt the cobalt-rich core. Consequently, a maximum temperature of 1600 °C was chosen for the TEXUS experiment.
The experiment was successful and three heating cycles could be conducted. The second cycle probably led to homogenisation of the liquid sample (T_max ≈ 1850 °C). Two heating pulses for excitation of the homogeneous droplet oscillations were applied in the high-temperature region. The third cycle led to an undercooling of the melt and a recalescence due to the release of latent heat, which is indicated by an arrow in the temperature-time profile in Figure 5. The sample was saved and the experiment with Cu-Co was finished.
Discussion
Temperature Calibration
The emissivity of the sample changes depending on whether or not it is phase separated. The pyrometer data were calibrated for ε = 0.1, corresponding to a demixed sample. It is assumed that the second heating homogenizes the sample, leading to an emissivity of ε = 0.13. Therefore, the pyrometer signal had to be corrected according to:

$$\frac{1}{T_2} - \frac{1}{T_1} = \frac{\lambda_0}{c_2}\,\ln\frac{\varepsilon_1}{\varepsilon_2} \qquad (13)$$

where λ₀ is the operating wavelength of the pyrometer, and c₂ = 1.44·10⁴ µm·K.

The pyrometer operates in a band of 1.45-1.8 µm. Assuming an effective wavelength of λ₀ = 1.5 µm results in a correction of -2.733·10⁻⁵ K⁻¹.
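As a quick plausibility check, this correction can be reproduced with a few lines of Python; a minimal sketch, using only the values quoted above:

```python
import math

# Emissivity correction of Eq. (13): 1/T2 - 1/T1 = (lambda0/c2) * ln(eps1/eps2)
eps1, eps2 = 0.10, 0.13   # demixed / homogenised sample
lambda0 = 1.5             # um, effective pyrometer wavelength
c2 = 1.44e4               # um*K, second radiation constant

correction = (lambda0 / c2) * math.log(eps1 / eps2)
print(f"1/T2 - 1/T1 = {correction:.3e} K^-1")  # -> -2.733e-05 K^-1
```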
Taking this correction into account, the pyrometer signal was recalibrated and is shown in Figure 5. Also shown (dashed line) is the heater control voltage, controlling the heating power in the coil system of the EML module. The sample melts within 30 s, between 45260 and 45290 s. During cooling, short heater pulses are applied to excite oscillations of the liquid drop. Due to the time resolution of the data acquisition, not all such pulses are shown in Figure 5.
The temperature signal is rather noisy, especially during heating. This is due to sample movement and sample rotation. As explained above, the sample was prepared in a melt flux, and part of this flux was still attached to the sample surface. This flux has a much higher emissivity than the metallic sample. Whenever such a clod of glass entered the measuring spot of the pyrometer, its signal went up, resulting in spikes. In fact, the temporal distance of these spikes is a quantitative measure of the sample's rotation frequency.
The solidus temperature is, according to the phase diagram, T_s = 1080 °C and is visible in the signal around 45280 s. The liquidus temperature, T_L, of the phase-separated sample is around 1437 °C, also visible at 45290 s. The liquidus temperature of the homogeneous, single-phase sample is determined as T_L = 1357 °C. The binodal temperature is located at T_b = 1248 °C and can be recognised around 45350 s. After the final sequence the sample undercooled and solidified at 45410 s, displaying a recalescence peak. Undercooling relative to the corresponding liquidus temperature was about ΔT = 200 K.
Oscillation Spectra
For the analysis of the spectra, a number of sample parameters need to be known. First of all, the mass was determined before and after the flight: M₀ = 1.31 g before and M = 1.30 g after, resulting in a small weight loss due to evaporation of δM = 0.01 g, which can be assumed to be mainly copper.
The initial masses of copper and cobalt were M Cu = 1.00566 g, M Co = 0.3064 g, resulting in 76.65 wt% copper. Due to evaporation, the copper content decreased to 76.38 wt% after flight. Therefore, the concentration changed only by 0.27 wt%, which is acceptable.
For the evaluation of the oscillation frequencies, the radius in the liquid phase is required. This cannot be measured directly, and we estimate it from the sample mass according to
$$R_{eff}^3 = \frac{3M}{4\pi\rho} \qquad (14)$$
The densities of liquid copper and cobalt were measured by Saito and coworkers [13,14]. At the melting point, the quoted values are ρ_Cu(T_m) = 7.86 g/cm³ and ρ_Co(T_m) = 7.75 g/cm³. The temperature-dependent densities are as follows:

ρ_Co(T) = 9.71 - 1.11·10⁻³ T g/cm³
ρ_Cu(T) = 8.75 - 0.675·10⁻³ T g/cm³

At T = 2000 K, we obtain ρ_Co(2000 K) = 7.49 g/cm³ and ρ_Cu(2000 K) = 7.44 g/cm³. As these two densities are very close, we have decided to neglect the density difference and to assume ρ_Co = ρ_Cu = 0.765 ρ_Cu(2000 K) + 0.235 ρ_Co(2000 K) = 7.45 g/cm³ throughout the analysis. Inserting this value into the above equation, we obtain R_eff = R₀ = 3.48 mm.
We still need to determine m_rel and Rᵢ. For these two quantities we need to know the compositions of the two separated liquids L1 and L2. This of course depends on the solidification path and is not known a priori. From EDX analysis of samples prepared identically to the flight sample, we estimate that the L2 liquid consists of approximately 90 wt% copper and 10 wt% cobalt, while L1 is composed of 16 wt% copper and 84 wt% cobalt. We therefore estimate m_rel = (1.31)⁻¹ and obtain

$$R_i = R_0\left(1 - m_{rel}\right)^{1/3} = 2.149\ \text{mm}$$
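Both radii follow directly from the quantities given above; a minimal numerical sketch (equal-density limit of Eq. (12), values as quoted in the text):

```python
import math

M = 1.31          # g, sample mass before flight
rho = 7.45        # g/cm^3, common liquid density assumed above
m_rel = 1 / 1.31  # relative mass fraction of the copper-rich shell

# Eq. (14): R_eff = (3M / (4*pi*rho))^(1/3)
R0 = (3 * M / (4 * math.pi * rho)) ** (1 / 3)  # cm
# Equal-density limit of Eq. (12): R_i = R0 * (1 - m_rel)^(1/3)
Ri = R0 * (1 - m_rel) ** (1 / 3)               # cm

print(f"R0 = {R0 * 10:.2f} mm, Ri = {Ri * 10:.2f} mm")  # -> 3.48 mm, 2.15 mm
```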
In order to get a feeling for the oscillation frequencies, we need estimates for the surface and interfacial tensions. The surface tensions of the Cu-Co system have been measured by Eichel et al. [15]. For the composition Cu70Co30, which is very close to our sample, their result is:

$$\sigma_0(T) = 1.22 - 0.29\cdot10^{-3}\,(T - 1365\ {°C})\ \ \text{N/m}$$

For T = 1665 °C this yields σ₀ = 1.13 N/m. Inserting this into the Rayleigh equation, eqn (1), we obtain a Rayleigh frequency ν₀ = ω₀/(2π) = 27.06 Hz.
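This frequency follows from Eq. (1) with the substitution ρ₀R₀³ = 3M/(4π); a minimal sketch:

```python
import math

M = 1.31e-3    # kg, sample mass
sigma0 = 1.13  # N/m, Eichel's fit at T = 1665 C

# Rayleigh formula: omega0^2 = 8*sigma0/(rho0*R0^3) = 32*pi*sigma0/(3*M)
omega0 = math.sqrt(32 * math.pi * sigma0 / (3 * M))
print(f"nu0 = {omega0 / (2 * math.pi):.2f} Hz")  # -> 27.06 Hz
```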
For the interfacial tension, we assume complete wetting, yielding σ₁₂ = σ_L1 - σ_L2, with the temperature-dependent surface tensions of the two separated liquids estimated from the data of ref [15].
Surface Tension
As pointed out before, sample oscillations were excited by short current pulses through the heating coil, which led to a compression and subsequent damped oscillations of the sample. The sample shape was recorded by a video camera, looking along the symmetry axis of the sample (top view) operating at 196 Hz.
The obtained images were analysed off-line by image processing with respect to a number of geometrical parameters; the most important ones are: area of the visible cross section and radii in two orthogonal directions. From the latter, two more parameters can be constructed, namely the sum and the difference of these two radii. In case of non-spherical samples, the latter should have slightly different peaks in their oscillation spectra [16], while the Fourier spectrum of the area signal should contain all peaks. Although there were no big differences between the signals, the area signal was used for further analysis. The time signals of these oscillations are shown in Figure 6 for the first melting cycle. In the first cycle, three oscillations are clearly visible, but the first one is somewhat disturbed. The second cycle also shows three oscillations; they are not shown here. In order to obtain the oscillation frequencies, each oscillation was analysed separately by performing a Fourier transformation. The result is shown in Figure 7 for all pulses analysed. Except for the first pulse of the first cycle, all spectra display a single peak around 28 Hz. The first pulse of the first cycle displays two peaks at 28 Hz, and a small peak around 15 Hz. Positions and corresponding temperatures of the main peaks are shown in Table 1. Assuming that, after the first pulse, the sample is single-phase, these frequencies correspond to the Rayleigh frequency, eqn (1). We then obtain the surface tension as a function of temperature, as shown in Figure 8. A linear fit to the data yields

$$\sigma(T) = 1.29 - 2.77\cdot10^{-4}\,(T - 1357\ {°C})\ \ \text{N/m} \qquad (15)$$
This is in excellent agreement with the data measured by Eichel [15].
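The data points behind this fit are obtained by inverting the Rayleigh formula; a minimal sketch, using the (temperature, frequency) pairs of Table 1 for cycle 2:

```python
import math

M = 1.31e-3  # kg, sample mass

def surface_tension(nu_hz: float) -> float:
    """Invert Eq. (1): sigma0 = 3*M*omega0^2/(32*pi)."""
    omega = 2 * math.pi * nu_hz
    return 3 * M * omega ** 2 / (32 * math.pi)

for T, nu in [(1750, 27.8), (1660, 28.0), (1570, 28.3)]:
    print(f"T = {T} C: sigma0 = {surface_tension(nu):.3f} N/m")
# -> 1.193, 1.210, 1.236 N/m, tracking the linear fit of Eq. (15)
```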
As is evident from Figure 6 and Figure 7, the oscillations during the first pulse of the first cycle are more complex than for the other pulses. We have therefore analysed this pulse in greater detail, as shown in Figure 9. Regardless of the parameter analysed, two peaks around 29 Hz and 28 Hz and a small peak at 15 Hz are clearly visible. Therefore, we conclude that the liquid drop was initially phase separated, giving rise to two peaks around 15 Hz and 29 Hz, and homogenized in the course of the oscillations, yielding the Rayleigh frequency at 28 Hz. If this is correct, we must be able to fit all three frequencies by two values for the surface tension σ₀ and the interfacial tension σ₁₂. This is shown in Table 2. From the fit we obtain:
σ₀ = 1.21 N/m, σ₁₂ = 0.17 N/m
The value for the surface tension corresponds to 1590 °C and agrees well with the fit obtained from the other pulses, see Figure 8. The value of the interfacial tension is somewhat lower than previously estimated. This may be due to a slight shift in the compositions of the two liquids L1 and L2.
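For illustration, the approximate normal-mode expressions (10) and (11), as reconstructed above, can be evaluated with these fitted tensions; the result lands close to, but not exactly on, the full-theory values of Table 2, as expected for a large-σ approximation:

```python
import math

M = 1.31e-3                   # kg
sigma0, sigma12 = 1.21, 0.17  # N/m, fitted values
R0, Ri = 3.48e-3, 2.149e-3    # m, radii estimated above

nu0 = math.sqrt(32 * math.pi * sigma0 / (3 * M)) / (2 * math.pi)
s2 = sigma0 / sigma12  # sigma^2 of Eq. (5)
t2 = R0 / Ri           # tau^2 of Eq. (4)

nu_plus = nu0 * math.sqrt(1 + 1 / (s2 * t2 ** 2))                 # Eq. (10)
nu_minus = nu0 * math.sqrt(3 * t2 ** 3 / (5 * s2) * (1 - 1 / t2 ** 5))  # Eq. (11)
print(f"nu0 = {nu0:.1f} Hz, nu+ = {nu_plus:.1f} Hz, nu- = {nu_minus:.1f} Hz")
# -> about 28.0, 28.7 and 15.9 Hz; the full theory, Eqs. (2)-(9),
#    gives 28.0, 29.1 and 15.4 Hz (Table 2)
```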
Summary
Using the EML module on board the TEXUS 44 microgravity mission, a Co25Cu75 sample was successfully processed. The following results were obtained:

• surface tension as a function of temperature
• interfacial tension at 1590 °C
• size distribution of precipitated Co drops

The interfacial tension could not be measured as a function of temperature because the unstable interface was destroyed during the first pulse. The final and decisive experiment will have to be performed on board the ISS, where time is sufficient to keep the sample in the undercooled phase until complete phase separation is obtained and a metastable interface exists between the two liquid phases.
Figure 1. Phase diagram of Cu-Co showing the metastable miscibility gap. Symbols indicate experimentally determined liquidus and binodal temperatures.
Figure 2. Cross section of a spherical, concentric compound drop consisting of two immiscible liquids with densities ρᵢ and ρ₀, radii R₀ and Rᵢ, surface tension of the outer liquid σ₀, and interfacial tension σ₁₂.
Figure 4. Relative amplitudes of the oscillations of inner and outer surface as a function of mass fraction for both modes.
Figure 5. Temperature-time profile of the Cu-Co TEXUS 44 experiment. Dotted lines show the heater activity; not all pulses are shown due to the time resolution of the display. The arrow indicates final solidification.
Figure 6. Oscillations of the visible cross section during the first melting cycle.
Figure 7. Fourier transforms of the area signal for all evaluated pulses. The spectra are shifted vertically for clarity. From bottom to top: cycle 1/pulse 1, cycle 1/pulse 2, cycle 1/pulse 3, cycle 2/pulse 1, cycle 2/pulse 2.
Figure 8. Surface tension of the Cu75Co25 alloy.
Figure 9. Fourier spectra of the 1st pulse in the 1st cycle. FFT of cross section (top) and radius sum (bottom) is shown. Spectra are shifted vertically for clarity.
Table 1. Temperatures and peak positions of the pulsed oscillations.

| Pulse | Temperature (°C) | Frequency (Hz) | Remarks |
|---|---|---|---|
| cycle 1, pulse 1 | 1590 | 28.6 | split peak: (28.1 + 29.06)/2 |
| cycle 1, pulse 2 | 1490 | 28.7 | |
| cycle 1, pulse 3 | 1410 | 28.8 | |
| cycle 2, pulse 0 | 1750 | 27.8 | |
| cycle 2, pulse 1 | 1660 | 28.0 | |
| cycle 2, pulse 2 | 1570 | 28.3 | |
Table 2. Measured and calculated frequencies for the first pulse of the first cycle.

| | measured (Hz) | calculated (Hz) |
|---|---|---|
| ν₀ | 28.1 | 28.0 |
| ν₊ | 29.1 | 29.1 |
| ν₋ | 15.5 | 15.4 |
Acknowledgements
Our sincere thanks go to the EADS team in Bremen and Friedrichshafen, to the launch team at Esrange, and, last but not least, to the DLR-MUSC team for their continuous and excellent support. We also would like to thank ESA for providing this flight opportunity. Without their help, this experiment would not have been possible. |
01767057 | en | [
"math.math-ds",
"nlin.nlin-cd"
] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01767057/file/Lozi_Garasym_Lozi_Industrial_Mathematics_2017.pdf | Jean-Pierre Lozi
email: [email protected]
Oleg Garasym
René Lozi
J.-P Lozi
The Challenging Problem of Industrial Applications of Multicore-Generated Iterates of Nonlinear Mappings
Keywords: Chaos, Cryptography, Mappings, Chaotic pseudorandom numbers, Attractors. AMS Subject Classification: 37N30, 37D45, 65C10, 94A60
The study of nonlinear dynamics is relatively recent with respect to the long historical development of early mathematics since the Egyptian and the Greek civilization, even if one includes in this field of research the pioneer works of Gaston Julia and Pierre Fatou related to one-dimensional maps with a complex variable, nearly a century ago. In France, Igor Gumowski and Christian Mira began their mathematical research in 1958; in Japan, the Hayashi School (with disciples such as Yoshisuke Ueda and Hiroshi Kawakami), a few years later, was motivated by applications to electric and electronic circuits. In Ukraine, Alexander Sharkovsky found the intriguing Sharkovsky's order, giving the periods of periodic orbits of such nonlinear maps in 1962, although these results were only published in 1964. In 1983, Leon O. Chua invented a famous electronic circuit that generates chaos, built with only two capacitors, one inductor and one nonlinear negative resistance. Since then, thousands of papers have been published on the general topic of chaos. However, the pace of mathematics is slow, because any progress is based on strictly rigorous proof. Therefore, numerous problems still remain unsolved. For example, the long-term dynamics of the Hénon map, the first example of a strange attractor for mappings, remain unknown close to the classical parameter values from a strictly mathematical point of view, 40 years after its original publication. In spite of this lack of rigorous mathematical proofs, nowadays, engineers are actively working on applications of chaos for several purposes: global optimization, genetic algorithms, CPRNG (Chaotic Pseudorandom Number Generators), cryptography, and so on. They use nonlinear maps for practical applications without the need of sophisticated theorems. In this chapter, after giving some prototypical examples of the industrial
Introduction
The last few decades have seen the tremendous development of new IT technologies that incessantly increase the need for new and more secure cryptosystems.
For instance, the recently invented Bitcoin cryptocurrency is based on the secure Blockchain system that involves hash functions [START_REF] Delahaye | Cryptocurrencies and blockchains[END_REF]. This technology, used for information encryption, is pushing forward the demand for more efficient and secure pseudorandom number generators [START_REF] Menezes | Handbook of Applied Cryptography[END_REF] which, in the scope of chaos-based cryptography, were first introduced by Matthews in the 1990s [START_REF] Matthews | On the derivation of chaotic encryption algorithm[END_REF]. Contrarily to most algorithms that are used nowadays and based on a limited number of arithmetic or algebraic methods (like elliptic curves), networks of coupled chaotic maps offer quasi-infinite possibilities to generate parallel streams of pseudorandom numbers (PRN) at a rapid pace when they are executed on modern multicore processors. Chaotic maps are able to generate independent and secure pseudorandom sequences (used as information carriers or directly involved in the process of encryption/decryption [START_REF] Lozi | Noise-resisting ciphering based on a chaotic multi-stream pseudorandom number generator[END_REF]). However, the majority of well-known chaotic maps are not naturally suitable for encryption [START_REF] Li | Period extension and randomness enhancement using high-throughput reseeding-mixing PRNG[END_REF] and most of them do not exhibit even satisfactory properties for such a purpose.
In this chapter, we explore the novel idea of coupling a symmetric tent map with a logistic map, following several network topologies. We add a specific injection mechanism to capture the escaping orbits. With the goal of extending our results to industrial mathematics, we implement these networks on multicore machines and we test up to 100 trillion iterates of such mappings, in order to make sure that the obtained results are firmly grounded and able to be used in industrial contexts such as e-banking, e-purchasing, or the Internet of Things (IoT).
Chaotic maps, when used in the proper way, can generate not only chaotic numbers, but also pseudorandom numbers, as shown in [START_REF] Noura | Design of a fast and robust chaos-based cryptosystem for image encryption[END_REF] and as we show in this chapter with more sophisticated numerical experiments.
Various choices of PRN Generators (PRNGs) and crypto-algorithms are currently necessary to implement continuous, reliable security systems. We use a software approach because it is easy to change a cryptosystem to support protection, whereas replacing hardware used for True Random Number Generators would be costly and time-consuming. For instance, after the secure software protocol Wi-Fi Protected Access (WPA) was broken, it was simply updated and no expensive hardware had to be replaced.
It is a very challenging task to design CPRNGs (Chaotic Pseudo Random Number Generators) that are applicable to cryptography: numerous numerical tests must ensure that their properties are satisfactory. We mainly focus on two- to five-dimensional maps, although higher dimensions can be very easily explored with modern multicore machines. Nevertheless, in four and five dimensions, the studied CPRNGs are efficient enough for cryptography.
In Sect. 4.2, we briefly recall the dawn and the maturity of researches on chaos. In Sect. 4.3, we explore two-dimensional topologies of networks of coupled chaotic maps. In Sect. 4.4, we study more thoroughly a mapping in higher dimensions (up to 5), far beyond the NIST tests, which are limited to a few million iterates and which seem not robust enough for industrial applications, although they are routinely used worldwide. In order to check the portability of the computations on multicore architectures, we have implemented all our numerical experiments on several different multicore machines. We conclude this chapter in Sect. 4.5.
The Dawn and the Maturity of Researches on Chaos
The study of nonlinear dynamics is relatively recent with respect to the long historical development of early mathematics since the Egyptian and the Greek civilizations (and even before). The first alleged artifact of mankind's mathematical thinking goes back to the Upper Paleolithic era. Dating as far back as 22,000 years ago, the Ishango bone is a dark brown bone which happens to be the fibula of a baboon, with a sharp piece of quartz affixed to one end for engraving. It was first thought to be a tally stick, as it has a series of what has been interpreted as tally marks carved in three columns running the length of the tool [START_REF] Bogoshi | The oldest mathematical artifact[END_REF].
Twenty thousand years later, the Rhind Mathematical Papyrus is the best example of Egyptian mathematics. It dates back to around 1650 BC. Its author is the scribe Ahmes, who indicated that he copied it from an earlier document dating from the 12th dynasty, around 1800 BC. It is a practical handbook, whose first part consists of reference tables and a collection of 20 arithmetic and 20 algebraic problems and linear equations. Problem 32, for instance, corresponds (in modern notation) to solving x + x/3 + x/4 = 2 for x [START_REF] Smith | History of Mathematics[END_REF]. Since those early times, mathematics have known great improvements, flourishing in many different fields such as geometry, algebra (both linked, thanks to the invention of Cartesian coordinates by René Descartes [START_REF] Descartes | Discours de la méthode[END_REF]), analysis, probability, number and set theory, and so on.
However, nonlinear problems are very difficult to handle because, as shown by Galois' theory of algebraic equations, which provides a connection between field theory and group theory, it is impossible to solve any polynomial equation of degree equal to or greater than 5 using only the usual algebraic operations (addition, subtraction, multiplication, division) and the application of radicals (square roots, cube roots, etc.) [START_REF] Galois | Mémoire sur les conditions de résolubilité des équations par radicaux (mémoire manuscrit de 1830)[END_REF].

The beginning of the study of nonlinear equation systems goes back to the original works of Gaston Julia and Pierre Fatou regarding one-dimensional maps with a complex variable, nearly a century ago [START_REF] Julia | Mémoire sur l'itération des fonctions rationnelles[END_REF][START_REF] Fatou | Sur l'itération des fonctions transcendantes entières[END_REF]. Compared to thousands of years of mathematical development, a century is a very short period. In France, 30 years later, Igor Gumowski and Christian Mira began their mathematical research with the help of a computer in 1958 [START_REF] Gumowski | Recurrence and Discrete Dynamics systems[END_REF]. They developed very elaborate studies of iterations. One of the best-known formulas they published is
$$x_{n+1} = f(x_n) + b\,y_n,\qquad y_{n+1} = f(x_{n+1}) - x_n,\quad\text{with } f(x) = ax + \frac{2(1-a)\,x^2}{1+x^2} \qquad (4.1)$$
which can be considered as a non-autonomous mapping from the plane R 2 onto itself that exhibits esthetic chaos. Surprisingly, slight variations of the parameter value lead to very different shapes of the attractor (Fig. 4.1).
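A minimal Python sketch of this recurrence is given below; the parameter values are illustrative only (the exact values behind Fig. 4.1 are not given in the text), and some choices make the orbit diverge, hence the guard:

```python
def gumowski_mira(x, y, a, b, n=100_000):
    """Iterate the Gumowski-Mira recurrence (4.1) and return the visited points."""
    def f(u):
        return a * u + 2.0 * (1.0 - a) * u * u / (1.0 + u * u)
    pts = []
    for _ in range(n):
        x_new = f(x) + b * y
        y_new = f(x_new) - x
        x, y = x_new, y_new
        if abs(x) > 1e6 or abs(y) > 1e6:  # some parameter choices diverge
            break
        pts.append((x, y))
    return pts

# Illustrative (hypothetical) parameters; plotting pts reveals the attractor shape
orbit = gumowski_mira(0.1, 0.1, a=-0.7, b=0.9998)
```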
In Ukraine, Alexander Sharkovsky found the intriguing Sharkovsky's order, giving the periods of periodic orbits of such nonlinear maps in 1962, although these results were only published in 1964 [START_REF] Sharkovskiȋ | Coexistence of cycles of a continuous map of the line into itself[END_REF]. In Japan the Hayashi' School (with disciples like Yoshisuke Ueda and Hiroshi Kawakami), a few years later, was motivated by applications to electric and electronic circuits. Ikeda proposed the Ikeda attractor [START_REF] Ikeda | Multiple-valued stationary state and its instability of the transmitted light by a ring cavity system[END_REF][START_REF] Ikeda | Optical turbulence: chaotic behavior of transmitted light from a ring cavity[END_REF] which is a chaotic attractor for u ≥ 0.6 (Fig. 4.2).
$$x_{n+1} = 1 + u\,(x_n\cos t_n - y_n\sin t_n),\qquad y_{n+1} = u\,(x_n\sin t_n + y_n\cos t_n),\quad\text{with } t_n = 0.4 - \frac{6}{1 + x_n^2 + y_n^2} \qquad (4.2)$$
In 1983, Leon O. Chua invented a famous electronic circuit that generates chaos, built with only two capacitors, one inductor and one nonlinear negative resistance [START_REF] Chua | The double scroll family[END_REF]. Since then, thousands of papers have been published on the general topic of chaos. However, the pace of mathematics is slow, because any progress is based on strictly rigorous proof. Therefore, numerous problems still remain unsolved. For example, the long-term dynamics of the Hénon map [START_REF] Hénon | Two-dimensional mapping with a strange attractor[END_REF], the first example of a strange attractor for mappings, remain unknown close to the classical parameter values from a strictly mathematical point of view, 40 years after its original publication.
Nevertheless, in spite of this lack of rigorous mathematical results, nowadays, engineers are actively working on applications of chaos for several purposes: global optimization, genetic algorithms, CPRNG, cryptography, and so on. They use nonlinear maps for practical applications without the need of sophisticated theorems. During the last 20 years, several chaotic image encryption methods have been proposed in the literature.
Dynamical systems which present a mixing behavior and that are highly sensitive to initial conditions are called chaotic. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems. This effect, popularly known as the butterfly effect, renders long-term predictions impossible in general [START_REF] Lorenz | Deterministic nonperiodic flow[END_REF]. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. Mastering the global properties of those dynamical systems is a challenging issue nowadays that we try to fix by exploring several network topologies of coupled maps.
In this chapter, after giving some prototypical examples of industrial applications of iterations of nonlinear maps, we focus on the exploration of topologies of coupled nonlinear maps that have a very rich potential of complex behavior. Very long computations on multicore machines are used, generating up to one hundred trillion iterates, in order to assess such topologies. We show the emergence of randomness from chaos and discuss the promising future of chaos theory for cryptographic security.
Miscellaneous Network Topologies of Coupled Chaotic Maps
Tent-Logistic Entangled Map
In this section we consider only two 1-D maps: the logistic map
$$f_\mu(x) \equiv L_\mu(x) = 1 - \mu x^2 \qquad (4.3)$$
and the symmetric tent map
$$f_\mu(x) \equiv T_\mu(x) = 1 - \mu|x| \qquad (4.4)$$
both associated to the dynamical system
$$x_{n+1} = f_\mu(x_n) \qquad (4.5)$$
where µ is a control parameter which impacts the chaotic degree. Both mappings are sending the one-dimensional interval [-1,1] onto itself.
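A minimal sketch of both maps and of the iteration (4.5), for µ = 2:

```python
def logistic(x, mu=2.0):
    """L_mu of Eq. (4.3); maps [-1, 1] onto itself for mu = 2."""
    return 1.0 - mu * x * x

def tent(x, mu=2.0):
    """Symmetric tent map T_mu of Eq. (4.4)."""
    return 1.0 - mu * abs(x)

# Eq. (4.5): x_{n+1} = f_mu(x_n)
x = 0.123456789
for _ in range(5):
    x = logistic(x)  # or tent(x)
    print(x)
```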
Since the first study by R. May [START_REF] May | Stability and Complexity of Models Ecosystems[END_REF][START_REF] May | Biological populations with nonoverlapping generations: stable points, stable cycles, and chaos[END_REF] of the logistic map in the frame of nonlinear dynamical systems, both the logistic (4.3) and the symmetric tent map (4.4) have been fully explored with the aim of easily generating pseudorandom numbers [START_REF] Lozi | Giga-periodic orbits for weakly coupled tent and logistic discretized maps[END_REF].
However, the collapse of iterates of dynamical systems [START_REF] Yuan | Collapsing of chaos in one dimensional maps[END_REF], or at least the existence of very short periodic orbits, their non-constant invariant measure, and the easily-recognized shape of the function in the phase space could lead to avoiding the use of such one-dimensional maps (logistic, baker, tent, etc.) or two-dimensional maps (Hénon, Standard, Belykh, etc.) as PRNGs (see [START_REF] Lozi | Can we trust in numerical computations of chaotic solutions of dynamical systems?[END_REF] for a survey). Yet, the very simple implementation as computer programs of chaotic dynamical systems led some authors to use them as a base for cryptosystems [25,[START_REF] Ariffin | Modified baptista type chaotic cryptosystem via matrix secret key[END_REF]. Even if the logistic and tent maps are topologically conjugate (i.e., they have similar topological properties: distribution, chaoticity, etc.), their numerical behavior differs drastically due to the structure of numbers in computer realization [START_REF] Lanford | Informal remarks on the orbit structure of discrete approximations to chaotic maps[END_REF].
As said above, both logistic and tent maps are never used in serious cryptography articles because they have weak security properties (collapsing effect) if applied alone. Thus, these maps are often used in modified form to construct CPRNGs [START_REF] Wong | A modified chaotic cryptographic method[END_REF][START_REF] Nejati | A realizable modified tent map for true random number generation[END_REF][START_REF] Lozi | Mathematical chaotic circuits: an efficient tool for shaping numerous architectures of mixed chaotic/pseudo random number generator[END_REF].
Recently, Lozi et al. proposed innovative methods to increase the randomness properties of the tent and logistic maps through their coupling and sub-sampling [START_REF] Lozi | Emergence of randomness from chaos[END_REF][START_REF] Rojas | New alternate ring-coupled map for multirandom number generation[END_REF][START_REF] Garasym | Robust PRNG based on homogeneously distributed chaotic dynamics[END_REF]. Nowadays, hundreds of publications on industrial applications of chaos-based cryptography are available [START_REF] Jallaouli | Design and analyses of two stream ciphers based on chaotic coupling and multiplexing techniques[END_REF][START_REF] Garasym | Application of observer-based chaotic synchronization and identifiability to the original CSK model for secure information transmission[END_REF][START_REF] Farajallah | Fast and secure chaos-based cryptosystem for images[END_REF][START_REF] Taralova | Chaotic generator synthesis: dynamical and statistical analysis[END_REF].
In this chapter, we explore more thoroughly the original idea of combining features of the tent (T_µ) and logistic (L_µ) maps to produce a new map with improved properties, through combination in several network topologies. This idea was recently introduced [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF] [39] in order to improve previous CPRNGs. Looking at both Eqs. (4.3) and (4.4), it is possible to reverse the shape of the graph of the tent map T and to entangle it with the graph of the logistic map L. We obtain the combined map
$$f_\mu(x) \equiv TL_\mu(x) = \mu|x| - \mu x^2 = \mu\,(|x| - x^2) \qquad (4.6)$$
When used in more than one dimension, the TL_µ map can be considered as a two-variable map
$$TL_\mu(x^{(i)}, x^{(j)}) = \mu\,(|x^{(i)}| - (x^{(j)})^2),\quad i \neq j \qquad (4.7)$$
Moreover, we can combine the TL_µ map again with T_µ in various ways. If we choose, for instance, a network with a ring shape (Fig. 4.3), it is possible to define a mapping M_{µ,p} : J^p → J^p, where J^p = [-1, 1]^p ⊂ R^p:

$$M_{\mu,p}\begin{pmatrix} x_n^{(1)} \\ x_n^{(2)} \\ \vdots \\ x_n^{(p)} \end{pmatrix} = \begin{pmatrix} x_{n+1}^{(1)} \\ x_{n+1}^{(2)} \\ \vdots \\ x_{n+1}^{(p)} \end{pmatrix} = \begin{pmatrix} T_\mu(x_n^{(1)}) + TL_\mu(x_n^{(1)}, x_n^{(2)}) \\ T_\mu(x_n^{(2)}) + TL_\mu(x_n^{(2)}, x_n^{(3)}) \\ \vdots \\ T_\mu(x_n^{(p)}) + TL_\mu(x_n^{(p)}, x_n^{(1)}) \end{pmatrix} \qquad (4.8)$$
However, if used in this form, system (4.8) has unstable dynamics and the iterated points x_n^{(1)}, x_n^{(2)}, …, x_n^{(p)} quickly spread out. Therefore, to keep the dynamics in the torus J^p = [-1, 1]^p ⊂ R^p, the following injection mechanism has to be used in conjunction with (4.8):

$$\begin{cases} \text{if } (x_{n+1}^{(i)} < -1) & \text{then add } 2 \\ \text{if } (x_{n+1}^{(i)} > 1) & \text{then subtract } 2 \end{cases},\quad i = 1, 2, \ldots, p. \qquad (4.9)$$
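A minimal sketch of one iteration of the ring network (4.8) with the injection (4.9), for µ = 2 (the state vector and its initial values are illustrative):

```python
def step_ring(x, mu=2.0):
    """One iteration of M_{mu,p}, Eq. (4.8), followed by the injection (4.9)."""
    p = len(x)
    y = [(1.0 - mu * abs(x[i]))                    # T_mu(x(i))
         + mu * (abs(x[i]) - x[(i + 1) % p] ** 2)  # TL_mu(x(i), x(i+1)); % p closes the ring
         for i in range(p)]
    # Eq. (4.9): fold escaping iterates back into the torus [-1, 1]^p
    return [v - 2.0 if v > 1.0 else v + 2.0 if v < -1.0 else v for v in y]

x = [0.33, -0.75, 0.11]  # p = 3
for _ in range(5):
    x = step_ring(x)
print(x)
```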
The TL_µ function is a powerful tool to change dynamics. Used in conjunction with T_µ, the map TL_µ makes it possible to establish mutual influence between the system components x_n^{(i)} in M_{µ,p}. This multidimensional coupled mapping is interesting because it performs contraction and distance stretching between components, improving the chaotic distribution.

The coupling of components has an excellent effect in achieving chaos, because the components interact with the global system dynamics, being a part of them. Component interaction has a global effect. In order to study this new mapping, we use a graphical approach; however, other theoretical assessment functions are also involved.
Note that system (4.8) can be made more generic by introducing constants k_i which generalize the considered topologies. Let k = (k₁, k₂, …, k_p); we define

$$M^k_{\mu,p}\begin{pmatrix} x_n^{(1)} \\ x_n^{(2)} \\ \vdots \\ x_n^{(p)} \end{pmatrix} = \begin{pmatrix} x_{n+1}^{(1)} \\ x_{n+1}^{(2)} \\ \vdots \\ x_{n+1}^{(p)} \end{pmatrix} = \begin{pmatrix} T_\mu(x_n^{(1)}) + k_1\,TL_\mu(x_n^{(i)}, x_n^{(j)}), & (i,j) = (1,2) \text{ or } (2,1) \\ T_\mu(x_n^{(2)}) + k_2\,TL_\mu(x_n^{(i)}, x_n^{(j)}), & (i,j) = (2,3) \text{ or } (3,2) \\ \vdots \\ T_\mu(x_n^{(p)}) + k_p\,TL_\mu(x_n^{(i)}, x_n^{(j)}), & (i,j) = (p,1) \text{ or } (1,p) \end{pmatrix} \qquad (4.10)$$

System (4.10) is called alternate if k_i = (-1)^i or k_i = (-1)^{i+1}, 1 ≤ i ≤ p, and non-alternate if k_i = +1 or k_i = -1 for all i.
Table 4.1 The sixteen maps M^k_{2,2} defined by the choices of k₁, k₂, i, j, i′, j′ in Eq. (4.11).

| Case | k₁ | k₂ | i | j | i′ | j′ |
|---|---|---|---|---|---|---|
| #1 | +1 | +1 | 1 | 2 | 1 | 2 |
| #2 | +1 | -1 | 1 | 2 | 1 | 2 |
| #3 | -1 | +1 | 1 | 2 | 1 | 2 |
| #4 | -1 | -1 | 1 | 2 | 1 | 2 |
| #5 | +1 | +1 | 2 | 1 | 2 | 1 |
| #6 | +1 | -1 | 2 | 1 | 2 | 1 |
| #7 | -1 | +1 | 2 | 1 | 2 | 1 |
| #8 | -1 | -1 | 2 | 1 | 2 | 1 |
| #9 | +1 | +1 | 1 | 2 | 2 | 1 |
| #10 | +1 | -1 | 1 | 2 | 2 | 1 |
| #11 | -1 | +1 | 1 | 2 | 2 | 1 |
| #12 | -1 | -1 | 1 | 2 | 2 | 1 |
| #13 | +1 | +1 | 2 | 1 | 1 | 2 |
| #14 | +1 | -1 | 2 | 1 | 1 | 2 |
| #15 | -1 | +1 | 2 | 1 | 1 | 2 |
| #16 | -1 | -1 | 2 | 1 | 1 | 2 |
Two-Dimensional Network Topologies
We first consider the simplest coupling case, in which only two equations are coupled.
The first condition needed to obtain a multidimensional mapping, with the aim of building a new CPRNG, is an excellent uniform distribution of the iterated points. The second condition is that the CPRNG must be assessed positively by the NIST tests [START_REF] Rukhin | Statistical test suite for random and pseudorandom number generators for cryptographic applications[END_REF]. In [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF] [39] this two-dimensional case is studied in detail. Using a bifurcation diagram and the computation of Lyapunov exponents, it is shown that the best value for the parameter is µ = 2. Therefore, in the rest of this chapter we use this parameter value and we only briefly recall the results found with this value in both of those articles. The general form of M^k_{2,2} is then

$$M^k_{2,2}\begin{pmatrix} x_n^{(1)} \\ x_n^{(2)} \end{pmatrix} = \begin{pmatrix} x_{n+1}^{(1)} \\ x_{n+1}^{(2)} \end{pmatrix} = \begin{pmatrix} T_2(x_n^{(1)}) + k_1\,TL_2(x_n^{(i)}, x_n^{(j)}) \\ T_2(x_n^{(2)}) + k_2\,TL_2(x_n^{(i')}, x_n^{(j')}) \end{pmatrix} \qquad (4.11)$$

with i, j, i′, j′ = 1 or 2, i ≠ j, and i′ ≠ j′. Considering this general form, it is possible to define 16 different maps (Table 4.1). Among this set of maps, we study case #3 and case #13. The map of case #3 is called single-coupled alternate, due to the shape of the corresponding network, and denoted TTL^{SC}_2:
$$TTL^{SC}_2:\ \begin{cases} x_{n+1}^{(1)} = 1 - 2|x_n^{(1)}| - 2\,(|x_n^{(1)}| - (x_n^{(2)})^2) = T_2(x_n^{(1)}) - TL_2(x_n^{(1)}, x_n^{(2)}) \\ x_{n+1}^{(2)} = 1 - 2|x_n^{(2)}| + 2\,(|x_n^{(1)}| - (x_n^{(2)})^2) = T_2(x_n^{(2)}) + TL_2(x_n^{(1)}, x_n^{(2)}) \end{cases} \qquad (4.12)$$

and case #13 is called ring-coupled non-alternate and denoted TTL^{RC}_2:

$$TTL^{RC}_2:\ \begin{cases} x_{n+1}^{(1)} = 1 - 2|x_n^{(1)}| + 2\,(|x_n^{(2)}| - (x_n^{(1)})^2) = T_2(x_n^{(1)}) + TL_2(x_n^{(2)}, x_n^{(1)}) \\ x_{n+1}^{(2)} = 1 - 2|x_n^{(2)}| + 2\,(|x_n^{(1)}| - (x_n^{(2)})^2) = T_2(x_n^{(2)}) + TL_2(x_n^{(1)}, x_n^{(2)}) \end{cases} \qquad (4.13)$$
Both systems were selected because they have balanced contraction and stretching processes between components, which allows achieving a uniform distribution of the chaotic dynamics. Equations (4.12) and (4.13) are used, of course, in conjunction with the injection mechanism (4.9). The largest torus where points mapped by (4.12) and (4.13) are sent is [-2, 2]². The confinement of the dynamics from the torus [-2, 2]² to the torus [-1, 1]² obtained by this mechanism is shown in Figs. 4.5 and 4.6: the dynamics cross from the negative regions (in blue) to the positive ones, and conversely from the positive regions (in red) to the negative ones. Through this operation, the system's dynamics are trapped inside [-1, 1]². In addition, after this operation is done, the resulting system exhibits more complex dynamics with additional nonlinearity, which is advantageous for chaotic encryption (since it improves security).
A careful distribution analysis of both TTL^{SC}_2 and TTL^{RC}_2 has been performed using approximated invariant measures.

Fig. 4.5 Injection mechanism (4.9) acting on the first component: if x_n^{(1)} > 1 then x_n^{(1)} ≡ x_n^{(1)} - 2; if x_n^{(1)} < -1 then x_n^{(1)} ≡ x_n^{(1)} + 2 (from [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF])
Fig. 4.6 Injection mechanism (4.9) acting on the second component: if x_n^{(2)} > 1 then x_n^{(2)} ≡ x_n^{(2)} - 2; if x_n^{(2)} < -1 then x_n^{(2)} ≡ x_n^{(2)} + 2 (from [START_REF] Garasym | How useful randomness for cryptography can emerge from multicore-implemented complex networks of chaotic maps?[END_REF])
Approximated Invariant Measures
We recall in this section the definition of approximated invariant measures, which are important tools for assessing the uniform distribution of iterates. We previously introduced them for the first studies of the weakly coupled symmetric tent map [START_REF] Lozi | Giga-periodic orbits for weakly coupled tent and logistic discretized maps[END_REF]. We first define an approximation P_{M,N}(x) of the invariant measure, also called the probability distribution function, linked to the one-dimensional map f (Eq. (4.5)) when computed with floating-point numbers (or numbers in double precision). To this goal, we consider a regular partition of M small intervals (boxes) r_i of J = [-1, 1] defined by
$$s_i = -1 + \frac{2i}{M},\quad i = 0, \ldots, M, \qquad (4.14)$$

$$r_i = [s_i, s_{i+1}[,\quad i = 0, \ldots, M-2, \qquad (4.15)$$

$$r_{M-1} = [s_{M-1}, 1], \qquad (4.16)$$

$$J = \bigcup_{i=0}^{M-1} r_i. \qquad (4.17)$$

The length of each box r_i is equal to

$$s_{i+1} - s_i = \frac{2}{M}. \qquad (4.18)$$
All iterates f^{(n)}(x) belonging to these boxes are collected (after a transient regime of Q iterations decided a priori, i.e., the first Q iterates are discarded). Once the computation of N + Q iterates is completed, the relative number of iterates with respect to N/M in each box r_i represents the value P_N(s_i). The approximated P_N(x) defined here is therefore a step function, with M steps. Since M may vary, we define
$$P_{M,N}(s_i) = \frac{1}{2}\,\frac{M}{N}\,(\#r_i) \qquad (4.19)$$
where #r i is the number of iterates belonging to the interval r i and the constant 1/2 allows the normalisation of P M,N (x) on the interval J .
$$P_{M,N}(x) = P_{M,N}(s_i),\quad \forall x \in r_i \qquad (4.20)$$
In the case of p-coupled maps, we are more interested in the distribution of each component x^{(1)}, x^{(2)}, …, x^{(p)} of the vector X = (x^{(1)}, x^{(2)}, \ldots, x^{(p)})^T than in the distribution of the variable X itself in J^p. We then consider the approximated probability distribution function P_{M,N}(x^{(j)}) associated with one component of X. In this chapter, we use either N_disc for M or N_iter for N, depending on which is more explicit. The discrepancies E₁ (in norm L₁), E₂ (in norm L₂), and E∞ (in norm L∞) between P_{N_disc,N_iter}(x) and the Lebesgue measure, which is the invariant measure associated with the symmetric tent map, are defined by
$$E_{1,N_{disc},N_{iter}} = \left\| P_{N_{disc},N_{iter}}(x) - 0.5 \right\|_{L_1} \qquad (4.21)$$

$$E_{2,N_{disc},N_{iter}} = \left\| P_{N_{disc},N_{iter}}(x) - 0.5 \right\|_{L_2} \qquad (4.22)$$

$$E_{\infty,N_{disc},N_{iter}} = \left\| P_{N_{disc},N_{iter}}(x) - 0.5 \right\|_{L_\infty} \qquad (4.23)$$
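A minimal box-counting sketch of P_{M,N} and of the L₁ discrepancy E₁ (uniform samples are used here only as a sanity check; in practice one feeds the iterates of one component of the studied map):

```python
import random

def discrepancy_E1(samples, M=1000):
    """Estimate P_{M,N} (Eq. (4.19)) on J = [-1, 1] and the L1 discrepancy
    E_1 (Eq. (4.21)) against the Lebesgue density 0.5."""
    N = len(samples)
    counts = [0] * M
    for x in samples:
        i = min(int((x + 1.0) * M / 2.0), M - 1)  # box r_i of Eqs. (4.14)-(4.16)
        counts[i] += 1
    P = [0.5 * M * c / N for c in counts]          # Eq. (4.19)
    return sum(abs(p - 0.5) for p in P) * (2.0 / M)  # integral over J, box width 2/M

# Sanity check: a uniform sample gives a small discrepancy
print(discrepancy_E1([random.uniform(-1, 1) for _ in range(10**6)]))
```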
In the same way, an approximation of the correlation distribution function C_{M,N}(x, y) is obtained by numerically building a regular partition of M² small squares (boxes) of J², embedded in the phase subspace (x^l, x^m):

$$s_i = -1 + \frac{2i}{M},\quad t_j = -1 + \frac{2j}{M},\quad i, j = 0, \ldots, M \qquad (4.24)$$

$$r_{i,j} = [s_i, s_{i+1}[ \times [t_j, t_{j+1}[,\quad i, j = 0, \ldots, M-2 \qquad (4.25)$$

$$r_{M-1,j} = [s_{M-1}, 1] \times [t_j, t_{j+1}[,\quad j = 0, \ldots, M-2 \qquad (4.26)$$

$$r_{i,M-1} = [s_i, s_{i+1}[ \times [t_{M-1}, 1],\quad i = 0, \ldots, M-2 \qquad (4.27)$$

$$r_{M-1,M-1} = [s_{M-1}, 1] \times [t_{M-1}, 1] \qquad (4.28)$$
The measure of the area of each box is

$$(s_{i+1} - s_i)\,(t_{j+1} - t_j) = \left(\frac{2}{M}\right)^2 \qquad (4.29)$$
Once N + Q iterated points (x_n^l, x_n^m) belonging to these boxes are collected, the relative number of iterates with respect to N/M² in each box r_{i,j} represents the value C_N(s_i, t_j). The approximated probability distribution function C_N(x, y) defined here is then a two-dimensional step function, with M² steps. Since M can take several values in the next sections, we define

$$C_{M,N}(s_i, t_j) = \frac{1}{4}\,\frac{M^2}{N}\,(\#r_{i,j}) \qquad (4.30)$$
where #r i, j is the number of iterates belonging to the square r i, j and the constant 1/4 allows the normalisation of C M,N (x, y) on the square J 2 .
$$C_{M,N}(x, y) = C_{M,N}(s_i, t_j),\quad \forall (x, y) \in r_{i,j} \qquad (4.31)$$
The discrepancies E_{C_1} (in norm L₁), E_{C_2} (in norm L₂), and E_{C_∞} (in norm L∞) between C_{N_disc,N_iter}(x, y) and the uniform distribution on the square are defined by

$$E_{C_1,N_{disc},N_{iter}} = \left\| C_{N_{disc},N_{iter}}(x, y) - 0.25 \right\|_{L_1} \qquad (4.32)$$

$$E_{C_2,N_{disc},N_{iter}} = \left\| C_{N_{disc},N_{iter}}(x, y) - 0.25 \right\|_{L_2} \qquad (4.33)$$

$$E_{C_\infty,N_{disc},N_{iter}} = \left\| C_{N_{disc},N_{iter}}(x, y) - 0.25 \right\|_{L_\infty} \qquad (4.34)$$
Finally, let AC_{N_disc,N_iter} be the autocorrelation distribution function, which is the correlation function C_{N_disc,N_iter} of (4.31) defined in the delay space (x_n^{(i)}, x_{n+1}^{(i)}) instead of the phase space (x^l, x^m). We define, in the same manner as (4.32), (4.33), and (4.34), the discrepancies E_{AC_1,N_disc,N_iter}, E_{AC_2,N_disc,N_iter}, and E_{AC_∞,N_disc,N_iter}.
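The two-dimensional analogue of the previous sketch follows the same box-counting logic; feeding delay pairs (x_n, x_{n+1}) instead of phase pairs yields the autocorrelation version just defined:

```python
def discrepancy_EC1(pairs, M=200):
    """Estimate C_{M,N} (Eq. (4.30)) on J^2 and the L1 discrepancy E_{C_1}
    (Eq. (4.32)) against the uniform density 0.25."""
    N = len(pairs)
    counts = [[0] * M for _ in range(M)]
    for x, y in pairs:
        i = min(int((x + 1.0) * M / 2.0), M - 1)
        j = min(int((y + 1.0) * M / 2.0), M - 1)
        counts[i][j] += 1
    # C = (1/4) * (M^2/N) * count, Eq. (4.30); box area (2/M)^2, Eq. (4.29)
    return sum(abs(0.25 * M * M * c / N - 0.25)
               for row in counts for c in row) * (2.0 / M) ** 2
```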
Study of Randomness of TTL^{SC}_2, TTL^{RC}_2, and Other Topologies
Using numerical computations, we assess the randomness properties of the two-dimensional maps TTL^{SC}_2 and TTL^{RC}_2. If all requirements 1-8 of Fig. 4.7 are verified, the dynamical systems associated with those maps can be considered as pseudorandom and their application to cryptosystems is possible.

Whenever one among the eight criteria is not satisfied for a given map, one cannot consider that the associated dynamical system is a good CPRNG candidate. As said above, when µ = 2, the Lyapunov exponents of both considered maps are positive.
In the phase space, we plot the iterates in the system of coordinates x_n^{(1)} versus x_n^{(2)} in order to analyze the density of the point distribution. Based on such an analysis, it is possible to assess the complexity of the behavior of the dynamics, noticing any weakness or inferring the nature of randomness. We also use the approximate invariant measures to assess more precisely the distribution of iterates.
The graphs of the attractor in phase space for the TTL^{RC}_2 non-alternate (Fig. 4.8) and TTL^{SC}_2 alternate (Fig. 4.9) maps are different. The TTL^{SC}_2 map has well-scattered points in the whole pattern, but there are some more "concentrated" regions forming curves on the graph. Instead, the map TTL^{RC}_2 has a good repartition. Some other numerical results we do not report in this chapter show that even if those maps have good random properties, it is possible to improve the mapping randomness by slightly modifying the network topologies.
$$TTL^{SC}_2(x^{(1)}_n, x^{(2)}_n) = \begin{cases} x^{(1)}_{n+1} = 1 + 2(x^{(2)}_n)^2 - 4\,|x^{(1)}_n| \\ x^{(2)}_{n+1} = 1 - 2(x^{(2)}_n)^2 + 2\,(|x^{(1)}_n| - |x^{(2)}_n|) \end{cases} \quad (4.35)$$
In [38], it is shown that if the impact of the component x^{(1)}_n is reduced, randomness is improved. Hence, the following MTTL^{SC}_2 map is introduced
$$MTTL^{SC}_2(x^{(1)}_n, x^{(2)}_n) = \begin{cases} x^{(1)}_{n+1} = 1 + 2(x^{(2)}_n)^2 - 2\,|x^{(1)}_n| \\ x^{(2)}_{n+1} = 1 - 2(x^{(2)}_n)^2 + 2\,(|x^{(1)}_n| - |x^{(2)}_n|) \end{cases} \quad (4.36)$$
and the injection mechanism (4.9) is used as well, but it is restricted to three phases:
$$\begin{cases} \text{if } (x^{(1)}_{n+1} > 1) & \text{then subtract } 2 \\ \text{if } (x^{(2)}_{n+1} < -1) & \text{then add } 2 \\ \text{if } (x^{(2)}_{n+1} > 1) & \text{then subtract } 2 \end{cases} \quad (4.37)$$
This injection mechanism makes the regions containing the iterates match very well (Fig. 4.10).
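As a concrete illustration, a single step of MTTL^{SC}_2 together with the three-phase injection (4.37) can be coded as follows (a minimal sketch; the function name is ours and the seeds are up to the user):

```c
/* Sketch: one iteration of the MTTL^{SC}_2 map, Eq. (4.36), followed by the
 * three-phase injection of Eq. (4.37). The function name is illustrative. */
#include <math.h>

static inline void mttl_sc2_step(double *x1, double *x2) {
    double y1 = 1.0 + 2.0 * (*x2) * (*x2) - 2.0 * fabs(*x1);
    double y2 = 1.0 - 2.0 * (*x2) * (*x2) + 2.0 * (fabs(*x1) - fabs(*x2));
    if (y1 > 1.0)  y1 -= 2.0;   /* phase 1: y1 lies in [-1, 3], so it can
                                   only overflow upward                    */
    if (y2 < -1.0) y2 += 2.0;   /* phase 2: y2 lies in [-3, 3]             */
    if (y2 > 1.0)  y2 -= 2.0;   /* phase 3                                 */
    *x1 = y1;
    *x2 = y2;
}
```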
The change of topology leading to MTTL^{SC}_2 greatly improves the density of iterates in the phase space (Fig. 4.11), where 10^9 points are plotted. The point distribution of iterates in phase delay for the variable x^{(2)} is quite good as well (Fig. 4.12). On both pictures, a grid of 200 × 200 boxes is generated to use the box counting method defined in Sect. 4.3.3. Moreover, the largest Lyapunov exponent is equal to 0.5905, indicating a strong chaotic behavior.
Fig. 4.11 Approximate density function of the MTTL^{SC}_2 alternative map, on the (x^{(1)}, x^{(2)}) plane (from [38])

However, regarding the phase delay for the variable x^{(1)}, results are not satisfactory. We have plotted in Fig. 4.13 10^9 iterates of MTTL^{SC}_2 in the delay plane, and in Fig. 4.14 the same iterates using the counting box method.
When such a great number of iterates is computed, one has to be cautious with raw graphical methods because irregularities of the density repartition are masked due to the huge number of plotted points. Therefore, these figures highlight the necessity of using the tools we have defined in Sect. 4.3.3.
Nevertheless, NIST tests were used to check the randomness properties of MTTL^{SC}_2. Since they only require binary sequences, we generated 4 × 10^6 iterates, whose 5 × 10^5 first ones were cut off. The rest of the sequence was converted to binary form according to the IEEE-754 standard (32-bit single-precision floating point).
Fig. 4.12 Approximate density function of the MTTL^{SC}_2 alternative map, on the (x^{(1)}_n, x^{(1)}_{n+1}) plane (from [38])

As said in the introduction, networks of coupled chaotic maps offer quasi-infinite possibilities to generate parallel streams of pseudorandom numbers. For example, in [39], the following modification of MTTL^{SC}_2 is also studied and shows good randomness properties
$$NTTL^{SC}_2(x^{(1)}_n, x^{(2)}_n) = \begin{cases} x^{(1)}_{n+1} = 1 - 2\,|x^{(2)}_n| = T_2(x^{(2)}_n) \\ x^{(2)}_{n+1} = 1 - 2(x^{(2)}_n)^2 - 2\,(|x^{(2)}_n| - |x^{(1)}_n|) = L_2(x^{(2)}_n) + T_2(x^{(2)}_n) - T_2(x^{(1)}_n) \end{cases} \quad (4.38)$$
Mapping in Higher Dimension
Higher dimensional systems make it possible to achieve better randomness and uniform point distribution, because more perturbations and nonlinear mixing are involved. In this section, we focus on a particular realization of the M^k_{µ,p} map (4.10) from dimension two to dimension five.
Usually, three or four dimensions are complex enough to create robust random sequences, as we show here. Thus, it is advantageous if the system can increase its dimension. Since the MTTL^{SC}_2 alternative map cannot be nested in higher dimensions, we describe how to improve randomness, to obtain the best distribution of points, and to produce more complex dynamics than the TTL^{SC}_2(x^{(2)}, x^{(1)}) alternative map in dimension greater than 2. Let
$$TTL^{RC,pD}_2 = \begin{cases} x^{(1)}_{n+1} = 1 - 2\,|x^{(1)}_n| + 2\,(|x^{(2)}_n| - (x^{(1)}_n)^2) \\ x^{(2)}_{n+1} = 1 - 2\,|x^{(2)}_n| + 2\,(|x^{(3)}_n| - (x^{(2)}_n)^2) \\ \quad\vdots \\ x^{(p)}_{n+1} = 1 - 2\,|x^{(p)}_n| + 2\,(|x^{(1)}_n| - (x^{(p)}_n)^2) \end{cases} \quad (4.39)$$
be this realization. We show in Figs. 4.17 and 4.18 successful NIST tests for TTL^{RC,pD}_2 in 3-D and 4-D, for the variable x^{(1)}.
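For reference, a direct C transcription of (4.39) might look as follows (a sketch under our own naming; the dimension P and the single ±2 injection are the only assumptions, the latter matching the return mechanism used throughout the chapter):

```c
/* Sketch: one iteration of the ring-coupled map TTL_2^{RC,pD} of Eq. (4.39).
 * Each of the P components yields one parallel stream of PRNs. */
#include <math.h>

#define P 5   /* dimension p; the chapter studies p = 2 to 5 */

static void ttl_rc_pd_step(double x[P]) {
    double y[P];
    for (int i = 0; i < P; i++) {
        int next = (i + 1) % P;   /* ring coupling: component i uses i+1 */
        y[i] = 1.0 - 2.0 * fabs(x[i]) + 2.0 * (fabs(x[next]) - x[i] * x[i]);
        /* injection back into [-1, 1]: raw values stay within [-3, 3],
           so a single +/-2 shift suffices */
        if (y[i] > 1.0) y[i] -= 2.0;
        else if (y[i] < -1.0) y[i] += 2.0;
    }
    for (int i = 0; i < P; i++) x[i] = y[i];   /* synchronous update */
}
```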
Fig. 4.17 NIST test for TTL^{RC,3D}_2 for x^{(1)} (from [38])
Fig. 4.18 NIST test for TTL^{RC,4D}_2 for x^{(1)} (from [38])
Numerical Experiments
All NIST tests for dimensions three to five for every variable are successful, showing that these realizations in 3-D up to 5-D are good CPRNGs. In addition to those tests, we study the mapping more thoroughly, far beyond the NIST tests which are limited to a few million iterates and which seem not robust enough for industrial mathematics, although they are routinely used worldwide.
In order to check the portability of the computations on multicore architectures, we have implemented all our numerical experiments on several different multicore machines.
Checking the Uniform Repartition of Iterated Points
We first compute the discrepancies E_1 (in norm L_1), E_2 (in norm L_2), and E_∞ (in norm L_∞) between P_{N_disc,N_iter}(x) and the Lebesgue measure, which is the uniform measure on the interval J = [-1, 1]. We set M = N_disc = 200, and vary the number N_iter of iterated points in the range 10^4 to 10^14. To our knowledge, this article is the first one that checks such a huge number of iterates (in conjunction with [39]). We compare E_{1,200,N_iter}(x^{(1)}) for TTL^{RC,pD}_2 with p = 2 to 5 (Table 4.2, Fig. 4.19).
As shown in Fig. 4.19, E_{1,200,N_iter}(x^{(1)}) decreases steadily when N_iter increases. However, the decrease is quickly (with respect to N_iter) bounded below for p = 2. This is also the case for other values of p; however, the bound decreases with p, therefore showing better randomness properties for higher dimensional mappings.
Table 4.3 compares x^{(1)}, x^{(2)}, …, x^{(p)} for TTL^{RC,5D}_2, for different values of N_iter. It is obvious that the same quality of randomness is obtained for each one of them, contrarily to the results obtained for MTTL^{SC}_2.
Table 4.3 E_{1,200,N_iter}(x^{(i)}) for TTL^{RC,5D}_2 for i = 1 to 5

N_iter   x^(1)         x^(2)         x^(3)         x^(4)         x^(5)
10^12    0.000160547   0.000159192   0.000160014   0.000159213   0.000159159
10^13    5.04473e-05   5.03574e-05   5.05868e-05   5.04694e-05   5.01681e-05
10^14    1.59929e-05   1.60291e-05   1.59282e-05   1.59832e-05   1.60775e-05

Table 4.4 Comparison between E_{1,200,N_iter}(x^{(1)}), E_{2,200,N_iter}(x^{(1)}), and E_{∞,200,N_iter}(x^{(1)}) for TTL^{RC,5D}_2

N_iter   E_{1,200,N_iter}   E_{2,200,N_iter}   E_{∞,200,N_iter}
10^12    0.000160547        0.000201102        0.0008602
10^13    5.04473e-05        6.32233e-05        0.00026894
10^14    1.59929e-05        2.00533e-05        9.89792e-05

Fig. 4.20 Comparison between E_{1,200,N_iter}(x^{(1)}), E_{2,200,N_iter}(x^{(1)}), and E_{∞,200,N_iter}(x^{(1)}) (vertical axis) for TTL^{RC,5D}_2 with respect to N_iter (horizontal axis, logarithmic value)
The comparisons between E_{1,200,N_iter}(x^{(1)}), E_{2,200,N_iter}(x^{(1)}), and E_{∞,200,N_iter}(x^{(1)}) for TTL^{RC,5D}_2 in Table 4.4 and Fig. 4.20 show that
$$E_{1,200,N_{iter}}(x^{(1)}) < E_{2,200,N_{iter}}(x^{(1)}) < E_{\infty,200,N_{iter}}(x^{(1)}) \quad (4.40)$$
for every value of N_iter.
Autocorrelation Study in the Delay Space
In this section, we assess the autocorrelation errors E_{AC_1,N_disc,N_iter}(x, y), E_{AC_2,N_disc,N_iter}(x, y), and E_{AC_∞,N_disc,N_iter}(x, y) in the delay space. We have performed these experiments for M = 20 to 20,000; however, in this chapter, we only present the results for M = 200. We first compare E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}) with E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+2}) and E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+3}) for TTL^{RC,pD}_2 when the dimension of the system is within the range p = 2 to 5 (Tables 4.5, 4.6, 4.7 and 4.8). It is possible to see that better randomness properties are obtained for higher dimensional mappings.
Table 4.5 Comparison between E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}), E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+2}), and E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+3}) for TTL^{RC,2D}_2

Table 4.6 Comparison between E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}), E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+2}), and E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+3}) for TTL^{RC,3D}_2
Table 4.7 Comparison between E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}), E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+2}), and E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+3}) for TTL^{RC,4D}_2

N_iter   (x^(1)_n, x^(1)_{n+1})   (x^(1)_n, x^(1)_{n+2})   (x^(1)_n, x^(1)_{n+3})
10^12    0.000160547              0.000159144              0.000159246
10^13    5.0394e-05
10^14    1.59929e-05

The comparison between E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}), E_{AC_2,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}), and E_{AC_∞,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}) for TTL^{RC,5D}_2 in Table 4.9 shows that numerically
$$E_{AC_1,200,N_{iter}}(x^{(1)}_n, x^{(1)}_{n+1}) < E_{AC_2,200,N_{iter}}(x^{(1)}_n, x^{(1)}_{n+1}) < E_{AC_\infty,200,N_{iter}}(x^{(1)}_n, x^{(1)}_{n+1}) \quad (4.41)$$
Equation (4.41) is not only valid for M = 200, but also for other values of M and every component of X.
In order to illustrate the numerical results displayed in these tables, we plot in Fig. 4.21 the repartition of iterates of TTL^{RC,5D}_2 in the delay plane (x^{(1)}_n, x^{(1)}_{n+1}), using the box counting method. On a grid of 200 × 200 boxes (M = N_disc = 200), we have generated 10^6 points. The horizontal axis is x^{(1)}_n, and the vertical axis is x^{(1)}_{n+1}. In order to check very carefully the repartition of the iterates of TTL^{RC,5D}_2, we have also plotted the repartition in the delay planes (x^{(1)}_n, x^{(1)}_{n+2}), (x^{(1)}_n, x^{(1)}_{n+3}), and (x^{(1)}_n, x^{(1)}_{n+4}) (Figs. 4.22, 4.23, and 4.24). This repartition is uniform everywhere, as shown also in Table 4.8.

Table 4.9 Comparison between E_{AC_1,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}), E_{AC_2,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}), and E_{AC_∞,200,N_iter}(x^{(1)}_n, x^{(1)}_{n+1}) for TTL^{RC,5D}_2

N_iter   E_{AC_1}      E_{AC_2}      E_{AC_∞}
10^12    0.000160547   0.000201102   0.0008602
10^13    5.0394e-05    6.31756e-05   0.000280168
10^14    1.59929e-05   2.00533e-05   9.89792e-05
We find the same regularity for every component x^{(2)}, x^{(3)}, x^{(4)}, and x^{(5)}, as shown in Figs. 4.25, 4.26, 4.27, 4.28, and in Table 4.10.
Autocorrelation Study in the Phase Space
Finally, in this section, we assess the autocorrelation errors E_{C_1,N_disc,N_iter}(x, y), E_{C_2,N_disc,N_iter}(x, y), and E_{C_∞,N_disc,N_iter}(x, y), defined by Eqs. (4.32), (4.33), and (4.34), in the phase space. We checked all combinations of the components. Due to space limitations, we only provide part of the numerical computations we have performed to carefully check the randomness of TTL^{RC,pD}_2 for p = 2, …, 5 and i = 1, …, p. Like in the previous section, we only provide the results for M = 200. We first compare E_{C_1,200,N_iter}(x^{(1)}_n, x^{(2)}_n), E_{C_2,200,N_iter}(x^{(1)}_n, x^{(2)}_n), and E_{C_∞,200,N_iter}(x^{(1)}_n, x^{(2)}_n) (Table 4.11), and our other results verified that
$$E_{C_1,N_{disc},N_{iter}}(x^{(1)}_n, x^{(2)}_n) < E_{C_2,N_{disc},N_{iter}}(x^{(1)}_n, x^{(2)}_n) < E_{C_\infty,N_{disc},N_{iter}}(x^{(1)}_n, x^{(2)}_n) \quad (4.42)$$

Fig. 4.25 Repartition of iterates in the delay plane (x^{(2)}_n, x^{(2)}_{n+1}) of TTL^{RC,5D}_2; box counting method, 10^6 points are generated on a grid of 200 × 200 boxes, the horizontal axis is x^{(2)}_n, and the vertical axis is x^{(2)}_{n+1}
We have also assessed the autocorrelation errors E_{C_1,N_disc,N_iter}(x^{(i)}_n, x^{(j)}_n) for i, j = 1, …, 5, i ≠ j, and various values of the number of iterates for TTL^{RC,5D}_2 (Table 4.12). We have performed the same experiments for E_{C_1,N_disc,N_iter}(x^{(1)}_n, x^{(2)}_n) for p = 2, …, 5 (Table 4.13).
Our numerical experiments all show a similar trend: TTL^{RC,pD}_2 is a good candidate for a CPRNG, and the randomness performance of such mappings increases in higher dimensions.
Checking the Influence of Discretization in Computation of Approximated Invariant Measures
In order to verify that the computations we have performed using the discretization M = N_disc = 200 of the phase space and the delay space in the numerical experiments do not bias the results, we have also computed E_{C_1,N_disc,N_iter}(x^{(1)}_n, x^{(2)}_n) for TTL^{RC,4D}_2 with the other discretizations M = N_disc = 20, 2,000 and 20,000 (Table 4.14, Fig. 4.29).
Computation Time of PRNs
The numerical experiments performed in this section have involved several multicore machines. We show in Table 4.15 different computation times (in seconds) for the generation of N_iter PRNs for TTL^{RC,pD}_2 with p = 2 to 5, and various values of the number of iterates (N_iter). The machine used is a laptop computer with a Core i7 4980HQ processor with eight logical cores. Table 4.16 shows the computation time of only one PRN in the same experiment. Time is expressed in 10^-10 s.
These results show that the pace of computation is very high. When TTL^{RC,5D}_2 is the mapping tested, and the machine used is a laptop computer with a Core i7 4980HQ processor with 8 logical cores, computing 10^11 iterates with five parallel streams of PRNs leads to around 2 billion PRNs being produced per second. Since these PRNs are computed in the standard double precision format, it is possible to extract 50 random bits from each (the size of the mantissa being 52 bits for a double precision floating-point number in standard IEEE-754). Therefore, TTL^{RC,5D}_2 can produce 100 billion random bits per second, an incredible pace! With a machine with 4 Intel Xeon E7-4870 processors having a total of 80 logical cores, the computation is twice as fast, producing 2 × 10^11 random bits per second.
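A possible way to harvest those bits, sketched below, is to read the IEEE-754 mantissa of each iterate directly; which 50 of the 52 mantissa bits are kept is not specified in the chapter, so discarding the two lowest bits here is purely our assumption:

```c
/* Sketch: extracting 50 random bits from a double-precision iterate.
 * Dropping the 2 least significant mantissa bits is an illustrative
 * assumption, not the chapter's specification. */
#include <stdint.h>
#include <string.h>

static inline uint64_t mantissa_bits50(double x) {
    uint64_t u;
    memcpy(&u, &x, sizeof u);                     /* bit-copy, avoids type-punning UB */
    uint64_t mantissa = u & ((1ULL << 52) - 1);   /* low 52 bits of IEEE-754 double   */
    return mantissa >> 2;                         /* keep 50 bits                     */
}
```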
Conclusion
In this chapter, we thoroughly explored the novel idea of combining features of a tent map (T_µ) and a logistic map (L_µ) to produce a new map with improved properties, through combination in several network topologies. This idea was recently introduced [38, 39] in order to improve previous CPRNGs. We have summarized the previously explored topologies in dimension two. We have presented new results of numerical experiments in higher dimensions (up to five) for the mapping TTL^{RC,pD}_2 on multicore machines and shown that TTL^{RC,5D}_2 is a very good CPRNG which is fit for industrial applications. The pace of generation of random bits can be incredibly high (up to 200 billion random bits per second).
Fig. 4.1 Gumowski-Mira attractor for parameter values a = 0.92768 and a = 0.93333
Fig. 4.3 Auto and ring-coupling of the TL_µ and T_µ maps (from [38])
Fig. 4.4 Return mechanism from the [-2, 2]^p torus to [-1, 1]^p (from [38])
Fig. 4.5 Injection mechanism of the iterates from torus [-2, 2]^2 to torus [-1, 1]^2. If x^{(1)}_n > 1 then x^{(1)}_n ≡ x^{(1)}_n - 2; if x^{(1)}_n …
Fig. 4.6 If x^{(2)}_n > 1 then x^{(2)}_n …
Fig. 4.7 The main criteria for assessing CPRNG (from [34])
Fig. 4.9 Phase space behavior of TTL^{SC}_2 alternative (4.18), plot of 20,000 points
Fig. 4.10 Injection mechanism (4.21) of the MTTL^{SC}_2 alternative map (from [38])
Fig. 4.13 Plot of one billion iterates of MTTL^{SC}_2 in the delay plane
Fig. 4.14 Plot of one billion iterates of MTTL^{SC}_2 using the counting box method
Fig. 4.16 NIST tests for the variable x^{(2)} (from [38])
Fig. 4.18 NIST test for TTL^{RC,4D}_2 for x^{(1)} (from [38])
Fig. 4.21 Repartition of iterates in the delay plane (x^{(1)}_n, x^{(1)}_{n+1}) of TTL^{RC,5D}_2
Fig. 4.22 Repartition of iterates in the delay plane (x^{(1)}_n, x^{(1)}_{n+2}) of TTL^{RC,5D}_2, as in Fig. 4.21
Fig. 4.24 Repartition of iterates in the delay plane (x^{(1)}_n, x^{(1)}_{n+4}) of TTL^{RC,5D}_2, as in Fig. 4.21
Fig. 4.26 Repartition of iterates in the delay plane (x^{(3)}_n, x^{(3)}_{n+1}) of TTL^{RC,5D}_2, as in Fig. 4.25
Fig. 4.28 Repartition of iterates in the delay plane (x^{(5)}_n, x^{(5)}_{n+1}) of TTL^{RC,5D}_2, as in Fig. 4.25
Fig. 4.29 Comparison between E_{C_1,N_disc,N_iter}(x^{(1)}_n, x^{(2)}_n), for TTL^{RC,4D}_2, M = N_disc = 20, 200, 2000, 20,000, and various values of the number of iterates
Table 4.1 The sixteen maps defined by Eq. (4.11); they can be a mix of alternate and non-alternate if k^i = +1 or -1 randomly

Table 4.2 E_{1,200,N_iter}(x^{(1)}) for TTL^{RC,pD}_2 with p = 2 to 5

N_iter   p = 2      p = 3        p = 4         p = 5
10^4     1.5631     1.5553       1.5587        1.5574
10^5     0.55475    0.5166       0.51315       0.5154
10^6     0.269016   0.159306     0.158548      0.158436
10^7     0.224189   0.050509     0.0501934     0.0505558
10^8     0.219427   0.0164173    0.0159175     0.0160018
10^9     0.218957   0.00640196   0.00505021    0.00509754
10^10    0.218912   0.00420266   0.00160505    0.00160396
10^11    0.218913   0.00392507   0.000513833   0.000505591
10^12    0.218913   0.00389001   0.000189371   0.000160547
10^13    0.218914   0.00388778   0.000112764   5.04473e-05
10^14    0.218914   0.003887     0.000101139   1.59929e-05

Fig. 4.19 Graph of E_{1,200,N_iter}(x^{(1)}) for TTL^{RC,pD}_2 with p = 2 to 5, with respect to N_iter (horizontal axis, logarithmic value)

Table 4.10 Comparison between E_{AC_1,200,N_iter}(x^{(i)}_n, x^{(i)}_{n+1}), E_{AC_1,200,N_iter}(x^{(i)}_n, x^{(i)}_{n+2}), and E_{AC_1,200,N_iter}(x^{(i)}_n, x^{(i)}_{n+3}) for TTL^{RC,5D}_2 for i = 1 to 5

Table 4.11 Comparison between E_{C_1,200,N_iter}(x^{(1)}_n, x^{(2)}_n), E_{C_2,200,N_iter}(x^{(1)}_n, x^{(2)}_n), and E_{C_∞,200,N_iter}(x^{(1)}_n, x^{(2)}_n)

Table 4.12 Comparison between E_{C_1,200,N_iter}(x^{(i)}_n, x^{(j)}_n), for i, j = 1 to 5, i ≠ j, and for various values of the number of iterates for TTL^{RC,5D}_2

            10^6       10^8        10^10        10^12         10^14
x(1),x(2)   0.158058   0.0160114   0.0015927    0.000158795   1.60489e-05
x(1),x(3)   0.158956   0.0159261   0.00159456   0.000159326   1.73852e-05
x(1),x(4)   0.15943    0.0160321   0.00160091   0.000160038   1.74599e-05
x(1),x(5)   0.159074   0.0158962   0.00160204   0.000159048   1.59133e-05
x(2),x(3)   0.15825    0.0159754   0.00159442   0.000160659   1.60419e-05
x(2),x(4)   0.159248   0.0159668   0.00159961   0.000160313   1.73507e-05
x(2),x(5)   0.15889    0.0160116   0.0015934    0.000160462   1.73496e-05
x(3),x(4)   0.159136   0.0158826   0.00158123   0.000158758   1.59451e-05
x(3),x(5)   0.159216   0.0159341   0.00161268   0.000159079   1.75013e-05
x(4),x(5)   0.158918   0.0160516   0.0016008    0.000159907   1.59445e-05

Table 4.13 Comparison between E_{C_1,200,N_iter}(x^{(i)}_n, x^{(j)}_n), for TTL^{RC,pD}_2 for p = 2, …, 5, and various values of the number of iterates

N_iter   p = 2      p = 3        p = 4         p = 5
10^4     1.5624     1.5568       1.55725       1.55915
10^5     0.57955    0.5163       0.51083       0.514
10^6     0.330084   0.160282     0.158256      0.158058
10^7     0.294918   0.0509584    0.0504002     0.0505508
10^8     0.291428   0.0176344    0.0157924     0.0160114
10^9     0.291012   0.00911485   0.00506758    0.00507915
10^10    0.291025   0.00783204   0.00159046    0.0015927
10^11    0.291033   0.00771201   0.000521561   0.000506086
10^12    0.291036   0.00769998   0.000209109   0.000158795
10^13               0.00769867   0.000150031   5.03666e-05
10^14               0.00769874   0.000144162   1.60489e-05

Table 4.14 Comparison of E_{C_1,N_disc,N_iter}(x^{(1)}_n, x^{(2)}_n) for TTL^{RC,4D}_2, M = N_disc = 20, 200, 2000, 20,000

Table 4.15 Comparison of computation times (in seconds) for the generation of N_iter PRNs for TTL^{RC,pD}_2 with p = 2 to 5, and various values of N_iter iterates

N_iter   p = 2      p = 3      p = 4      p = 5
10^4     0.000146   0.000216   0.000161   0.000142
10^5     0.000216   0.000277   0.000262   0.000339
10^6     0.001176   0.002403   0.001681   0.002467
10^7     0.011006   0.016195   0.018968   0.022351
10^8     0.113093   0.161776   0.166701   0.227638
10^9     1.09998    1.58949    1.60441    2.29003
10^10    11.4901    18.0142    18.537     26.1946
10^11    123.765    183.563    185.449    257.244

Table 4.16 Comparison of computation times (in 10^-10 s) for the generation of only one PRN for TTL^{RC,pD}_2 with p = 2 to 5, and various values of the number of iterates

N_iter   p = 2     p = 3     p = 4     p = 5
10^4     73.0      72.0      40.25     28.4
10^5     10.8      9.233     6.55      6.78
10^6     5.88      8.01      4.2025    4.934
10^7     5.503     5.39833   4.742     4.702
10^8     5.65465   4.0444    4.16753   4.55276
10^9     5.4999    5.2983    4.01103   4.58006
10^10    5.74505   4.50335   4.63425   5.23892
10^11    6.18825   6.11877   4.63622   5.14488
01767096 | en | ["shs.eco"] | 2024/03/05 22:32:15 | 2018 | https://shs.hal.science/halshs-01767096/file/TCRCPv2b.pdf | Nicolas Drouhin
Theoretical considerations on the retirement consumption puzzle and the optimal age of retirement
Keywords: life cycle theory of consumption and saving; optimal retirement; retirement consumption puzzle; discontinuous optimal control. JEL codes: C61, D91, J26
Relying on the principle of optimality, it provides a very general and parsimonious formula for determining the optimal age of retirement, taking into account the possible discontinuity of the optimal consumption profile at the age of retirement.
Introduction
In this article, I build a model that addresses at the same time the retirement consumption puzzle and the optimal age of retirement.
Since Hamermesh (1984a), many empirical studies have documented a drop in consumption at retirement, the retirement consumption puzzle (Banks et al.; Bernheim et al.; Battistin et al., 2009, among others). This phenomenon is seen as puzzling and "paradoxical" because it seems in contradiction with the idea that, within the intertemporal choice model, which is the backbone of modern economics, when preferences are convex, consumption smoothing is the rule. Explanations of this paradox have therefore been sought in relaxing some assumptions of the model of a fully rational forward-looking agent. For example, the agent may systematically underestimate the drop in earnings associated with retirement (Hamermesh, 1984a). Or the agent may not be fully time consistent, as in the hyperbolic discounting model (Angeletos et al.).
Without denying that those phenomena may be important traits of "real" agents' behavior, and building on an insight of Banks et al. in their conclusion, this paper emphasizes the point that a closer look at the intertemporal choice model of consumption and savings in continuous time allows one to understand that what is smooth in the model is not necessarily consumption, but the marginal utility of consumption. Of course, if consumption is the only variable of the utility function, the two properties are equivalent. But if utility is multi-variate, a discontinuity in one dimension may imply an optimal discontinuity response in the others. I will illustrate that insight in a very general model of intertemporal choice that can be considered as a realistic generalisation of the basic one. Two ingredients will be required. First, I will assume a bi-variate, additively intertemporally separable utility function that depends on consumption and leisure. Second, I will assume, realistically, that retirement is not a smooth process with a per-period duration of labor that tends progressively to zero, but a discontinuous process.
I will show that, as long as the per-period utility function is not additively separable in consumption and leisure, discontinuity of the consumption function is the rule in this general model. However, insightful as the preceding statement is, it is not so easy to prove it formally in full generality, because the assumptions imply a discontinuous payoff function, a case that is not standard with the usual intertemporal optimization techniques in continuous time. I will provide a general and simple lemma that makes the problem tractable and its resolution at the same time rigorous and insightful.
So, if we want to solve the paradox within a quite standard model of intertemporal choice, we have to drop the additive separability of the utility of consumption and leisure. And if we want to extend the problem to the choice of the optimal retirement age, we have to carry on with this non-separability. However, as pointed out by d'Albis et al. (2012), most of the studies addressing the question have been made precisely under the assumption of additive separability in consumption and leisure (see d'Albis et al.; Bloom et al.; Boucekkine et al.; Hazan; Heijdra et al.; Kalemli-Ozcan et al.; Prettner; Sheshinski, 1978, among others). And while there are some important papers that study a general life-cycle model of consumption and savings without additive separability of consumption and leisure (Heckman; Bütler), they are mostly focused on the explanation of the co-movement of earnings and consumption over the life cycle. Hamermesh (1984b) and Chang study the retirement decision with non-separability of consumption and leisure, but they fully endogenize the work decision, without any granularity concerning the per-period duration of worktime, and thus without any discontinuity of per-period labor supply, implying models that are unable to explain at the same time the retirement consumption paradox and the retirement decision. The model I propose can easily be expanded to endogenize the retirement decision and provides a very general condition that the optimal age of retirement fulfills.
I will show that when optimal consumption is discontinuous at the age of retirement, this condition is qualitatively very different from the one obtained in the traditional case.
2 A life-cycle model solving the retirement consumption paradox
Let's assume that we are in a very standard continuous time life-cycle model of consumption and savings with preference for leisure and retirement.
$$\mathcal{P} \qquad \max_{c}\ \int_{t}^{T} e^{-\theta(s-t)}\, u\big(c(s), l(s)\big)\, ds \quad \text{s.t. } \forall s \in [t, T],\ \dot{a}(s) = r\,a(s) + w(s)\big(1 - l(s)\big) + b(s) - c(s), \quad a(t) \text{ given and } a(T) \ge 0$$

t is the decision date and T is the life duration; u is the per-period bi-variate utility function that depends on consumption and leisure. c, the intertemporal consumption profile, is the control variable of the program; l is the intertemporal leisure profile, which I assume, in a first stance, to be exogenous. a is a life-cycle asset, the state variable of the program, that bears interest at the rate r. w is the labor income per period when the individual spends all his time working. b is the social security income profile, interpreted as a social security benefit when positive (typically after retirement) and a social security contribution when negative (typically before retirement). l, c, w and b are assumed to be piecewise continuous and a is assumed to be piecewise smooth, assumptions that are fully compatible with the use of standard optimal control theory. I assume that the utility function includes the standard minimum requirements of the microeconomic theory of the consumption/leisure trade-off: u_1 > 0, u_2 > 0, u_{11} < 0, u_{22} < 0 and quasi-concavity (i.e. the indifference curves are convex). This implies that
$$-u_{11}u_2^2 + 2u_{12}u_1u_2 - u_{22}u_1^2 > 0.$$
It is important to notice that, without further assumptions, the sign of the second order crossed derivative is undetermined.
I will assume that there exists a retirement age t_R such that:
$$\forall s \in [t, t_R),\ l(s) = \kappa < 1 \qquad \forall s \in [t_R, T],\ l(s) = 1$$
Of course this assumption is a simplification, but it allows us to characterize directly the central idea of the paper: retirement is fundamentally a discontinuity in the labor/leisure profile. This assumption seems much more realistic than the usual idea that retirement is a smooth process with the per-period work duration tending to zero at the age of retirement.¹
I denote c* the optimal consumption profile, solution of the program P, and a* the associated value of the state variable. Of course those optimal functions are parameterized by all the data of the problem (t, t_R, T, a(t), r, w, b, l).
I denote V* the optimal value of the problem, i.e.
$$V^*(t, t_R, T, a(t), r, w) = \int_t^T e^{-\theta(s-t)}\, u\big(c^*(t, t_R, T, a(t), r, w, b, l, s),\ l(s)\big)\, ds$$
Because of the discontinuity of the instantaneous payoff function at t_R, the problem is non-standard. Therefore it is useful to decompose it into two separate ones:
$$\mathcal{P}^0 \qquad \max_{c}\ \int_{t}^{t_R} e^{-\theta(s-t)}\, u\big(c(s), \kappa\big)\, ds \quad \text{s.t. } \dot{a}(s) = r\,a(s) + (1-\kappa)\,w(s) + b(s) - c(s), \quad a(t),\ a(t_R) \text{ given}$$

$$\mathcal{P}^1 \qquad \max_{c}\ \int_{t_R}^{T} e^{-\theta(s-t)}\, u\big(c(s), 1\big)\, ds \quad \text{s.t. } \dot{a}(s) = r\,a(s) + b(s) - c(s), \quad a(t_R) \text{ given and } a(T) \ge 0$$

I denote c^0 and c^1 the optimal consumption profiles, solutions of P^0 and P^1. As c*, they are also implicit functions of the parameters of their respective program, and I can define the value functions of P^0 and P^1.
$$V^0(t, t_R, a(t), a(t_R), r, w, b, \kappa) = \int_t^{t_R} e^{-\theta(s-t)}\, u\big(c^0(t, t_R, a(t), a(t_R), r, w, b, \kappa, s),\ \kappa\big)\, ds$$
$$V^1(t_R, T, a(t_R), r, b) = \int_{t_R}^{T} e^{-\theta(s-t)}\, u\big(c^1(t_R, T, a(t_R), r, b, s),\ 1\big)\, ds$$
The two programs are linked by the asset level at the age of retirement. By application of the optimality principle, I can deduce:
Lemma 1 (A Principle of Optimality).
If (c*, a*) is an admissible pair solution of program P, then we have:
1. V*(t, t_R, T, a(t), r, w) = V^{0*}(t, t_R, a(t), a*(t_R), r, w) + V^{1*}(t_R, T, a*(t_R), r, w)
2. a*(t_R) = argmax_{a(t_R)} { V^0(t, t_R, a(t), a(t_R), r, w) + V^1(t_R, T, a(t_R), r, w) }
Proof: It is a direct application of Bellman's (1957) principle of optimality.
I have now all the material to solve the program P.
Proposition 1 (Discontinuity of the consumption profile).
If I denote c^0(t_R) := lim_{s→t_R} c^0(s), and restrict my analysis to per-period utilities with a second order cross derivative that is either everywhere strictly positive, everywhere strictly negative, or everywhere equal to zero:
1. The optimal consumption profile solution of program P is unique.
2. The optimal consumption profile solution of program P is continuous for every age s in [t, t_R) ∪ (t_R, T].
3. In t_R, u_1(c^0(t_R), κ) = u_1(c^1(t_R), 1) and the continuity of the optimal consumption profile is determined solely by the cross derivative of the per-period utility function:
(a) c^0(t_R) > c^1(t_R) ⇔ u_{12}(c, l) < 0
(b) c^0(t_R) = c^1(t_R) ⇔ u_{12}(c, l) = 0
(c) c^0(t_R) < c^1(t_R) ⇔ u_{12}(c, l) > 0
Proof: Relying on Lemma 1, I start by solving the programs P^0 and P^1 for a given a(t_R). Denoting µ^0 the costate variable, the Hamiltonian of the program P^0 is:
$$H^0(c(s), a(s), \mu^0(s), s) = e^{-\theta(s-t)}\, u\big(c(s), \kappa\big) + \mu^0(s)\big[r\,a(s) + (1-\kappa)\,w(s) + b(s) - c(s)\big] \quad (1)$$
According to Pontryagin's maximum principle, the necessary conditions for optimality are:
$$\forall s \in [t, t_R), \quad \frac{\partial H^0(\cdot)}{\partial c(s)} = 0 \ \Rightarrow\ \mu^0(s) = e^{-\theta(s-t)}\, u_1\big(c(s), \kappa\big) \quad (2)$$
$$\forall s \in [t, t_R), \quad \frac{\partial H^0(\cdot)}{\partial a(s)} = -\dot{\mu}^0(s) \ \Rightarrow\ \dot{\mu}^0(s) = -r\,\mu^0(s) \quad (3)$$
$$\forall s \in [t, t_R), \quad \dot{a}(s) = r\,a(s) + (1-\kappa)\,w(s) + b(s) - c(s) \quad (4)$$
Moreover by construction of the Hamiltonian and Pontryagin maximum principle it is well known that:
$$\frac{\partial V^0(t, t_R, a(t), a(t_R), r, w, b, \kappa)}{\partial a(t_R)} = -\mu^0(t_R) \quad (5)$$
Similarly for program P^1, we have:
$$H^1(c(s), a(s), \mu^1(s), s) = e^{-\theta(s-t)}\, u\big(c(s), 1\big) + \mu^1(s)\big[r\,a(s) + b(s) - c(s)\big] \quad (6)$$
$$\forall s \in (t_R, T], \quad \frac{\partial H^1(\cdot)}{\partial c(s)} = 0 \ \Rightarrow\ \mu^1(s) = e^{-\theta(s-t)}\, u_1\big(c(s), 1\big) \quad (7)$$
$$\forall s \in (t_R, T], \quad \frac{\partial H^1(\cdot)}{\partial a(s)} = -\dot{\mu}^1(s) \ \Rightarrow\ \dot{\mu}^1(s) = -r\,\mu^1(s) \quad (8)$$
$$\forall s \in (t_R, T], \quad \dot{a}(s) = r\,a(s) + b(s) - c(s) \quad (9)$$
$$\frac{\partial V^1(t_R, T, a(t_R), r, b)}{\partial a(t_R)} = \mu^1(t_R) \quad (10)$$
Moreover, P^1 being a constrained endpoint problem, we have to fulfill the transversality condition:
$$\mu^1(T)\,a(T) = 0 \ \Rightarrow\ a(T) = 0 \quad (11)$$
P^0 and P^1 verifying the standard strict concavity condition of their respective Hamiltonians, they both admit a continuous and unique solution on their respective domain.
Let us now turn to the problem of the optimal value of the asset at the retirement date, a*(t_R). Relying on the principle of optimality (Lemma 1), a necessary condition for a*(t_R) to be a maximum of (V^0(·) + V^1(·)) is:
$$\frac{\partial V^0(\cdot)}{\partial a(t_R)} + \frac{\partial V^1(\cdot)}{\partial a(t_R)} = -\mu^0(t_R) + \mu^1(t_R) = 0 \ \Leftrightarrow\ u_1\big(c^0(t_R), \kappa\big) = u_1\big(c^1(t_R), 1\big) \quad (12)$$
It is easy to check that the left-hand term of the last equality is increasing in a(t_R) while the right-hand one is decreasing (a higher asset at retirement lowers consumption before retirement and raises it after), assuring the uniqueness of a*(t_R). If for all (c, l) in R_+ × [0, 1], u_{12} < 0, then u_1(c^0(t_R), κ) > u_1(c^0(t_R), 1). Because u_{11} < 0, we can then have u_1(c^0(t_R), κ) = u_1(c^1(t_R), 1) if and only if c^0(t_R) > c^1(t_R). The reasoning is the same for the two other cases.
In this setting, a negative cross derivative of the per-period utility of consumption and leisure is necessary to obtain a discontinuous drop in consumption at the age of retirement, i.e. to resolve the retirement consumption puzzle. It means that, if we believe that the model is a proper simplification of the intertemporal choices of agents in the real world, the observation of that kind of drop informs us on the negative sign of the cross derivative. This may seem strange because many workhorse utility functions in labor economics, such as the Cobb-Douglas or the CES utility function, are characterized by a positive cross derivative.
However, it is important to notice that, relying on a different model of intertemporal choice with full endogeneity of labor, Heckman also concludes that a negative cross derivative of the per-period utility of consumption and leisure is required to explain the hump shape of the intertemporal consumption profile.
In this part, I have given a complete theoretical treatment of an idea that was alluded to in Banks et al. and in the "back-of-the-envelope calculation" of Battistin et al. This calculation was grounded on the following parametrical form:
$$u(c, l) = \frac{\big(c^{\alpha} l^{1-\alpha}\big)^{1-\gamma}}{1-\gamma}$$
with γ > 0 interpreted as the reciprocal of the intertemporal elasticity of substitution. They rightfully conclude that, to solve the retirement consumption puzzle in this model, γ > 1 is required, but they miss the right insight for explaining that. Indeed, in this model, γ fully captures the intensity of the response of consumption to a variation of the rate of interest only when leisure is fully endogenous, but in that case there would be no discontinuity in the consumption function. As we have shown, explaining such a discontinuity requires leisure to be exogenous at the age of retirement²; then it is -c u_{11}/u_1 = α(γ-1) + 1 that captures the intensity of the response of consumption to a change of the rate of interest. Moreover, if the model is based on a Cobb-Douglas utility function, it is in fact a power transformation of a Cobb-Douglas, a transformation that can alter the sign of the second order cross derivative.
We have u_{12} = α(1-α)(1-γ) c^{α(1-γ)-1} l^{(1-α)(1-γ)-1}. With this special parametrical form, the sign of the cross derivative of utility is fully given by the position of γ with respect to unity. When γ is higher than one, this cross derivative is negative, explaining the downward discontinuity in consumption, as confirmed by the general statement of Proposition 1.3. The effect has nothing to do with the intertemporal elasticity of substitution per se.
2 Or at least a constraint for a minimum per-period work duration that is binding.
3 Optimal age of retirement

I have solved the program P with the age of retirement, t_R, being a parameter. I now have all the material to characterize the optimal age of retirement, the one that maximizes the value of the program. In particular, the decomposition of the general program into two sub-programs delimited by the age of retirement allows deriving this optimal age of retirement in a parsimonious and elegant manner.
Proposition 2 (The optimal age of retirement).
When an interior solution exists, and denoting b^0(t_R) := lim_{s→t_R^-} b(s) < 0, the optimal age of retirement \tilde{t}_R is such that:
$$u\big(c^1(\tilde{t}_R), 1\big) - u\big(c^0(\tilde{t}_R), \kappa\big) = u_1\big(c^0(\tilde{t}_R), \kappa\big)\Big((1-\kappa)\,w(\tilde{t}_R) + b^0(\tilde{t}_R) - b(\tilde{t}_R) + \big(c^1(\tilde{t}_R) - c^0(\tilde{t}_R)\big)\Big) \quad (13)$$
Proof: \tilde{t}_R is a solution of max_{t_R} V*(t, t_R, T, a(t), r, w). Because V* is continuous and differentiable in t_R, a necessary condition for having an interior solution is:
$$\frac{\partial V^*(t, t_R, T, a(t), r, w)}{\partial t_R} = 0 \quad (14)$$
Relying on Lemma 1 and noting that by construction of the Hamiltonian and Pontryagin maximum principle:
$$\frac{\partial V^0(t, t_R, a(t), r, w)}{\partial t_R} = H^0\big(c^0(t_R), a^0(t_R), \mu^0(t_R), t_R\big) \quad\text{and}\quad \frac{\partial V^1(t_R, T, a(t), r, w)}{\partial t_R} = -H^1\big(c^1(t_R), a^1(t_R), \mu^1(t_R), t_R\big)$$
we can easily conclude that \tilde{t}_R is such that:
$$H^0\big(c^0(\tilde{t}_R), a^*(\tilde{t}_R), \mu^0(\tilde{t}_R), \tilde{t}_R\big) = H^1\big(c^1(\tilde{t}_R), a^*(\tilde{t}_R), \mu^1(\tilde{t}_R), \tilde{t}_R\big) \quad (15)$$
Using the definitions of the Hamiltonians and the first order conditions of programs P^0 and P^1, and remembering that, in any case, a is continuous in t_R, we get equation (13).
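For the reader's convenience, the compressed algebra behind this last step can be written out; this is our own expansion, using only Eqs. (1), (2), (6), (7), (12) and (15) of the proof:

```latex
% Our expansion of the step from (15) to (13). At s = \tilde{t}_R, by (2),
% (7) and (12), \mu^0(\tilde{t}_R) = \mu^1(\tilde{t}_R)
%   = e^{-\theta(\tilde{t}_R - t)} u_1(c^0(\tilde{t}_R), \kappa) =: \mu.
% Writing (15) with the Hamiltonians (1) and (6), and a = a^*(\tilde{t}_R):
\begin{align*}
e^{-\theta(\tilde{t}_R - t)} u(c^0, \kappa)
  + \mu\big[r a + (1-\kappa) w + b^0 - c^0\big]
&= e^{-\theta(\tilde{t}_R - t)} u(c^1, 1)
  + \mu\big[r a + b - c^1\big] \\
\Longrightarrow\quad
u(c^1, 1) - u(c^0, \kappa)
&= u_1(c^0, \kappa)\big[(1-\kappa) w + b^0 - b + (c^1 - c^0)\big],
\end{align*}
% which is Eq. (13); all functions are evaluated at \tilde{t}_R.
```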
This is a standard marginal condition for optimality. The left-hand side of Equation (13) is the direct cost in utility of a marginal increase in the retirement age, while the right-hand side is the indirect gain in utility due to the supplementary resources generated by a longer work duration. The important and innovative point is that, when taking into account the retirement consumption puzzle, the endogenous drop of consumption implies that fewer resources are required to maintain the same level of utility. Thus the earnings differential can be higher when the agents decide to retire.
Proposition 2 provides a very general characterisation of the optimal retirement age. Moreover, when expanding consumption before and after retirement as implicit functions of the parameters of the problem, and when endogenizing the budgetary constraint of the social security system, it allows deriving comparative statics results on the optimal age of retirement.
Conclusion
This short paper provides a general methodology to resolve the retirement consumption puzzle and the choice of the optimal age of retirement. The principle is illustrated in a simple model of intertemporal choice in which utility depends on consumption and leisure, with a certain horizon. To solve the puzzle we need only two assumptions: 1. retirement implies a discontinuity in the intertemporal leisure profile and 2. the cross-derivative of the utility function is negative. The method is general and can easily be extended to more realistic models with uncertain lifetime³.
This idea could be generalized by endogenizing the per-period work duration while taking into account a granularity assumption: in general, for organizational reasons, work duration can be either zero or something significantly different from zero.
³ In a companion paper, I am working on a calibrated version of the model taking into account a realistic modeling of uncertain lifetime in the spirit of Drouhin ("A rank-dependent utility model of uncertain lifetime") and the possibility of a non-stationary intertemporal utility, allowing per-period utility to change with age in the spirit of Drouhin ("Non stationary additive utility and time consistency").
01767225 | en | ["chim.poly"] | 2024/03/05 22:32:15 | 2017 | https://theses.hal.science/tel-01767225/file/ANTOINE_SEGOLENE_2017.pdf

Keywords: ATRP, Atom Transfer Radical Polymerization; BCP, Block Copolymer; C3, Triple cylinders-on-cylinders; C4, Quadruple cylinders-on-cylinders; CaH2, Calcium hydride; CED, Cohesive Energy Density; CF4, Tetrafluoromethane; CH2Cl2, Dichloromethane; CHCl3, Chloroform; CO2, Carbon dioxide; CSC, Core-shell cylinders; D, Diffusion coefficient; DIC, N,N'-diisopropylcarbodiimide; DMA, 2-(dimethylamino)ethyl methacrylate; DOSY, Diffusion-Ordered Spectroscopy; DPE, Diphenylethylene; FTIR, Fourier Transform InfraRed spectroscopy
The results obtained during this thesis are the fruit of a collaborative effort; I wish to thank all the people who contributed to this project.
I thank Pr. Georges Hadziioannou, my thesis director, who welcomed me into his team at the Laboratoire de Chimie des Polymères Organiques (LCPO). I was very happy to be part of B8. The equipment, instruments, chemicals and premises made available to us allow one to work in the most pleasant conditions. Monsieur Hadziioannou, I thank you for having been particularly present at the end of my thesis, for having given me the opportunity to present my work to your second-year master's students, and for your trust throughout the thesis. You brought me a great deal, in particular your very broad vision of my subject and its applications. You taught me to communicate more effectively and to promote my results intelligently.
I also wish to thank Dr Karim Aissou, my thesis tutor. Karim, thank you for your guidance, your support, your availability and your dynamism. Thank you for always taking the time to talk with me about my results (good or bad) and for listening. You trained me in the scientific approach, you allowed me to structure my ideas and question key points, and you passed on to me your passion for science. I greatly enjoyed working and exchanging with you. Thank you for supporting me on the chemistry part and enlightening me on the physics part of my thesis. I learned an enormous amount thanks to you during these 3 years, both professionally and personally.
I wish to thank Dr Guillaume Fleury for his advice during my thesis. Guillaume, you followed my project from a little further away than Karim, but always attentively. I thank you for your advice and for listening. Like Karim, you always left the door open, and you were an attentive ear during these 3 years.
I wish to thank all the members of the jury for having accepted to judge this work:
- Monsieur Gigmes and Monsieur Sinturel, as reviewers, for their pertinent remarks on my manuscript.
- Monsieur Lecommandoux for having accepted the responsibility of presiding over the jury, and also for having welcomed me to the LCPO.
- Monsieur Chapel and Monsieur Iliopoulos for the interest shown in my work and the discussion you led during my defense.
I wish to thank everyone at the LCPO, and especially my office, Dimitrios, Alberto, Sylvain, Muriel, Damien, Paul, and of course Florian for having corrected my thesis in English. The good relationship between the members of Bu5 was a true breath of good humor every day. Thank you for your support. I also particularly thank my closest colleagues and friends: Anna, Camille, and Cindy. You were true pillars during my thesis, always up for a drink after work and for talking about everything and nothing. Thank you for the good times spent together. I thank my childhood friends from Paris, in particular Maud, Priscille, Karen, Anne-Lise, Hugues, and Nicolas. Thank you for always having been there for me in the good and the less good moments. I also thank my friends from Bordeaux who welcomed me like a true family. Bené, Améline, Alexis, Thomas and Antho, a big thank you; thanks to you, I always felt at home in Bordeaux. I also thank my housemate and friend François for the great year spent at 26 rue
In this thesis, we first focused on the synthesis of linear and star-shaped ABC terpolymers, and then on the study of the self-assembly of these terpolymers in thin films.
In the first part of this thesis, we set up an efficient method for the synthesis of linear and star-shaped ABC terpolymers. One of the key parameters is the degree of incompatibility between the blocks, represented by the Flory-Huggins interaction parameter. Designing terpolymers whose blocks are highly incompatible promotes microphase separation. Here, we chose to work with polystyrene (PS), poly(2-vinylpyridine) (P2VP), and polyisoprene (PI), for which the Flory-Huggins parameters, χ, between the blocks are high (χ_PS-PI [2] ≈ χ_PS-P2VP [3] ≈ 0.1 and χ_PI-P2VP [4] ≈ 0.8).
Our goal was to find an efficient synthesis method that keeps the molecular weight of PS and P2VP constant while varying the molecular weight of PI. We chose to work with coupling methods, which make it possible to attach a PS-b-P2VP diblock of constant molecular weight to PI blocks of different molecular weights, so as to easily modulate the molecular weight of PI.
We first synthesized linear ABC terpolymers. Anionic polymerization was chosen as the synthetic route because it allows good control of the polymer chain growth during the synthesis, which yields well-defined terpolymers. In the literature, the most reported method for the synthesis of linear ABC terpolymers is sequential anionic polymerization. This method consists of the anionic polymerization of monomer A which, after being consumed, initiates the anionic polymerization of monomer B, and so on. In our study, we chose to work with a PS-b-P2VP-b-PI. The sequential anionic polymerization of the first two blocks (PS and P2VP) is possible, whereas the active center carried by the living P2VP cannot initiate the polymerization of PI. This is why sequential anionic polymerization could not be considered in this study.

2 Ren, Y., Lodge, T. P. & Hillmyer, M. A. Synthesis, characterization, and interaction strengths of difluorocarbene-modified polystyrene-polyisoprene block copolymers. Macromolecules 33, 866-876 (2000).
3 Hammond, M. R., Cochran, E., Fredrickson, G. H. & Kramer, E. J. Temperature dependence of order, disorder, and defects in laterally confined diblock copolymer cylinder monolayers. Macromolecules 38, 6575-6585 (2005).
4 Funaki, Y. et al. Influence of casting solvents on microphase-separated structures of poly(2-vinylpyridine)-block-polyisoprene. Polymer (Guildf). 40, 7147-7156 (1999).
Thus, the linear ABC terpolymers were obtained by synthesizing a PS-b-P2VP and a PI, both end-functionalized, followed by coupling.
The synthesis of end-functionalized PI was carried out by anionic polymerization without any protection or deprotection step. The chain-end functionalization was achieved by carboxylation of the growing PI. A general scheme of the synthesis is shown below (Fig. 1). The anionic polymerization of isoprene was carried out in THF at -30 °C using sec-butyllithium (sec-BuLi) as initiator. Once all the monomer was consumed, carbon dioxide was bubbled into the solution and the reaction was then stopped by adding methanol. The different molecular weights of PI were obtained by varying the amount of initiator during the polymerization.
The synthesized PIs were characterized by proton nuclear magnetic resonance (NMR) and by size exclusion chromatography (SEC). The fractions of 1,2 (δ = 5.5-6 ppm) and 3,4 (δ = 4.4-5 ppm) units were determined by proton NMR to be 30 and 70 % of the PI, respectively. The molecular weights of the PIs were determined by SEC in THF as 9, 13, 16 and 28 kg.mol⁻¹. The quantitative functionalization of the PI was verified by a phenolphthalein titration.
The synthesis of the PS-b-P2VP diblock was carried out by sequential anionic polymerization. The chain-end functionalization was performed by adding ethylene oxide to the living PS-b-P2VP chains. A general scheme of the reaction is shown below (Fig. 2). The anionic polymerization of styrene was initiated by sec-BuLi in THF at -78 °C. When all the monomer was consumed, 2VP was added. The polystyryllithium then initiated the anionic polymerization of 2VP. When all the 2VP was consumed, ethylene oxide was added in order to bring the alcohol function to the chain end. Finally, the reaction was stopped by adding methanol.
The diblock thus synthesized was characterized by SEC and by proton NMR. The molecular weights of the PS and P2VP were determined by SEC as 21 and 24 kg.mol⁻¹, respectively. The ratio obtained between the PS and P2VP blocks was confirmed by proton NMR (PS : P2VP = 1 : 1.1).
The linear PS-b-P2VP-b-PI ABC terpolymers were synthesized by coupling the PS-b-P2VP diblock with the PIs of different molecular weights. The method most reported in the literature for the synthesis of CPBs is Huisgen "click" chemistry. This coupling method gives a yield close to 1. Nevertheless, it involves metals (often copper) as catalysts. These metals can be chelated by the 2VP, which increases the number of purification steps. Moreover, this method requires numerous functionalization steps. We therefore decided to use the Steglich esterification, which has a yield close to 1 and requires few functionalization and purification steps. The PS-b-P2VP diblock functionalized with a hydroxyl end group was then coupled to the PI bearing a terminal carboxylic acid function. A general scheme of the synthesis is shown below (Fig. 3). Three linear ABC terpolymers were synthesized with this method, keeping the molecular weight of PS and P2VP constant while varying only the molecular weight of PI. A summary table is presented below (Table 1).
We then synthesized star-shaped ABC terpolymers. In the literature, three main methods have been reported for the synthesis of star-shaped ABC terpolymers. The first, developed by Hadjichristidis⁵, consists of the synthesis of three arms by living or controlled polymerization, the three arms then being linked to a chlorosilane core molecule. This method, called "arm-first", requires the use of specific glassware. It was therefore not considered in this thesis. The second method involves the use of a multifunctional core molecule.⁶ Living or controlled polymerizations are then carried out from this core molecule. This method, called "grafting from", is limited as soon as the monomers must be selective. The last method, called the "hybrid approach", is a combination of the first and the second method.⁷ There, the core molecule is a mixture of termination and initiation sites. We chose to work with this method in this thesis.
A partir d'une 4-bromobenzophenone, nous avons réduit la fonction cétone par une réaction de Wittig afin d'obtenir une diphényléthylène substituée par une fonction bromure.
Nous avons ensuite réalisé un réactif de Grignard à partir de la fonction bromure capable d'ouvrir l'oxyde d'éthylène et ainsi obtenir une diphényléthylène substituée par une fonction alcool. La dernière étape de la synthèse consiste en la protection par un composé silylé de la fonction alcool. Le schéma réactionnel est montré ci-dessous (Fig. 4). tertbutyldiméthylsiloxy)éthyl)phényl-1-phenylethylène. 6 He, T., Li, D., Sheng, X. & Zhao, B. Synthesis of ABC 3-miktoarm star terpolymers from a trifunctional initiator by combining ring-opening polymerization, atom transfer radical polymerization, and nitroxide-mediated radical polymerization. Macromolecules 37, 3128-3135 (2004). 7 Fujimoto, T. et al. Preparation and characterization of novel star-shaped copolymers having three different branches. Polymer (Guildf). 33, 2208-2213 (1992).
Once the core molecule was synthesized, a core-functionalized PS-b-P2VP was prepared. Styrene was polymerized by anionic polymerization in THF at -78 °C, initiated by sec-BuLi. After complete consumption of the monomer, the core molecule was added. The polystyryllithium then reacted with the double bond of the substituted diphenylethylene to form a macroinitiator. 2VP was then added to the medium, and the living macroinitiator initiated the anionic polymerization of 2VP. Once the monomer was consumed, methanol was added to stop the reaction. The alcohol function carried by the core molecule was then deprotected by hydrolysis. A scheme of the reaction is shown below (Fig. 5). The scheme of the coupling reaction is presented below (Fig. 6). The star-shaped ABC terpolymers thus synthesized were characterized by NMR, SEC and DOSY NMR, confirming the synthesis of four star-shaped ABC terpolymers. A summary table is presented below (Table 2).
Table 2: Summary of the star-shaped ABC terpolymers synthesized by coupling a core-functionalized PS-b-P2VP with an end-functionalized polyisoprene.
To conclude on this first part of the thesis, we developed a synthesis method yielding linear and star-shaped ABC terpolymers having two blocks (PS and P2VP) of identical size, while the size of the last block (PI) varies (composition symmetric with respect to the other blocks, or asymmetric).
Nous avons ensuite étudié les morphologies accessibles par l'auto-assemblage d'un terpolymère ABC linéaire frustré de type II dans une configuration de film mince. L'autoassemblage de ce type de terpolymère (frustré de type II) n'a jamais été décrit en film mince mais uniquement en masse dans la littérature. Ainsi, des études théoriques et expérimentales (uniquement en masse) ont montré que des sphères dans des sphères, des sphères sur des cylindres, des anneaux sur des cylindres, des cylindres dans des lamelles, des gyroïdes alternées, des lamelles alternées, ou encore des hélices sur des cylindres sont autant de morphologies accessibles avec des terpolymères ABC linéaires de type II. 8 Le comportement en film mince du PS21-b-P2VP24-b-PI9 (les chiffres en indice correspondent aux masses moléculaires des blocs en kg.mol -1 ) a été étudié. Pour cela, une solution de polymère à 2% en masse dans du toluène a été déposée à la tournette sur un substrat en silicium. L'épaisseur du film est contrôlée par la vitesse de rotation du substrat (1,5 krpm).
Chain mobility was provided by annealing the films in a chloroform vapor. To increase the contrast during the microscopy analyses, the films were treated with a CF4/O2 plasma and the P2VP block was stained with platinum salts. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) images were then recorded. SEM images of the obtained morphology are shown below (Fig. 7). When the film thickness is decreased below the dimension of the unit cell (Fig. …). The AFM image of a thin film of 3 µ-S19P24I9 annealed in a THF vapor is reported below (Fig. 8A). For an 80 nm film, a hexagonal morphology appears in which PS (in yellow) is surrounded by 6 columns of PI (in black) and 6 columns of P2VP (in brown).
The PS domains have a period of 46 nm while the PI and P2VP domains have a period of 23 nm. This structure corresponds to the Archimedean (4.6.12) tiling. This is the first time that this type of tiling has been obtained in a thin film. The film shows an out-of-plane columnar morphology in which the PI columns (black) are surrounded by 6 PS columns and 6 P2VP columns. The period of the PI domains is 41 nm while that of the PS and P2VP domains is 24 nm. This structure also corresponds to the (4.6.12) tiling described above. Note that the blocks occupy different positions in the structure compared with the (4.6.12) tiling obtained for a more asymmetric PI composition.
The PI block, which occupies the core of the structure, exhibits longer-range order than the PS and P2VP domains. Indeed, the latter do not always form 6 columns around the PI block. Since the PI domains are the most organized, we can assume that the phase separation follows a two-step mechanism in which the self-organization of the PI block is more advanced than that of the PS and P2VP blocks.
When the film thickness is increased, a square lattice is obtained (Fig. 10). In this study, we obtained Archimedean (4.6.12) tilings in thin films.
We showed that a two-step mechanism governs the microphase separation.
The two (4.6.12) tilings were obtained for different PI compositions and annealing solvents. We thus showed that the center of the structure is occupied by the polymer having the greatest affinity with the annealing solvent.
General introduction
The microelectronic miniaturization becomes more and more difficult with classical technologies such as optical lithography coupled with etching processes. Therefore, the electronic industry is investigating alternative methodologies to fabricate discrete objects with nanoscale dimensions perfectly ordered into dense arrays. For instance, the DRAM cell miniaturization race foresees a half-pitch of 5 nm between two memory points in 2020. Currently, nanofabrication techniques able to produce devices at this scale are based on serial processes (e-beam, Dip Pen Lithography, …), but those techniques are too time-consuming for the industry. Block copolymers (BCPs) could be a great industrial alternative. Indeed, they are compatible with the silicon technology already used in microelectronics, and BCPs have the ability to self-assemble into dense and regular arrays with small periods.
Nowadays, the scientific community has widely studied the self-assembly of diblock copolymers (AB-type BCPs), which allows the formation of spheres, cylinders, gyroids, and lamellae. The self-assembly of AB-type BCP thin films is quite well understood. Adding another chemically different block leads to the formation of an ABC triblock terpolymer with a star or linear architecture. Unlike AB-type BCPs, ABC triblock terpolymers give access to hierarchical, core-shell and alternating morphologies as well as Archimedean tiling patterns. The parameters governing the self-assembly of ABC triblock terpolymer thin films are more numerous than those driving the AB-type BCP self-assembly, leading to a more complex phase behavior. Another challenge to the use of ABC triblock terpolymers can be the difficulty of their synthesis, as multistep reactions are needed for their formation.
In this PhD thesis, we will focus on the synthesis of linear and star ABC terpolymers and their self-assembly in thin films. This work will first be devoted to the synthesis and the macromolecular characterizations of ABC triblock terpolymers. The synthesis route should allow the formation of well-defined terpolymers with few purification steps. Then, we will focus our study on the self-assembly of linear and star ABC terpolymer thin films in order to better apprehend the structural diversity offered by these complex architectures.
Chapter I of this manuscript will describe the general context of this study. Afterwards, we will study the phenomenon leading to the phase-separation of the BCP chains, the existing synthetic methodologies for the production of complex BCP architectures, as well as the state of the art regarding the self-assembly of linear and star ABC terpolymers.
In chapter II, we will describe the synthesis of the linear and star terpolymers used in this PhD thesis, which consist of polystyrene, poly(2-vinylpyridine) and polyisoprene. These ABC triblock terpolymers will also be characterized using analytical methods, such as nuclear magnetic resonance (NMR) and size-exclusion chromatography (SEC), in order to correlate their macromolecular characteristics with the observed self-assembly behavior.
Chapter III will describe the self-assembly of linear ABC terpolymer thin films. We will mainly focus on the self-assembly behavior of a core-shell double gyroid structure, with special attention to the different crystallographic planes oriented parallel to the air surface depending on the film thickness.
In chapter IV, we will finally, present the morphologies obtained from the self-assembly of star miktoarm ABC terpolymers (3 µ-ABCs).
I. General context
The component miniaturization that drives the evolution of the microelectronic industry has led to a race toward the development of new lithographic techniques. Indeed, the lithographic resolution for a high density array is defined by the Rayleigh equation:
R = k1 λ / NA
where λ is the wavelength used for the lithographic process, NA is the numerical aperture of the lens, and k1 is a constant inherent to the process with a minimum value of 0.25.
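As a quick numerical illustration of this relation, the sketch below (Python; the numerical values are those quoted in the following paragraph for 193 nm immersion lithography) computes the minimum resolvable half-pitch.

    # Rayleigh resolution criterion: R = k1 * lambda / NA
    def rayleigh_resolution(k1: float, wavelength_nm: float, na: float) -> float:
        """Return the minimum resolvable half-pitch in nanometers."""
        return k1 * wavelength_nm / na

    # 193 nm immersion lithography (NA = 1.35) at the physical limit k1 = 0.25
    r = rayleigh_resolution(k1=0.25, wavelength_nm=193.0, na=1.35)
    print(f"Minimum half-pitch: {r:.1f} nm")  # ~35.7 nm, in line with the 32 nm node reached with double patterning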
It is important to note that during the last decades, the reduction of k1 and λ has been the main factor responsible for the large progress in photolithography. So far, the state of the art in photolithography has been based on 193 nm lithography combined with an immersion process (NA = 1.35).1 Therefore, the microelectronic industry is nowadays able to produce components based on the 32 nm node. These components are fabricated with the latest lithographic techniques, using a double patterning process to further decrease the feature size.
In addition to the 32 nm node, other methodologies were considered, but nowadays the industry focuses on extreme UV lithography.2 However, this methodology is far from mature, and serious problems inherent to the power of the sources remain to be tackled.
Moreover, the choice of the lithographic technique is also limited by its cost. For instance, the scanners used in 193 nm lithography cost $50,000,000, and those for extreme UV lithography $125,000,000.3-5 Interestingly, the same observation was made by the data storage industry, which faces the same problem.6 This is why other techniques like maskless lithography or the self-assembly of block copolymer (BCP) thin films are seriously considered. Diblock copolymers (AB-type BCPs) are indeed considered thanks to their intrinsic self-assembling properties, which lead to the formation of regular structures having a period in the range of 5-100 nm. BCPs also have the advantage of being cheap compared to other current technologies.
However, the pattern symmetries of AB-type BCPs are limited. While nanotemplates derived from the self-assembly of AB-type BCP thin films enable well-ordered features (i.e. dots, holes, pillars) within a common p6mm symmetry pattern due to packing frustration, linear ABC terpolymers provide access to richer and more complex patterns having p2mm, p4mm, p3m1 or p2 symmetries.7-9 For that purpose, in this PhD thesis, we were interested in other block copolymer macromolecular architectures to achieve a panoply of more complex pattern symmetries. To this aim, we have studied the self-assembly of linear and star miktoarm ABC terpolymers. Some theoretical and experimental studies have shown that star miktoarm ABC terpolymers (noted hereafter 3 µ-ABC) give access to hierarchical morphologies and Archimedean tiling patterns.10-14 Archimedean tiling patterns, as presented by Johannes Kepler in Harmonices Mundi (1619), consist of regular polygons arranged on a plane without any interstice (gap). From this criterion, Kepler counted 11 tiling patterns (see Figure 1). The self-assembly of 3 µ-ABCs gives access to four of the eleven possible Archimedean tiling patterns, which are: (6.6.6), (4.8.8), (4.6.12) and (3.4.6.4).10,15,16 These additional pattern symmetries formed by the microphase-separation of linear and star miktoarm ABC terpolymer thin films can be used to create surfaces with improved or new functionalities. Linear and star miktoarm terpolymers will be used in this thesis in order to generate "three-colored" hierarchical morphologies and Archimedean tiling patterns.
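A defining property of an Archimedean tiling is that the interior angles of the regular polygons meeting at each vertex sum to 360°. The short sketch below (Python) verifies this condition for the four vertex configurations quoted above; the check itself is elementary geometry, not a result from the cited studies.

    # Interior angle of a regular n-gon in degrees: (n - 2) * 180 / n
    def interior_angle(n: int) -> float:
        return (n - 2) * 180.0 / n

    # The four Archimedean tilings accessible to 3 mu-ABC star terpolymers
    vertex_configurations = [(6, 6, 6), (4, 8, 8), (4, 6, 12), (3, 4, 6, 4)]
    for config in vertex_configurations:
        total = sum(interior_angle(n) for n in config)
        print(config, "-> angle sum:", total)  # 360.0 for every valid vertex configuration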
II. Block copolymer phase behavior
Microphase separation of block copolymers
In the past decades, extensive research has been devoted to the development of a molecular-level understanding of microphase separation in block copolymers. The incompatibility between the different blocks leads to a phase separation whose length scale is restrained by the covalent bonding between the blocks. Thermodynamically, the phase separation in block copolymer systems can be easily explained in terms of the Gibbs free energy of mixing as follows:
ΔGm = ΔHm − TΔSm
Where the enthalpy change of the process, ΔHm, is largely determined by the Flory-Huggins parameter, χ, which relates to the interactions between the A and B blocks. The change in entropy of the process, ΔSm, mainly depends (inversely) on the degree of polymerization of the chain, N. Here, the self-assembly of the AB-type BCP (demixing) spontaneously occurs if ΔGm is positive and is (generally) accompanied by an entropy loss. When the temperature is increased, the self-assembly becomes progressively less effective as the magnitude of TΔSm approaches the magnitude of ΔHm, and above a critical temperature it will not occur. For an AB-type BCP, the incompatibility between the A and B blocks can be expressed by the product χN, and the microphase separation occurs when χN > 10.5 (for a symmetric lamellar structure).17

AB-type BCP phase diagram

Linear AB-type BCPs have been studied for over 30 years, and various structures have been identified depending on the volume fractions of each component. A close agreement between the theoretical predictions and the experimental results has been observed over the years. It was proven that the AB-type BCP self-assembly gives access to different morphologies including spheres, cylinders, lamellae, metastable hexagonally perforated lamellae, and complex networks (Q230 and O70).18 The theoretical phase diagram of a diblock copolymer is shown in Figure 3. Depending on the block volume fraction, f, and the incompatibility product, χN, the morphology can be predicted from this phase diagram. In the case of linear and star miktoarm ABC terpolymers, the equations are much more complicated and only few studies deal with the free energy of these systems. Nevertheless, the thermodynamic principles remain the same. Depending on the volume fractions of the A, B and C blocks, the different χN products, and the chain topology/architecture, linear and star miktoarm ABC terpolymers adopt different morphologies, which are richer and more complex than those presented in Figure 3. In the next part, we will discuss the self-assembly of bulk and thin film linear and star miktoarm ABC terpolymers.
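As a numerical illustration of the χN > 10.5 criterion mentioned above, the sketch below (Python) checks the condition for a few chain lengths, taking χ ≈ 0.1 (the experimental PS/PI value quoted later in this manuscript) as an illustrative value.

    # Microphase separation criterion for a symmetric AB diblock: chi * N > 10.5
    def microphase_separates(chi: float, n_total: int, threshold: float = 10.5) -> bool:
        return chi * n_total > threshold

    # chi ~ 0.1 is the experimental PS/PI value quoted later in this manuscript
    for n in (50, 100, 200):
        print(f"N = {n}: chi*N = {0.1 * n:.0f} ->", microphase_separates(0.1, n))
    # N = 50 and N = 100 fall below the threshold; N = 200 (chi*N = 20) phase-separates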
III. Morphologies obtained from the self-assembly of linear and star miktoarm ABC terpolymers
In the literature, most of the work has been focused on the self-assembly of AB-type BCPs, which is quite well understood now. Adding a C block to an AB-type BCP leads to the formation of linear or star miktoarm ABC terpolymers. In recent years, more interest has been given to the self-assembly of linear or star miktoarm ABC terpolymers in bulk. These ABC-type terpolymers self-assemble into a much wider range of architectures compared with AB-type BCPs.
Bulk self-assembly of linear ABC terpolymers
In this part, we will describe the structures obtained from the self-assembly of bulk linear ABC terpolymers. After a brief general introduction, we will report on structures obtained depending on the chain topology.
a. Introduction
In 1980, Riess et al.19 were the first to point out the possibility of achieving new morphologies with ABC-type terpolymers in comparison with those already observed from AB-type BCPs (see Fig. 3). In this study, they identified core-shell structures formed by a linear ABC terpolymer consisting of polystyrene (PS), polyisoprene (PI) and poly(methyl methacrylate) (PMMA) (PS-b-PI-b-PMMA). The microphase separation of linear ABC terpolymers is governed by the volume fractions of the A, B, and C blocks (noted fA, fB, fC, respectively), their degrees of polymerization (noted NA, NB, NC, respectively), the Flory-Huggins interaction parameters between the different pairs (noted χAB, χBC, χAC), and the chain topology (i.e. ABC, ACB, or BAC).
Bates and Fredrickson20 identified, from theoretical studies, some of the morphologies formed by linear ABC terpolymers (see Figure 4). As for AB-type BCPs, the microphase separation depends on the local segregation driven by unfavorable interfaces and on the tendency to maximize the configurational entropy (i.e. to minimize the chain stretching).21 Linear ABC terpolymers have two junction points (at the intersections of the mid-block).
Therefore, the morphology depends on the position of each block. Different frustration types related to the topology of the terpolymer chain are described.
Bailey suggested a division of linear ABC terpolymers into three classes depending on the interactions between the different blocks.22 When χAC is larger than the interaction parameters between the other blocks, it is called a type 0 frustration. In contrast, if χAB ≈ χAC < χBC, it is a type I frustration, while if χAC is smaller than the other interaction parameters, it is a type II frustration (χAC < χBC ≈ χAB). Because of the incompatibility between the end-blocks, in the case of a non-frustrated terpolymer the A/C interface is limited, whereas, for a frustrated terpolymer, the A/C interface is energetically favorable. In the literature,23 it was reported that, depending on the frustration state, different morphologies can be obtained from the self-assembly of a linear ABC terpolymer, as shown in Figure 4. In this part, we will discuss the morphologies obtained from some type 0, type I, and type II linear ABC terpolymers.
b. Type 0 frustration
A type 0 frustrated linear ABC terpolymer (χAC > χAB ≈ χBC) exhibits an end-block Flory-Huggins parameter higher than those of the other pairs. Consequently, interfaces between the A/B and B/C blocks are promoted, which favors the formation of alternating morphologies. A few examples of morphologies experimentally produced from this kind of non-frustrated terpolymer are described below.
In 1992, Mogi, Matsushita et al.24-27 reported on the bulk self-assembly of a non-frustrated linear ABC terpolymer composed of polyisoprene, polystyrene and poly(2-vinylpyridine) (ISP).25 Among the reported structures was an ordered tricontinuous double-diamond (OTDD) network. They highlighted that the shape of the interfaces between the I and S domains was similar to that between the P and S domains because χIS and χSP are quasi-equivalent and the terpolymer chains are geometrically symmetric (i.e. equal volumes of the end-blocks). Some years later, Phan and Fredrickson28 suggested that the morphology assigned to an OTDD network should be revisited. They pointed out that the alternative gyroid Q214 phase has a lower free energy than the OTDD network, and so the equilibrium mesostructure described by Mogi et al. should be considered as a gyroid Q214 phase.
Another non-frustrated linear ABC terpolymer, consisting of polybutadiene, polystyrene and poly(methyl methacrylate) blocks (noted BSM), was studied by Jung and co-workers.29 The non-frustrated BSM (B:S:M = 1:3:2) showed the formation of cylindrical domains of PB and PMMA dispersed in a matrix of PS (the major component). The PB and PMMA cylinders show a tetragonal packing. Only short-range order was obtained, leading the authors to conclude that a mismatch between the cylinder dimensions brings a bending of the cylinders, thus reducing the long-range ordering.
In 2000, Hückstädt et al.30 investigated non-frustrated PB-b-PS-b-P2VP (BSV) systems, for which only lamellar morphologies were reported.30 In 2001, Bailey, Bates and co-workers31-33 reported on the self-assembly of the non-frustrated polyisoprene-b-polystyrene-b-poly(ethylene oxide) (ISO). They built a phase diagram where different phases are obtained depending on the volume fractions (see Figure 7).
In this paper, the authors showed that, whatever the volume fractions of the blocks, the less incompatible blocks developed the largest interfaces (I/S and S/O interfaces).

c. Type I frustration

In 1993, Gido and co-workers34 reported on the self-assembly of a linear ABC terpolymer composed of PS, PI, and P2VP, as Mogi and co-workers (previously described) but in a type I frustrated configuration. The obtained morphologies are different, as shown on the TEM micrograph presented in Figure 8. The frustrated P2VP-b-PI-b-PS (P:I:S = 1:1:1) shows a PS matrix perforated by hexagonally-packed cylindrical cores of P2VP surrounded by a shell of PI. They showed that a non-constant mean curvature (non-CMC) morphology could be obtained using a linear ABC terpolymer. The non-CMC morphology of this structure was explained by Matsen17 and Semenov35 for an asymmetric AB-type BCP. They demonstrated that the interface tends to curve because of the packing frustration. In the case of linear ABC terpolymers, Gido illustrated that the asymmetric interactions between blocks in a frustrated terpolymer theoretically lead to the formation of non-CMC cylinders.34 In 2000, Hückstädt and co-workers8,30 described the morphology obtained from other type I frustrated PS-b-PB-b-P2VP (SBV) systems. For S45B32V23 and S44B27V29 (S:B:V = 1:0.7:0.5 and S:B:V = 1:0.6:0.7, respectively) they observed a core-shell double gyroid morphology with a P2VP core domain (the smallest block) and a PB shell domain within a PS matrix, as shown in Figure 9. They highlighted that in the core-shell double gyroid, the smallest block (V) forms the core. This block is surrounded by a shell of the most incompatible block (B). When the volume ratio of the P2VP was increased, S25B17V58 (S:B:V = 1:0.7:2.3), a lamellar morphology was produced. In both cases, a large interface between the two most incompatible blocks was observed because of the topology of the chain. The core-shell double gyroid structure was not observed for non-frustrated BSV, and only a lamellar morphology was obtained.8,30 In the case of the type I frustrated PS-b-PI-b-PEO (SIO),36 a core-shell morphology of hexagonally-packed cylinders and gyroid morphologies (Q230 and Q214) were observed, in addition to two lamellar structures (two- and three-domain), because of the frustration conditions, as Bailey and co-workers expected.
Changing the order of the blocks from a non-frustrated to a frustrated terpolymer led to a significantly different morphological behavior since, in the case of a frustrated terpolymer, the interface between two covalently bonded adjacent blocks in the chain is not energetically favorable.
d. Type II frustration
Linear ABC terpolymers with a type II frustration have a χAC parameter smaller than those of the other pairs (χAC < χAB ≈ χBC). A type II frustrated polystyrene-b-polybutadiene-b-poly(methyl methacrylate) (S:B:M = 1:0.5:2.4) was studied by Krappe and co-workers.37 Here PMMA is the major component and forms the matrix in which the PS phase forms cylinders surrounded by PB helices. Since the PS/PMMA interface is energetically favorable, this interface is promoted, whereas the PS/PB and PB/PMMA interfaces are unfavorable and tend to be reduced. In 1998, Brinkmann et al.38 studied the influence of the solvent casting on the bulk morphology of S23B57M20 (S:B:M = 1:0.4:1.2). In this study, they proved that solvent casting could stabilize certain structures. They described a new hexagonal array in which PS and PMMA form two kinds of cylindrical microdomains having a small and a large diameter, respectively, in a PB matrix, as shown in Figure 10. A type II frustrated terpolymer containing a semicrystalline poly(ε-caprolactone) block was also investigated.39 In this study, core-shell cylinders were reported, with a PB shell, a PCL core, and a PS matrix. The authors also discussed the introduction of a semicrystalline block in the self-assembled system. The introduction of a semicrystalline block did not change the basic morphological pattern, but enhanced a deformation from circular-shaped cylinders to an unusual polygonal shape.

e. Theoretical studies

Some theoretical studies reported on the self-assembly of frustrated and non-frustrated linear ABC terpolymers. Li and co-workers40,41 used a three-dimensional self-consistent field theory (SCFT) to predict the morphologies adopted by frustrated linear ABC terpolymers. They demonstrated that frustrated ABC linear terpolymers can self-assemble into three-color lamellae (L3), core-shell cylinders (CSC), perforated lamellae (PL), cylinders-within-lamellae (LC), triple/quadruple cylinders-on-cylinders (C3/C4), double/triple helices-on-cylinders (H2C/H3C), and perforated circular lamellae-on-cylinders (PC), as shown in Figure 11. These results were in accordance with experimentally obtained morphologies. The theoretical study of the self-assembly of non-frustrated linear ABC terpolymers was done in 2014 by Jiang et al.42 In this work, they used the self-consistent field theory to predict the morphologies that could be obtained from the self-assembly of linear ABC terpolymers (see Figure 12). They showed that the three-color lamellar phase is predominant when the volume fractions of the three blocks are comparable. The lamellar phase region was found to be very large. A parallel lamellar phase with hexagonally packed pores at the surfaces (LAM3 ll), a perpendicular lamellar phase with cylinders at the interface (LAM ⊥), and a perpendicular hexagonally packed cylinders phase with rings at the interface (C2 ⊥) were predicted. No core-shell structures were demonstrated for non-frustrated linear ABC terpolymers.
Figure 11: Density isosurface plots of morphologies formed by ABC linear triblock copolymers: (a) three-color lamellae (L3), (b) cylinders-within-lamellae (LC), (c) knitting pattern (KP), (d) triple cylinders-on-cylinders (C3), (e) and (f) quadruple cylinders-on-cylinders (C4), (g) core-shell cylinders (CSC), (h) perforated lamellae (PL), (i) triple helices-on-cylinders (H3C), (j) double helices-on-cylinders (H2C), and (k) perforated circular layer-on-cylinders (PC). The red, green, and blue colors denote the regions where the majority components are A, B, and C, respectively. (Adapted from Li and co-workers.40,41)
f. Conclusion
To summarize, varying the topology of a linear ABC terpolymer leads to the formation of different structures. We can note that in the non-frustrated state alternating microstructures are obtained, while in the frustrated states (I or II), more complex structures are described, as reported in Table 1. In the type II frustrated state, spheres-on-spheres, spheres-on-cylinders, rings-on-cylinders, and cylinders-in-lamellae have been reported. In the case of a type I frustrated linear ABC terpolymer, the interaction between the end-blocks is of intermediate strength and the structures are between those observed in the type II and type 0 frustration states.
The morphologies observed include core-shell cylinders, core-shell double gyroid and lamellae.
All the results are summarized in Table 1.
Table 1: Morphologies obtained in bulk depending on the frustration state.
Morphology                   Type 0   Type I   Type II
Spheres-on-spheres                               X
Spheres-on-cylinders                             X
Rings-on-cylinders                               X
Cylinders-in-lamellae                   X        X
Core-shell cylinders                    X
Core-shell double gyroid                X
Alternating spheres            X        X
Alternating cylinders          X
Alternating gyroid             X        X
Alternating lamellae           X        X        X
Helices-on-cylinders                             X
Bulk self-assembly of star miktoarm ABC terpolymers

In a star miktoarm ABC terpolymer, the three chemically different blocks are linked together at a single junction point. This junction point is then localized along a line (one dimensional). This topological requirement allows the formation of morphologies that are not achievable with linear ABC terpolymers, such as Archimedean tiling patterns. Chen et al. studied the theoretical phase diagram of star miktoarm ABC terpolymers depending on the block volume ratios (see Figure 13). They showed that hierarchical morphologies and Archimedean tiling patterns could be obtained. The nomenclature used in this thesis to define a star miktoarm ABC terpolymer is 3 µ-ABC, with A, B, and C the three different arms of the chain.
Hadjichristidis was the first to report on star miktoarm ABC terpolymer self-assembly in 1993. The star miktoarm ABC terpolymer was a polystyrene-arm-polyisoprene-arm-polybutadiene (3 µ-SIB) with arm molar masses of 20.7, 15.6 and 12.2 kg.mol-1, respectively.
The 3 µ-SIB morphology obtained was a hexagonally close-packed array of cylinders with a period of 30 nm. The authors demonstrated a two-colored pattern since the PI and PB blocks were mixed. This result is undoubtedly due to the weak interaction parameter between the PI and PB blocks. The morphology observed in this study has similarities with the structure of the AB-type or AB2-type linear/star miktoarm BCPs.
The first three-colored pattern produced from a star miktoarm terpolymer was demonstrated 5 years later. In 1998, Okamoto and co-workers studied the self-assembly of a star miktoarm ABC terpolymer composed of PS, PDMS and poly(tert-butyl methacrylate) (PtBMA) with approximately the same volume ratio for each block. The large differences in chemical nature of those three blocks led to the formation of three different microdomains.
The TEM images of the cast film revealed a three-fold symmetry, confirmed by SAXS analysis, but no conclusion on the exact morphology could be drawn due to the lack of available structural information. Sioula and co-workers later reported on a star miktoarm terpolymer composed of PS, PI and PMMA (3 µ-SIM). In this case, the interfaces between PI/PS and PS/PMMA had a rhombohedral shape, and the PI domain array did not have a p6mm symmetry but a c2mm one (see Figure 14). In any case, no interfaces were observed between PI and PMMA in their study. PS formed a protective annulus between those two blocks, and PMMA partially mixed into the PS annulus. PI and PMMA tended to reduce their interfaces because of their highest χ-parameter compared with the other pairs, whereas the weak χ-parameter between PS and PMMA allowed a partial mixing of those two blocks.
A few months later, the same team studied two other star miktoarm ABC terpolymers composed of the same polymers but with different molecular weights: SIM-72/77/109 and SIM-92/60/94. Even if the molecular weights of those two 3 µ-ABC terpolymers were different, their volume ratios were quite equivalent and the morphology obtained was the same. The morphology observed in the TEM images consisted of PMMA cylinders surrounded, alternately, by six triangular prism-shaped cylinders of PI and six hexagonal-shaped cylinders of PS.
The p6mm symmetry was confirmed by SAXS.
c. PS-arm-PI-arm-P2VP self-assembly
The star miktoarm ABC terpolymers composed of PS, PI, and P2VP have been extensively studied during the past decades. In 2004, Takano and co-workers16 studied the self-assembly of 3 µ-SIP by varying the volume ratio of the P2VP block and keeping the volume ratios of PS and PI constant. The nomenclature used in this paper was 3 µ-SIP-x with
x the volume ratio of the P2VP block. The PI and PS blocks have both a volume ratio of 1.
Three star miktoarm ABC terpolymers were prepared with different P2VP volume ratios: 0.7, 1.2 and 1.9 (corresponding to 3 µ-SIP-0.7, 3 µ-SIP-1.2 and 3 µ-SIP-1.9, respectively). The 3 µ-SIP-0.7 showed a honeycomb-type microdomain structure with a (6.6.6) tiling pattern (see Figure 15a). The 3 µ-SIP-1.2 exhibited a cylindrical structure with a tetragonal symmetry: PI and P2VP formed octagonal domains while PS formed square sections. This structure was assigned to a (4.8.8) tiling pattern. The 3 µ-SIP-1.9 also exhibited a cylindrical structure forming an array with a hexagonal symmetry. The P2VP, PI and PS blocks formed dodecagonal-, hexagonal- and square-shaped domains, respectively. This structure was assigned to a (4.6.12) tiling pattern. A few years later, in 2006, Hayashida10 described morphologies obtained from the self-assembly of 3 µ-SIP, varying the volume ratio of P2VP and keeping the volume ratios of PI and PS constant at 1 and 1.8, respectively. TEM images revealed (6.6.6) and (4.6.12) Archimedean tiling patterns for the 3 µ-ISP-1 and 3 µ-ISP-2.9, respectively. We can note that the (4.6.12)
tiling pattern was already obtained by Takano in 2004 from a 3 µ-ISP-1.9. Other patterns, not assigned as Archimedean tilings, were observed for 3 µ-ISP-1.6 and 3 µ-ISP-2. For 3 µ-ISP-1.6, the I and S domains consisted of one 4-fold and two 6-fold coordinated domains within the unit cell (see Figure 15b). Therefore, the average coordination number for I and S was 5.3 (= (4×1 + 6×2)/3), while P was 8-fold coordinated. Using the average coordination number (ACN), the tiling pattern for 3 µ-ISP-1.6 was described as (5.3, 5.3, 8). With the same methodology, the authors defined the morphology of the 3 µ-ISP-2 as a (4.5, 6, 9) tiling pattern (see Figure 15c).
The effect of the PS volume ratio variation was also studied, keeping the PI and P2VP volume ratios constant at 1 and 2, respectively. For the 3 µ-ISP-1.3 and 3 µ-ISP-2.7 terpolymers, the (4.6.12) and (4.8.8) Archimedean tiling patterns were obtained, respectively, while the ACN nomenclature was used for 3 µ-ISP-1.6 and 3 µ-ISP-2.3, which formed (5, 5, 10) and (4, 6.7, 10) morphologies.
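The ACN labels used above follow from a simple weighted mean over the domains of one block in the unit cell; a minimal sketch (Python) reproducing the value quoted for 3 µ-ISP-1.6 is given below.

    # Average coordination number (ACN) of one block in the unit cell
    def average_coordination(counts_and_coordinations) -> float:
        """counts_and_coordinations: list of (number_of_domains, coordination_number) pairs."""
        total_domains = sum(count for count, _ in counts_and_coordinations)
        weighted_sum = sum(count * z for count, z in counts_and_coordinations)
        return weighted_sum / total_domains

    # 3 mu-ISP-1.6: one 4-fold and two 6-fold coordinated I (or S) domains per unit cell
    acn = average_coordination([(1, 4), (2, 6)])
    print(f"ACN = {acn:.1f}")  # 5.3, giving the (5.3, 5.3, 8) label together with the 8-fold P domains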
In 2007, Hayashida described the self-assembly of star miktoarm ABC terpolymers and their blends, either with other star miktoarm ABC terpolymers or with homopolymers.
The summary of the neat and blended polymers is given in Table 2 and Table 3, respectively. A cylinders-in-lamellae morphology was observed on TEM images for I1.0S1.8PX with 4.3 ≤ X ≤ 11. SAXS experiments confirmed the lamellar structure, and the lamellar d-spacing (48 nm) obtained by SAXS was in accordance with the one measured on the TEM images. Depending on the nature of the casting solvent, the morphology was a cylinders-in-lamellae phase (with THF) or cylinders surrounded by as many as twenty I and S cylindrical domains (with toluene).
The difference in the resulting morphology was due to the difference in polymer affinity with the solvent. When the volume ratio of P was increased (12 ≤ X ≤ 32), a lamellae-in-cylinders structure was obtained. When X = 53, the morphology obtained from the solvent casting was a lamellae-in-spheres structure. In this study, they showed that, depending on the casting solvent and the block volume ratios, a wide range of morphologies could be obtained, as shown in Figure 16. In 2007, Hayashida described the self-assembly of 3 µ-ISP chains blended with a PS homopolymer (hPS) having a molecular weight of 3 kg/mol. The hPS was used to tune the PS block volume ratio of the star miktoarm terpolymers. They blended 3 µ-I1S1.8P2 and 3 µ-I1S1.8P2.5 with hPS to produce equivalent 3 µ-I1S2.3P2 and 3 µ-I1S2.7P2.5, respectively. In both cases, a periodic pattern was observed with triangular and square arrangements of tiles. The square-triangle tiling superimposed upon the image was consistent with a dodecagonal symmetry. This result was in accordance with SAXS data. The morphology was assigned to a dodecagonal quasicrystalline tiling. A series of 3 µ-SIP terpolymers was also studied in which the volume ratio of the P2VP block, x, was varied while the PS and PI volume ratios were kept equal to one. Results for x equal to 0.2, 0.4, 0.7, 1.2, 1.9, 3.0, 4.9, 7.9 and 10 are summarized in Table 4. Star miktoarm terpolymers based on other block chemistries were also reported in the literature. In 2000, Hückstädt, Göpfert and Abetz50 described the self-assembly of a star miktoarm terpolymer composed of PS, PB and P2VP (see Figure 18).
Archimedean tiling patterns and hierarchical morphologies were observed. Except for the lamellar-like structure, the strong incompatibility between the B and V blocks led to a small interface between those two polymers, while the interfaces between the other components were larger.

e. Theoretical studies
The bulk morphologies were also compared with simulation experiments. For instance, Gemma simulated the morphology of star miktoarm ABC terpolymers using Monte Carlo simulations (see Figure 20). The volume ratios of the A and B blocks were kept constant and symmetric, while the volume fraction of the C block was varied (noted x). For 0.17 ≤ x ≤ 0.33, lamellae including spheres at their interfaces were obtained. When the volume ratio of the C block was increased (0.37 ≤ x ≤ 0.7), a (6.6.6) Archimedean tiling pattern was obtained. These simulated phases are in accordance with experimental results. For example, Takano observed the same structure for a 3 µ-ISP (S:I:P = 1:1:0.7).13 When the volume ratio of the C block was close to two, different morphologies were close in terms of energy. They reported the presence of a (4.6.12) tiling pattern, which had been observed in bulk for a 3 µ-ISP (S:I:P = 1:1:1.9). The 3 µ-ISP (S:I:P = 1:1:10) is another example of consistency between experimental and simulated results since, in this case, a columnar piled-disk structure was obtained. Some simulated predictions did not fully match the experimental data.
The asymmetry of the interaction parameters in the experimental systems could be one of the reasons for such a mismatch.
IV. Self-assembly of linear and star miktoarm ABC terpolymer thin films
Only a few examples of thin film self-assembly exist in the literature for ABC terpolymers (linear and star miktoarm). Parameters that are negligible in bulk must be considered to explain the BCP phase behavior in thin films. In thin films, the surface energy of the polymers with regard to the surface energy of the substrate, the film thickness, and the post-treatments are some of the parameters that lead to the formation of morphologies different from those in bulk. BCP thin films were obtained by solubilizing the linear or star miktoarm ABC terpolymers in a good solvent followed by spin-coating of the solution on a substrate. In this part, the self-assembly of linear and star miktoarm ABC terpolymer thin films will be described. To summarize, only a few structures have been obtained in the thin film configuration.
Self-assembly of linear ABC terpolymer thin films
Important parameters like the solvent used for the SVA process and the film thickness must be taken into account to control the long-range order of the structure and the domain orientation.
Self-assembly of star miktoarm ABC terpolymer thin films
Only a few papers have reported on the self-assembly of star miktoarm ABC terpolymer thin films. To the best of my knowledge, the first example of a self-assembled star miktoarm ABC terpolymer thin film was reported by Aissou et al. in 2013. A star miktoarm ABC terpolymer composed of polyisoprene, polystyrene and polyferrocenylethylmethylsilane (3 µ-ISF, I:S:F = 1:0.9:0.7) was used to achieve Archimedean tiling patterns. The star miktoarm terpolymers (unblended or blended with hPS) were spin-coated on a silicon wafer (treated or untreated) and solvent-annealed under a chloroform vapor. They reported a morphological change from a (4.8.8) to a (6.6.6) tiling pattern induced by a greater swelling of the PI achieved by increasing the vapor pressure during the solvent vapor annealing (SVA) process (see Figure 23). Both morphologies were used as nanolithographic masks to transfer square and triangle hole arrays into the substrate. Another star miktoarm terpolymer containing a PLA block was also studied: when the volume ratio of the PLA was increased (S:D:L = 1:2.1:1.5), two morphologies were produced. The most stable one was the (6.6.6) tiling, but the steps between terraces locally stabilized the (4.8.8) tiling pattern when the film dewetted (see Figure 26). Indeed, in thin films, the presence of boundary surfaces and the confinement due to the film thickness make it more difficult to obtain well-defined morphologies. Moreover, for star miktoarm ABC terpolymers, only a few structures have been achieved due to the difficulty of synthesizing star-shaped polymers composed of three incompatible arms.
In the next part, we will discuss the synthesis of linear and star miktoarm ABC terpolymers.
V. Linear and star miktoarm ABC terpolymers synthesis
Several methods have been developed to synthesize linear and star miktoarm ABC terpolymers. One important parameter for the synthesis of a block copolymer is the precise control of the chain-growth polymerization, in order to avoid polydisperse polymers that would create defects in the self-assembled structures. Consequently, most of the time, linear or star miktoarm ABC terpolymers are synthesized by living polymerization (anionic (Tsitsilianis et al.; Koutalas et al.) or cationic (Kwon et al.)) or controlled polymerization (Nitroxide-Mediated Radical Polymerization (Hawker et al.), Atom Transfer Radical Polymerization (Bernaerts et al.; Tang et al.; Kubowicz et al.), Reversible Addition-Fragmentation chain Transfer polymerization (Marsat et al.), …).
In this study, we will focus our attention on the synthesis of linear and star miktoarm ABC terpolymers by anionic polymerization. This method allows the control of the chain growth through a fast and effective initiation step, and prevents the chain termination and transfer reactions that are common in classical radical polymerization. One limitation of the anionic polymerization comes from the harsh conditions needed. The reaction should take place in a moisture- and oxygen-free reactor with highly purified monomers and solvents, and most of the time at low temperature. Despite these drawbacks, anionic polymerization remains one of the most effective synthesis methods for linear or star miktoarm ABC terpolymers.
Linear ABC terpolymer synthesis
For the synthesis of linear ABC terpolymers, one very common method consists in synthesizing the different blocks successively via living polymerization (a three-step sequential anionic polymerization process), as shown in Figure 27 (Hadjichristidis et al.). After initiation from an appropriate anionic initiator, the monomers are sequentially added. Depending on their propagation rates, the monomers should be added in a certain order: the initiation rate at each step should be faster than the corresponding propagation rate.76 Using this sequential approach, Matsushita et al.24,26 synthesized a series of linear ABC terpolymers composed of PS, P2VP, and PI (SIP and ISP). Both polymerizations were done with cumylpotassium as the initiator in THF, and the monomers and solvents were highly purified.
The polymers were characterized by SEC (size exclusion chromatography). Some examples of linear ABC terpolymer syntheses via a three-step sequential anionic polymerization described in the literature are reported in Table 5.
Table 5: Summary of the anionically polymerized linear ABC terpolymers.

Regardless of the solvent polarity or the initiator, the obtained dispersities were close to one. This is a typical result in anionic polymerization, due to the fast initiation and the absence of termination reactions.
Star miktoarm ABC terpolymer synthesis
The synthesis of star miktoarm ABC terpolymers is more complicated because each block needs to be attached to a core molecule. Herein, the three main methods for the synthesis of star miktoarm ABC terpolymers will be discussed.
a. Chlorosilane methodology
One of the most well-established methods for the synthesis of star miktoarm ABC terpolymers via anionic polymerization was reported for the first time in 1992 by Hadjichristidis. They synthesized a 3 µ-ABC composed of PI, PS and PB using a chlorosilane method. The polymers were anionically polymerized using sec-BuLi as the initiator, and a trifunctional chlorosilane compound was used as the core molecule according to the procedure presented in Figure 29. The first step consisted in the addition of SiMeCl3 to a living PI block; then the living PS block was added, followed by the addition of a living PB block. Other star miktoarm ABC terpolymers such as PS-arm-PI-arm-PDMS (Bellas et al.), PS-arm-PI-arm-P2VP, and PS-arm-PI-arm-PMMA (Sioula et al.) have been synthesized by Hadjichristidis and co-workers using this method. Those reactions required specific glassware, high vacuum and break-seal techniques, high-purity compounds, and fractionation steps, making them difficult to reproduce in our laboratory.
b. Diphenylethylene methodology
Another synthesis procedure, described by Fujimoto et al. in 1992, deals with anionic polymerization and the use of a modified diphenylethylene (DPE) as a core molecule. A poly(styryl) anion and an end-reactive PDMS were coupled, followed by the anionic polymerization of the methacrylate block, as shown in Figure 30.

Figure 30: Synthesis of a star-shaped copolymer having three different arms: poly(styrene), poly(dimethylsiloxane) and poly(tert-butyl methacrylate).
Other groups reported on the preparation of star miktoarm ABC terpolymers using a modified DPE as the core molecule. Dumas and co-workers (Nasser-Eddine et al.; Lambert et al.) reported on the synthesis of PS-arm-PEO-arm-PCL, PS-arm-PEO-arm-PLL and PS-arm-PMMA-arm-PEO. The synthesis involved a protected DPE (1-[4-(2-tert-butyldimethylsiloxy)ethyl]phenyl-1-phenylethylene) as the core molecule. The first arm was anionically polymerized; then, the hydroxyl-protected DPE was added to the solution, followed by the polymerization of the second arm. Finally, the core molecule was deprotected, and the third arm was polymerized by ring-opening polymerization from the hydroxyl reactive function. This method is very relevant, but the last step requires a monomer that can be polymerized from a hydroxyl reactive function.
Stadler et al. proposed another method using a modified DPE as the core molecule. In this synthesis route, a living PS was end-capped with a bromo-substituted DPE. Then, a living PB was added to the DPE-PS and reacted on the vinyl group of the DPE to produce a living macroinitiator that could initiate the anionic polymerization of MMA. The same methodology was used to prepare a PS-arm-PB-arm-P2VP.
In 2012, Müller and co-workers (Hanisch et al.) reported a modular route for the synthesis of ABC miktoarm star terpolymers via a new alkyne-substituted diphenylethylene derivative.
VI. Conclusion
In this bibliographic chapter, we saw that the self-assembly of linear and star miktoarm ABC terpolymers has mostly been studied in bulk. These systems give access to a wide range of morphologies and to pattern symmetries not available with AB-type BCPs. Only a few studies have reported on the self-assembly of linear and star miktoarm terpolymer thin films.
We saw that, in the case of linear ABC terpolymers, the morphologies obtained in thin films are in accordance with theoretical and bulk studies. For star miktoarm ABC terpolymers, very interesting structures including Archimedean tiling patterns have been identified both in bulk and in thin film configurations. Those patterns could open a broad array of applications in nanoelectronics since Cartesian square arrays are accessible from self-assembled star miktoarm ABC terpolymers.
One limitation in the use of 3 µ-ABC systems is their synthesis. Indeed, controlled polymerization, and most of the time anionic polymerization, is required to obtain well-defined terpolymers. Linear ABC terpolymers are mostly synthesized via a sequential anionic polymerization, but with this methodology the reactivity of the monomers must be taken into account in the addition sequence. Star miktoarm ABC terpolymers are synthesized using three main synthesis routes: the chlorosilane methodology, the diphenylethylene route, and a hybrid approach.
In the next chapters, an easily reproducible synthetic route will be described, as well as the self-assembly of linear and star miktoarm ABC terpolymer thin films.

As discussed in the bibliographic part, a key parameter to produce "three-colored" patterns is to design ABC triblock terpolymers with highly incompatible blocks in order to promote their microphase separation. Herein, we chose to work with PS, P2VP and PI, where the χ-parameters between the different pairs are high: χSI1 and χSP2 were determined experimentally to be 0.1, while χPI3 was determined theoretically to be 0.8. Such an incompatibility between the different blocks should allow their phase separation even for low degrees of polymerization. Our aim was to find an easy methodology to synthesize a library of linear and star miktoarm ABC terpolymers where the molecular weights of the PS and P2VP blocks were kept constant while the PI block size was varied in order to achieve different morphologies. To easily tune the PI molecular weight, we chose to work with a coupling method. The PS-b-P2VP synthesis involved a functionalized diphenylethylene (DPE)4-8 as a core molecule for star miktoarm ABC terpolymers and an end-functionalized AB-type BCP for linear ABC terpolymers.9-11 Different PI blocks were prepared with an end-function to allow their coupling with the mid- and end-functionalized PS-b-P2VP BCPs.
The mid- and end-functionalized PS-b-P2VP BCPs, as well as the PI homopolymers having different molecular weights, were prepared via anionic polymerization.8,12-19 Because of the living character of the anionic polymerization, the monomers and the solvents had to be highly purified, and vacuum techniques were also required. The order of monomer addition must be taken into account in the synthesis: it has been shown that the monomer with the highest pKa must be added first.18,20 Indeed, less-reactive chain-end anions derive from the more reactive monomer, and less stable monomers have higher pKa. Consequently, styrene had to be introduced first during the synthesis of the mid- and end-functionalized PS-b-P2VP BCPs, followed by 2VP.
Once the end-functionalized PI homopolymers were prepared, the next step consisted in their coupling with the mid- and end-functionalized PS-b-P2VP BCPs. For this purpose, we chose to work with a Steglich esterification21,22 as the coupling method.
In this chapter, we will first describe the synthesis of the end-functionalized PI block; then we will present the synthesis of the mid- and end-functionalized PS-b-P2VP BCPs. The last part of this chapter will be dedicated to the Steglich esterification.
II. Synthesis of carboxyl-end-functionalized polyisoprene homopolymers

The 1H NMR spectrum (400 MHz, CD2Cl2) of the carboxyl-end-functionalized polyisoprene (hPI-COOH) is presented in Figure 2. The spectrum shows the characteristic peaks of the 1,2 (δ = 5.5-6 ppm) and 3,4 (δ = 4.4-5 ppm) units of polyisoprene. According to the integration areas relative to the 1,2 and 3,4 units (5 and 1, respectively), the hPI-COOH contains 3,4 and 1,2 in-chain units in a 7/3 ratio.16,17 This ratio is typical of the anionic polymerization of isoprene in THF. As THF is a polar solvent, the 1,4 addition is prevented and only 3,4 and 1,2 units are incorporated within the living PI chains. The hPI-COOH was also characterized by size exclusion chromatography (SEC) with universal calibration in THF. The SEC trace is presented in Figure 3. The SEC trace of the PI homopolymer shows an intense and narrow peak. This peak is attributed to the end-functionalized polyisoprene homopolymer. The molecular weight of the hPI-COOH is determined to be 9 kg.mol-1 (PI9), and the dispersity is about 1.06. A small shoulder appears in the SEC trace of the hPI-COOH. This shoulder is attributed to the coupling between two living PI chains. At the end of the polymerization, the functionalization of the living PI chains was done by bubbling CO2 into the medium. Even after purging the tube between the gas bottle and the syringe, some traces of humidity or oxygen could still be present. The coupling between the living PI chains is probably due to those impurities. It is important to note that the coupled PI chains are not carboxyl-end-functionalized. Coupled PI chains are terminated on both sides by the butyl group of the initiator and will therefore not be involved in the Steglich esterification reaction.
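As an illustration of how such a microstructure ratio can be extracted from 1H NMR integrals, the sketch below (Python) uses an assumed peak assignment (1,2 units contributing one olefinic CH proton at 5.5-6 ppm, and the 4.4-5 ppm region containing two =CH2 protons from each 1,2 and 3,4 unit); the integral values are hypothetical and only illustrate the arithmetic, not the exact integrals of Figure 2.

    # Microstructure from 1H NMR integrals (assumed assignment, see text above):
    #   i_high (5.5-6 ppm) = 1H per 1,2 unit
    #   i_low  (4.4-5 ppm) = 2H per 1,2 unit + 2H per 3,4 unit
    def unit_fractions(i_high: float, i_low: float):
        n_12 = i_high                         # moles of 1,2 units (arbitrary scale)
        n_34 = (i_low - 2.0 * n_12) / 2.0     # moles of 3,4 units
        total = n_12 + n_34
        return n_34 / total, n_12 / total     # (fraction of 3,4 units, fraction of 1,2 units)

    # Hypothetical integrals chosen so that the 3,4/1,2 ratio comes out as 7/3
    f_34, f_12 = unit_fractions(i_high=3.0, i_low=20.0)
    print(f"3,4 : 1,2 = {f_34:.0%} : {f_12:.0%}")  # 70% : 30%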
The concentration of carboxylic acid chain ends was determined by titrating a solution of 0.2 g of polymer in 20 mL of toluene with 0.01 M KOH in methanol, using phenolphthalein as the indicator.
The colorimetric titration of the PI chains confirmed the presence of a carboxyl function. Although this titration is not highly precise, a functionalization yield higher than 90% was determined.
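The functionalization yield follows from comparing the moles of -COOH titrated by KOH with the theoretical moles of chain ends (sample mass divided by Mn). A minimal sketch (Python) is given below; the equivalence volume is a hypothetical value, as only the final yield (> 90%) is reported above.

    # Functionalization yield from the colorimetric titration of the -COOH chain ends
    def functionalization_yield(v_koh_ml: float, c_koh_mol_l: float,
                                sample_mass_g: float, mn_g_mol: float) -> float:
        n_cooh = (v_koh_ml / 1000.0) * c_koh_mol_l   # moles of carboxyl ends titrated
        n_chains = sample_mass_g / mn_g_mol          # theoretical moles of chain ends (one per chain)
        return n_cooh / n_chains

    # 0.2 g of PI9 (Mn = 9000 g/mol) titrated with 0.01 M KOH;
    # the 2.05 mL equivalence volume is a hypothetical value used for illustration
    y = functionalization_yield(v_koh_ml=2.05, c_koh_mol_l=0.01,
                                sample_mass_g=0.2, mn_g_mol=9000.0)
    print(f"Functionalization yield: {y:.0%}")  # ~92%, consistent with the > 90% reported above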
Synthesis of carboxyl-end-functionalized polyisoprene homopolymers with different molecular weights
Three other carboxyl-end-functionalized polyisoprenes were synthesized using the procedure described in the previous part; only the molecular weight of the PI homopolymers was varied. For that purpose, the volume of sec-BuLi used to initiate the anionic polymerization was modified, whereas the volume of isoprene introduced in the reactor was kept constant. The PI homopolymers having different molecular weights were characterized by 1H NMR (400 MHz, CD2Cl2), SEC chromatography and titration, as previously.
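In living anionic polymerization at full conversion, each initiator molecule grows one chain, so the targeted number-average molecular weight is simply the monomer mass divided by the moles of initiator. The sketch below (Python) illustrates how the sec-BuLi volume sets the target Mn; the isoprene mass (2.7 g) is a hypothetical value chosen for illustration, while the ~1.2 M concentration matches the sec-BuLi titer quoted later in this chapter.

    # Living anionic polymerization: target Mn (g/mol) = monomer mass / moles of initiator
    def target_mn(monomer_mass_g: float, v_init_ml: float, c_init_mol_l: float) -> float:
        n_init = (v_init_ml / 1000.0) * c_init_mol_l   # moles of sec-BuLi
        return monomer_mass_g / n_init

    # Hypothetical example: 2.7 g of isoprene, ~1.2 M sec-BuLi
    for v in (0.25, 0.17, 0.14, 0.08):   # decreasing initiator volume -> increasing Mn
        print(f"{v:.2f} mL sec-BuLi -> Mn ~ {target_mn(2.7, v, 1.2) / 1000:.0f} kg/mol")
    # -> ~9, ~13, ~16 and ~28 kg/mol, i.e. the series of PI molecular weights prepared here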
Conclusion
In this part, the anionic polymerization in THF of several PI homopolymers (9, 13, 16 and 28 kg.mol-1) was described. The end-functionalization of the polyisoprene, performed with carbon dioxide, proved to be efficient since more than 90% of the hPI chains were functionalized with a carboxyl end-group, as checked by titration using phenolphthalein as a color indicator.
This synthesis route is very convenient since the PI homopolymers have a narrow dispersity (< 1.1) and the carboxyl end-functionalization of hPI does not require any protection/deprotection step.
III. Synthesis of end- and mid-functionalized polystyrene-block-poly(2-vinylpyridine)
In this part, the anionic polymerization of the mid- and end-functionalized PS-b-P2VP chains will be described, as they will be used for the synthesis of star miktoarm and linear ABC terpolymers, respectively. For this purpose, the PS-b-P2VP BCPs were prepared via a sequential anionic polymerization.26 The end-functionalization of PS-b-P2VP was achieved by adding ethylene oxide to the living BCP chains at the end of the reaction.18,27 The mid-functionalized PS-b-P2VP was synthesized using a core molecule inserted between the PS and P2VP blocks. The core molecule used for this synthesis is a diphenylethylene bearing a protected hydroxyl function.6,9,28,29 Monomers and solvents were distilled as previously described in the literature.25,30 The 2VP monomer was cryo-distilled under vacuum over CaH2 twice. The styrene was cryo-distilled over CaH2 and stirred with dibutylmagnesium (1.0 M in heptane, Sigma-Aldrich).
Tetrahydrofuran (THF, Sigma-Aldrich) was dried using a Braun MB-SPS-800 solvent purification system, stored over sodium benzophenone ketyl under dry nitrogen, and cryo-distilled prior to use.
As discussed in the bibliographic part, all the anionic polymerizations must be carried out under an inert atmosphere (here argon). To avoid the introduction of oxygen into the medium, the reaction was carried out in a closed reactor. Monomers and solvents were placed in a burette after their cryo-distillation, and the sec-BuLi initiator was introduced with a sealed syringe through a septum wrapped with parafilm. The glassware was purified with three cycles of vacuum (flame)/argon before the beginning of the reaction, and it was then kept closed under argon pressure.
End-functionalized PS-b-P2VP synthesis
The hydroxyl-terminated PS-b-P2VP chains were synthesized using a sequential living anionic polymerization. The general scheme of the reaction is presented in Figure 6. Tetrahydrofuran (THF, 40 mL) was introduced in a flame-dried 250 mL round flask equipped with a magnetic stirrer. The solution was then cooled down to -78 °C. The synthesis was performed at -78 °C8 to reduce the reactivity of the monomers, and thus to prepare PS-b-P2VP chains with a low dispersity. Sec-butyllithium (sec-BuLi, 0.08 mL, ~1.2 M) was charged, followed by the addition of styrene (S, 2 mL). The orange solution was stirred for 1 hour. An aliquot of PS was taken before the addition of the second monomer and was analyzed by SEC with a universal calibration to check the PS molecular weight and its dispersity. The aliquot of the polystyrene homopolymer (hPS) was characterized by SEC in THF and 1H NMR (400 MHz, CD2Cl2) (see Fig. 8). The SEC trace shows an intense narrow peak with a small shoulder (see Fig. 8a). The peak with the largest intensity has a dispersity of 1.02. This peak, attributed to the hPS, corresponds to a molecular weight of 21 kg.mol-1. We can note the presence of a small shoulder in the higher molecular weight region (lower retention volume). This shoulder corresponds to the coupling between two living hPS chains. The coupling occurs because the methanol used to quench the aliquot was not degassed properly.
The NMR spectrum of the PS chains (1.3-2.3 ppm (m, 3H), 6.2-7.4 ppm (m, 5H)) shows the characteristic peaks of the styrenic protons (see Fig. 8b). We can note the presence of a small shoulder on the SEC trace in the low molecular weight region, which is still observable after purification. This shoulder corresponds to some hPS that did not initiate the polymerization of 2VP. This hPS probably appears because some impurities were introduced into the reaction medium by the syringe during the aliquot extraction. The hPS peak has a small intensity compared with that of the peak assigned to the PS-b-P2VP-OH BCP and can be considered negligible.

Mid-functionalized PS-b-P2VP synthesis

Several strategies can be used to synthesize star miktoarm ABC terpolymers. The first one relies on a multifunctional chlorosilane used as a linking agent,31 and all arms prepared via an anionic polymerization are subsequently linked to this chlorosilane compound. This method requires specific glassware with a break-seal technology,25 which is too complicated to set up in our laboratory. The second method consists in the use of a core molecule designed as a multifunctional initiator site with orthogonal reactive functions.32,33 This method is easier than the previous one but it is limited since the different monomers must be selective. The last method involves a modified diphenylethylene as a core molecule which carries both initiating and terminating sites.4,5,9,34,35 We chose to work with the last method by using a diphenylethylene bearing a tert-butyldimethylsilyl-protected hydroxyl functionality as a core molecule. The alkene function of the diphenylethylene allows a sequential anionic polymerization to be performed to obtain a mid-functional AB-type BCP. The protected hydroxyl function will be used to attach the third arm after the deprotection step.36 In this part, we will describe the synthesis of the mid-functionalized PS-b-P2VP; Figure 14 shows the general scheme of this synthesis.
The PS-b-P2VP chains were synthesized using a sequential living anionic polymerization.
Tetrahydrofuran (THF, 40 mL) was introduced in a flame-dried 250 mL round-bottom flask equipped with a magnetic stirrer. The hPS aliquot and the PS-b-P2VP BCP were characterized by SEC with universal calibration in THF (see Fig. 16a). The SEC trace of the hPS exhibits a high-intensity narrow peak. This peak corresponds to a PS molecular weight of 19 kg.mol-1 while the dispersity is determined to be 1.03. A small shoulder appears at a molecular weight that is double the one attributed to hPS. The living PS was quenched in non-dried, non-degassed methanol, which explains the coupling between two hPS chains. However, this coupling does not occur in the reaction medium, and therefore will not interfere with the reaction.
After the addition of the core molecule (DPE-Si) and the polymerization of the 2VP from the living PS, the SEC trace of the PS-b-P2VP exhibited a single narrow, monomodal peak. The mid-functionalized PS-b-P2VP has a dispersity of 1.05, and the molecular weight was determined to be 43 kg.mol-1. The P2VP/PS volume ratio, determined to be 1.2 from the SEC traces, was also confirmed by 1H NMR (see Fig. 16b).

IV. Synthesis of linear and star miktoarm ABC terpolymers

Here, the coupling between the mid/end-functionalized AB-type BCPs and the end-functionalized PI homopolymers will be discussed.
In most cases, the coupling reaction involves the presence of a metal catalyst.37,38 Here, it is better to avoid metal catalysts since the metallic salt could be chelated by the P2VP.
For this purpose, the coupling between the different PS-b-P2VP BCPs and the PI homopolymers was realized via a Steglich esterification to produce linear and star miktoarm ABC terpolymers.21,22 This synthesis route was chosen because it is a metal-free coupling reaction but also because an efficient coupling can be achieved.
A general scheme of the Steglich esterification mechanism is proposed in Figure 18.22 The coupling reaction was catalyzed by 4-(dimethylamino)pyridinium 4-toluenesulfonate (DPTS) and N,N'-diisopropylcarbodiimide (DIC). Since DPTS is not soluble in THF, the reaction was performed in dichloromethane. DIC and the carboxylic acid borne by the PI homopolymer form an O-acylisourea intermediate, which offers a reactivity similar to that of the corresponding carboxylic acid anhydride. DPTS is a stronger nucleophile than the alcohol, and reacts with the O-acylisourea leading to a reactive amide ("activated ester"). This intermediate cannot form intramolecular side products but reacts rapidly with alcohols. DPTS thus acts as an acyl transfer reagent, and the subsequent reaction with the alcohol gives the ester. Previously cryo-dried hydroxyl-mid/end-functionalized PS-b-P2VP (1 eq.) and hPI-COOH (3 eq.) were solubilized in dried dichloromethane. The N,N-dimethyl-4-pyridinaminium 4-methylbenzenesulfonate (DPTS, 10 eq.) was added and the mixture was stirred for 15 minutes at 40 °C. Then, the N,N′-diisopropylcarbodiimide (DIC, 10 eq.) was added, and the mixture was stirred at 35 °C.39 After 3 days, the solution was concentrated and the DPTS was precipitated in THF and recovered. The filtrate was then concentrated and the star miktoarm/linear ABC terpolymer chains were precipitated in heptane to remove the hPI-COOH excess. The peaks were integrated, and the volume ratios between blocks were determined by NMR.
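To make the stoichiometry of this coupling explicit, the masses corresponding to the equivalents quoted above can be derived from the molecular weights. A short sketch, assuming a hypothetical 100 mg batch of the diblock (the DPTS and DIC molar masses are standard literature values):

```python
# Masses for the Steglich esterification from the equivalents used in the text.
MN_DIBLOCK = 43e3      # g/mol, PS-b-P2VP-OH (from SEC)
MN_HPI = 16e3          # g/mol, hPI-COOH
MW_DPTS = 294.37       # g/mol, DMAP/p-TsOH salt (literature value)
MW_DIC = 126.20        # g/mol (literature value)

n_diblock = 0.100 / MN_DIBLOCK                 # mol, hypothetical 100 mg batch
for name, eq, mw in [("hPI-COOH", 3, MN_HPI),
                     ("DPTS", 10, MW_DPTS),
                     ("DIC", 10, MW_DIC)]:
    print(f"{name}: {eq * n_diblock * mw * 1e3:.1f} mg")
# -> hPI-COOH: ~112 mg, DPTS: ~6.8 mg, DIC: ~2.9 mg
```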
Synthesis of the PS-b-P2VP-b-PI terpolymers
The block volume ratios obtained by SEC and calculated by 1H NMR were similar, which confirmed the coupling between blocks.
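The conversion of 1H NMR integrals into block volume ratios can be sketched as follows; the integral values and the proton count assigned per repeat unit are illustrative assumptions, and the densities are standard literature values:

```python
# Volume ratios of PS/P2VP/PI blocks from 1H NMR integrals (illustrative).
blocks = {
    #        integral  H per unit  M0 (g/mol)  density (g/cm3, literature)
    "PS":   (5.0,      5,          104.15,     1.05),   # aromatic H
    "P2VP": (4.8,      4,          105.14,     1.14),   # pyridine H
    "PI":   (2.2,      1,          68.12,      0.91),   # olefinic H (assumed)
}

volumes = {}
for name, (integral, n_h, m0, rho) in blocks.items():
    moles = integral / n_h             # relative moles of repeat units
    volumes[name] = moles * m0 / rho   # relative volume of the block

ref = volumes["PS"]
print({k: round(v / ref, 2) for k, v in volumes.items()})
```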
Further evidence of the coupling between the hPI-COOH and PS-b-P2VP-OH chains was obtained by Diffusion Ordered Spectroscopy (2D-DOSY NMR). In the 2D-DOSY spectrum presented in Figure 21, all the signals align at a single diffusion coefficient, confirming that the PI and PS-b-P2VP blocks belong to the same chains. We successfully synthesized three well-defined linear ABC terpolymers with different PI volume ratios. The molecular weights of the linear ABC terpolymers are summarized in Table 1.
Synthesis of the PS-arm-P2VP-arm-PI terpolymer
The star miktoarm ABC terpolymer was synthesized via an esterification between the hydroxyl-mid-functionalized PS-b-P2VP BCP and the carboxyl-terminated PI homopolymer according to Figure 23. First, we will describe the coupling reaction between the PS-b-P2VP-OH (43 kg.mol-1) and hPI-COOH (16 kg.mol-1) chains as a model reaction. Afterwards, we will generalize this coupling reaction to the other PI homopolymers. The mid-functionalized PS-b-P2VP BCP (1 eq., 43 kg.mol-1) and the hPI-COOH (3 eq., 16 kg.mol-1) were solubilized in dried dichloromethane. The N,N-dimethyl-4-pyridinaminium 4-methylbenzenesulfonate (DPTS, 10 eq.) was added and the mixture was stirred for 15 minutes at 40 °C. Then, the N,N′-diisopropylcarbodiimide (DIC, 10 eq.) was added, and the mixture was stirred at 35 °C. After 3 days, the solution was concentrated and precipitated in THF to recover the DPTS. The filtrate was concentrated and the desired 3 µ-SPI was precipitated in heptane to remove the excess of hPI-COOH.
The 3 µ-SPI was characterized by 1H NMR (δ (ppm), 400 MHz, CD2Cl2), 2D DOSY NMR (δ (ppm), 400 MHz, THF) and SEC in THF.
Figure 24 shows the SEC traces with universal calibration of the hPI, hPS, PS-b-P2VP and 3 µ-SPI chains. Compared to the SEC traces of the hPI, hPS, and PS-b-P2VP chains, the SEC trace of the 3 µ-SPI terpolymer is shifted to lower retention volumes corresponding to a higher molecular weight. The molecular weight of the 3 µ-SPI was found to be 59 kg.mol-1, which exactly corresponds to the sum of the PS-b-P2VP (43 kg.mol-1) and hPI (16 kg.mol-1) molecular weights. For this well-defined 3 µ-SPI, the volume ratios of the different blocks were determined to be S:P:I = 1:1.2:0.9. These ratios were also confirmed by the proton NMR characterization. The SEC trace exhibits a narrow and monomodal shape implying a low dispersity, of about 1.1. The results are summarized in Table 2.
V. Conclusion
In this part, we synthesized two AB-type BCPs having a quasi-symmetric composition and bearing a mid- or an end-hydroxyl functionality. Four end-carboxyl-functionalized PI homopolymers with different molecular weights were also synthesized. The anionically polymerized PS-b-P2VP and hPI chains, obtained with a low dispersity, proved to be functionalized in high yields.
The mid- and end-functionalized PS-b-P2VP BCPs and the carboxyl-terminated PI homopolymers were coupled through a Steglich esterification to produce linear and star miktoarm ABC terpolymers. This efficient coupling method, requiring few purification steps, leads to well-defined terpolymers.
The advantage of this synthesis route is that the molecular weight of the nearly symmetric diblock copolymer PS-b-P2VP is kept constant and only the molecular weight of the hPI-COOH varies. This could be useful to achieve the different phases resulting from the microphase separation of linear and star miktoarm ABC terpolymers. The phase behavior of microphase-separated linear and star miktoarm ABC terpolymers will be studied in the next chapters.

Introduction
The phase behavior of linear ABC terpolymer thin films is studied in this chapter. The aim of this work was to achieve structures in thin films different from the regular phases formed from AB-type or ABA-type BCPs. As reported in chapter II, the synthesis of the linear ABC terpolymers was done by combining anionic polymerization with a coupling reaction. PS, P2VP and PI blocks were used for their intrinsic properties. Firstly, the Flory-Huggins parameters between the different pairs are high, which allows a "full" microphase-separation of the PS-b-P2VP-b-PI chains, resulting in the formation of three-colored patterns.1 Another advantage of those polymers is that they have a different etching resistance under a CF4/O2 RIE plasma,2,3 as evidenced from PM-IRRAS4,5 measurements. Indeed, PI domains were etched more quickly than the P2VP and PS ones. The selectivity of the blocks towards a fluorine-rich plasma is an important parameter to obtain contrast for the microscopy analysis.
As the χ-parameters between the different pairs are χSI ≈ χSP ≈ 0.1 << χPI ≈ 0.8,6-8 the PS-b-P2VP-b-PI chains are in a type II frustrated state. Although the self-assembly of linear ABC terpolymers in a type II frustration state has been reported in bulk9,10 or theoretically,11 there has been no investigation of their thin film behavior.
A thermal treatment of those kinds of polymers did not appear relevant to achieve well-defined structures, so the effect of the thermal annealing process will not be studied in this thesis. The self-assembly of the linear ABC terpolymers was achieved by exposing the spin-coated thin films to a chloroform vapor. The morphologies obtained are described in the following part. Microphase-separated ABC terpolymers can form several triply periodic network structures (see Fig. 2).16,18,19 The cubic Q214 phase is a single-gyroid morphology which refers to an alternating gyroid network. The O70 structure is an orthorhombic alternating network structure. For the cubic Q230 phase, each of the two interpenetrating and independent lattices receives three segregated domains, resulting in five independent, triply periodic regions. In order to describe the gyroid structure in detail, the notion of crystallographic planes is introduced. The planes described in this part correspond to the (hkl) crystallographic planes.20 For instance, the (211) plane of the gyroid structure is a cross section along the (211) crystallographic plane with h = 2, k = 1 and l = 1. Other plane examples are reported in Figure 3. The (211) plane of the Q230 structure described above is present only in the thicker part of the film. In contrast, a p6mm symmetry pattern with a period of 52 nm is produced in regions of the film having a thickness of about 90 nm, as revealed by the AFM topographic view presented in Figure 4a. Here, the period was determined from the 2D-FFT shown in Figure 4b. This p6mm symmetry pattern is attributed to the (111)15,20,21 plane of the core-shell gyroid structure. In this core-shell double gyroid phase, the PI and P2VP domains form the core and the shell of the structure, respectively, while PS is the matrix.17 PI cores (black), ordered into a hexagonal array with a period of 100 nm, are surrounded by a H2PtCl6-stained P2VP shell (light) within the PS matrix (gray). Importantly, this pattern is stabilized in particular regions of the film where the film thickness is between the upper and lower terraces occupied by the double-wave and wagon-wheel patterns, respectively, as well as in regions where the film thickness is between the upper and lower terraces occupied by the wagon-wheel pattern and the small-amplitude wavy structure. Decreasing the film thickness well below the unit cell dimension (t ≈ 75 nm < aG) leads to the formation of an ordered structure which resembles the "zigzagging lamellar" pattern formed along the (100) plane of the Q230 structure (see Fig. 7b).20,22 This pattern, with a period of 50 nm, consists of zigzagging PS (gray) and H2PtCl6-stained P2VP lamellar domains (light) with PI domains (black) not perfectly distributed within the P2VP lamellae. SEM images corresponding to the (111) and the (211) planes of the core-shell double gyroid already observed from AFM topographic images (see Fig. 5) are also displayed in Figures 7c-d. They prove that the film thickness is related to the area fraction of the matrix phase, which is correlated to the area fraction of the other blocks. Since the PI block has the smallest surface energy, the area fraction of PI at the air/surface interface is increased when the area fraction of the PS matrix is minimal. This is satisfied when the (211) plane is oriented parallel to the air surface (see Fig. 8), which explains why such a plane orientation is observed for the thicker film.
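Such (hkl) cross sections can be visualized with the common level-set approximation of the gyroid surface, sin x cos y + sin y cos z + sin z cos x = 0. The sketch below samples this function in a plane normal to [211]; it is a qualitative illustration of how a crystallographic cut is taken, not a simulation of the terpolymer.

```python
import numpy as np

# Level-set approximation of the gyroid minimal surface.
def gyroid(p):
    x, y, z = p
    return np.sin(x)*np.cos(y) + np.sin(y)*np.cos(z) + np.sin(z)*np.cos(x)

# Build an orthonormal basis (u, v) of the plane with normal n = (2, 1, 1).
n = np.array([2.0, 1.0, 1.0]); n /= np.linalg.norm(n)
u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
v = np.cross(n, u)

# Sample the gyroid function on a (211)-oriented cut through the origin.
s, t = np.meshgrid(np.linspace(0, 4*np.pi, 200), np.linspace(0, 4*np.pi, 200))
pts = (s[..., None]*u + t[..., None]*v).transpose(2, 0, 1)
cut = np.sign(gyroid(pts))   # +1 / -1 labels the two sides of the surface

# 'cut' can be displayed with e.g. matplotlib's imshow to reveal the
# double-wave motif characteristic of the (211) cross section.
```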
As the film thickness decreases, the incommensurability between the gyroid unit cell dimension and the polymeric layer thickness drives the self-assembly of the PS-b-P2VP-b-PI chains, and other planes are oriented parallel to the air/surface interface. From Figure 9, the PI volume fraction for the other planes decreases as follows at x0 = 0: (110) > (111) > (100).
This phenomenon is fully in accordance with the order of the planes observed parallel to the air/surface interface when the film thickness is decreased (see Table 1).
Table 1: Results summary of the crystallographic planes observed for a gyroid structure depending on the area fraction of matrix phase, the PI phase, the period, and the thickness of the film.
The self-assembly of a frustrated type II linear SPI terpolymer with volume fractions of 1, 1.1 and 0.5 for S, P and I, respectively, gives access to a core-shell double gyroid structure.
Depending on the film thickness, different crystallographic planes of the Q230 structure are observed. For a film thicker than the unit cell of the gyroid network, the (211) plane is observed at the air-surface interface. Decreasing the film thickness closer to the unit cell dimension induces the stabilization of the (111) plane. If the thickness is well below the unit cell dimension, the (100) plane is observed. The formation of the different crystallographic planes is driven by the commensurability of the structure period with the film thickness and the preferential segregation of the PI block at the free surface.
In this part, we will study the self-assembly of star miktoarm ABC terpolymers. In contrast to linear ABC terpolymers, star miktoarm ABC terpolymers have only one junction point. This unique junction point imposes interfaces between the three different blocks (A/B, B/C and C/A interfaces), and is located along a line.1 This topological requirement induces new phases not accessible from linear ABC terpolymers, such as Archimedean tiling patterns.2-6 It is important to define the nomenclature used to distinguish the different Archimedean tiling patterns. They are identified by symbols based on the orders of the polygons meeting at a given vertex.7 According to Kepler, only 11 tilings can fill the plane without gaps; they are denoted (m1^n1.m2^n2. ...) where mi refers to the number of sides of each polygon, and the superscript ni denotes the number of adjacent identical polygons around a vertex. For instance, the two-dimensional honeycomb pattern consisting of regular hexagons is denoted (6.6.6) because three hexagons meet at each vertex.
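This vertex notation can be checked numerically: the interior angles of the regular polygons meeting at a vertex must sum to 360°. A minimal sketch:

```python
# Verify that a vertex configuration of regular polygons closes the plane:
# the interior angle of a regular m-gon is 180*(m-2)/m degrees, and the
# angles around a vertex must sum to 360 degrees.
def closes_plane(*sides):
    return abs(sum(180.0 * (m - 2) / m for m in sides) - 360.0) < 1e-9

print(closes_plane(6, 6, 6))        # (6.6.6) honeycomb     -> True
print(closes_plane(4, 6, 12))       # (4.6.12)              -> True
print(closes_plane(4, 8, 8))        # (4.8.8)               -> True
print(closes_plane(3, 3, 4, 3, 4))  # (3.3.4.3.4)           -> True
print(closes_plane(5, 5, 5))        # pentagons do not tile -> False
```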
Only a few papers deal with the self-assembly of star miktoarm ABC terpolymer thin films.
The (6.6.6) and (4.8.8) Archimedean tiling patterns have been reported in thin films by Aissou et al.3,8,9 Hierarchical morphologies consisting of lamellae separated by an alternation of cylinders have also been reported by Aissou et al. and Choi et al.3,10 In this chapter, the thin film morphologies obtained from the self-assembly of two star miktoarm ABC terpolymers composed of PI, PS and P2VP will be discussed. The volume ratio of PI varies whereas the volume ratios of PS and P2VP are constant. One star miktoarm ABC terpolymer has a nearly-symmetric composition, whereas the other one has a smaller PI volume ratio. Importantly, the (3.4.6.4) Archimedean tiling pattern, accessible from microphase-separated 3 µ-ABC chains (see chapter I), also has a p6mm symmetry and a similar domain distribution surrounding the main domain. However, it was not retained as a possible model for the self-assembly of 3 µ-SPI thin films since it favors a large interfacial area between the P2VP and PI domains, which have the highest incompatibility.
We can note that the PI domains are well-ordered into a hexagonal array although these domains are not always surrounded by exactly 12 domains made of PS and P2VP (6 PS + 6 P2VP) as expected for a well-developed (4.6.12) Archimedean tiling pattern. This phenomenon implies that the long-range ordered PI domains are formed first, and the PS and P2VP chains are then microphase-separated in a second step around the PI domains. These results indicate that the microphase separation occurs via a two-step mechanism, as already observed by Aissou et al.9 for polyisoprene-arm-polystyrene-arm-polyferrocenylethylmethylsilane chains self-assembled into a c2mm symmetry pattern.
The (4.6.12) tiling pattern has already been described in theoretical studies. In 2002, Gemma et al.13 reported on this tiling pattern using Monte Carlo simulations for star miktoarm ABC terpolymers with an arm-length ratio of 1:1:x and a χ-parameter equal for each pair, as shown in Figure 2. The (4.6.12) tiling pattern was found for x = 2.5. Even if the volume ratio of the PI block is increased during the SVA process because CHCl3 swells the PI domains more, the result obtained from the Monte Carlo simulations does not match exactly with our experimental result.
This slight difference can be explained by the fact that they considered all the interaction parameters between blocks to be equal, which is not the case in our study (χPI-P2VP >> χPS-P2VP ≈ χPI-PS).
A different morphology was observed in the substrate corner (where the film is over 110 nm thick). A square array is observed in the thicker part of the film, as shown in the AFM topographic image presented in Figure 3. This morphology is stable only when the film thickness is over 110 nm. According to the 2D-FFT corresponding to the inset AFM topographic image, domains are ordered into a square array (four first-order spots are clearly visible) with a period of 35 nm. This p4mm symmetry structure could correspond to a (4.8.8) structure or a tetragonally perforated lamellae (TPL) morphology. According to the theoretical SCFT predictions reported by Jiang et al.,14 the (4.8.8) and the TPL phases have free energies close to that of the (4.6.12) tiling pattern (see Fig. 4). Although a "three-colored" pattern cannot be clearly observed in Figure 3, the presence of discrete domains on the film free surface rather than a continuous, uniform matrix supports the formation of the (4.8.8) tiling pattern. Further experiments are required to confirm this conclusion.

2) Solvent-annealed 3 µ-SPI (S:P:I = 1:1.2:0.6) under a THF vapor
We also studied the self-assembly of 3 µ-SPI thin films with volume fractions of 1, 1.2, and 0.6 for PS, P2VP and PI, respectively. The thickness of the film was determined to be 80 nm. The film was exposed to a THF vapor for 2 hours. The corresponding AFM topographic image is presented in Figure 5. The thin film exhibits an out-of-plane columnar morphology arranged into a hexagonal array. As PI domains are preferentially etched under a CF4/O2 plasma treatment, they appear in dark brown on the AFM topographic image while the P2VP and PS domains correspond to the white-brown and yellow regions, respectively. The PS domains form the inner part of the structure and are surrounded by twelve columns (6 P2VP + 6 PI). PS domains have a period of 46 nm according to the 2D-FFT associated with the Figure 5 inset while the PI and P2VP domain pitches are about 23 nm. The 2D-FFT confirms the formation of a p6mm symmetry pattern since six first-order spots can be observed. The high contrast between all domains allows us to conclude that the flower-shaped morphology corresponds to the (4.6.12) Archimedean tiling pattern. A schematic representation of the morphology is depicted in the right bottom corner of Figure 5 where the PS, P2VP and PI domains appear in blue, red and yellow, respectively. This (4.6.12) Archimedean tiling pattern is similar to the morphology achieved by a solvent-annealed (CHCl3, 2h) 3 µ-SPI thin film with a nearly-symmetrical composition (S:P:I = 1:1.2:1) (see Fig. 1). The main difference resides in the fact that the blocks do not occupy the same positions on the two (4.6.12) patterns, which consist of different domain sizes. This phenomenon is due to a change in the volume fraction of each block for the different 3 µ-SPI systems under their respective swelling conditions. Indeed, the largest block under swelling conditions would occupy the inner part of the morphology, where the column diameter is the largest.
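The periods quoted from the 2D-FFTs correspond to the inverse of the first-order spot frequency. The sketch below illustrates this extraction on a synthetic stripe image with a 46 nm period and an assumed 1 µm field of view; it does not reproduce the actual AFM data.

```python
import numpy as np

# Synthetic stripe "AFM" image: 46 nm period, 1 um field of view (assumed).
n_px, fov_nm, period_nm = 512, 1000.0, 46.0
x = np.linspace(0, fov_nm, n_px, endpoint=False)
X, _ = np.meshgrid(x, x)
img = np.cos(2 * np.pi * X / period_nm)

# 2D-FFT: find the first-order spot and convert its frequency to a period.
spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
freqs = np.fft.fftshift(np.fft.fftfreq(n_px, d=fov_nm / n_px))  # cycles/nm
spec[n_px // 2, n_px // 2] = 0.0            # suppress the DC component
iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
q = np.hypot(freqs[ix], freqs[iy])          # spot frequency (cycles/nm)
print(f"measured period: {1 / q:.1f} nm")   # ~46 nm (limited by FFT resolution)
```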
We also studied the morphology at the substrate corner where the film thickness is increased by side effects occurring during the spin-coating process, and found that a different phase behavior occurred in the region where the film thickness was about 100 nm. Indeed, a mixture of in-plane and out-of-plane cylindrical domains can be observed in the AFM topographic image presented in Figure 6. The period of the structure is determined to be 50 nm, which approximately corresponds to the pitch of the out-of-plane PS columns arranged within the (4.6.12) tiling pattern presented in Figure 5. Here, PS (yellow) forms large in-plane cylinders separated by smaller in-plane PI columns (dark brown) whereas the P2VP domains do not appear on the film free surface. It is well known that polymers can adopt in-plane or out-of-plane morphologies depending on the film thickness.15 In thicker films, the elastic strain energy can be more readily accommodated, which decreases the thickness dependence. For example, in the case of AB-type BCPs, it was shown that an in-plane orientation of the domains is promoted when the film thickness is increased.16 We assume that these in-plane columns correspond to the plane that cuts the (4.6.12) pattern along the PS (yellow) and PI (dark brown) domains (black line on the schematic) as shown in the inset of Figure 6. This cutting axis along the PS and PI domains corresponds to the most stable plane since (i) it coincides with the plane cutting the cylindrical domains through their middle and (ii) it allows the low surface energy PI block to easily form a wetting layer.
To summarize, the 3 µ-SPI chains with volume fractions of 1, 1.2, and 0.6 for PS, P2VP, and PI, respectively, treated in a THF vapor lead to the formation of a (4.6.12) Archimedean tiling pattern. Depending on the film thickness, different orientations of the columns can be observed. The in-plane structure is only observed at the substrate corner where the layer thickness is increased.
Thin film (4.6.12) Archimedean tiling patterns were produced from the self-assembly of two 3 µ-SPI systems. Depending on the solvent used for the SVA process and the composition of the 3 µ-SPI chains, the largest domains of the (4.6.12) Archimedean tiling were occupied by either the PS block or the PI one. Indeed, the PI block was located in the inner part of the morphology of a solvent-annealed (CHCl3, 2h) 3 µ-SPI thin film having a nearly-symmetrical composition (S:P:I = 1:1.2:1) while it was replaced by the PS block for a solvent-annealed (THF, 2h) 3 µ-SPI thin film having an asymmetrical composition (S:P:I = 1:1.2:0.6). To the best of our knowledge, it is the first time that thin film (4.6.12) Archimedean tiling patterns formed by star miktoarm ABC terpolymers are demonstrated.
Interestingly, morphology changes were demonstrated when the film thickness was increased from 80 nm to 110 nm. A (4.8.8) Archimedean tiling pattern was observed for a solvent-annealed (CHCl3, 2h) 3 µ-SPI thin film (S:P:I = 1:1.2:1) while a mixture of in-plane and out-of-plane columnar domains was achieved for a solvent-annealed (THF, 2h) 3 µ-SPI thin film having an asymmetrical composition.
Conclusion
In this thesis, we developed an effective method for the synthesis of linear and star miktoarm ABC terpolymers composed of PS, P2VP and PI. A library of linear and star miktoarm ABC terpolymers was built by keeping the PS and P2VP molecular weights constant and by varying the PI block size. The synthesis method developed in this thesis proved to be interesting since the functionalization steps required only a few purification steps and the coupling method did not involve a metal catalyst to achieve well-defined linear and star miktoarm ABC terpolymers.
The self-assembly of linear and star miktoarm terpolymers was demonstrated in a thin film configuration. For that purpose, a solvent-vapor annealing process was used to promote the mobility of the polymeric chains; then a plasma treatment was performed on the film free surface to improve the contrast between the different blocks for the microscopy imaging. The effect of a fluorine-rich plasma on the etching rate of the different blocks was characterized by PM-IRRAS experiments, which revealed that the PI domains are removed more easily than the P2VP and PS ones.
We demonstrated that the self-assembly of type II linear ABC terpolymers enables the formation of a thin film core-shell double gyroid structure when the volume fractions of PI, PS and P2VP are 0.5, 1.1 and 1, respectively. Depending on the film thickness, four different planes were obtained. The (211), (111) and (100) planes were observed for 190 nm, 90 nm and 75 nm thick films, respectively, whereas the (110) plane was only observed between terraces.
A (4.6.12) Archimedean tiling pattern was also demonstrated from the self-assembly of two 3 µ-SPI terpolymers. Interestingly, changing the volume ratio of one block and the solvent annealing conditions allowed the same structure to be obtained with a different block located in the inner part of the (4.6.12) tiling pattern. The PI block is located within the inner-part domains of the morphology when 3 µ-SPI chains with a nearly symmetrical composition are placed under a chloroform vapor. In contrast, solvent-annealed asymmetric 3 µ-SPI chains under a THF vapor give access to a (4.6.12) tiling pattern where the inner-part domains are occupied by the PS block.
APPENDIX
The spin-coating process is one of the most commonly used techniques for the deposition of polymeric thin films on substrates. It is used in a wide variety of industries and technology sectors. The advantage of spin coating is its ability to quickly and easily produce uniform films with thicknesses of a few nanometers.1 First, the substrate is coated with the solution containing the polymer dissolved in a solvent. The substrate then rotates at a constant acceleration rate until the desired rotation speed is reached (1500-3000 rpm), and the majority of the solvent evaporates during this process. Varying the rotation speed or the polymer concentration in the starting solution allows the thickness of the film to be controlled, and so different film thicknesses can be prepared.1-3

Figure 1: Schematic representation of the spin-coating process.
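As an illustration of this thickness control, the often-used empirical scaling t ∝ ω^(-1/2) at fixed concentration can be sketched as follows; the calibration point is a hypothetical value, not a measurement from this work:

```python
# Empirical spin-coating law: film thickness t scales as omega**(-1/2)
# at a fixed polymer concentration. Calibrated from one hypothetical point.
def thickness_nm(omega_rpm, t_ref_nm=80.0, omega_ref_rpm=2000.0):
    return t_ref_nm * (omega_ref_rpm / omega_rpm) ** 0.5

for w in (1500, 2000, 3000):
    print(f"{w} rpm -> {thickness_nm(w):.0f} nm")
# -> 1500 rpm: ~92 nm, 2000 rpm: 80 nm, 3000 rpm: ~65 nm
```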
To produce uniform polymeric thin films, the solvent used to solubilize the PS-b-P2VP-b-PI and 3µ-SPI chains must be a good solvent for all the blocks. Therefore, we chose to work with toluene. The affinities between the different blocks and the solvent can be evaluated with the Hildebrand parameters reported in Table 1.4 A solvent is considered a good solvent for a given polymer when the difference between the Hildebrand solubility parameters of the solvent and the polymer is low.5 The solubility parameters of PS, P2VP and PI are 18.5, 20.6 and 16.3 MPa^1/2, respectively. Since the difference between the Hildebrand parameter of toluene and those of the three different blocks is low, this solvent was used as a non-selective good solvent.
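This selection criterion can be made explicit in a few lines of code; the polymer δ values are those of Table 1, and the toluene value (≈18.2 MPa^1/2) is a standard literature figure quoted here as an assumption:

```python
# Solvent selectivity from Hildebrand parameters (MPa**0.5): the smaller
# |delta_solvent - delta_polymer|, the better the solvent for that block.
DELTA_TOLUENE = 18.2   # literature value (assumption)
polymers = {"PS": 18.5, "P2VP": 20.6, "PI": 16.3}   # from Table 1

for name, delta in polymers.items():
    print(f"|d(toluene) - d({name})| = {abs(DELTA_TOLUENE - delta):.1f} MPa^0.5")
# All differences stay small (< 2.5), consistent with toluene acting as a
# non-selective good solvent for the three blocks.
```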
As the boiling point of toluene is 111 °C, its evaporation is not too fast and allows homogeneous thin films to be obtained for a spin-coating rate between 1500 and 4000 rpm. Going below 1500 rpm leads to a heterogeneous film since it involves a slower evaporation of toluene. Several methods can be used to promote the self-assembly of BCP thin films. Thermal annealing7-9 is the most used methodology because it is easy to set up, but also because it does not present any toxicity problems. Generally, in this process, the applied temperature is above the glass transition temperature (Tg) of the different blocks in order to increase the chain mobility and thus promote the self-assembly. Other techniques have been developed, such as mechanical flow fields10,11, electric or magnetic fields12-14 and solvent annealing15-20. In this study, we will use a solvent vapor annealing (SVA) process to promote the ABC terpolymer chain mobility.
Once the polymeric films are deposited on the silicon wafer, they are treated with a SVA process. Some studies showed that it is possible to go from a disordered structure to a well-organized phase with SVA.19-21 The SVA process is composed of two steps. The first one consists in the adsorption of solvent molecules at the film surface. The swelling of the film is stable when the chemical potential of the solvent contained in the film is equal to that of the solvent contained in the vapor phase. The Tg of the film is therefore decreased and the chain mobility is increased. Thus, the interactions between polymer chains, the volume fractions of the blocks, and the period between domains are affected by this adsorption, which can lead to different structures. The second step occurs during the solvent removal. A fast evaporation of the solvent can freeze the BCP morphology. During the SVA process, it is important to control the temperature and the vapor pressure in the chamber. For this purpose, the SVA process was always performed at 22 °C (clean room temperature), and the vapor pressure was controlled with a mass flow controller.16 Samples were exposed to a continuous stream of solvent vapor produced by bubbling nitrogen gas through the liquid solvent, as shown in Figure 2. Different solvents were used to promote the self-assembly of BCPs, and we mainly worked with tetrahydrofuran (THF) and chloroform (CHCl3). The affinity of each block with the solvent is different, and therefore their swelling ratios differ. In order to characterize this phenomenon, swelling ratio measurements of each block were done. Homopolymer thin films were deposited from a solution of 2 wt. % in toluene. First, one can notice that despite the use of the same polymer concentration in toluene, different film thicknesses were obtained.
In-situ variations of the swelling ratio were followed using a Filmetrics spectroscopic white-light reflectometer. The solvent chamber lid of the SVA set-up was made of quartz, which allowed the in-situ measurement of the film thickness variation during the SVA process. The homopolymer film thickness was evaluated before, and every 5 minutes after, the beginning of the SVA process. Because of the difference in the initial film thicknesses, each layer was normalized, and the variations of the film thicknesses over the SVA time were compared. The curves are displayed in Figure 3. In the case of a chloroform vapor, hPI reaches a plateau for a thickness variation of 90%, whereas hPS and hP2VP reach a plateau for a 35% thickness variation. Therefore, hPI has a larger affinity with chloroform than hPS and hP2VP. In other words, hPI is more swollen by CHCl3 than hPS and hP2VP, making the hPI chains more mobile in chloroform.
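The normalization used here is simply the relative thickness change (d(t) − d0)/d0. A minimal sketch, with hypothetical reflectometry readings chosen to mimic the reported plateaus:

```python
import numpy as np

# Normalized thickness variation (swelling ratio) of homopolymer films
# followed in situ during SVA. Hypothetical reflectometry readings (nm).
t_min = np.array([0, 5, 10, 15, 20, 25])          # SVA time (min)
d_hPI = np.array([60, 95, 108, 112, 114, 114])    # e.g. under CHCl3
d_hPS = np.array([80, 95, 103, 106, 108, 108])

def swelling_percent(d):
    """(d(t) - d0)/d0 in %, so films with different initial thicknesses
    can be compared on the same scale."""
    return 100.0 * (d - d[0]) / d[0]

print(swelling_percent(d_hPI))   # plateau near 90 %
print(swelling_percent(d_hPS))   # plateau near 35 %
```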
hPI and hP2VP behave similarly in THF, with a slow increase of the film thickness of about 20%. In contrast, the hPS film thickness increases strongly for short swelling times, and eventually reaches a plateau of about 65%. Therefore, hPS has a larger affinity with THF, making the hPS chain mobility more important than that of the hP2VP and hPI ones.
It is noteworthy that the layer thickness variation is larger when a CHCl3 vapor is used. CHCl3 has a larger saturated vapor pressure than THF, which leads to a larger swelling of the polymers under a CHCl3 vapor.
In this part, we have shown that the SVA process was done under controlled parameters (temperature and vapor pressure in the chamber). Moreover, chloroform and tetrahydrofuran have been shown to swell preferentially hPI and hPS, respectively.
III. Reactive ion etching plasma
After the spin-coating and solvent annealing processes, some contrast between polymers must be established in order to precisely characterize the self-assembled structure by microscopy.
Plasma etching has proven to be a great tool for surface modification of polymeric materials.23,24 A gas plasma (glow discharge) is a partially ionized gas, which can be generated by an electrical discharge (see Figure 4a). A plasma can etch in three different manners. The material can be chemically etched by reactive species of the plasma (e.g. radicals or ions created in the plasma); in this case, we talk about chemical etching. The ion bombardment of a polymer surface, which causes sputtering of the surface, is referred to as a physical process. UV radiation from the plasma can also lead to the dissociation of chemical bonds, which leads to the formation of low molecular weight materials. Generally, those three mechanisms occur simultaneously during a plasma treatment and induce the formation of volatile products in the plasma chamber.
The main advantage of this technique is its selectivity regarding the chemical structure of the polymers. The plasma etcher used in this study is the PE-100 Benchtop Plasma System (Figure 4b). In this study, a CF4/O2 plasma (40 W, 17 sccm CF4 and 3 sccm O2, 45 s) was routinely used to reveal the BCP self-assembled structure as different etching behaviors were observed between PS, PI and P2VP. In order to gain more insight into the etching behaviors of the various materials, phase modulation infrared reflection absorption spectroscopy (PM-IRRAS) experiments were performed.
IV. Phase Modulation Infrared Reflection Absorption Spectroscopy (PM-IRRAS) experiments
The Phase Modulation Infrared Reflection Absorption Spectroscopy (PM-IRRAS) method benefits from the IRRAS advantages of electric field enhancement and surface selection, but also presents the tremendous advantage of having high sensitivity in surface absorption detection (see Figure 5).
Herein, the PM-IRRAS experiment was used to characterize the effect of the plasma etching on the polymers. The polymers were spin-coated on gold-coated substrates. The thin films were treated with a solvent annealing process and etched as described previously. Their absolute reflectance was over 98% in the 1.2-12 µm spectral range. The PM-IRRAS spectra were recorded on a ThermoNicolet Nexus 670 FTIR spectrometer at a resolution of 4 cm-1, by coadding several blocks of 1500 scans (30 minutes acquisition time). Generally, eight blocks (4 hours acquisition time) were necessary to obtain PM-IRRAS spectra of ultra-thin films with good signal-to-noise ratios. All spectra were collected in a dry-air atmosphere after a 30-min incubation in the chamber. Experiments were performed at an incidence angle of 75° using an external homemade goniometer reflection attachment.25 The infrared parallel beam was modulated in intensity at a frequency ωi lower than 5 kHz, and the IRRAS signal was obtained from the normalized differential reflectivity ΔR/R = (Rp,f − Rp,s)/Rp,s, where Rp,f and Rp,s stand for the p-polarized reflectance of the film/substrate and bare substrate systems, respectively.26,27 Homopolymers of PS, P2VP, and PI as well as a star ABC terpolymer with volume ratios of PS:P2VP:PI = 1:1.2:1 were spin-coated on gold-coated substrates. They were exposed to a CHCl3 solvent vapor annealing for two hours. Then, PM-IRRAS spectra were recorded before and after the plasma treatment (plasma conditions: 40 W, 17 sccm CF4 and 3 sccm O2, 45 s). The PM-IRRAS spectra are presented in Figure 6. To investigate the effect of the plasma, the characteristic peak at a given wavenumber for a given homopolymer was chosen so as not to overlap with the signals of the other homopolymers.
We chose the peaks located at 1493 cm-1, 887 cm-1, and 1567 cm-1 for hPS, hPI and hP2VP, respectively. After the plasma etching treatment, the area under all the peaks decreased.
The peak integrals were calculated from the spectrum of the self-assembled star ABC terpolymer before and after the plasma etching. The polymer loss percentage was calculated for each block. After the RIE treatment, 65% of the PI domains were etched while 41% and 26% of the P2VP and PS domains, respectively, were also removed. These results confirm that the CF4/O2 plasma used in this study etches the PI block preferentially over the PS and P2VP ones.
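The loss percentages follow from the ratio of peak areas before and after etching, (A_before − A_after)/A_before. A short sketch, with illustrative areas chosen to reproduce the reported values:

```python
# Polymer loss from PM-IRRAS peak areas before/after the CF4/O2 plasma.
# Areas are illustrative numbers, not the measured integrals.
areas = {            # (A_before, A_after), arbitrary units
    "PI (887 cm-1)":    (1.00, 0.35),
    "P2VP (1567 cm-1)": (1.00, 0.59),
    "PS (1493 cm-1)":   (1.00, 0.74),
}
for band, (a0, a1) in areas.items():
    print(f"{band}: {100.0 * (a0 - a1) / a0:.0f} % etched")
# -> PI: 65 %, P2VP: 41 %, PS: 26 %
```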
In PeakForce Tapping (PFT) mode, the cantilever is driven at a frequency far below its resonances. The general operation is illustrated in Figure 7b. It also allows working with a softer cantilever than in the TM mode, leading to an improved measurement sensitivity.
To demonstrate the efficiency of, and the need for, PFT instead of TM AFM, the same sample was imaged using both modes. The AFM topographic views are presented in Figure 8. The AFM image produced with the TM exhibits only two colors: the PI is black, but it is not possible to distinguish the PS from the P2VP (see Fig. 8a). On the other hand, the AFM topographic view obtained with PFT shows a three-colored pattern where PI, PS and P2VP appear in dark brown, yellow, and brown, respectively (see Fig. 8b). The PI, PS and P2VP domains are well resolved under these imaging conditions.
Figure 1: Reaction scheme for the synthesis of polyisoprene end-functionalized with a carboxylic acid group.
Figure 2: Reaction scheme of the sequential anionic polymerization of PS-b-P2VP end-functionalized with a hydroxyl group.
Figure 3: Reaction scheme of the coupling between PS-b-P2VP and PI by Steglich esterification.
Table 1: Summary of the linear ABC terpolymers synthesized by combining anionic polymerizations with a Steglich esterification coupling.
Figure 4: Reaction scheme of the synthesis of the core molecule (1-(4-(2-tert-butyldimethylsiloxy)ethyl)phenyl-1-phenylethylene) from 4-bromobenzophenone.
At each step of the synthesis, the products were purified by flash chromatography and characterized by proton NMR. Peak integration confirmed the synthesis of the 1-(4-(2-tert-butyldimethylsiloxy)ethyl)phenyl-1-phenylethylene core molecule.
Figure 5: Reaction scheme of the synthesis of a mid-functionalized diblock copolymer (PS-b-P2VP).
The mid-functionalized diblock was characterized by proton NMR and SEC. The NMR spectrum shows the characteristic signals of the protons borne by 2VP and styrene, confirming the synthesis of the diblock. The SEC trace of the diblock exhibits a single narrow, monomodal peak, confirming its purity. The molecular weights determined by SEC are 19 and 24 kg.mol-1 for PS and P2VP, respectively. The molecular weights obtained by SEC are consistent with the PS and P2VP fractions obtained by NMR.
Figure 6: Reaction scheme of the Steglich esterification coupling between PS-b-P2VP and PI to obtain a star miktoarm ABC terpolymer.
Figure 7: SEM images (2 × 2 µm²; inset: 0.5 × 0.5 µm²) of a PS21-b-P2VP24-b-PI9 thin film annealed for two hours under chloroform vapor and treated with a CF4/O2 plasma, revealing four planes of the Q230 structure (A: (211); B: (111); C: (110); D: (100)).
In the SEM images, PI appears black (as it is the most readily etched polymer), P2VP appears white (owing to its platinum staining) and PS appears gray. When the film is 190 nm thick, a pattern of two waves with different amplitudes is observed (Fig. 7A). The large-amplitude waves are separated by small-amplitude waves with a period of about 120 nm. This pattern corresponds to the (211) plane of a core-shell double gyroid structure, also called the Q230 structure. The period of the pattern then corresponds approximately to the unit cell dimension of the Q230 structure. PS occupies the matrix, PI the core and P2VP the shell of the structure.
Figure 8: AFM images (1 × 1 µm²) of a 3 µ-S19P24I9 thin film annealed for two hours under a THF vapor and treated with a CF4/O2 plasma. A: (4.6.12) Archimedean tiling oriented perpendicular to the film free surface (film thickness: 80 nm); B: (4.6.12) Archimedean tiling oriented parallel to the film free surface (film thickness: 100 nm).
When the film thickness is increased (Fig. 8B), the (4.6.12) structure orients parallel to the film free surface along the plane cutting the PI and PS domains through the center of the columns. The parallel orientation is favored in thick films because THF swells the PS block more than the other blocks (PS therefore migrates to the film free surface), and PI, which has a low surface energy, also positions itself at the film-air interface. A strong dependence of the structure orientation on the film thickness is thus observed.
A star miktoarm ABC terpolymer with a symmetric composition between the three blocks (3 µ-S19P24I16) was also self-assembled in a thin film configuration. The film was annealed under chloroform vapor. The resulting AFM image is presented in Figure 9.
Figure 10: AFM image (2 × 2 µm²) of a 3 µ-S19P24I16 film (thickness = 110 nm) annealed for two hours under a CHCl3 vapor and treated with a CF4/O2 plasma, revealing a (4.8.8) Archimedean tiling in which PI (black) is surrounded by 4 PS columns (light yellow) and 4 P2VP columns (brown).
Figure 1: 11 possible Archimedean tilings. The red and black symbols give the notations of accessible and inaccessible Archimedean tilings using 3 µ-ABC chains, respectively. The first three tilings are usually called Platonic tilings or regular tilings because they use only one type of regular tile. (Adapted from Ouyang et al.)12
Block copolymer architecture

BCPs belong to one of the polymer classes within the wide class of soft matter. They consist of at least two chemically different polymers linked together by a covalent bond. Many BCP architectures exist, such as linear, grafted, and star miktoarm. A schematic representation of some possible architectures for AB- and ABC-type BCPs is shown in Figure 2.
Figure 2: Schematic of some possible BCP architectures where each color represents a block: (a) linear AB-type BCP, (b) branched AB-type BCP, (c) star miktoarm ABC terpolymer and (d) linear ABC terpolymer.
Figure 3: AB-type BCP phase diagram and schematic representation of some accessible morphologies. (Adapted from Matsen et al.)17
Figure 4: Morphologies for linear ABC terpolymers. A combination of block sequence (ABC, ACB, BAC), compositions and block molecular weights provides a wide parameter space for the creation of new morphologies. Microdomains are colored as shown on the copolymer strand at the top, with monomer types A, B and C confined to regions colored blue, red and green, respectively. (Adapted from Zheng et al.)21
Mogi et al.25 studied the self-assembly of the non-frustrated polyisoprene-b-polystyrene-b-poly(2-vinylpyridine) (PI-b-PS-b-P2VP, ISP). In this study, they demonstrated the influence of the mid-block volume fraction on the attainable morphology. They reported a lamellar structure for symmetric volume fractions of the PI, PS and P2VP blocks (I:S:P = 1:1:1), an ordered tricontinuous double-diamond (OTDD) phase for a composition of I:S:P = 1:2:1, an alternating cylindrical morphology arranged into a tetragonal array for I:S:P = 1:4:1, and an alternating spherical phase for I:S:P = 1:8:1, as shown in Figure 5.
Figure 5: Typical examples of electron micrographs for different morphologies of microphase-separated structures of ISP triblock copolymers: (a) spherical (I:S:P = 1:8:1), (b) cylindrical (I:S:P = 1:4:1), (c and d) OTDD (I:S:P = 1:2:1), and (e) lamellar (I:S:P = 1:1:1) structures. (Adapted from Mogi et al.)25
Hückstädt et al.30 studied the self-assembly of polybutadiene-b-polystyrene-b-poly(2-vinylpyridine) (BSV) chains (B:S:V = 1:2:7 and B:S:V = 1:2:8). TEM images revealed a lamellar morphology, as shown in Figure 6. The interfacial energies between the middle and the end-blocks are similar, which leads to the promotion of a lamellar morphology even though the BCP composition was highly asymmetric.
Figure 6: TEM micrographs of bulk BSV films stained with OsO4 and CH3I which have different compositions: (A) B:S:V = 1:2:7, (B) B:S:V = 1:2:8. (Adapted from Hückstädt et al.)30
Figure 7: Phase map for poly(isoprene-b-styrene-b-ethylene oxide). The axes identify volume fractions of each block. Six stable ordered phases such as LAM (lamellar), HEX (hexagonal), Q230 (double gyroid), Q214 (alternating gyroid), BCC (body-centered cubic), O70 (Fddd orthorhombic network), the disordered state, and one metastable phase, HPL, are included.
Figure 8: (a) Axial TEM projection of hexagonally-packed structural units. The darkest regions correspond to the OsO4-stained PI domains, while the gray regions are CH3I-stained P2VP domains. The non-CMC interface between this PI microdomain and the PS matrix phase has a hexagonal shape with corners. (b) Transverse TEM projection. The light, gray, and dark regions correspond to projections through the PS matrix, the P2VP core, and the PI annulus, respectively. (Adapted from Gido et al.)34
Figure 9: TEM micrographs of the gyroid structure obtained from the self-assembly in bulk of S45B32V76: (a) stained with OsO4; (b) stained with I2. (Adapted from Hückstädt et al.)8,30
Figure 10: Schematic representation of the S23B57M20 morphology. (Adapted from Brinkmann et al.)38
Figure 12: Phase diagram of ABC triblock terpolymers with χAB = χBC = 13 and χAC = 35 at grafting density σ = 0.2. Dis represents the disordered phase. The red, blue, or black icons showing the parallel lamellar phases discern the different arrangement styles of the triblock terpolymer with block A, block C, or block B adjacent to the brush layers, respectively. (Adapted from Jiang et al.)42
As for linear ABC-type BCPs, the star miktoarm ABC terpolymer self-assembly is driven by the Flory-Huggins parameters of the different pairs (AB, BC, AC), the volume fraction of blocks (fa, fb, fc) and their degree of polymerization (NA, NB, NC). For a star miktoarm architecture, the main difference resides in the fact that all blocks are connected at only one junction point. Because of this, A/B, B/C and C/A interfaces exist in the microstructure, as described in Figure 13.49
Figure 13: Schematic illustrations of the arrangement of copolymer chains. (a) AB-type BCP: the chemical junction point is confined to the A/B interface, (b) linear ABC triblock copolymer: the chemical junction points are confined to the A/B and B/C interfaces and (c) ABC star miktoarm: the chemical junction point is confined to a line. (Adapted from Okamoto et al.)49
b. PS-arm-PI-arm-PMMA self-assembly

Star miktoarm ABC terpolymers composed of PS, PI and PMMA (3 µ-SIM) have been widely studied. For instance, Sioula and co-workers46 studied the effect of the composition on the morphology by keeping the molecular weights of the PI and PS blocks constant whereas they varied the PMMA block size. The nomenclature used in this paper is SIM-x/y/z with x, y, z the respective block molecular weights in kg/mol. TEM pictures of 3 µ-SIM chains (S:I:M = 1:1.1:2.6) revealed a hexagonal arrangement where PI cylinders were surrounded by a PS annulus in a PMMA matrix. The hexagonal arrangement was confirmed by SAXS. When the volume ratio of PMMA was decreased, the 3 µ-SIM (S:I:M = 1:1.1:2) and the 3 µ-SIM (S:I:M = 1:1.5:2.8) exhibited an inner core region of PI surrounded by a PS annulus in a matrix of PMMA.
Figure 14: Bright field TEM images of (a) 3 µ-SIM (S:I:M = 1:1.1:2.6) stained with OsO4, (b) 3 µ-SIM (S:I:M = 1:1.1:2.6) stained with RuO4, (c) 3 µ-SIM (S:I:M = 1:1.1:2.6) stained with OsO4, and (d) 3 µ-SIM (S:I:M = 1:1.1:2.6) stained with RuO4. (Adapted from Sioula et al.)46
In 2005, Takano and coworkers11 studied the self-assembly of a blend of two 3 µ-SIP terpolymers. They blended 3 µ-SIP-1.2 with 3 µ-SIP-1.9 (weight ratio: 0.85/0.15 for 3 µ-SIP-1.2 and 3 µ-SIP-1.9, respectively) to obtain a 3 µ-SIP-1.3. The TEM pictures of the self-assembled 3 µ-SIP blend revealed a mesoscopic (3².4.3.4) Archimedean tiling pattern.
Figure 15: (top) TEM images of the four 3 µ-ISP star-shaped terpolymer samples and (bottom) their corresponding schematic tiling patterns: (a) I1.0S1.8P1.0, (b) I1.0S1.8P1.6, (c) I1.0S1.8P2.0, (d) I1.0S1.8P2.9. (Adapted from Takano et al.)11
Figure 16: Schematic representation of morphologies obtained from the self-assembly of 3 µ-ISP terpolymers when the P block size is varied. (Adapted from Hayashida et al.)
The (3.3.4.3.4) tiling pattern was also described by Takano15 in 2007 for a self-assembled 3 µ-ISP (S:I:P = 1:1:1.3). In this paper, they studied the self-assembly of 3 µ-ISP-x
Many star ISP terpolymers have been studied. The microphase-separated morphologies obtained for different volume ratios of polystyrene, polyisoprene and poly(2-vinylpyridine) are summarized in the ternary phase diagram shown in Figure 17.
Figure 17: Ternary phase diagram of microphase-separated morphologies produced from a polystyrene-arm-polyisoprene-arm-poly(2-vinylpyridine) series. (Adapted from Takano et al.)15
Figure 18: Ternary phase diagram of microphase-separated morphologies produced from a polystyrene-arm-polybutadiene-arm-poly(2-vinylpyridine) series. (Adapted from Abetz)
Figure 19: (a) Schematic illustration of the model for the microdomain structure of the (PI)(PS)(PDMS) 3-miktoarm star terpolymer consisting of dark (PI), gray (PDMS), and bright (PS) cylinders with characteristic shapes. (b) A cross-sectional view of the cylinders in (a). The junction points of the star miktoarm terpolymer are confined on the curved lines and designated by the filled circles in (b). (Adapted from Yamauchi)
Figure 20: Phase diagram of ABC star miktoarm terpolymer systems with arm-length ratio 1:1:x and with symmetric interactions between the three components. A (light gray), B (medium gray), and C (dark gray) are displayed. Junction point lines are drawn as thick lines or solid circles where the three kinds of interfaces meet. (Adapted from Gemma et al.)
Elbs et al. reported on the self-assembly of polystyrene-b-poly(2-vinylpyridine)-b-poly(t-butyl methacrylate) (PS-b-P2VP-b-PtBMA) (S1V1.5T3.4) thin films. Some interesting morphologies were obtained in thin films, such as core-shell cylinders, spheres-in-cylinders, helices-on-cylinders and a core-shell double gyroid. In all cases PtBMA formed the matrix while P2VP formed the shell (cylinders), and PS the core. Depending on the solvent vapor annealing (SVA) post-treatment and the thickness of the film, morphologies differed for a given volume ratio between blocks. Thin films treated with tetrahydrofuran (THF) vapor showed a core-shell cylindrical morphology when the film was thick and helices-on-cylinders when the film was thinner. The helical structure was only obtained when the film thickness corresponded to a single layer of the cylinders. From solvent-annealed thin films under a chloroform vapor, a core-shell double gyroid structure was obtained. Deng et al. also reported on a core-shell double gyroid morphology for a poly(ethylene oxide)-b-poly(ethyl acrylate)-b-polystyrene (E:A:S = 1:5.9:5.6) mixed with 25% of phenolic resin oligomers (resol). Herein, the PEO had the smallest volume ratio. The core of the network structure was once again formed by the polymer exhibiting the smallest volume ratio of the linear ABC terpolymer. The obtained structures are shown in Figure 21.
Figure 21: AFM phase images of films containing (a) 0 wt%; (b) 25 wt%; (c) 40 wt%; (d) 50 wt%; (e) 60 wt% and (f) 67 wt% resol after exposure to MEK using a single SVA process. (Adapted from Deng et al.)
Figure 22: (1.25 × 1.25 µm) AFM phase views of templated DSM thin layers with different film thicknesses: a) ttrench ≈ 31 nm, b) ttrench ≈ 47 nm and tmesa ≈ 29 nm, and c) ttrench ≈ 65 nm and tmesa ≈ 47 nm. DSM thin films were annealed under CHCl3 vapor for 3 h then etched by a CF4/O2 RIE plasma prior to being imaged by AFM. Positions of trenches and mesas are indicated on the left of the figure. (Adapted from Aissou et al.)
Figure 23: (a) TEM image of an unblended bulk 3 μ-ISF film and (b-d) SEM images of 3 μ-ISF/hS15 thin films on untreated Si wafer at (b) 0.7P0, (c) 0.75P0, and (d) 0.85P0. (e,f) 3 μ-ISF/hS15 thin films on a P2VP-coated surface at (e) 0.75P0 and (f) 0.8P0. Samples were stained with OsO4 to enhance contrast for TEM and SEM imaging. Thin films (b-f) were etched by O2 RIE before SEM observation, so PFS appears bright and regions formerly occupied by PS appear dark. (Adapted from Aissou et al., 2013)
Figure 24: TEM images of 3μ-ISF thin films (a) before staining (b) after OsO4 staining. PFS microdomains appear darkest in the unstained sample while PI microdomains appear darkest in the stained sample. (c) SEM image of the bottom interface of a 3μ-ISF thin film. The continuous bright regions indicate PFS and bright dots indicate PI. Scale bars in (a-c) are 300 nm. (d) Schematic description of the thin film knitting pattern, and its chain conformation within the microdomains (e). (Adapted from Choi et al.)
Figure 25 :
25 Figure 25: AFM topographic views of solvent-annealed 3 μ-DSL (D:S:L = 27:56:17) thin films deposited on topographical substrates followed by a CF4/O2 RIE treatment. Sample thicknesses are (a) ttrench ∼ 25 nm and (b) tmesa ∼ 20 nm and ttrench ∼ 60 nm. Scale bars: 250 nm. (Adapted from Aissou et al.)[START_REF] Aissou | Archimedean Tilings and Hierarchical Lamellar Morphology Formed by Semicrystalline Miktoarm Star Terpolymer Thin Films[END_REF]
Figure 26 :
26 Figure 26: AFM topographic view of a solvent-annealed 3 μ-DSL (D:S:L = 22:46:32) thin film (t ∼ 45 nm) treated with a CF4/O2 RIE plasma which includes two columnar morphologies indexed with a p6mm or p4mm symmetry. The dashed line delimits the regions occupied by the different phases. Schematic models showing (top) the p4mm and (bottom) the p6mm microstructures corresponding to the[4.8.8] and[6.6.6] tilings, respectively. (Adapted from Aissou et al.)[START_REF] Aissou | Archimedean Tilings and Hierarchical Lamellar Morphology Formed by Semicrystalline Miktoarm Star Terpolymer Thin Films[END_REF]
Figure 27 :
27 Figure 27: Synthesis of a PS-b-PI-b-P2VP triblock terpolymer by anionic polymerization.
The terpolymers were characterized by SEC (Figure 28) and proton NMR. The final linear ABC terpolymers exhibited a low dispersity, and the molecular weights obtained were close to the ones expected from the stoichiometry.
Figure 28: SEC traces of a triblock copolymer and its precursors.
Figure 29: Synthesis of a PS-arm-PI-arm-PB by anionic polymerization using a chlorosilane method.
A further reported route combined the use of DPE as a core molecule with click chemistry. In this route, the DPE bears a protected alkyne function. The first two arms were sequentially prepared by anionic polymerization using the modified DPE as core molecule. The third arm was then "clicked" onto the alkyne function of the DPE after deprotection, by Huisgen click chemistry,99 as shown in Figure 31.
Figure 31: Synthetic route for the synthesis of an ABC miktoarm star terpolymer consisting of polybutadiene, poly(tert-butyl methacrylate), and polystyrene.
Figure 32: Core molecule, succinic acid 3-(2-bromo-2-methylpropionyloxy)-2-methyl-2-[2-phenyl-2-(2,2,6,6-tetramethylpiperidin-1-yloxy)-ethoxycarbonyl]-propyl ester prop-2-ynyl ester, used for the synthesis of star miktoarm ABC terpolymers.
Figure 2: 1H NMR spectrum (400 MHz, CD2Cl2) of the carboxyl-end-functionalized hPI.
Figure 3: SEC trace of the hPI-COOH (Mn,PI = 9 kg.mol-1) in THF.
The spectra exhibit the characteristic peaks of the 1,2- (δ = 5.5-6 ppm) and 3,4- (δ = 4.4-5 ppm) units of polyisoprene, as for the hPI-COOH having a molecular weight of 9 kg.mol-1. The three hPI-COOH also contain 3,4- and 1,2- in-chain units in a 7/3 ratio.
Figure 4: 1H NMR (400 MHz) in CD2Cl2 of three carboxyl-end-functionalized hPI chains synthesized by anionic polymerization in THF.
Figure 5: SEC traces of anionically synthesized PI homopolymers having different molecular weights (13, 16 and 28 kg.mol-1).
Figure 6: Synthesis route of hydroxyl end-functionalized PS-b-P2VP via anionic polymerization using sec-BuLi as an initiator in THF.
The 2-vinylpyridine monomer (1.7 mL) was then added to the reaction medium. After 20 minutes, ethylene oxide was added to the solution. The colorless mixture was kept under stirring. After 10 minutes, the reaction was terminated by the addition of degassed methanol. The mixture was concentrated, precipitated in cyclohexane and dried in an oven at 35 °C for 12 hours. The PS-b-P2VP-OH was characterized by 1H NMR (δ (ppm), 400 MHz, CD2Cl2), 2D DOSY NMR (δ (ppm), 400 MHz, THF), and SEC with a universal calibration in THF.
Figure 7: (a) SEC trace of the hPS aliquot in THF and (b) its corresponding 1H NMR (400 MHz, CD2Cl2) spectrum.
Figure 8: (a) 1H NMR (400 MHz, CD2Cl2) spectrum of the PS-b-P2VP-OH chains and (b) their corresponding SEC trace in THF.
The 1H NMR and SEC characterizations confirmed the synthesis of well-defined PS-b-P2VP-OH chains. A 2D DOSY NMR (400 MHz, THF) was also performed to bring further proof of the PS-b-P2VP BCP formation. The 2D DOSY NMR spectrum is shown in Figure 9. The proton signals assigned to the PS and P2VP blocks are aligned along the same line, which confirms that these two blocks are linked together. The 2D DOSY NMR spectrum exhibits a single diffusion coefficient of about 1.7 x 10-10 m2.s-1. The other spots detected were assigned to solvents.
Figure 9: 1H 2D DOSY NMR (400 MHz, THF) of the hydroxyl-end-functionalized PS-b-P2VP.
The preparation comprises two steps: the synthesis of the core molecule and the preparation of the mid-functionalized PS-b-P2VP BCP. a. Synthesis of the core molecule (DPE-Si). The first step of the mid-functionalized PS-b-P2VP synthesis requires the synthesis of the core molecule, 1-(4-(2-tert-butyldimethylsiloxy)ethyl)phenyl-1-phenylethylene (DPE-Si) (see Figure 10).
Figure 10: 1-(4-(2-tert-Butyldimethylsiloxy)ethyl)phenyl-1-phenylethylene (DPE-Si), the core molecule used in this work.
Figure 11: Reaction scheme and 1H NMR (400 MHz, CDCl3) spectrum of the 1-(4-bromophenyl)-1-phenylethylene.
Figure 12: Reaction scheme and 1H NMR (400 MHz, CDCl3) spectrum of the 1-(4-(2-hydroxyethyl)phenyl)-1-phenylethylene.
Figure 13: Reaction scheme and 1H NMR (400 MHz, CDCl3) spectrum of the 1-(4-(2-tert-butyldimethylsiloxy)ethyl)phenyl-1-phenylethylene.
Figure 14: Synthesis route of hydroxyl mid-functionalized PS-b-P2VP via anionic polymerization using sec-BuLi as an initiator and a modified DPE as a core molecule.
Figure 15: (a) SEC traces in THF of the hPS aliquot (red curve) and PS-b-P2VP (black curve) and (b) 1H NMR spectrum (400 MHz, CD2Cl2) of the mid-functionalized PS-b-P2VP BCP.
A 2D DOSY NMR experiment was also performed on the mid-functionalized PS-b-P2VP. The 2D DOSY NMR spectrum is shown in Figure 16. The proton signals assigned to PS and P2VP are aligned along the same line, which confirms that the two blocks are linked together. The 2D DOSY NMR spectrum exhibited a single diffusion coefficient of about 1.7 x 10-10 m2.s-1, while the other spots are assigned to solvents.
Figure 16: 1H 2D DOSY NMR (400 MHz, THF) spectrum of the hydroxyl-mid-functionalized PS-b-P2VP BCP.
Figure 17: 1H NMR spectra (400 MHz, CD2Cl2) of the mid-functionalized PS-b-P2VP before (green spectrum) and after (red spectrum) the deprotection step.
Figure 18: General scheme for the Steglich esterification mechanism.
Synthesis of PS-b-P2VP-b-PI linear terpolymers. a. Synthesis of a linear ABC terpolymer: PS-b-P2VP-b-PI (PI = 9 kg.mol-1). Herein, we will first detail the coupling between the PS-b-P2VP-OH (45 kg.mol-1) and the hPI-COOH (9 kg.mol-1) as a model synthesis, before giving the results regarding the coupling of this AB-type BCP with PI homopolymers of different sizes. The general scheme to prepare the linear ABC terpolymer is presented in Figure 20. The esterification between the PS-b-P2VP-OH and the hPI-COOH was performed in dichloromethane with DIC as coupling reagent and DPTS as catalyst.
Figure 20a shows the SEC traces (universal calibration in THF) of the hPI-COOH (9 kg.mol-1), the PS-b-P2VP-OH, and the resulting PS-b-P2VP-b-PI.
Figure 19: Steglich esterification between a hydroxyl end-functionalized PS-b-P2VP and a carboxyl end-functionalized PI in CD2Cl2.
Figure 20: (a) THF-SEC traces (RI signal) of hPS, hPI-COOH, PS-b-P2VP-OH, PS-b-P2VP-b-PI. (b) 1H NMR (400 MHz) spectra of hPI-COOH in CDCl3, PS-b-P2VP-OH in CD2Cl2, and PS-b-P2VP-b-PI in CD2Cl2.
As shown in Figure 21, the diffusion coefficients of the PS-b-P2VP-OH BCP (D = 1.7 x 10-10 m2.s-1) and the PI homopolymer (D = 7.42 x 10-11 m2.s-1) are clearly different. Indeed, after the coupling reaction, the 2D DOSY NMR spectrum revealed that the proton resonances belonging to PS, P2VP and PI were aligned on the same horizontal line for the linear ABC terpolymer. This implies that all these signals are due to the same macromolecule, which has a higher diffusion coefficient (D = 4.48 x 10-10 m2.s-1) than that of the hPI-COOH and PS-b-P2VP-OH chains. Signals which are not aligned with the horizontal line are due to solvents.
Figure 21: 1H 2D DOSY NMR (400 MHz, THF) spectrum of the PS-b-P2VP-b-PI after purification.
Figure 23 shows the SEC traces of the four linear ABC terpolymers after the Steglich esterification step. As previously observed, the SEC traces of the different PS-b-P2VP-b-PI terpolymers shift towards lower retention volumes than those of the PS-b-P2VP-OH and hPI-COOH chains. All the PS-b-P2VP-b-PI terpolymers exhibit mainly one peak (with a small shoulder), confirming the efficiency of the coupling reaction, while the dispersity of the different linear ABC terpolymers is found to be below 1.1.
Figure 22: SEC traces in THF of (a) S21P24I9 (black curve), S21P24 (red curve) and I9 (blue curve); (b) S21P24I13 (black curve), S21P24 (red curve) and I13 (blue curve); (c) S21P24I16 (black curve), S21P24 (red curve) and I16 (blue curve).
Figure 23: Scheme of the Steglich esterification between a hydroxyl mid-functionalized PS-b-P2VP BCP and a carboxyl end-functionalized PI homopolymer in CH2Cl2 to achieve a 3µ-SPI.
Figure 24: (a) SEC traces of hPS (pink), hPI (blue), PS-b-P2VP (red) and 3µ-SPI (black) chains. (b) 1H NMR spectra (400 MHz) of hPI-COOH (blue), PS-b-P2VP (red) and 3µ-SPI (black) chains.
Figure 25: 1H 2D DOSY NMR (400 MHz, THF) spectrum of 3µ-SPI synthesized by combining the anionic polymerization with the Steglich coupling reaction.
Figure 26: SEC curves in THF of (a) 3µ-S19P24I9 (black curve), S19P24 (red curve) and I9 (blue curve); (b) 3µ-S19P24I13 (black curve), S19P24 (red curve) and I13 (blue curve); (c) 3µ-S19P24I16 (black curve), S19P24 (red curve) and I16 (blue curve); (d) 3µ-S19P24I28 (black curve), S19P24 (red curve) and I28 (blue curve).
Figure 2: Lattice structures for the three network phases identified for linear ABC terpolymers. Cubic phases Q230 (far left) and Q214 (far right) are double- and single-gyroid networks, respectively, while phase O70 (center) is a single-orthorhombic network. The colored insets illustrate how triblock copolymer is added to these lattices, resulting in pentacontinuous (Q230) and tricontinuous (Q214 and O70) morphologies.
Figure 3: Four examples of crystallographic planes.
Figure 4: (a) (2 × 2 µm²) AFM topographic view of a solvent-annealed linear SPI terpolymer (S:P:I = 1:1.1:0.5) thin film treated with a CF4/O2 RIE plasma, which reveals the (111) plane of the core-shell double gyroid structure, and (b) its corresponding 2D-FFT.
Figure 5: (2 × 2 µm²) AFM topographic views of solvent-annealed linear SPI terpolymer (S:P:I = 1:1.1:0.5) thin films treated with a CF4/O2 RIE plasma, which reveal two planes of a H2PtCl6-stained core-shell double gyroid structure: (a) the (211) plane and (b) the (111) plane.
Figure 6 shows a simulation of the (211) projection of a core-shell double gyroid morphology. In this figure, the P2VP shell and the PI core would correspond to the black and white domains drawn in a PS gray matrix. This simulation is fully in accordance with the (211) plane demonstrated in this study.
Figure 6: Simulated (211) projection of a core-shell double gyroid morphology. Reproduced from Goldacker et al.16
Figure 7: (2 × 2 µm², inset = 0.5 × 0.5 µm²) SEM images of H2PtCl6-stained SPI (S:P:I = 1:1.1:0.5) thin films treated with a CF4/O2 RIE plasma, which reveal four planes of a core-shell double gyroid structure: (a) (110), (b) (100), (c) (211), and (d) (111) planes.
Figure 8: Calculated area fraction of the matrix phase S(x0) cut along planes parallel to various crystallographic planes, as shown by the three digits hkl in the legend, at various positions x0.
Figure 1: (1 × 1 µm², inset: 150 × 150 nm²) AFM topographic views of a solvent-annealed 3µ-SPI (S:P:I = 1:1.2:1) thin film under a CHCl3 vapor for 2 hours and etched with a CF4/O2 plasma (scale bar: 200 nm). The 2D-FFT corresponding to the low-magnification AFM image and the schematic representation of the (4.6.12) tiling pattern are presented on the upper left and bottom right corners of the figure, respectively.
Figure 2: Schematic phase diagram for 3-miktoarm star terpolymers under the constraint of the A and B components occupying equal volume fractions and invoking symmetric interactions between all unlike components. The different phases are placed at their approximate compositional positions quantified by the parameter x, the volume ratio of the C and A components. (Adapted from Gemma et al.)13
Figure 3: (2 × 2 µm², inset: 150 × 150 nm²) AFM topographic views of the thicker part of a solvent-annealed 3µ-SPI (S:P:I = 1:1.2:1) thin film placed under a CHCl3 vapor for 2 hours and etched with a CF4/O2 plasma. The 2D-FFT corresponding to the high-magnification AFM image is presented on the upper left corner of the figure.
Figure 4: (a) Free energy differences from the value of the homogeneous phase as a function of the volume fraction of the C component for ABC star triblock copolymers with symmetric A and B arms. (b) Phase stability regions as a function of the arm-length ratio x = fC/fA (fA = fB). Note that in the [8.8.4]1 phase, the minority C block forms the 4-coordinated domains, and blocks A and B alternately form 8-coordinated microdomains. In the [8.8.4]2 morphology, the A and C blocks form the 8-coordinated polygons, and B blocks form the domains with 4-coordination.
Figure 5: (2 × 2 µm², inset: 150 × 150 nm²) AFM topographic views of a 3µ-SPI (S:P:I = 1:1.2:0.6) thin film treated with a THF vapor and etched with a CF4/O2 plasma (scale bars: 400 nm). The 2D-FFT corresponding to the high-magnification AFM image and the schematic representation of the (4.6.12) tiling pattern are presented on the upper left and bottom right corners of the figure, respectively.
Figure 6: (1 × 1 µm²) AFM topographic view of the thicker part of a 3µ-SPI (S:P:I = 1:1.2:0.6) thin film treated with a THF vapor and etched with a CF4/O2 plasma (scale bar: 200 nm). The schematic representation of the (4.6.12) tiling pattern having an in-plane orientation is presented on the bottom right corner of the figure.
Figure 2: Schematic representation of the solvent vapor annealing set-up.
Figure 3: Film thickness variations versus the swelling time: (a) in THF and (b) in CHCl3.
Figure 4: (a) Schematic representation of the plasma etching process and (b) plasma reactor chamber used in this work.
Figure 5: Scheme of the optical PM-IRRAS set-up. The first polarizer creates a p-polarized incident beam on the sample that is polarization-modulated by the PEM.
The infrared beam was directed out of the spectrometer with an optional flipper mirror and made slightly convergent with a first BaF2 lens (191 mm focal length). The IR beam passed through a BaF2 wire-grid polarizer (Specac) to select the p-polarized radiation and a ZnSe photoelastic modulator (PEM, Hinds Instruments, type III). The PEM modulated the polarization of the beam at a high fixed frequency, ωm = 74 kHz, between the parallel and perpendicular linear states. After reflection on the sample, the doubly modulated (in intensity and in polarization) infrared beam was focused with a second ZnSe lens (38.1 mm focal length) onto a photovoltaic MCT detector (Kolmar Technologies, Model KV104) cooled at 77 K. The polarization-modulated signal was separated from the low-frequency signal (ωi between 500 and 5000 Hz) with a 40 kHz high-pass filter and then demodulated with a lock-in amplifier (Stanford Model SR 830). The output time constant was set to 1 ms. The two interferograms were high-pass and low-pass filtered (Stanford Model SR 650) and simultaneously sampled in the dual-channel electronics of the spectrometer. In all the experiments, the PEM was adjusted for a maximum efficiency at 2500 cm-1 to cover the mid-IR range in only one spectrum. For calibration measurements, a second linear polarizer (oriented parallel or perpendicular to the first preceding the PEM) was inserted between the sample and the second ZnSe lens. This procedure was used to calibrate and convert the PM-IRRAS signal in terms of the IRRAS signal.
Figure 6: PM-IRRAS spectra of PS, PI and P2VP homopolymer thin films (A) before and (B) after the plasma treatment. PM-IRRAS spectra of a 3µ-ISP thin film (C) before and (D) after the plasma treatment.
Figure 8: (1 × 1 µm²) AFM topographic views of a star ABC terpolymer composed of PS, PI and P2VP self-assembled in thin film, (a) recorded using TappingMode (TM); (b) recorded using PeakForce Tapping (PFT).
Iatrou, H. & Hadjichristidis, N. Synthesis of a Model 3-Miktoarm Star Terpolymer. Macromolecules 25, 4649-4651 (1992).
These observations led to the conclusion that the orientation of the planes of the core-shell double gyroid structure is controlled by the film thickness. To learn more about this relation between film thickness and plane orientation, we turned to a study carried out by Hashimoto and his team.9 In this study, these authors show that there is a direct link between the area occupied by the matrix at the film surface and the different planes of the gyroid. They highlighted the fact that the (211) plane minimizes the area of the matrix (PS) at the film surface, which maximizes the presence of PI at the film surface. Given the low surface energy of PI, the system minimizes its free energy by segregating PI to the surface of the film. Note that this is only possible because the film thickness revealing the (211) plane of the Q230 structure is larger than the unit cell dimension.
[Figure: AFM views of Q230 plane orientations as a function of film thickness: (211) plane, double-wave pattern (t ≈ 190 nm); (111) plane, wagon-wheel pattern (t ≈ 90 nm); (110) plane, doughnut pattern (at terrace steps); (100) plane, wavy lamellae pattern (t ≈ 75 nm); scale bars 100 nm and 400 nm.]
For smaller thicknesses (Figure 7 B, C and D), the pattern obtained differs from the (211) plane of the Q230 structure. Indeed, when the film thickness is of the order of 90 nm, a wagon-wheel pattern characteristic of the (111) plane of the Q230 structure appears. If the film thickness is reduced to 75 nm, the (100) plane of the Q230 structure is obtained and, between the terraces formed by the film, the (110) plane of the Q230 structure is observed, displaying a doughnut pattern. For commensurability reasons, when the film thickness is decreased, the structure moves towards smaller and smaller periodicities, passing through planes where the volume fraction occupied by PI at the surface decreases more and more. The order of the planes observed by Hashimoto et al. is in agreement with the results obtained experimentally in our study. To conclude, four different planes of the Q230 structure were demonstrated. If the film thickness is larger than the unit cell dimension, the (211) plane is the thermodynamically most stable plane, since the area of PI (the block with the lowest surface energy) at the film surface is maximized. If the film thickness is smaller than the unit cell dimension, the plane orientation is driven by the commensurability between the film thickness and the period of the pattern.
Finally, we studied the thin-film self-assembly of two star ABC terpolymers. In the literature, few studies report the thin-film self-assembly of star ABC terpolymers: the (4.8.8) and (6.6.6) Archimedean tilings, as well as hierarchical morphologies, are the only morphologies demonstrated in thin films. In this thesis, we studied the self-assembly of two star ABC terpolymers (3µ-S19P24I9 and 3µ-S19P24I16) in a thin-film configuration. For this purpose, the terpolymers were dissolved at 2 wt% in toluene and spin-coated (1.5 krpm) onto a silicon substrate. The resulting film was then annealed in different solvent vapors and treated by a plasma process. A plasma etch, and sometimes heavy-atom staining, were then performed to obtain contrast between the blocks during microscopic analyses. We demonstrated that the self-assembly of a type-II frustrated linear ABC terpolymer yields a core-shell double gyroid structure in thin films. Different crystallographic planes of the Q230 structure could then be observed depending on the film thickness.
A (4.6.12) Archimedean tiling was obtained by self-assembly of two star ABC terpolymers having different volume fractions for the C block (PI). We evidenced the dependence between the affinity of the annealing solvent for the blocks and the nature of the domain occupying the center of the (4.6.12) Archimedean tiling structure. Increasing the film thickness subsequently allowed us to obtain a (4.8.8) Archimedean tiling. By increasing the film thickness, we showed that we could go from a (4.6.12) tiling to a (4.8.8) tiling, or from a columnar structure perpendicular to the air/film interface to one parallel to the air/film interface.
To conclude, in this thesis we developed an efficient method for the synthesis of linear and star ABC terpolymers composed of PS, P2VP and PI. A library of linear and star ABC terpolymers was synthesized, keeping the volume fractions of the PS and P2VP blocks constant while varying the volume fraction of the PI block. The synthesis method developed in this thesis proved interesting because the functionalization steps are simple and quantitative. Moreover, few purification steps are needed to obtain pure terpolymers, and the coupling method used does not involve metals as catalysts. The thin-film self-assembly of linear and star ABC terpolymers was demonstrated. For this purpose, a solvent vapor annealing process was used to promote the mobility of the polymer chains.
List of abbreviations: χ: Flory-Huggins interaction parameter; H2C: double helices-on-cylinders; H3C: triple helices-on-cylinders; hP2VP: P2VP homopolymer; hPI: PI homopolymer; hPS: PS homopolymer; IS: International System; Ka: acid dissociation constant; KP: knitting pattern; L3: three-color lamellae; LC: cylinders-within-lamellae; MEK: methyl ethyl ketone; MgSO4: magnesium sulfate; Mn: number-average molecular weight; Mw: mass-average molar mass; N: polymerization degree; n-BuLi: n-butyllithium; NMR: nuclear magnetic resonance; Na2SO4: sodium sulfate; O2: dioxygen; OTDD: ordered tricontinuous double-diamond; P2VP: poly(2-vinylpyridine); P4VP: poly(4-vinylpyridine); PAA: polyacrylic acid; PAN: polyacrylonitrile; PB: polybutadiene; PC: perforated circular lamella-on-cylinders; PCEMA: poly(2-cinnamoylethyl methacrylate); PCL: poly(ε-caprolactone); PDMA: poly(N,N-dimethylacrylamide); PDMS: polydimethylsiloxane; PDMSB: poly(1,1-dimethylsilacyclobutane); PEA: poly(ethyl acrylate); PEM: photoelastic modulator; PEO: poly(ethylene oxide); PFS: polyferrocenylsilane; PFT: PeakForce Tapping; PI: polyisoprene; PI-COOH: carboxyl-end-functionalized polyisoprene; PL: perforated lamellae; PLA: poly(D,L-lactide acid); PLL: poly-L-lysine; PM-IRRAS: phase-modulation infrared reflection absorption spectroscopy; PMMA: poly(methyl methacrylate); PnBMA: poly(n-butyl methacrylate); ppm: parts per million; PS: polystyrene; PS-b-P2VP-OH: hydroxyl-end-functionalized PS-b-P2VP; PtBA: poly(tert-butyl acrylate); PtBMA: poly(tert-butyl methacrylate); RIE: reactive-ion etching; ROP: ring-opening polymerization; rpm: rounds per minute; SAXS: small-angle X-ray scattering; sec-BuLi: sec-butyllithium; SPM: scanning probe microscopy; SVA: solvent-vapor annealing; TBAF: tetra-n-butylammonium fluoride; TEM: transmission electron microscope; TEMPO: 2,2,6,6-tetramethylpiperidinyloxyl; Tg: glass transition temperature; THF: tetrahydrofuran; TM: TappingMode.
Contents: CHAPTER 4: SELF-ASSEMBLED PS-arm-P2VP-arm-PI THIN FILMS: I. Introduction; II. Thin film (4.6.12) Archimedean tiling patterns (1. Solvent annealed 3µ-SPI (S:P:I = 1:1.2:1) under a CHCl3 vapor; 2. Solvent annealed 3µ-SPI (S:P:I = 1:1.2:0.6) under a THF vapor); III. Conclusion; IV. Conclusion. GENERAL CONCLUSION. APPENDIX, CHAPTER 1. BIBLIOGRAPHIC STUDY: I. Thin film process; II. Solvent vapor annealing process; III. Reactive ion etching plasma; IV. Phase Modulation Infrared Reflection Absorption Spectroscopy (PM-IRRAS) experiments; V. AFM characterization.
Two 3µ-ABCs with different molecular weights and compositions will be presented. We will demonstrate for the first time thin film (4.6.12) Archimedean tiling patterns.
Table 2: Characteristics of the ISP star-shaped terpolymers.
Sample | Mn (10^3) | Mw/Mn (b) | I:S:P (d)
I | 13.3 (a) | 1.04 | -
S | 27.4 (b) | 1.01 | -
I1.0-b-S1.8 | 40.7 (c) | 1.03 | 1:1.82:0
I1.0S1.8P4.3 | 110 (c) | 1.01 | 1:1.8:4.3
I1.0S1.8P6.4 | 147 (c) | 1.02 | 1:1.8:6.4
I1.0S1.8P12 | 241 (c) | 1.03 | 1:1.8:12.2
I1.0S1.8P22 | 401 (c) | 1.04 | 1:1.8:22.0
(a) Determined by 1H NMR. (b) Determined by SEC using polystyrene standard samples. (c) Estimated by 1H NMR based on the Mn of the S precursor. (d) Volume ratios of I:S:P calculated using bulk densities of the components, i.e., 0.926, 1.05, and 1.14 g/cm^3 for the I, S, and P components, respectively.
Table 3: Blend ratios of the ISP star-shaped terpolymers and homopolymer used to obtain the I1.0S1.8PX series.
Sample | Formulation | Weight fraction | I:S:P (a)
I1.0S1.8P9.6 | I1.0S1.8P6.4/I1.0S1.8P12 | 0.35/0.65 | 1:1.8:9.6
I1.0S1.8P11 | I1.0S1.8P6.4/I1.0S1.8P12 | 0.16/0.84 | 1:1.8:11
I1.0S1.8P32 | I1.0S1.8P22/hP | 0.70/0.30 | 1:1.8:32
I1.0S1.8P53 | I1.0S1.8P22/hP | 0.44/0.56 | 1:1.8:53
(a) Volume ratios of I:S:P calculated using bulk densities of the components.
Table 4: Summary of the structures obtained from the self-assembly of 3µ-ISP-x.
Morphology | I:S:P volume ratios
(4.6.12) | 1:1:0.2; 1:1:0.4; 1:1:0.7
(6.6.6) | 1:1:1; 1:1:1.2
(4.8.8) | 1:1:1.3; 1:1:1.5
(3.3.4.3.4) | 1:1:1.9
Cylinders-in-lamella | 1:1:3; 1:1:4.9
Columnar piled disk/cylinder | 1:1:7.9; 1:1:10
Table 1: Summary of the linear ABC terpolymers composed of PS, P2VP and PI (SPI) synthesized by combining the anionic polymerization with the Steglich coupling reaction.
Sample | PS Mn (kg/mol) | PS vol. frac. | P2VP Mn (kg/mol) | P2VP vol. frac. | PI Mn (kg/mol) | PI vol. frac. | S:P:I
S21P24I9 | 21 | 0.39 | 24 | 0.41 | 9 | 0.20 | 1:1.1:0.5
S21P24I13 | 21 | 0.36 | 24 | 0.38 | 13 | 0.26 | 1:1.1:0.7
S21P24I16 | 21 | 0.34 | 24 | 0.36 | 16 | 0.30 | 1:1.1:0.9
Table 2: Summary of the star miktoarm ABC terpolymers obtained after the coupling reaction.
Sample | PS Mn (kg/mol) | PS vol. frac. | P2VP Mn (kg/mol) | P2VP vol. frac. | PI Mn (kg/mol) | PI vol. frac. | S:P:I
3µ-S19P24I9 | 19 | 0.37 | 24 | 0.43 | 9 | 0.20 | 1:1.2:0.6
3µ-S19P24I13 | 19 | 0.34 | 24 | 0.39 | 13 | 0.27 | 1:1.2:0.8
3µ-S19P24I16 | 19 | 0.32 | 24 | 0.37 | 16 | 0.31 | 1:1.2:1.0
3µ-S19P24I28 | 19 | 0.26 | 24 | 0.30 | 28 | 0.44 | 1:1.2:1.7
Table 1: Hildebrand parameters for the solvents and polymers used in this study.
Compound | Hildebrand solubility parameter (MPa^1/2)
Chloroform | 17.8
Tetrahydrofuran | 19.5
Toluene | 18.1
Polystyrene | 18.5
Poly(2-vinylpyridine) | 20.6
Polyisoprene | 16.3
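A common first-order use of these values is to rank how strongly each annealing solvent swells each block: the smaller the difference of Hildebrand parameters, the better the solvent. The following Python sketch (illustrative only, not from the thesis, and cruder than a full χ-parameter treatment) ranks the blocks for each solvent in Table 1.

```python
# Crude solvent-affinity screen from the Hildebrand parameters of Table 1:
# a smaller |delta_solvent - delta_polymer| suggests a better swelling solvent.
polymers = {"PS": 18.5, "P2VP": 20.6, "PI": 16.3}                 # MPa^0.5
solvents = {"chloroform": 17.8, "tetrahydrofuran": 19.5, "toluene": 18.1}

for name, ds in solvents.items():
    ranking = sorted(polymers, key=lambda p: abs(polymers[p] - ds))
    print(f"{name}: swells {' > '.join(ranking)}")
```

Such a screen reproduces the qualitative selectivities used above (e.g., chloroform is closer to PS than to P2VP), but it ignores polar and hydrogen-bonding contributions, so it should only guide, not replace, the swelling experiments.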
Hashimoto, T., Nishikawa, Y. & Tsutsumi, K. Identification of the 'voided double-gyroid-channel': A new morphology in block copolymers. Macromolecules 40, 1066-1072 (2007).
The 1H NMR spectrum of the carboxyl-end-functionalized hPI-COOH in CD2Cl2 is presented in Figure 2.
II. End-functionalized polyisoprene homopolymer
The end-functionalized homopolyisoprene (hPI) chains were synthesized by anionic polymerization without any protection/deprotection step. For that purpose, a carboxylation was performed at the end of the synthesis of the living polyisoprene. 23,24 The general route for the synthesis of the end-functionalized hPI is presented in Figure 1. PI homopolymers with different molecular weights (9, 13, 16, and 28 kg.mol -1 ) were prepared using the same synthesis route. In the next part, the synthesis of the hPI having a molecular weight of 9 kg.mol -1 will be first described in detail. Results for the other PI homopolymers will be then presented.
Synthesis of a carboxyl-end-functionalized polyisoprene homopolymer model
The anionic polymerization of the PI homopolymer was conducted in THF at -30 °C using sec-BuLi as initiator. For that purpose, THF (40 mL) was introduced in a flame-dried 250 mL round flask equipped with a magnetic stirrer.25 The solution was cooled down to -30 °C. sec-Butyllithium (0.45 mL, 0.0007 mol) was charged, followed by the addition of isoprene (10 mL, 0.1 mol).16,17 The slightly yellow reaction mixture was stirred for 3 hours. After complete conversion, the living polyisoprene was end-capped with carbon dioxide. The solution became colorless immediately. To ensure a complete end-capping of the polyisoprenyl anion, the solution was stirred at room temperature for 10 minutes. The reaction was then terminated by adding dried methanol to the flask. After concentration of the mixture, the hPI chains were precipitated in methanol and dried in an oven at 35 °C. The concentration of carboxylic acid chain ends was determined by titrating a solution of 0.2 g of polymer in 20 mL of toluene with 0.01 M KOH in methanol with phenolphthalein. The resulting hPI-COOH was characterized by 1H NMR (400 MHz, CD2Cl2).
II. Thin film (4.6.12) Archimedean tiling patterns
Thin film morphologies of 3µ-SPI (S:P:I = 1:1.2:0.6) and 3µ-SPI (S:P:I = 1:1.2:1) were studied. For that purpose, a 2 wt% polymer solution in toluene was spin-coated on smooth silicon substrates. The 3µ-SPI self-assembly was promoted by exposing samples to a continuous stream of a CHCl3 or THF vapor produced by bubbling nitrogen gas through the liquid solvent, as described previously.11,12 The morphology of the solvent-annealed 3µ-SPI thin films was frozen by quickly removing the lid of the chamber. The solvent annealing process under a vapor of either CHCl3 or THF favors the formation of a PI top surface layer. A fluorine-rich plasma was therefore applied to remove this low-surface-energy layer, which revealed the thin film morphology (plasma conditions: 40 W, 17 sccm CF4, 3 sccm O2).
In this chapter, we will first describe the morphologies obtained from the self-assembled 3µ-SPI (S:P:I = 1:1.2:1) thin films under a chloroform vapor. The microphase separation of 3µ-SPI (S:P:I = 1:1.2:0.6) thin films placed under a THF vapor will then be discussed.
1) Solvent annealed 3µ-SPI (S:P:I = 1:1.2:1) under a CHCl3 vapor
In this part, the self-assembly of the 3µ-SPI chains having a quasi-symmetric composition (S:P:I = 1:1.2:1) will be studied. In this case, the thin film was treated with a CHCl3 vapor for 2 hours. The thickness of the film was determined to be 80 nm by light reflection using a Filmetrics apparatus. The AFM topographic image of the resulting structure is presented in Figure 1. Here, PI (dark brown), occupying the inner part of the structure, is surrounded by an alternation of PS (yellow) and P2VP (brown) domains. The structure exhibits an out-of-plane columnar structure ordered into a hexagonal array. The 2D-FFT presented in Figure 1 confirms the p6mm symmetry of the array having a single grain orientation, since six first-order spots are clearly visible. PI domains have a period of 41 nm according to the 2D-FFT, while PS and P2VP domains have a similar period of 24 nm. Both the array symmetry and the distribution of periodicities are in accordance with the formation of a (4.6.12) Archimedean tiling pattern, as illustrated by the schematic representation of the structure shown on the bottom right corner of the AFM image. In this representation, PS, PI and P2VP occupy the blue, orange and red regions, respectively, of the (4.6.12) Archimedean tiling pattern.
V. AFM Characterization
To characterize the structure of the self-assembled polymeric thin films, atomic force microscopy (AFM) was mainly used. AFM is a high-resolution type of scanning probe microscopy (SPM), with demonstrated resolution at the nanometer scale. Until now, TM AFM (TappingMode, or intermittent-contact atomic force microscopy) has been the most often applied direct imaging technique to characterize thin film morphology. In TappingMode, the cantilever is oscillated up and down at or near its resonance frequency, in a direction normal to the sample surface. The frequency and amplitude of the driving signal are kept constant, leading to a constant amplitude of the cantilever oscillation as long as there is no drift or interaction with the surface. Typical amplitudes of oscillation are in the range of tens of nanometers. TappingMode does not measure a direct force, but short-range repulsive and long-range attractive forces. The general operation is shown in Figure 7a. In our case, TM allows for a good contrast between PI and the other blocks, but does not allow enough contrast between the PS and P2VP blocks to determine their relative position in the structure. To overcome this lack of contrast, we tried another mode called PeakForce Tapping (PFT). Similar to TM, in PFT the AFM tip is brought intermittently into contact with the sample surface. In contrast with TappingMode, PFT operates in a non-resonant mode.
The PFT oscillations are performed at frequencies well below the cantilever resonance. Moreover, the z-position is modulated (by a sine wave or a triangular one) to avoid unwanted resonances.
Synthesis of linear and star miktoarm ABC terpolymers and their self-assembly in thin films
The first objective of this work was to develop a synthesis method enabling the preparation of linear and star ABC terpolymers. The molecular weights of the A and B (PS and P2VP) blocks were kept constant while the size of the C (PI) block was varied to achieve different morphologies. The second objective of this work was devoted to the study of the self-assembly of linear and star ABC terpolymer thin films. A synthesis route combining anionic polymerization with a coupling method was developed. The PS and P2VP blocks were synthesized by sequential anionic polymerization. The PI block, separately synthesized by anionic polymerization, was then coupled to the PS-b-P2VP diblock via a Steglich esterification. This method proved efficient, since it is a metal-catalyst-free reaction enabling well-defined terpolymers with a dispersity below 1.1 to be obtained.
The study of star and linear ABC terpolymer self-assembly led to new morphologies in thin films. A solvent vapor annealing treatment was used to promote the mobility of the polymeric chains. A core-shell double gyroid structure was produced from the self-assembly of linear PS-b-P2VP-b-PI thin films. Four different crystallographic planes were observed depending on the film thickness. Moreover, the self-assembly of star ABC terpolymer chains into a thin film (4.6.12) Archimedean tiling pattern was demonstrated for the first time. Here, the PS and PI blocks occupied different places within the (4.6.12) tiling pattern depending on the PI volume ratio and the solvent selected to swell the film.
Synthesis of linear and star ABC terpolymers and study of their self-organization in thin films
The first objective of this work was to find a synthesis method for preparing linear and star ABC terpolymers while keeping the molar masses of the A and B blocks (PS and P2VP) constant and varying the molar mass of the C block (PI), so as to give access to different morphologies. The second objective consisted in self-assembling the synthesized terpolymers as thin films.
To meet the first objective of this thesis, a synthesis route combining anionic polymerization with a coupling method was developed. The sequential anionic polymerization of the PS and P2VP blocks yielded functionalized PS-b-P2VP chains, which were then coupled to different PI blocks via a Steglich esterification. This synthesis method proved relevant since very well-defined linear and star ABC terpolymers (i.e., with a dispersity below 1.1) could be synthesized. Moreover, the coupling method, with a yield close to 100%, does not involve any metal as catalyst.
Secondly, the self-organization of the terpolymers gave access to new thin-film morphologies. Solvent vapor annealing was used to bring mobility to the terpolymer chains. We thus showed that the self-organization of linear terpolymer chains (PS-b-P2VP-b-PI) led to the formation of a core-shell double gyroid phase in thin films. Moreover, the self-organization of the star terpolymers (3µ-ISP) yielded a (4.6.12) Archimedean tiling for the first time in thin films. In this case, we also showed that varying the molar mass of the PI block as well as the nature of the annealing solvent allowed a rotation of the domains within the structure: typically, the core of the structure can be occupied either by PI or by PS. |
01767226 | en | [ "sdv.mp.vir" ] | 2024/03/05 22:32:15 | 2017 | https://theses.hal.science/tel-01767226/file/TH2017ENGUEHARDMargot.pdf | Marco Vignuzzi
Pierre Roques
Dorothée Missé
Dominique Pontier
Barbara G Maxime
Marie Pierre
Marie Wilhelm
Marlène, Maryline Margot Lucie
De Nova-Ocampo
A Novel System for the Launch of Alphavirus RNA Synthesis Reveals a Role for the Imd Pathway in Arthropod Antiviral Response
My first thanks go to Dimitri, for giving me the opportunity to do this thesis, and also the chance to discover China and the Institut Pasteur of Shanghai, and to develop wonderful collaborations. The journey was not a long quiet river, but we can finally see the end! Thanks to Catherine for accepting to be my co-supervisor after Dimitri's departure to Shanghai. Thank you for your presence, your many constructive pieces of advice, our meetings, your support. Thanks to the members of my jury for agreeing to take part in my defense: to Marco Vignuzzi, Pierre Roques and Jin Zhong for accepting to act as reviewers, and to Dorothée Missé and Dominique Pontier for examining my work. During this thesis, I had the chance to be hosted in several laboratories and teams, which was rich in encounters. I first thank Yvan Moenne-Loccoz for allowing me to join the Microbial Ecology laboratory. Thanks to Patrick and Claire for the welcome in team 7. To Yoann, who started his thesis at the same time as me; to Van, for all the long orders; to Stéphanie, for all our fits of laughter during mosquito captures; to Guillaume, for showing me how to find larvae. A special thought for Flo, who left us too soon; you remain in my memory as a loving person, concerned with everyone's well-being. Thanks also to all the members of the EcoMic unit, who always welcomed me with a smile. In the Gerland unit (Viral Infections and Comparative Pathology), I spent five memorable years; thank you all for your support, our many discussions and your advice. To Fabienne, for running the unit wonderfully since the (too early) departure of Christophe. To the members of the AlphaGirls, Carine, Marie, Céline and Catherine: thank you for your good mood, your support, and your help with the many qPCRs and titrations. Thanks also to the P3 team, Maxime, Barbara G, Barbara V and Marie-Pierre, for making sure I worked safely and in good conditions in my second home, my dear beloved P3, without which I would have achieved nothing. Thanks to the Lyon students for our discussions, scientific or not: Marie, Wilhelm, Lucie, Marlène, Maryline and Margaux (tea time), Nico and Nico de la Tour. I do not forget the former members: Sarah, Franck, Claire and Najate. My apologies to those I may have forgotten.
I also thank the cytometry collaborators, Sébastien and Thibault, for every hour spent in the LSR and for our discussions, rich in precious advice.
My friends, who were a great support during all these years: Charlotte, Apo, Virginie, Anna, Stephan, Val, Lucas, and the others. In spite of the passing years and our often busy lives, we are always there for each other.
Thank you also to my family: my parents and my brother, who always supported me in spite of this "intrusive" thesis. My grandparents, I hope to make you proud. My cousin, a small thought for you, who is just beginning your thesis; I wish you the best, so many joys and adventures, the good ones as well as the less good (that is also part of the thesis). My family-in-law as well, who always had a thought for me; thank you for your encouragement: Béatrice, Jacques, Emilie, Laurent, Mémie, Loen, Tom and the others.
The best for last. My biggest support, Bastien: thank you for having been there during these years, through thick and thin. For your patience during my long weeks spent in Shanghai, or my Parisian internship, even though I know you would rather keep me at home. You have incredible patience. Thank you for your support and your presence, even during the moments of doubt. This thesis is for you, who always believed in me and who pushed me to give my best. Also activated after DENV infection, the RNA interference pathway is counteracted by the action of DENV NS4, acting as an RNAi suppressor [START_REF] Kakumani | Role of RNA Interference (RNAi) in Dengue Virus Replication and Identification of NS4B as an RNAi Suppressor[END_REF]. A common pathway could be an enhancer for DEN viruses. Mutations of NS4 that affect the regulation of RNA interference could reduce the differences in permissiveness and viral production observed between mono- and dual-infection. |
01767230 | en | [ "info.info-cv", "info.info-ti" ] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01767230/file/QCGA2018.pdf | Stéphane Breuils
Vincent Nozick
Akihiro Sugimoto
Eckhard Hitzer
Quadric Conformal Geometric Algebra of R 9,6
Keywords: Quadrics, Geometric Algebra, Conformal Geometric Algebra, Clifford Algebra. Mathematics Subject Classification (2010): Primary 99Z99; Secondary 00A00.
Introduction
Geometric Algebra provides useful and, more importantly, intuitively understandable tools to represent, construct and manipulate geometric objects. Intensively explored by physicists, Geometric Algebra has been applied in quantum mechanics and electromagnetism [START_REF] Doran | Geometric algebra for physicists[END_REF]. Geometric Algebra has also found some interesting applications in data manipulation for Geographic Information Systems (GIS) [START_REF] Luo | A Hierarchical Representation and Computation Scheme of Arbitrary-dimensional Geometrical Primitives Based on CGA[END_REF]. More recently, it turns out that Geometric Algebra can be applied even in computer graphics, either to basic geometric primitive manipulations [START_REF] Vince | Geometric algebra for computer graphics[END_REF] or to more complex illumination processes as in [START_REF] Papaefthymiou | Real-time rendering under distant illumination with conformal geometric algebra[END_REF] where spherical harmonics are substituted by Geometric Algebra entities.
This paper presents a Geometric Algebra framework to handle quadric surfaces, which can be applied to detect collisions in computer graphics and to calibrate omnidirectional cameras, which usually embed a mirror with a quadric surface, in computer vision. Handling quadric surfaces in Geometric Algebra requires a high-dimensional space to work in, as seen in subsequent sections. Nevertheless, no low-dimensional Geometric Algebra framework has yet been introduced that handles general-orientation quadric surfaces and their construction from contact points.
High dimensional Geometric Algebras
Following Conformal Geometric Algebra (CGA) and its well-defined properties [START_REF] Dorst | Geometric algebra for computer science, an object-oriented approach to geometry[END_REF], the Geometric Algebra community recently started to explore new frameworks that work in higher dimensional spaces. The motivation for this direction is to increase the dimension of the relevant Euclidean space (R n with n > 3) and/or to investigate more complex geometric objects.
The Geometric Algebra of R p,q is denoted by G p,q where p is the number of basis vectors squared to +1 and q is that of basis vectors squared to -1. Then, the CGA of R 3 is denoted by G 4,1 . Extending from dimension 5 to 6 leads to G 3,3 defining either 3D projective geometry (see Dorst [START_REF] Dorst | 3d oriented projective geometry through versors of R 3,3[END_REF]) or line geometry (see Klawitter [START_REF] Klawitter | A Clifford algebraic approach to line geometry[END_REF]). Conics in R 2 are represented by the conic space of Perwass [START_REF] Perwass | Geometric algebra with applications in engineering[END_REF] with G 5,3 . Conics in R 2 are also defined by the Double Conformal Geometric Algebra (DCGA) with G 6,2 introduced by Easter and Hitzer [START_REF] Easter | Double conformal geometric algebra[END_REF]. DCGA is extended to handle cubic curves (and some other even higher order curves) in the Triple Conformal Geometric Algebra with G 9,3 [START_REF] Easter | Triple conformal geometric algebra for cubic plane curves[END_REF] and in the Double Conformal Space-Time Algebra with G 4,8 [START_REF] Benjamin | Double conformal space-time algebra[END_REF]. We note that the dimension of the algebras generated by any n-dimensional vector spaces (n = p + q) grows exponentially as they have 2 n basis elements. Although most multivectors are extremely sparse, very few implementations exist that can handle high dimensional Geometric Algebras. This problem is discussed further in Section 7.2.
Geometric Algebra and quadric surfaces
A framework to handle quadric surfaces was introduced by Zamora [START_REF] Zamora-Esquivel | G 6,3 geometric algebra; description and implementation[END_REF] for the first time. Though this framework constructs a quadric surface from control points, it supports only axis-aligned quadric surfaces.
There exist two main Geometric Algebra frameworks to manipulate general quadric surfaces.
On one hand, DCGA with G 8,2 , defined by Easter and Hitzer [START_REF] Easter | Double conformal geometric algebra[END_REF], constructs quadric and more general surfaces from their implicit equation coefficients specified by the user. A quadric (torus, Dupin-or Darboux cyclide) is represented by a bivector containing 15 coefficients that are required to construct the implicit equation of the surface. This framework preserves many properties of CGA and thus supports not only object transformations using versors but also differential operators. However, it is incapable of handling the intersection between two general quadrics and, to our best knowledge, cannot construct general quadric surfaces from control points.
On the other hand, quadric surfaces are also represented in a framework of G 4,4, as first introduced by Parkin [START_REF] Spencer | A model for quadric surfaces using geometric algebra[END_REF] and developed further by Du et al. [START_REF] Du | Modeling 3D Geometry in the Clifford Algebra R 4,4[END_REF]. Since this framework is based on a duplication of the projective geometry of R 3, it is referred to as Double Perspective Geometric Algebra (DPGA) hereafter. DPGA represents quadric surfaces by bivector entities. The quadric expression, however, comes from a so-called "sandwiching" duplication of the product. DPGA handles quadric intersections and conics. It also handles versor transformations. However, to our best knowledge, it cannot construct general quadric surfaces from control points. This limitation appears to be fundamental: for example, wedging 9 control points together in this space results in 0 because of the dimension of the underlying vector space.
Contributions
Our proposed framework, referred to as Quadric Conformal Geometric Algebra (QCGA) hereafter, is a new type of CGA, specifically dedicated to quadric surfaces. Through generalizing the conic construction in R 2 by Perwass [START_REF] Perwass | Geometric algebra with applications in engineering[END_REF], QCGA is capable of constructing quadric surfaces using either control points or implicit equations. Moreover, QCGA can compute the intersection of quadric surfaces, the surface tangent, and normal vectors for a quadric surface point.
Notation
We use the following notation throughout the paper. Lower-case bold letters denote basis blades and multivectors (multivector a). Italic lower-case letters refer to multivector components (a1, x, y2, ...). For example, ai is the i-th coordinate of the multivector a. Constant scalars are denoted using a lowercase default text font (constant radius r). The superscript star, as in x*, represents the dualization of the multivector x. Finally, the subscript ε, as in xε, refers to the Euclidean vector associated with the point x of QCGA.
Note that in geometric algebra, the inner product, contractions and outer product have priority over the full geometric product. For instance, a ∧ bI = (a ∧ b)I.
QCGA definition
This section introduces QCGA. We specify its basis vectors and give the definition of a point.
QCGA basis and metric
The QCGA G 9,6 is defined over a 15-dimensional vector space. The base vectors of the space R 9,6 are basically divided into three groups: {e 1 , e 2 , e 3 } (corresponding to the Euclidean vectors in R 3 ), {e o1 , e o2 , e o3 , e o4 , e o5 , e o6 }, and {e ∞1 , e ∞2 , e ∞3 , e ∞4 , e ∞5 , e ∞6 }. The inner products between them are as defined in Table 1.
For some computation constraints, a diagonal metric matrix may be required. Table 1 summarizes the inner products between the basis vectors: ei · ej = δij for the Euclidean basis vectors (i, j ∈ {1, 2, 3}); eoi · e∞j = e∞j · eoi = -δij for the null basis vectors; and all other inner products vanish (in particular eoi · eoj = e∞i · e∞j = 0). The orthonormal vector basis of R 9,6, composed of nine basis vectors {e1, e2, e3, e+1, ..., e+6} each of which squares to +1, along with six other basis vectors {e-1, e-2, e-3, e-4, e-5, e-6} each of which squares to -1, corresponds to a diagonal metric matrix. The transformation from the original basis to this new basis (with diagonal metric) can be defined as follows:
e∞i = e+i + e-i,  eoi = (1/2)(e-i - e+i),  i ∈ {1, ..., 6}. (2.1)
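As a quick sanity check of Table 1 together with eq. (2.1), the following Python sketch (an illustration, not part of the paper) builds both metric matrices and verifies that the basis change maps one onto the other; the coordinate ordering is our own choice.

```python
import numpy as np

# Null basis order: [e1, e2, e3, eo1..eo6, einf1..einf6].
M = np.zeros((15, 15))
M[:3, :3] = np.eye(3)                                # e_i . e_j = delta_ij
for k in range(6):
    M[3 + k, 9 + k] = M[9 + k, 3 + k] = -1.0         # e_ok . e_infk = -1

# Diagonal basis [e1, e2, e3, e+1..e+6, e-1..e-6]: nine +1's and six -1's.
D = np.diag([1.0] * 9 + [-1.0] * 6)

# Rows of P express the null vectors in the diagonal basis, eq. (2.1):
# eo_k = (e-_k - e+_k)/2 and einf_k = e+_k + e-_k.
P = np.zeros((15, 15))
P[:3, :3] = np.eye(3)
for k in range(6):
    P[3 + k, 3 + k], P[3 + k, 9 + k] = -0.5, 0.5     # eo_k
    P[9 + k, 3 + k], P[9 + k, 9 + k] = 1.0, 1.0      # einf_k

assert np.allclose(P @ D @ P.T, M)   # both bases define the same inner products
```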
For clarity, we also define the 6-blades
I∞ = e∞1 ∧ e∞2 ∧ e∞3 ∧ e∞4 ∧ e∞5 ∧ e∞6,  Io = eo1 ∧ eo2 ∧ eo3 ∧ eo4 ∧ eo5 ∧ eo6, (2.2)
the 5-blades (written with a prime to distinguish them from the 6-blades above)
I∞′ = (e∞1 - e∞2) ∧ (e∞2 - e∞3) ∧ e∞4 ∧ e∞5 ∧ e∞6,  Io′ = (eo1 - eo2) ∧ (eo2 - eo3) ∧ eo4 ∧ eo5 ∧ eo6, (2.3)
the pseudo-scalar of R 3
Iε = e1 ∧ e2 ∧ e3, (2.4)
and the pseudo-scalar
I = Iε ∧ I∞ ∧ Io. (2.5)
The inverse of the pseudo-scalar results in
I -1 = -I. (2.6)
The dual of a multivector indicates division by the pseudo-scalar, e.g., a * = -aI, a = a * I. From eq. (1.19) in [START_REF] Hitzer | Carrier method for the general evaluation and control of pose, molecular conformation, tracking, and the like[END_REF], we have the useful duality between outer and inner products of non-scalar blades a and b in Geometric Algebra:
(a ∧ b) * = a • b * , a ∧ (b * ) = (a • b) * , a ∧ (bI) = (a • b)I, (2.7)
which indicates that
a ∧ b = 0 ⇔ a • b * = 0, a • b = 0 ⇔ a ∧ b * = 0. (2.8)
Point in QCGA
The point x of QCGA corresponding to the Euclidean point xε = x e1 + y e2 + z e3 ∈ R 3 is defined as
x = xε + (1/2)(x²e∞1 + y²e∞2 + z²e∞3) + xy e∞4 + xz e∞5 + yz e∞6 + eo1 + eo2 + eo3. (2.9)
Note that the null vectors eo4, eo5, eo6 are not present in the definition of the point. This is merely to keep the convenient properties of CGA points, namely, that the inner product between two points is directly related to the squared distance between them. Let x1 and x2 be two points; their inner product is
x1 · x2 = (x1ε + (1/2)(x1²e∞1 + y1²e∞2 + z1²e∞3) + x1y1 e∞4 + x1z1 e∞5 + y1z1 e∞6 + eo1 + eo2 + eo3) · (x2ε + (1/2)(x2²e∞1 + y2²e∞2 + z2²e∞3) + x2y2 e∞4 + x2z2 e∞5 + y2z2 e∞6 + eo1 + eo2 + eo3), (2.10)
from which, together with Table 1, it follows that
x1 · x2 = x1x2 + y1y2 + z1z2 - (1/2)x1² - (1/2)x2² - (1/2)y1² - (1/2)y2² - (1/2)z1² - (1/2)z2² = -(1/2)‖x1ε - x2ε‖². (2.11)
We see that the inner product is equivalent to minus half the squared Euclidean distance between x 1 and x 2 .
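To make this concrete, the following Python sketch (illustrative helpers of our own, not an API of the paper) embeds points according to (2.9) and verifies the distance property (2.11); the coordinate order matches the metric sketch above.

```python
import numpy as np

def point(p):
    """Embed a Euclidean point p = (x, y, z) into R^{9,6}, eq. (2.9).
    Order: [e1,e2,e3, eo1..eo6, einf1..einf6]."""
    x, y, z = p
    v = np.zeros(15)
    v[0:3] = (x, y, z)                          # Euclidean part x_eps
    v[3:6] = 1.0                                # eo1 + eo2 + eo3 (eo4..eo6 absent)
    v[9:12] = 0.5 * np.array([x*x, y*y, z*z])   # (1/2)(x^2, y^2, z^2)
    v[12:15] = (x*y, x*z, y*z)                  # cross terms on einf4..einf6
    return v

def inner(a, b):
    # Table 1: Euclidean dot product plus the eoi/einfi pairings (each worth -1).
    return a[:3] @ b[:3] - a[3:9] @ b[9:15] - a[9:15] @ b[3:9]

p1, p2 = np.array([1.0, 2.0, 3.0]), np.array([-2.0, 0.5, 1.0])
assert np.isclose(inner(point(p1), point(p2)),
                  -0.5 * np.sum((p1 - p2) ** 2))   # eq. (2.11)
```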
QCGA objects
QCGA is an extension of CGA, thus the objects defined in CGA are also defined in QCGA. The following sections explore the plane, the line, and the sphere to show their definitions in QCGA, and similarity between these objects in CGA and their counterparts in QCGA.
3.1. Plane 3.1.1. Primal plane. As in CGA, a plane π in QCGA is computed using the wedge of three linearly independent points x 1 , x 2 , and x 3 on the plane:
π = x1 ∧ x2 ∧ x3 ∧ I∞ ∧ Io′. (3.1)
The multivector π corresponds to the primal form of a plane in QCGA, with grade 14, composed of six components. The eo2o3, eo1o3, eo1o2 components have the same coefficient and can thus be factorized, resulting in a form defined with only four coefficients xn, yn, zn and h:
π = (xn e1 + yn e2 + zn e3 + (h/3)(e∞1 + e∞2 + e∞3)) I. (3.2)
Proposition 3.1. A point x with Euclidean coordinates xε lies on the plane π iff x ∧ π = 0.
x ∧ π = xx n + yy n + zz n - 1 3 h(1 + 1 + 1) I = xx n + yy n + zz n -h I = (x • n -h) I, (3.4)
which corresponds to the Hessian form of the plane with Euclidean normal n = xn e1 + yn e2 + zn e3 and with orthogonal distance h from the origin.
3.1.2. Dual plane. Proposition 3.2. A point x lies on the dual plane π* = -πI iff x · π* = 0.
Proof. Consequence of (2.8).
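Assuming the factorized dual plane reconstructed above, i.e. π* = n + (h/3)(e∞1 + e∞2 + e∞3), the incidence x · π* = xε · n - h of the Hessian form (3.4) can be checked numerically; point and inner are the helpers from the sketch after (2.11), and the plane chosen here is an arbitrary example.

```python
# Plane x + 2y + 2z = 3, i.e. unit normal n = (1, 2, 2)/3 and distance h = 1.
n, h = np.array([1.0, 2.0, 2.0]) / 3.0, 1.0
pi_star = np.zeros(15)
pi_star[:3] = n
pi_star[9:12] = h / 3.0                 # (h/3)(einf1 + einf2 + einf3)

on = np.array([3.0, 0.0, 0.0])          # satisfies x . n = h
assert np.isclose(inner(point(on), pi_star), 0.0)
origin = np.zeros(3)
assert np.isclose(inner(point(origin), pi_star), -h)   # x . n - h = -1
```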
Because of (2.11), a plane can also be obtained as the bisection plane of two points x 1 and x 2 in a similar way as in CGA.
Proposition 3.3. The dual plane π * = x 1 -x 2 is the dual orthogonal bisecting plane between the points x 1 and x 2 .
Proof. From Proposition 3.2, every point x on π * satisfies x • π * = 0,
x • (x 1 -x 2 ) = x • x 1 -x • x 2 = 0. (3.6)
As seen in (2.11), the inner product between two points is minus half the squared Euclidean distance between them. We thus have
x · (x1 - x2) = 0 ⇔ ‖xε - x1ε‖² = ‖xε - x2ε‖². (3.7)
This corresponds to the equation of the orthogonal bisecting dual plane between x 1 and x 2 .
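Proposition 3.3 lends itself to a direct numerical check, reusing the point and inner helpers from the sketch after (2.11); the two points below are an arbitrary choice.

```python
# pi* = x1 - x2 vanishes exactly on points equidistant from x1 and x2.
a, b = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
pi_star = point(a) - point(b)            # dual bisecting plane: the plane x = 0
mid = np.array([0.0, 2.0, -3.0])         # equidistant from a and b
off = np.array([0.5, 2.0, -3.0])
assert np.isclose(inner(point(mid), pi_star), 0.0)
assert not np.isclose(inner(point(off), pi_star), 0.0)
```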
3.2. Line 3.2.1. Primal line. A primal line l is a 13-vector constructed from two linearly independent points x 1 and x 2 as follows:
l = x1 ∧ x2 ∧ I∞ ∧ Io′. (3.8)
The outer product between the 6-blade I∞ and the two points x1 and x2 removes all their e∞i components (i ∈ {1, ..., 6}). Accordingly, they can be reduced to x1 = (eo1 + eo2 + eo3 + x1ε) and x2 = (eo1 + eo2 + eo3 + x2ε), respectively. For clarity, (3.8) is simplified "in advance" as
l = x1 ∧ (eo1 + eo2 + eo3 + x2ε) ∧ I∞ ∧ (eo1 - eo2) ∧ (eo2 - eo3) ∧ eo4 ∧ eo5 ∧ eo6
= x1 ∧ (x2ε ∧ (eo1 - eo2) ∧ (eo2 - eo3) + 3 eo1 ∧ eo2 ∧ eo3) ∧ I∞ ∧ eo4 ∧ eo5 ∧ eo6
= 3 eo1 ∧ eo2 ∧ eo3 ∧ (x2ε - x1ε) ∧ I∞ ∧ eo4 ∧ eo5 ∧ eo6 + x1ε ∧ x2ε ∧ (eo1 - eo2) ∧ (eo2 - eo3) ∧ I∞ ∧ eo4 ∧ eo5 ∧ eo6. (3.9)
Setting uε = x2ε - x1ε and vε = x1ε ∧ x2ε gives
l = 3 eo1 ∧ eo2 ∧ eo3 ∧ uε ∧ I∞ ∧ eo4 ∧ eo5 ∧ eo6 + vε ∧ (eo1o2 - eo1o3 + eo2o3) ∧ I∞ ∧ eo4 ∧ eo5 ∧ eo6 = -3 uε I∞ ∧ Io + vε I∞ ∧ Io′. (3.10)
Note that uε and vε correspond to the 6 Plücker coefficients of a line in R 3. More precisely, uε is the support vector of the line and vε is its moment.
Proposition 3.4. A point x with Euclidean coordinates x lies on the line l iff x ∧ l = 0.
Proof.
x ∧ l = (x + e o1 + e o2 + e o3 ) ∧ (-
3 u I ∞ ∧ I o + v I ∞ ∧ I o ) = -3x ∧ u I ∞ ∧ I o + x ∧ v I ∞ ∧ I o + v I ∞ ∧ (e o1 + e o2 + e o3 ) ∧ I o = -3(x ∧ u -v ) I ∞ ∧ I o + x ∧ v I ∞ ∧ I o . (3.11)
The 6-blade and the 5-blade appearing in (3.11) are linearly independent. Therefore, x ∧ l = 0 yields
x ∧ l = 0 ⇔ x ∧ u = v , x ∧ v = 0. (3.12)
As x , u and v are Euclidean entities, (3.12) corresponds to the Plücker equations of a line [START_REF] Kanatani | Understanding geometric algebra: Hamilton, Grassmann, and Clifford for Computer Vision and Graphics[END_REF].
3.2.2. Dual line. Dualizing the entity l consists in computing with duals:

l* = (−3u I_∞ ∧ I_o + v I_∞ ∧ I_o)(−I) = 3u I + (e_∞3 + e_∞2 + e_∞1) ∧ v I.   (3.13)

Proposition 3.5. A point x lies on the dual line l* iff x • l* = 0.
Proof. Consequence of (2.8).
Note that a dual line l* can also be constructed from the intersection of two dual planes as follows:

l* = π*_1 ∧ π*_2.   (3.14)
3.3. Sphere
3.3.1. Primal sphere. We define a sphere s using four points as the 14-blade
s = x 1 ∧ x 2 ∧ x 3 ∧ x 4 ∧ I ∞ ∧ I o . (3.15)
The outer product of the points with I ∞ removes all e ∞4 , e ∞5 , e ∞6 components of these points, i.e., the cross terms (xy, xz, and yz). The same remark holds for I o and e o4 , e o5 , e o6 . For clarity, we omit these terms below. We thus have
s = x_1 ∧ x_2 ∧ x_3 ∧ ((1/2)(x_4² e_∞1 + y_4² e_∞2 + z_4² e_∞3) I_∞ ∧ I_o − 3 I_∞ ∧ I_o + x_4 I_∞ ∧ I_o).   (3.16)

Note the similarities with a CGA point x_4 + e_o + (1/2)‖x_4‖² e_∞. Then, the explicit outer product with x_3 gives:
s = x_1 ∧ x_2 ∧ (x_3 ∧ x_4 I_o ∧ I_∞ + 3(x_3 − x_4) I_o ∧ I_∞ + (1/2)‖x_4‖² x_3 I_∞ ∧ I_o − (1/2)‖x_3‖² x_4 I_∞ ∧ I_o + (3/2)(‖x_4‖² − ‖x_3‖²) I_o ∧ I_∞).   (3.17)
Again we remark that the resulting entity has striking similarities with a point pair of CGA. More precisely, let c be the Euclidean midpoint between the two entities x 3 and x 4 , d be the unit vector from x 3 to x 4 , and r be half of the Euclidean distance between the two points in exactly the same way as Hitzer et al in [START_REF] Hitzer | Carrier method for the general evaluation and control of pose, molecular conformation, tracking, and the like[END_REF], namely
2r = ‖x_3 − x_4‖,   d = (x_3 − x_4)/(2r),   c = (x_3 + x_4)/2.   (3.18)
Then, (3.17) can be rewritten by
s = x_1 ∧ x_2 ∧ (2r (d ∧ c) I_o ∧ I_∞ + 3d I_o ∧ I_∞ + ((1/2)(‖c‖² + r²) d − 2c (c • d)) I_∞ ∧ I_o).   (3.19)
The bottom part corresponds to a point pair, as defined in [START_REF] Hitzer | Carrier method for the general evaluation and control of pose, molecular conformation, tracking, and the like[END_REF], that belongs to the round object family. Applying the same development to the two points x 1 and x 2 again results in round objects:
s = −(1/6)(‖x_c‖² − r²) I ∧ I_∞ ∧ I_o + e_123 ∧ I_∞ ∧ I_o + (x_c I) ∧ I_∞ ∧ I_o.   (3.20)
Note that x c corresponds to the center point of the sphere and r to its radius. It can be further simplified into
s = (x_c − (1/6) r² (e_∞1 + e_∞2 + e_∞3)) I,   (3.21)
which is dualized to
s * = x c - 1 6 r 2 (e ∞1 + e ∞2 + e ∞3 ), (3.22)
where x_c corresponds to the point x_c embedded without the cross terms xy, xz, yz. Since a QCGA point has no e_o4, e_o5, e_o6 components, building a sphere with these cross terms is also valid. However, inserting these cross terms (which actually do not appear in the primal form) raises some issues in computing intersections with other objects.
Proposition 3.6. A point x lies on the sphere s iff x ∧ s = 0.
Proof. Since the components e ∞4 , e ∞5 and e ∞6 of x are removed by the outer product with s of (3.17), we ignore them to obtain
x ∧ s = x ∧ (s* I) = (x • s*) I   (3.23)
  = ((x + e_o1 + e_o2 + e_o3 + (1/2)x² e_∞1 + (1/2)y² e_∞2 + (1/2)z² e_∞3) • (x_c − (1/6) r² (e_∞1 + e_∞2 + e_∞3))) I,   (3.24)
which can be rewritten by
x ∧ s = (xx_c + yy_c + zz_c − ((1/2)x_c² − (1/6)r²) − ((1/2)y_c² − (1/6)r²) − ((1/2)z_c² − (1/6)r²) − (1/2)x² − (1/2)y² − (1/2)z²) I = 0.   (3.25)
This can take the more compact form defining a sphere:

(x − x_c)² + (y − y_c)² + (z − z_c)² = r².   (3.26)

3.3.2. Dual sphere. The dualization of the primal sphere s gives:

s* = x_c − (1/6) r² (e_∞1 + e_∞2 + e_∞3).   (3.27)

Proposition 3.7. A point x lies on the dual sphere s* iff x • s* = 0.
Proof. Consequence of (2.8).
4. Quadric surfaces
This section describes how QCGA handles quadric surfaces. All QCGA objects defined in Section 3 become thus part of a more general framework.
4.1. Primal quadric surfaces
The implicit formula of a quadric surface in R 3 is F (x, y, z) = ax 2 + by 2 + cz 2 + dxy + exz + fyz + gx + hy + iz + j = 0. (4.1)
A quadric surface is constructed by wedging 9 points together with 5 null basis vectors as follows
q = x 1 ∧ x 2 ∧ • • • ∧ x 9 ∧ I o . (4.2)
The multivector q corresponds to the primal form of a quadric surface with grade 14 and 12 components. Again, 3 of these components have the same coefficient and can be combined together into the form defined by the 10 coefficients a, b, ..., j, as in

q = e_123 (2a e_o1 + 2b e_o2 + 2c e_o3 + d e_o4 + e e_o5 + f e_o6) • I_∞ ∧ I_o + (g e_1 + h e_2 + i e_3) e_123 I_∞ ∧ I_o + (j/3) e_123 I_∞ ∧ (e_∞1 + e_∞2 + e_∞3) • I_o
  = (−(2a e_o1 + 2b e_o2 + 2c e_o3 + d e_o4 + e e_o5 + f e_o6) + g e_1 + h e_2 + i e_3 − (j/3)(e_∞1 + e_∞2 + e_∞3)) I = q* I,   (4.3)

where in the second equality we used the duality property. The expression for the dual quadric vector is therefore

q* = −(2a e_o1 + 2b e_o2 + 2c e_o3 + d e_o4 + e e_o5 + f e_o6) + g e_1 + h e_2 + i e_3 − (j/3)(e_∞1 + e_∞2 + e_∞3).   (4.4)
Proposition 4.1. A point x lies on the quadric surface q iff x ∧ q = 0.
Proof.
x ∧ q = x ∧ (q* I) = (x • q*) I
  = (x • (−(2a e_o1 + 2b e_o2 + 2c e_o3 + d e_o4 + e e_o5 + f e_o6) + g e_1 + h e_2 + i e_3 − (j/3)(e_∞1 + e_∞2 + e_∞3))) I
  = (ax² + by² + cz² + dxy + exz + fyz + gx + hy + iz + j) I.   (4.5)
This corresponds to the formula (4.1) representing a general quadric surface.
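The 9-point construction (4.2) has a simple linear-algebra counterpart that can be useful for testing: each point yields one homogeneous linear equation on the 10 coefficients of (4.1), so the quadric spans the null space of a 9 × 10 system. A minimal numpy sketch of this equivalent view (ours, not part of the reference implementation):

```python
import numpy as np

def quadric_from_points(pts):
    """Recover the 10 coefficients (a, b, c, d, e, f, g, h, i, j) of
    ax^2 + by^2 + cz^2 + dxy + exz + fyz + gx + hy + iz + j = 0
    from 9 points in general position: each point contributes one
    homogeneous linear equation, and the quadric is the 1D null space."""
    M = np.array([[x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, 1.0]
                  for (x, y, z) in pts])          # 9 x 10 system
    _, _, vt = np.linalg.svd(M)
    return vt[-1]                                  # null-space vector

# Example: 9 points on the unit sphere x^2 + y^2 + z^2 - 1 = 0
rng = np.random.default_rng(0)
p = rng.normal(size=(9, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
coeffs = quadric_from_points(p)
coeffs /= coeffs[0]                                # normalize so a = 1
print(np.round(coeffs, 6))   # expect about [1, 1, 1, 0, 0, 0, 0, 0, 0, -1]
```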
4.2. Dual quadric surfaces
The dualization of a primal quadric surface leads to the 1-vector dual quadric surface q * of (4.4). We have the following proposition whose proof is a consequence of (2.8).
Proposition 4.2. A point x lies on the dual quadric surface q * iff x • q * = 0.
5. Normals and tangents
This section presents the computation of the Euclidean normal vector n and the tangent plane π* at a point x (associated to the Euclidean point x = x e_1 + y e_2 + z e_3) on a dual quadric surface q*. The implicit formula of the dual quadric surface is considered as the following scalar field
F (x, y, z) = x • q * . ( 5.1)
The normal vector n of a point x is computed as the gradient of the implicit surface (scalar field) at x:
n = ∇F (x, y, z) = ∂F (x, y, z) ∂x e 1 + ∂F (x, y, z) ∂y e 2 + ∂F (x, y, z) ∂z e 3 . (5.2)
Since the partial derivative with respect to the x component is defined by
∂F(x, y, z)/∂x = lim_{h→0} (F(x + h, y, z) − F(x, y, z))/h,   (5.3)
we have
∂F(x, y, z)/∂x = lim_{h→0} (x_2 • q* − x • q*)/h = lim_{h→0} ((x_2 − x)/h) • q*,   (5.4)
where x 2 is the point obtained by translating x along the x-axis by the value h. Note that x 2 -x represents the dual orthogonal bisecting plane spanned by x 2 and x (see Proposition 3.3). Accordingly, we have
lim_{h→0} (x_2 − x)/h = x e_∞1 + y e_∞4 + z e_∞5 + e_1 = (x • e_1) e_∞1 + (x • e_2) e_∞4 + (x • e_3) e_∞5 + e_1.   (5.5)
This argument can also be applied to the partial derivative with respect to the y and z components. Therefore, we obtain
n = ((x • e_1) e_∞1 + (x • e_2) e_∞4 + (x • e_3) e_∞5 + e_1) • q* e_1
  + ((x • e_2) e_∞2 + (x • e_1) e_∞4 + (x • e_3) e_∞6 + e_2) • q* e_2
  + ((x • e_3) e_∞3 + (x • e_1) e_∞5 + (x • e_2) e_∞6 + e_3) • q* e_3.   (5.6)
On the other hand, the tangent plane at a surface point x can be computed from the Euclidean normal vector n and the point x. Since the plane orthogonal distance from the origin is −2(e_o1 + e_o2 + e_o3) • x, the tangent plane π* is obtained as

π* = n + (1/3)(e_∞1 + e_∞2 + e_∞3)(−2(e_o1 + e_o2 + e_o3) • x).   (5.7)
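In coordinates, (5.6) reduces to the familiar gradient of (4.1). The following sketch (a coordinate-level illustration, not QCGA code) computes the normal and the tangent plane in Hessian form:

```python
import numpy as np

def quadric_normal(q, p):
    """Euclidean normal n = grad F at p for the quadric with coefficients
    q = (a, b, c, d, e, f, g, h, i, j), where F = ax^2 + by^2 + cz^2
    + dxy + exz + fyz + gx + hy + iz + j."""
    a, b, c, d, e, f, g, h, i, _ = q
    x, y, z = p
    return np.array([2*a*x + d*y + e*z + g,
                     2*b*y + d*x + f*z + h,
                     2*c*z + e*x + f*y + i])

def tangent_plane(q, p):
    """Hessian form (n, h0) of the tangent plane at a surface point p:
    points X on the plane satisfy n . X = h0."""
    n = quadric_normal(q, p)
    return n, float(n @ p)

# Unit sphere x^2 + y^2 + z^2 - 1 = 0; at p the normal is radial.
sphere = (1, 1, 1, 0, 0, 0, 0, 0, 0, -1)
p = np.array([0.0, 0.6, 0.8])
n, h0 = tangent_plane(sphere, p)
print(n, h0)   # [0, 1.2, 1.6], 2.0 -> direction of p, plane n.X = 2
```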
6. Intersections
Let us consider two geometric objects corresponding to dual quadrics¹ a* and b*. Assuming that the two objects are linearly independent, i.e., a* and b* are linearly independent, we consider the outer product c* of these two objects:

c* = a* ∧ b*.   (6.1)

If a point x lies on c*, then
x • c * = x • (a * ∧ b * ) = 0. (6.2)
The inner product computation of (6.2) leads to
x • c * = (x • a * )b * -(x • b * )a * = 0. (6.3)
Our assumption of linear independence between a * and b * indicates that (6.3) holds if and only if x • a * = 0 and x • b * = 0, i.e. the point x lies on both quadrics. Thus, c * = a * ∧ b * represents the intersection of the linearly independent quadrics a * and b * , and a point x lies on this intersection if and only if x • c * = 0.
6.1. Quadric-Line intersection
For example, in computer graphics, making a Geometric Algebra compatible with a ray-tracer only requires the ability to compute a surface normal and a line-object intersection. This section defines the line-quadric intersection.
Similarly to (6.1), the intersection x ± between a dual line l * and a dual quadric q * is computed by l * ∧ q * . Any point x lying on the line l defined by two points x 1 and x 2 can be represented by the parametric formula x = α(x 1 -x 2 ) + x 2 = αu + x 2 . Note that u could also be computed directly from the dual line l * (see (3.13)). Any point x 2 ∈ l can be used, especially the closest point of l from the origin, i.e. x 2 = v •u -1 . Accordingly, computing the intersection between the dual line l * and the dual quadric q * becomes equivalent to finding α such that x lies on the dual quadric, i.e., x • q * = 0, leading to a second degree equation in α, as shown in (4.1). In this situation, the problem is reduced to computing the roots of this equation. However, we have to consider four cases: the case where the line is tangent to the quadric, the case where the intersection is empty, the case where the line intersects the quadric into two points, and the case where one of the two points exists at infinity. To identify each case, we use the discriminant δ defined as:
δ = β² − 4 (x_2 • q*) Σ_{i=1}^{6} (u • e_oi)(q* • e_∞i),   (6.4)
where
β = 2u • (a (x_2 • e_1) e_1 + b (x_2 • e_2) e_2 + c (x_2 • e_3) e_3)
  + d ((u ∧ e_1) • (x_2 ∧ e_2) + (x_2 ∧ e_1) • (u ∧ e_2))
  + e ((u ∧ e_1) • (x_2 ∧ e_3) + (x_2 ∧ e_1) • (u ∧ e_3))
  + f ((u ∧ e_2) • (x_2 ∧ e_3) + (x_2 ∧ e_2) • (u ∧ e_3)) + q* • u.   (6.5)
If δ < 0, the line does not intersect the quadric (the solutions are complex). If δ = 0, the line and the quadric are tangent. If δ > 0 and 6 i=1 (u • e oi )(q * • e ∞i ) = 0, we have only one intersection point (linear equation). Otherwise, we have two different intersection points x ± computed by
x_± = u (−β ± √δ) / (2 Σ_{i=1}^{6} (u • e_oi)(q* • e_∞i)) + x_2.   (6.6)
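The same computation can be sketched in coordinates: substituting x(α) = αu + x_2 into F gives a degree-2 equation in α, mirroring the discriminant discussion of (6.4)-(6.6). The code below is an illustration of ours, not the paper's implementation; the finite differences recover the quadratic coefficients without expanding them by hand:

```python
import numpy as np

def eval_quadric(q, p):
    a, b, c, d, e, f, g, h, i, j = q
    x, y, z = p
    return (a*x*x + b*y*y + c*z*z + d*x*y + e*x*z + f*y*z
            + g*x + h*y + i*z + j)

def line_quadric_intersection(q, x2, u):
    """Intersect the line x(alpha) = alpha*u + x2 with the quadric q by
    solving A2*alpha^2 + A1*alpha + A0 = 0 in alpha."""
    A0 = eval_quadric(q, x2)
    Fp, Fm = eval_quadric(q, x2 + u), eval_quadric(q, x2 - u)
    A2 = 0.5 * (Fp + Fm) - A0           # second difference
    A1 = 0.5 * (Fp - Fm)                # first difference
    if abs(A2) < 1e-12:                 # degenerate direction: linear case
        return [] if abs(A1) < 1e-12 else [x2 - (A0 / A1) * u]
    delta = A1*A1 - 4*A2*A0
    if delta < 0:
        return []                       # no real intersection
    r = np.sqrt(delta)
    return [x2 + ((-A1 + s*r) / (2*A2)) * u for s in (+1.0, -1.0)]

sphere = (1, 1, 1, 0, 0, 0, 0, 0, 0, -1)
pts = line_quadric_intersection(sphere, np.array([0., 0., 0.]),
                                np.array([1., 0., 0.]))
print(pts)   # [array([1., 0., 0.]), array([-1., 0., 0.])]
```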
7.1. Limitations
The construction of quadric surfaces by the wedge of conformal points presented in Sections 3 and 4 is a distinguished property of QCGA that is missing in DPGA and DCGA. However, QCGA also faces some limitations that do not affect DPGA and DCGA, as summarized in Table 2. First, DPGA and DCGA are known to be capable of transforming all objects by versors [START_REF] Du | Modeling 3D Geometry in the Clifford Algebra R 4,4[END_REF][START_REF] Easter | Double conformal geometric algebra[END_REF], whereas it is not yet clear whether objects in QCGA can be transformed using versors. An extended version of CGA versors can be used to transform lines in QCGA (and probably all round and flat objects of CGA), but more investigation is needed. Second, the number of basis elements spanned by QCGA is 2^15 (≈ 32,000 components for a full multivector). Although multivectors of QCGA are in reality almost always very sparse, this large number of elements may cause implementation issues (see Section 7.2). It also requires some numerical care in computation, especially during the wedge of 9 points, because some components are likely to be raised to the power of 9.

[Table 2. Comparison of properties between DPGA, DCGA, and QCGA (• = capable, ◦ = incapable, blank = unknown); rows include: quadrics intersection, quadric-plane intersection, versors, Darboux cyclides.]
7.2. Implementations
There exist many different implementations of Geometric Algebra; however, very few can handle dimensions higher than 8 or 10. This is because higher dimensions entail a very large number of multivector components, making the computations expensive and, in many cases, impossible in practice. QCGA has a vector space of dimension 15 and hence requires some specific care during the computation.
We conducted our tests with an enhanced version of Breuils et al. [START_REF] Breuils | A geometric algebra implementation using binary tree[END_REF][START_REF] Breuils | A hybrid approach for computing products of high-dimensional geometric algebras[END_REF], which is based on a recursive framework. We remark that most of the products involved in our tests were outer products between 14-vectors and 1-vectors, which are among the least time-consuming products of QCGA. Indeed, QCGA with a vector space dimension of 15 has 2^15 basis elements, roughly 1,000 times as many as CGA with vector space dimension of 5 (the CGA setting needed for operations equivalent to those of QCGA). The computational time required for QCGA, however, was not 1,000 times but only about 70 times that of CGA. This means that the computation of QCGA runs in reasonable time on the enhanced version of Breuils et al. [START_REF] Breuils | A geometric algebra implementation using binary tree[END_REF][START_REF] Breuils | A hybrid approach for computing products of high-dimensional geometric algebras[END_REF]. More detailed analysis in this direction is left for future work. Figure 1 depicts a few examples generated with our OpenGL renderer based on the outer product null-space voxels and our ray-tracer. From left to right: a dual hyperboloid built from its equation, an ellipsoid built from its control points (in yellow), the intersection between two cylinders, and a hyperboloid with an ellipsoid and planes (the last one was computed with our ray-tracer).
8. Conclusion
This paper presented a new Geometric Algebra framework, Quadric Conformal Geometric Algebra (QCGA), that handles the construction of quadric surfaces using the implicit equation and/or control points. QCGA naturally includes CGA objects and generalizes some dedicated constructions. The intersection between objects in QCGA is computed using only outer products. This paper also detailed the computation of the tangent plane and the normal vector at a point on a quadric surface. Although QCGA is defined in a high dimensional space, most of the computations run in relatively low dimensional subspaces of this framework. Therefore, QCGA can be used for numerical computations in applications such as computer graphics.
Table 1. Inner product between QCGA basis vectors: e_i • e_i = 1 for i ∈ {1, 2, 3}; e_oi • e_∞i = e_∞i • e_oi = −1 for i ∈ {1, ..., 6}; all other inner products between basis vectors are 0.
The term "quadric" (without being followed by surface) encompasses quadric surfaces and conic sections. |
01767264 | en | [
"spi.signal"
] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01767264/file/bGMCA.pdf | C Kervazo
J Bobin
C Chenot
Blind separation of a large number of sparse sources
Keywords: Blind source separation, sparse representations, block-coordinate optimization strategies, matrix factorization
Blind separation of a large number of sparse sources
Introduction
Problem statement
Blind source separation (BSS) is the major analysis tool to retrieve meaningful information from multichannel data. It has been particularly successful in a very wide range of signal processing applications ranging from astrophysics [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF] to spectroscopic data in medicine [START_REF] Biswal | Blind source separation of multiple signal sources of fMRI data sets using independent component analysis[END_REF] or nuclear physics [START_REF] Nuzillard | Application of blind source separation to 1-D and 2-D nuclear magnetic resonance spectroscopy[END_REF], to name only a few. In this framework, the observations {x i } i=1,...,m are modeled as a linear combination of n unknown elementary sources {s j } j=1,...,n :
x i = n j=1 a ij s j + z i . The coefficients a ij are measuring the contribution of the j-th source to the observation x i , while z i is modeling an additive noise as well as model imperfections. Each datum x i and source s j is supposed to have t entries. This problem can be readily recast in a matrix formulation:
X = AS + N (1)
where X is a matrix composed of the m row observations and t columns corresponding to the entries (or samples), the mixing matrix A is built from the {a ij } i=1,...,m,j=1,...,n coefficients and S is a n × t matrix containing the sources. Using this formulation, the goal of BSS is to estimate the unknown matrices A and S from the sole knowledge of X.
Blind source separation methods
It is well-known that BSS is an ill-posed inverse problem, which requires additional prior information on either A or S to be tackled [START_REF] Comon | Handbook of Blind Source Separation: Independent component analysis and applications[END_REF]. Making BSS a better-posed problem is performed by promoting some discriminant information or diversity among the sources. A first family of standard techniques, such as Independent Component Analysis (ICA), assumes that the sources are statistically independent [START_REF] Comon | Handbook of Blind Source Separation: Independent component analysis and applications[END_REF].
In this study, we will specifically focus on the family of algorithms dealing with the case of sparse BSS problems (i.e. where the sources are assumed to be sparse), which have attracted a lot of interest during the last two decades [START_REF] Zibulevsky | Blind source separation by sparse decomposition in a signal dictionary[END_REF][START_REF] Bronstein | Sparse ICA for blind separation of transmitted and reflected images[END_REF][START_REF] Li | Underdetermined blind source separation based on sparse representation[END_REF]. Sparse BSS has mainly been motivated by the success of sparse signal modeling for solving very large classes of inverse problems [START_REF] Starck | Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity[END_REF]. The Generalized Morphological Component Analysis (GMCA) algorithm [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] builds upon the concept of morphological diversity to disentangle sources that are assumed to be sparsely distributed in a given dictionary. The morphological diversity property states that sources with different morphologies are unlikely to have similar large value coefficients. This is the case of sparse and independently distributed sources, with high probability. In the framework of Independent Component Analysis (ICA), Efficient FastICA (EFICA) [START_REF] Koldovsky | Efficient variant of algorithm Fas-tICA for independent component analysis attaining the Cramér-Rao lower bound[END_REF] is a FastICA-based algorithm that is especially adapted to retrieve sources with generalized Gaussian distributions, which includes sparse sources. In the seminal paper [START_REF] Zibulevsky | Blind source separation with relative Newton method[END_REF], the author also proposed a Newton-like method for ICA called Relative Newton Algorithm (RNA), which uses quasi-maximum likelihood estimation to estimate sparse sources. A final family of algorithms builds on the special case where it is known that A and S are furthermore non-negative, which is often the case on real world data [START_REF] Gillis | Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization[END_REF].
However, the performances of most of these methods decline when the number of sources n becomes large. As an illustration, Fig. 1 shows the evolution of the mixing matrix criterion (cf. sec. 3.1, [START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF]) as a function of the number of sources for various BSS methods. This experiment illustrates that most methods do not perform correctly in the "large-scale" regime. In this case, the main source of deterioration is very likely related to the non-convex nature of BSS. Indeed, for a fixed number of samples t, an increasing number of sources n will make these algorithms more prone to be trapped in spurious local minima, which tends to hinder the applicability of BSS on practical issues with a large n. Consequently, the optimization strategy has a huge impact on the separation performances.

[Figure 1. Evolution of the mixing matrix criterion (whose computation is detailed in sec. 3.1) of four standard BSS algorithms for an increasing n. For comparison, the result of the proposed bGMCA algorithm is presented, showing that its use allows the good results of GMCA at low n (around 160 dB for n = 3) to persist for n < 50 and to stay much better than GMCA for n > 50. The experiment was conducted using exactly sparse sources S, with 10% non-zero coefficients, the other coefficients having a Gaussian amplitude; the mixing matrix A was taken orthogonal. Both A and S were generated randomly, the experiments were repeated 25 times, and the median is displayed.]
Contribution
In a large number of applications such as astronomical [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF] or biomedical signals [START_REF] Biswal | Blind source separation of multiple signal sources of fMRI data sets using independent component analysis[END_REF], designing BSS methods that are tailored to precisely retrieve a large number of sources is of paramount importance. For that purpose, the goal of this article is to introduce a novel algorithm dubbed bGMCA (block-Generalized Morphological Component Analysis) to specifically tackle sparse BSS problems when a large number of sources need to be estimated.
In this setting, which we will later call the large-scale regime, the algorithmic strategy has a huge impact on the separation quality, since BSS requires solving highly challenging non-convex problems. For that purpose, the proposed bGMCA algorithm builds upon the sparse modeling of the sources, as well as an efficient minimization scheme based on block-coordinate descent. In contrast to state-of-the-art methods [START_REF] Zibulevsky | Blind source separation with relative Newton method[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF][START_REF] Rapin | NMF with sparse regularizations in transformed domains[END_REF][START_REF] Gillis | Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization[END_REF], we show that making profit of block-based minimization with intermediate block sizes allows bGMCA to dramatically enhance the separation performances, particularly when the number of sources to be estimated becomes large. Comparisons with state-of-the-art methods have been carried out on various simulation scenarios. The last part of the article shows the flexibility of bGMCA, with an application to sparse and non-negative BSS in the context of spectroscopy.
Optimization problem and bGMCA
General problem
Sparse BSS [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] aims to estimate the mixing matrix A and the sources S by minimizing a penalized least-squares of the form:
min_{A,S} (1/2) ‖X − AS‖²_F + J(A) + G(S)   (2)
The first term is a classical data-fidelity term that measures the discrepancy between the data and the mixture model. The ‖·‖_F norm refers to the Frobenius norm, whose use stems from the assumption that the noise is Gaussian. The penalizations J and G enforce some desired properties on A and S (e.g. sparsity, non-negativity). In the following, we will consider that the proximal operators of J and G are defined, and that J and G are convex. However, the whole matrix factorization problem (2) is non-convex.
Consequently, the strategy of optimization has a critical impact on the separation performances, especially to avoid spurious local minimizers and to reduce the sensitivity to initialization. A common idea of several strategies (Block Coordinate Relaxation -BCR [START_REF] Tseng | Convergence of a block coordinate descent method for nondifferentiable minimization[END_REF], Proximal Alternating Linearized Minimization -PALM [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF], Alternating Least Squares -ALS) is to benefit from the multi-convex structure of (2) by using blocks [START_REF] Xu | A globally convergent algorithm for nonconvex optimization based on block coordinate update[END_REF] in which each sub-problem is convex. The minimization is then performed alternately with respect to one of the coordinate blocks while the other coordinates stay fixed, which entails solving a sequence of convex optimization problems. Most of the already existing methods can then be categorized in one of two families, depending on the block sizes:
- Hierarchical or deflation methods: these algorithms use a block of size 1. For instance, Hierarchical ALS (HALS) ([START_REF] Gillis | Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization[END_REF] and references therein) updates only one specific column of A and one specific row of S at each iteration. The main advantage of this family is that each subproblem is often much simpler, as its minimizer generally admits a closed-form expression. Moreover, the matrices involved being small, the computation time is much lower. The drawback is however that the errors on some sources/mixing matrix columns propagate from one iteration to the next, since they are updated independently.
- Full-size blocks: these algorithms use as blocks the whole matrices A and S (the block size is thus equal to n). For instance, GMCA [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF], which is reminiscent of the projected Alternating Least Squares (pALS) algorithm, is part of this family. One problem compared to hierarchical or deflation methods is that each subproblem is more complex, due to the simultaneous estimation of a large number of sources. Moreover, the computational cost increases quickly with the number of sources.
The gist of the proposed bGMCA algorithm is to adopt an alternative approach that uses intermediate block sizes. The underlying intuition is that using blocks of intermediate size can be recast as small-scale source separation problems, which are simpler to solve as testified by Fig. 1. As a byproduct, small-size subproblems are also less costly to tackle.
Block based optimization
In the following, bGMCA minimizes the problem in eq. (2) with blocks, which are indexed by a set of indices I of size r, 1 ≤ r ≤ n. In practice, the minimization is performed at each iteration on submatrices of A (keeping only the columns indexed by I) and S (keeping only the rows indexed by I).
Minimizing multi-convex problems
Block coordinate relaxation (BCR, [START_REF] Tseng | Convergence of a block coordinate descent method for nondifferentiable minimization[END_REF]) is performed by minimizing (2) according to a single block while the others remain fixed. In this setting,
Tseng [START_REF] Tseng | Convergence of a block coordinate descent method for nondifferentiable minimization[END_REF] proved the convergence of BCR to minimize non-smooth optimization problems of the form (2). Although we adopted this strategy to tackle sparse NMF problems in [START_REF] Rapin | NMF with sparse regularizations in transformed domains[END_REF], BCR requires an exact minimization for one block at each iteration, which generally leads to a high computational cost. We therefore opted for Proximal Alternating Linearized Minimization (PALM), which was introduced in [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF]. It rather performs a single proximal gradient descent step for each coordinate block at each iteration. Consequently, the PALM algorithm is generally much faster than BCR, and its convergence to a stationary point of the multi-convex problem is guaranteed under mild conditions. In the framework of the proposed bGMCA algorithm, a PALM-based algorithm requires minimizing eq. (2) at each iteration over blocks of size 1 ≤ r ≤ n and alternating between the updates of some submatrices of A and S (these submatrices will be noted A_I and S_I). This reads at iteration (k) as:
1 -Update of a submatrix of S using a fixed A:
S_I^(k) = prox_{(γ/L) G(·)} ( S_I^(k−1) − (γ/L) A_I^(k−1)T (A^(k−1) S^(k−1) − X) ),  with L = ‖A_I^(k−1)T A_I^(k−1)‖_2   (3)
2 -Update of a submatrix of A using a fixed S:
A_I^(k) = prox_{(δ/L) J(·)} ( A_I^(k−1) − (δ/L) (A^(k−1) S^(k) − X) S_I^(k)T ),  with L = ‖S_I^(k) S_I^(k)T‖_2   (4)
In eqs. (3) and (4), the operator prox_f is the proximal operator of f (cf. Appendix and [17] [18]). The scalars γ and δ are the gradient path lengths. The ‖·‖_2 norm is the matrix norm induced by the ℓ_2 norm for vectors: if x is a vector and ‖·‖_2 denotes the ℓ_2 vector norm, the induced matrix norm is defined as
‖M‖_2 = sup_{x ≠ 0} ‖Mx‖_2 / ‖x‖_2   (5)
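For illustration, one PALM iteration on a block I, combining (3) and (4) with a soft-thresholding prox for S and the oblique projection for A, can be sketched as follows (a minimal sketch of ours: the thresholds lam are assumed given, and the block is drawn randomly as discussed in the next subsection):

```python
import numpy as np

def soft_threshold(U, lam):
    """Proximal operator of lam * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(U) * np.maximum(np.abs(U) - lam, 0.0)

def palm_block_update(X, A, S, I, lam, gamma=1.0):
    """One PALM step on the block I (cf. eqs. (3)-(4))."""
    AI = A[:, I]
    L_S = np.linalg.norm(AI.T @ AI, 2)               # Lipschitz constant for S
    grad_S = AI.T @ (A @ S - X)
    S[I, :] = soft_threshold(S[I, :] - (gamma / L_S) * grad_S,
                             gamma * lam / L_S)
    SI = S[I, :]
    L_A = np.linalg.norm(SI @ SI.T, 2)               # Lipschitz constant for A
    AI = A[:, I] - ((A @ S - X) @ SI.T) / L_A
    A[:, I] = AI / np.maximum(np.linalg.norm(AI, axis=0), 1.0)  # oblique proj.
    return A, S

rng = np.random.default_rng(1)
m, n, t, r = 8, 6, 200, 3
A = rng.normal(size=(m, n)); S = rng.normal(size=(n, t))
X = A @ S
I = rng.choice(n, size=r, replace=False)             # random block choice
A, S = palm_block_update(X, A.copy(), S.copy(), I, lam=0.1)
```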
Block choice
Several strategies for selecting at each iteration the block indices I have been investigated: i) Sequential : at each iteration, r sources are selected sequentially in a cyclic way; ii) Random: at each iteration, r indices in [1, n] are randomly chosen following a uniform distribution and the corresponding sources updated; iii) Random sequential : this strategy combines the sequential and the random choices to ensure that all sources are updated an equal number of times. In the experiments, random strategies tended to provide better results. Indeed, compared to a sequential choice, randomness is likely to make the algorithm more robust with respect to spurious local minima.
Since the results between the random strategy and the random sequential one are similar, the first was eventually selected.
Examined cases and corresponding proximal operators
In several practical examples, an explicit expression can be computed for the proximal operators. In what follows, the penalizations below have been considered:
1 -Penalizations G for the sources S:
- ℓ_1 sparsity constraint in some transformed domain: the sparsity constraint on S is enforced with an ℓ_1-norm penalization: G(S) = ‖Λ_S ⊙ (SΦ_S^T)‖_1, where the matrix Λ_S contains the regularization parameters and ⊙ denotes the Hadamard product. Φ_S is a transform into a domain in which S can be sparsely represented. In the following, Φ_S will be supposed orthogonal. The proximal operator for G in (3) is then explicit and corresponds to the soft-thresholding operator with threshold Λ_S, which we shall denote S_{Λ_S}(·) (cf. Appendix). Using γ = 1 and assuming Φ_S orthogonal, the update is then:
S_I^(k) = S_{Λ_S}( S_I^(k−1) Φ_S^T − (1/L) A_I^(k−1)T (A^(k−1) S^(k−1) − X) Φ_S^T ) Φ_S,  with L = ‖A_I^(k−1)T A_I^(k−1)‖_2   (6)
- Non-negativity in the direct domain and ℓ_1 sparsity constraint in some transformed domain: due to the non-negativity constraint, all coefficients of S must be non-negative in the direct domain, in addition to the sparsity constraint in the transformed domain Φ_S. It can be formulated as

G(S) = ‖Λ_S ⊙ (SΦ_S^T)‖_1 + ι_{∀j,k; S[j,k] ≥ 0}(S),
where ι U is the indicator function of the set U . The difficulty is to enforce at the same time two constraints in two different domains, since the proximal operator of G is not explicit. It can either be roughly approximated by composing the proximal operators of the individual penalizations to produce a cheap update or computed accurately using the Generalized Forward-Backward splitting algorithm [START_REF] Raguet | A generalized forward-backward splitting[END_REF].
2 - Penalizations J for the mixing matrix A:
- Oblique constraint: to avoid obtaining degenerated A and S matrices (‖A‖ → ∞ and ‖S‖ → 0), the columns of A are constrained to lie in the ℓ_2 ball, i.e. ∀j ∈ [1, n], ‖A_j‖_2 ≤ 1. More specifically, J can be written as J(A) = ι_{∀i; ‖A_i‖_2² ≤ 1}(A). Following this constraint, the proximal operator for J in eq. (4) is explicit and can be shown to be the projection Π_{‖·‖_2 ≤ 1} (cf. Appendix) of each column of the input onto the ℓ_2 unit ball. The update (4) of A_I becomes:
A_I^(k) = Π_{‖·‖_2 ≤ 1} ( A_I^(k−1) − (1/L) (A^(k−1) S^(k) − X) S_I^(k)T ),  with L = ‖S_I^(k) S_I^(k)T‖_2   (7)
-Non-negativity and oblique constraint: Adding the non-negativity constraint on A reads:
J(A) = ι_{∀i; ‖A_i‖_2² ≤ 1}(A) + ι_{∀i,j; A[i,j] ≥ 0}(A).
The proximal operator can be shown to be the composition of the proximal operator corresponding to non-negativity, followed by Π_{‖·‖_2 ≤ 1}. The proximal operator corresponding to non-negativity is the projection Π_{K_+} (cf. Appendix) onto the positive orthant K_+.
The update is then:
A_I^(k) = Π_{‖·‖_2 ≤ 1} ( Π_{K_+} ( A_I^(k−1) − (1/L) (A^(k−1) S^(k) − X) S_I^(k)T ) ),  with L = ‖S_I^(k) S_I^(k)T‖_2   (8)
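The cheap composed update for the non-negative sparse case can be sketched as follows (illustrative code of ours; Φ_S is assumed orthogonal, which is precisely why the exact prox is only approximated by this composition):

```python
import numpy as np

def cheap_prox_nonneg_sparse(S, lam, Phi):
    """Approximate prox for 'sparse in Phi + non-negative in the direct
    domain', obtained by composing the two individual proximal operators.
    Phi is assumed orthogonal (Phi @ Phi.T = Id)."""
    W = S @ Phi.T                                    # analysis coefficients
    W = np.sign(W) * np.maximum(np.abs(W) - lam, 0)  # l1 prox (soft thresh.)
    return np.maximum(W @ Phi, 0.0)                  # non-negativity prox

# Toy check with Phi = identity (sources sparse in the direct domain)
rng = np.random.default_rng(0)
S = rng.normal(size=(4, 10))
out = cheap_prox_nonneg_sparse(S, lam=0.5, Phi=np.eye(10))
assert (out >= 0).all()
```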
Minimization: introduction of a warm-up stage
While being provably convergent to a stationary point of (2), the above PALM-based algorithm suffers from a lack of robustness with regards to a bad initialization, which makes it more prone to be trapped in spurious local minima. Moreover, it is quite difficult to automatically tune the thresholds Λ so that it yields reasonable results. On the other hand, algorithms based on GMCA [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] have been shown to be robust to initialization. Furthermore, in this framework, fixing the parameters Λ can be done in an automatic manner. However, GMCA-like algorithms are based on heuristics, which preclude provable convergence to a minimum of (2).
The proposed strategy consists in combining the best of both approaches to build a two-stage minimization procedure (cf. Algorithm 1): i) a warm-up stage building upon the GMCA algorithm to provide a fast and reliable first guess, and ii) a refinement stage based on the above PALM-based algorithm that provably yields a minimizer of (2). Moreover, the thresholds Λ in the refinement stage will be naturally derived from the first stage. Based on the GMCA algorithm [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF], the warm-up stage is summarized below: 0 -Initialize the algorithm with random A. For each iteration (k):
1 -The sources are first updated assuming a fixed A. A submatrix S I is however now updated instead of S. This is performed using a projected least square solution:
S_I^(k) = prox_{G(·)}(A_I^(k−1)† R_I)   (9)

where R_I is the residual term defined by R_I = X − A_{I^C}^(k) S_{I^C}^(k) (with I^C the indices of the sources outside the block), which is the part of X to be explained by the sources in the current block I, and A_I^(k)† is the pseudo-inverse of A_I^(k), the estimate of A_I at iteration (k).
2 -The mixing sub-matrix A I is similarly updated with a fixed S:
A_I^(k) = prox_{J(·)}(R_I S_I^(k)†)   (10)
The warm-up stage stops after a given number of iterations. Since the penalizations are the same as in the refinement stage, the proximal operators can be computed with the formulae described previously, depending on the implemented constraints. For S, eq. (6) can be used to enforce sparsity. To enforce non-negativity and sparsity in some transformed domain, the cheap update described in section 2.2.1, consisting in composing the proximal operators of the individual penalizations, can be used. For A, equations (7) and (8) can be used depending on the implemented constraint. The whole procedure is summarized in Algorithm 1.

Algorithm 1: bGMCA
Initialize A randomly; k = 0.
Warm-up stage:
for 0 ≤ k < n_max do
  Choose a set of indices I
  Estimation of S with a fixed A: S_I^(k) = prox_{G(·)}(A_I^(k−1)† R_I)
  Estimation of A with a fixed S: A_I^(k) = prox_{J(·)}(R_I S_I^(k)†)
  Choice of a new threshold Λ^(k) (heuristic, see section 2.2.3)
end for
Refinement stage:
while the stopping criterion of eq. (11) is not met and k < n_max do
  Choose a set of indices I
  Update S_I^(k) with the proximal gradient step (3)
  Update A_I^(k) with the proximal gradient step (4)
  Compute Δ from eq. (11); k = k + 1
end while
return A, S
Heuristics for the warm-up stage
In the spirit of GMCA, the bGMCA algorithm exploits heuristics to make the separation process more robust to initialization, which mainly consists in making use of a decreasing thresholding strategy. In brief, the entries of the threshold matrix Λ first start with large values and then decrease along the iterations towards final values that only depend on the noise level. This stategy has been shown to significantly improve the performances of the separation process [START_REF] Bobin | Blind source separation: The sparsity revolution[END_REF][START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] as it provides: i) a better unmixing, ii) an increased robustness to noise, and iii) an increased robustness to spurious local minima.
In the bGMCA algorithm, this strategy is deployed by first identifying the coefficients of each source in I that are not statistically consistent with noise.
Assuming that each source is contaminated with a Gaussian noise with standard deviation σ, this is performed by retaining only the entries whose amplitude is larger than τσ, where τ ∈ [2, 3]. In practice, the noise standard deviation is estimated empirically using the Median Absolute Deviation (MAD)
estimator. For each source in I, the actual threshold at iteration k is fixed based on a given percentile of the available coefficients with the largest amplitudes. Decreasing the threshold at each iteration is then performed by linearly increasing the percentage of retained coefficients at each iteration:
Percentage = (k / number of iterations) × 100.
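One possible implementation of this MAD-based decreasing threshold is sketched below (our reading of the strategy; details such as the 1.4826 Gaussian consistency factor are assumptions, not specified in the text):

```python
import numpy as np

def mad_std(residual):
    """Empirical noise level via the Median Absolute Deviation,
    scaled to be consistent with a Gaussian standard deviation."""
    return 1.4826 * np.median(np.abs(residual - np.median(residual)))

def decreasing_threshold(coeffs, k, n_iter, tau=3.0):
    """Keep a linearly increasing percentage of the significant
    coefficients (those above tau * sigma_mad), returning the
    threshold value for iteration k."""
    sigma = mad_std(coeffs)
    significant = np.sort(np.abs(coeffs[np.abs(coeffs) > tau * sigma]))[::-1]
    if significant.size == 0:
        return tau * sigma
    pct = (k + 1) / n_iter                 # fraction retained at iteration k
    idx = max(int(pct * significant.size) - 1, 0)
    return max(significant[idx], tau * sigma)
```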
Convergence
The bGMCA algorithm combines sequentially the above warm-up stage and the PALM-based refinement stage. Equipped with the decreasing thresholding strategy, the warm-up stage cannot be proved to converge to a stationary point of eq. (2), nor even to converge at all. In practice, after consecutive iterates, the warm-up stage tends to stabilize. However, it plays a key role in providing a reasonable starting point, as well as threshold values Λ, for the refinement procedure. In the refinement stage, the thresholds are computed from the matrices estimated in the warm-up stage and fixed for the whole refinement step. Based on the PALM algorithm, and with these fixed thresholds, the refinement stage converges to a stationary point of eq. (2).
The convergence is also guaranteed with the proposed block-based strategy, as long as the blocks are updated following an essentially cyclic rule [START_REF] Chouzenoux | A block coordinate variable metric forward-backward algorithm[END_REF] or even if they are chosen randomly and updated one by one [START_REF] Patrascu | Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization[END_REF].
Required number of iterations
Intuitively, the required number of iterations should be inversely proportional to r, since only r sources are updated at each iteration, requiring n/r times the number of iterations needed by an algorithm using the full matrices. As will be emphasized later on, the number of required iterations will be smaller than expected, which reduces the computation time.
In the refinement stage, the stopping criterion is based on the angular distance for each column of A, i.e. the angle between the current column and that of the previous iteration. Then, the mean over all the columns is taken:
Δ = (1/n) Σ_{j∈[1,n]} ⟨A_j^(k), A_j^(k−1)⟩   (11)
The stopping criterion itself is then a threshold τ used to stop the algorithm when ∆ > τ . In addition, we also fixed a maximal number of iterations.
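A minimal reading of (11) in code (the absolute value is our assumption, guarding against sign indeterminacy; with unit-norm columns each term is the cosine of the angle between successive estimates):

```python
import numpy as np

def stopping_value(A_new, A_old):
    """Mean alignment between current and previous columns of A (eq. (11))."""
    return float(np.mean(np.abs(np.sum(A_new * A_old, axis=0))))
```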
Numerical experiments on simulated data
In this part, we present our results on simulated data. The goal is to show and to explain on simple data how bGMCA works.
Experimental protocol
The simulated data were generated in the following way:
- Source matrix S: the sources are sparse in the sample domain, without requiring any transform (the results would however be identical for any source sparse in an orthogonal representation). The sources in S are exactly sparse and drawn randomly according to a Bernoulli-Gaussian distribution: among the t samples (t = 1,000), a proportion p (called the sparsity degree; unless specified, p = 0.1) of the samples is taken non-zero, with an amplitude drawn according to a standard normal distribution.
- Mixing matrix A: the mixing matrix is drawn randomly according to a standard normal distribution and modified to have unit columns and a given condition number C_d (unless specified, C_d = 1).
The number of observations m is taken equal to the number of sources: m = n.
In this first simulation, no noise is added. The algorithm was launched with 10, 000 iterations. It has to be emphasized that since neither A nor S are non-negative, the corresponding proximal operators we used did not enforce non-negativy. Thus, we used soft-thresholding for S and the oblique constraint for A according to section 2.2.1.
To measure the accuracy of the separation, we followed the definition in [START_REF] Bobin | Sparsity and adaptivity for the blind separation of partially correlated sources[END_REF] to use a global criterion on A:
C_A = median(|PA†A*| − I_d),
where A* is the true mixing matrix and A is the solution given by the algorithm, corrected through P for the permutation and scale factor indeterminacies; I_d is the identity matrix. This criterion quantifies the quality of the estimation of the mixing directions, that is, the columns of A. If they are perfectly estimated, |PA†A*| is equal to I_d and C_A = 0. The data matrices being drawn randomly, each experiment was performed several times (typically 25 times) and the median of −10 log(C_A) over the experiments will be displayed. The logarithm is used to simplify the reading of the plots despite the high dynamics.
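A sketch of this criterion, using scipy's Hungarian solver to play the role of the permutation P (an implementation choice on our part, not prescribed by the text):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mixing_criterion(A_est, A_true):
    """C_A sketch: remove the scale indeterminacy, align estimated and
    true columns (the assignment plays the role of P), and take the
    median deviation of |P A_est^+ A_true| from identity."""
    A = A_est / np.linalg.norm(A_est, axis=0)
    B = A_true / np.linalg.norm(A_true, axis=0)
    G = np.abs(np.linalg.pinv(A) @ B)            # n x n cross-talk matrix
    _, col = linear_sum_assignment(-G)           # row i matched to col[i]
    G = G[np.argsort(col), :]                    # apply the permutation
    return np.median(np.abs(G - np.eye(G.shape[0])))

# A perfect estimate (up to scale and permutation) gives C_A ~ 0
rng = np.random.default_rng(2)
A_true = rng.normal(size=(8, 5))
A_est = 3.0 * A_true[:, rng.permutation(5)]
print(mixing_criterion(A_est, A_true))           # ~ 0
```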
Modeling block minimization
In this section, a simple model is introduced to describe the behavior of the bGMCA algorithm. As described in section 2.2, updating a given block is performed at each iteration from the residual R_I = X − A_{I^C} S_{I^C}. If the estimation were perfect, the residual would be equal to the part of the data explained by the true sources in the current block indexed by I, which would read R_I = A*_I S*_I, with A* and S* the true matrices. It is nevertheless mandatory to take into account the noise N, as well as a variety of flaws in the estimation, by adding a term E to model the estimation error. This entails:

R_I = X − A_{I^C} S_{I^C} = A*_I S*_I + E + N   (12)
A way to further describe the structure of E is to decompose the estimated sources into the true matrix plus an error: S_I = S*_I + ε_I and S_{I^C} = S*_{I^C} + ε_{I^C}, where S is the estimated matrix and ε is the error on S*. Assuming that the errors are small and neglecting the second-order terms, the residual R_I can now be written as:

R_I = X − A_{I^C} S_{I^C} = A*_I S*_I + A*_{I^C} S*_{I^C} − A_{I^C} S*_{I^C} − A_{I^C} ε_{I^C} + N   (13)
This implies that:
E = (A*_{I^C} − A_{I^C}) S*_{I^C} − A_{I^C} ε_{I^C}   (14)
Equation ( 14) highlights two terms. The first term can be qualified as interferences in that it comes from a leakage of the true sources that are outside the currently updated block. This term vanishes when A I C is perfectly estimated. The second term corresponds to interferences as well as artefacts.
It indeed originates from the error on the sources outside the block I. The artefacts are the errors on the sources induced by the soft-thresholding corresponding to the ℓ_1-norm.
Equation (14) also allows us to understand how the choice of a given block size r ≤ n will impact the separation process:
-Updating small-size blocks can be recast as a small-size source separation problem where the actual number of sources is equal to r. The residual of the sources that are not part of the block I then plays the role of extra noise. As testified by Fig. 1, updating small-size block problems should be easier to tackle.
- Small-size blocks should also yield larger errors E. This is intuitively due to the fact that many potentially badly estimated sources in I^C are used for the estimation of A_I and S_I through the residual, deteriorating this estimation. It can be explained in more detail using equation (14): with more sources in I^C, the energy of A_{I^C}, A*_{I^C}, S*_{I^C} and ε_{I^C} increases, yielding bigger error terms (A*_{I^C} − A_{I^C}) S*_{I^C} and −A_{I^C} ε_{I^C}. Therefore the errors E become higher, deteriorating the results.
Experiment
In this section, we investigate the behavior of the proposed block-based GMCA algorithm with respect to various parameters such as the block size, the number of sources, the conditioning of the mixing matrix and the sparsity level of the sources.
Study of the impact of r and n
In this subsection, bGMCA is evaluated for different numbers of sources n = 20, 50, 100. Each time the block sizes vary in the range 1 ≤ r ≤ n. In this experiment and to complete the description of section 3.1, the parameters for the matrices generation were: p = 0.1, t = 1, 000, C d = 1, m = n, with a Bernoulli-Gaussian distribution for the sources. These results are displayed in Fig. 2a. Interestingly, three different regimes characterize the behavior of the bGMCA algorithm:
-For intermediate and relatively large block sizes (typically r > 5 and r < n -5): we first observe that after an initial deterioration around r = 5 , the separation quality does not vary significantly for increasing block sizes. A degradation of several dB can then be observed for r close to n. In all this part of the curve, the error term E is composed of residuals of sparse sources, and thus E will be rather sparse when the block size is large. Based on the MAD, the thresholds are set according to dense and not to sparse noise. Consequently the automatic thresholding strategy of the bGMCA algorithm will not be sensitive to the estimation errors.
-A very prominent peak can be observed when the block size is of the order of 3. Interestingly, the maximum yields a mixing matrix criterion of about 10 -16 , which means that perfect separation is reached up to numerical errors. This value of 160 dB is at least 80 dB larger than in the standard case r = n, for which the values for the different n are all below 80 dB. In this regime, error propagation is composed of the mixture of a larger number of sparse sources, which eventually entails a densely distributed contribution that can be measured by the MAD-based thresholding procedure. Therefore, the threshold used to estimate the sources is able to filter out both the noise and the estimation errors. Moreover, r = 5 is quite small compared to n. Following the modeling introduced in section 3.2, small block sizes can be recast as a sequence of low-dimensional blind source separation problems, which are simpler to solve.
-For small block sizes (typically r < 4), the separation quality is deteriorated when the block size decreases, especially for large n values. In this regime, the level of estimation error E becomes large, which entails large values for the thresholds Λ. Consequently, the bias induced by the soft-thresholding operator increases, which eventually hampers the performance quality. Furthermore, for a fixed block size r, E increases with the number of sources n, making this phenomenon more pronounced for higher n values.
Condition number of the mixing matrix
In this section, we investigate the role played by the conditioning of the mixing matrix on the performances of the bGMCA algorithm. Fig. 2b displays the empirical results for several condition numbers C d of the A matrix.
There are n = 50 sources generated in the same way as in the previous experiment: with a Bernoulli-Gaussian distribution and p = 0.1, t = 1, 000.
One can observe that when C_d increases, the peak present for r close to 5 tends to be flattened, which is probably due to higher projection errors. At some iteration k, the sources are estimated by projecting X − A_{I^C} S_{I^C} onto the subspace spanned by A_I. In the orthogonal case, the projection error is low since A_{I^C} and A_I are close to orthogonality at the solution. However, this error increases with the condition number C_d.

[Figure 2. Left: mixing matrix criterion as a function of r for different n. Right: mixing matrix criterion as a function of r for different C_d.]
Sparsity level p
In this section, the impact of the sparsity level of the sources is investigated. The sources are still following a Bernoulli-Gaussian distribution.
The parameters are: n = 50, t = 1,000, C_d = 1. As featured in Figure 3, the separation performances at the maximum value decrease slightly with larger p, while a slow shift of the transition between the small/large block size regimes towards larger block sizes operates. Furthermore, the results tend to deteriorate quickly for small block sizes (r < 4). Indeed, owing to the model of subsection 3.2, the contribution of S*_{I^C} and ε_{I^C} in the error term (14) increases with p, this effect being even more important for small r (which could also explain the shift of the peak for p = 0.3, by a deterioration of the results at its beginning, r = 3). When p increases, the sources in S_I become denser. Instead of being mainly sensitive to the noise and E, the MAD-based thresholding tends to be perturbed by S_I, resulting in more artefacts, which eventually hampers the separation performances. This effect increases when the sparsity level of the sources decreases.

[Figure 3. Mixing matrix criterion as a function of r for different sparsity degrees.]

Complexity and computation time
Beyond improving the separation performances, the use of small block sizes decreases the computational cost of each iteration of the bGMCA algorithm. Since it is iterative, the final running time depends on both the complexity of each iteration and the number of iterations. In this part, we focus only on the warm-up stage, which is empirically the most computationally expensive. Each iteration of the warm-up stage can be decomposed into the following elementary steps: i) a residual term is computed, with a complexity of O(mtr), where m is the number of observations, t the number of samples and r the block size; ii) the pseudo-inverse is computed through the singular value decomposition of an r × r matrix, which yields an overall complexity of O(r³ + r²m + m²r); iii) the thresholding strategy first requires the evaluation of the threshold values, with a complexity of O(rt); iv) the soft-thresholding step then also has complexity O(rt); and v) updating A is finally performed using a conjugate gradient algorithm, whose complexity is known to depend on the number of non-zero entries in S and on the conditioning C_d(S) of this matrix; an upper bound for this complexity is O(rt C_d(S)). The final estimate of the complexity of a single iteration is then:
r [mt + rm + m² + r² + t C_d(S)]   (15)
where C_d(S) is the condition number of S. Thus, both the overall factor r and the r³ behavior show that small values of r lower the computational budget of each iteration. We further assess the actual number of iterations required by the warm-up stage to yield a good initialization. To this end, the following experiment has been conducted:
1. First, the algorithm is launched with a large number of iterations (e.g. 10000) to give a good initialization for the A and S matrices. The corresponding value of C A is saved and called C * A .
2. Using the same initial conditions, the warm-up stage is re-launched and stops when the mixing matrix criterion reaches 1.05 × C*_A (i.e. within 5% of the "optimal" initialization for a given setting).
The number of iterations needed to reach the 5% accuracy is reported in Fig. 4. Intuitively, one would expect that when the block size decreases, the required number of iterations should increase by about n/r to keep the number of updates per source constant. This trend is displayed with the straight curve of Fig. 4. Interestingly, Fig. 4 shows that the actual number of iterations to reach the 5% accuracy criterion almost does not vary with r.
Consequently, on top of leading to computationally cheaper iterations, using small block sizes does not require more iterations for the warm-up stage to give a good initialization. Therefore, the use of blocks allows a huge decrease of the computational cost of the warm-up stage, and thus of sparse BSS.

[Figure 4. Number of iterations (logarithmic scale) as a function of r.]
Experiment using realistic sources
Context
The goal of this part is to evaluate the behavior of bGMCA and show its efficiency in a more realistic setting. Our data come from a simulated LC -1 H NMR (Liquid Chromatography -1 H Nuclear Magnetic Resonance) experiment. The objective of such a experiment is to identify each of the chemicals compounds present in a fluid, as well as their concentrations. The LC -1 H NMR experiment enables a first physical imperfect separation during which the fluid goes through a chromatography column and its chemicals are separated according to their speeds (which themselves depend on their physical properties). Then, the spectrum of the output of the column is measured at a given time frequency. These measurements of the spectra at different times can be used to feed a bGMCA algorithm to refine the imperfect physical separation.
The fluids on which we worked could for instance correspond to drinks. The goal of bGMCA is then to identify the spectra of each compound (e.g. caffeine, saccharose, menthone) and the mixing coefficients (which are proportional to their concentrations) from the LC-1H NMR data. BSS has already been successfully applied [START_REF] Toumi | Effective processing of pulse field gradient NMR of mixtures by blind source separation[END_REF] to similar problems, but generally with a lower number of sources n.
The sources (40 sources with 10,000 samples each) are composed of elementary sparse non-negative theoretical spectra of chemical compounds taken from the SDBS database, which are further convolved with a Laplacian having a width of 3 samples to simulate a given spectral resolution. Therefore, each convolved source becomes an approximately sparse non-negative row of S. The mixing matrix A of size (m, n) = (320, 40) is composed of Gaussians (see Fig. 5), the objective being to have a matrix consistent with the first imperfect physical separation. It is designed in two parts: the first columns have relatively spaced Gaussian means, while the others have a larger overlap to simulate compounds for which the physical separation is less discriminative. More precisely, an index m̃ ∈ [1, m] is chosen, with m̃ > m/2 (typically, m̃ = 0.75 m). A set of n/2 indices (m_k)_{k=1,...,n/2} is then uniformly chosen in [0, m̃] and another set of n/2 indices (m_k)_{k=n/2,...,n} is chosen in [m̃ + 1, m]. Each column of A is then created as a Gaussian whose mean is m_k. Monte-Carlo simulations have been carried out by randomly assigning the sources and the mixing matrix columns; the median over the results of the different experiments will be displayed.

[Figure 5. Example of an A matrix with 8 columns: the first four columns have spaced means, while the last ones are more correlated.]
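A sketch of this two-part design (the Gaussian width is an assumption of ours, as it is not specified in the text):

```python
import numpy as np

def gaussian_mixing_matrix(m=320, n=40, width=10.0, frac=0.75, seed=0):
    """Two-part Gaussian mixing matrix: half of the columns get means
    spread over [0, m_tilde], the other half over [m_tilde + 1, m - 1],
    giving more overlapped (correlated) columns."""
    rng = np.random.default_rng(seed)
    m_tilde = int(frac * m)
    means = np.concatenate([rng.uniform(0, m_tilde, n // 2),
                            rng.uniform(m_tilde + 1, m - 1, n - n // 2)])
    rows = np.arange(m)[:, None]
    A = np.exp(-0.5 * ((rows - means[None, :]) / width) ** 2)
    return A / np.linalg.norm(A, axis=0)           # unit columns

A = gaussian_mixing_matrix()
print(A.shape)   # (320, 40)
```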
Experiments
There are two main differences with the previous experiments of section 3: i) the sources are sparse in the undecimated wavelet domain Φ_S, which is not orthogonal; and ii) non-negativity constraints can be enforced on A and S. The results show that non-negativity yields a huge improvement for all block sizes r, which is expected since the problem is more constrained.
Its estimation using bGMCA with a non-negativity constraint is plotted in dashed line on the same graph, showing the high separation quality because of the nearly perfect overlap between the two curves. Both sources are drawn in the direct domain.
The robustness of the bGMCA algorithm with respect to additive Gaussian noise has further been tested. Fig. 7 reports the evolution of the mixing matrix criterion for varying values of the signal-to-noise ratio. It can be observed that bGMCA yields the best performances for all values of SNR.
Although it seems to particularly benefit from high SNR compared to HALS and EFICA, it still yields better results than the other algorithms for low SNR despite the small block size used (r = 10), which could have been particularly prone to error propagations.
Conclusion
While being central in numerous applications, tackling sparse BSS problems when the number of sources is large is highly challenging. In this article, we described the bGMCA (block-GMCA) algorithm, which is specifically tailored to solve sparse BSS in the large-scale regime. In this setting, the minimization strategy plays a key role in the separation performances. All the numerical comparisons conducted show that bGMCA performs at least as well as standard sparse BSS on mixtures of a high number of sources, and most of the experiments even show dramatically enhanced separation performances. As a byproduct, the proposed block-based strategy yields a significant decrease of the computational cost of the separation process.
Figure 1 :
1 Figure 1: Evolution of the mixing matrix criterion (whose computation is detailed in sec. 3.1) of four standard BSS algorithms for an increasing n. For comparison, the results of the proposed bGM CA algorithm is presented, showing that its use allows for the good results of GMCA for low n (around 160 dB for n = 3) to persist for n < 50 and to stay much better than GMCA for n > 50. The experiment was conducted using exactly sparse sources S, with 10% non-zero coefficients, the other coefficients having a Gaussian amplitude. The mixing matrix A was taken to be orthogonal. Both A and S were generated randomly, the experiments being done 25 times and the median used to draw the figure.
for 0 ≤ k < n_max do
    Choose a set of indices I
    Estimation of S with a fixed A: S_I^(k) = prox_G(.)(A_I^(k-1)† R_I)
    Estimation of A with a fixed S: A_I^(k) = prox_J(.)(R_I S_I^(k)†)
    Choice of a new threshold Λ^(k) (heuristic, see Section 2.2.3)
end for
Refinement step:
while Δ > τ and k < n_max do
    Choose a set of indices I
    S_I^(k) = prox_G(.)(A_I^(k-1)† R_I)
    A_I^(k) = prox_J(.)(R_I S_I^(k)†)
end while
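A rough Python transcription of one iteration of this scheme is given below. It is only a sketch under stated assumptions: prox_G is instantiated as the non-negative soft thresholding and prox_J as the column-wise l2-ball projection defined in the appendix, and the threshold update heuristic of Section 2.2.3 is replaced by a fixed lam.

import numpy as np

def bgmca_step(X, A, S, r, lam, rng):
    """One block update: draw a block I of r sources, then alternate
    least-squares estimations of S_I and A_I followed by their proximal
    operators (hypothetical instantiation of the pseudocode above)."""
    n = A.shape[1]
    I = rng.choice(n, size=r, replace=False)
    Ic = np.setdiff1d(np.arange(n), I)
    # Residual: data minus the contribution of the sources outside the block
    R = X - A[:, Ic] @ S[Ic, :]
    # S_I update: pseudo-inverse followed by non-negative soft thresholding
    S[I, :] = np.maximum(np.linalg.pinv(A[:, I]) @ R - lam, 0.0)
    # A_I update: pseudo-inverse followed by projection on the unit l2 ball
    A[:, I] = R @ np.linalg.pinv(S[I, :])
    A[:, I] /= np.maximum(np.linalg.norm(A[:, I], axis=0), 1.0)
    return A, S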
Source matrix S: the sources are sparse in the sample domain without requiring any transform (the results would however be identical for any source sparse in an orthogonal representation). The sources in S are exactly sparse and drawn randomly according to a Bernoulli-Gaussian distribution: among the t samples (t = 1,000), a proportion p (called the sparsity degree; unless specified, p = 0.1) of the samples is taken non-zero, with an amplitude drawn according to a standard normal distribution.
Mixing matrix A: the mixing matrix is drawn randomly according to a standard normal distribution and modified to have unit columns and a given condition number C_d (unless specified, C_d = 1).
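The following sketch generates data matching this description; the condition number is imposed through an SVD rescaling and is therefore only approximate after the final column normalization.

import numpy as np

def make_exactly_sparse_data(m, n, t=1_000, p=0.1, cond=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Bernoulli-Gaussian sources: a proportion p of non-zero Gaussian samples
    S = rng.standard_normal((n, t)) * (rng.random((n, t)) < p)
    # Random mixing matrix with prescribed condition number and unit columns
    A = rng.standard_normal((m, n))
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    A = U @ np.diag(np.linspace(cond, 1.0, n)) @ Vt
    A /= np.linalg.norm(A, axis=0)
    return A, S, A @ S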
The mixing matrix criterion C_A measures the deviation of |PA†A*| from the identity matrix I_d, where A* is the true mixing matrix and A is the solution given by the algorithm, corrected through P for the permutation and scale factor indeterminacies. This criterion quantifies the quality of the estimation of the mixing directions, that is, the columns of A. If they are perfectly estimated, |PA†A*| is equal to I_d and C_A = 0. The data matrices being drawn randomly, each experiment was performed several times (typically 25 times) and the median of -10 log(C_A) over the experiments will be displayed. The logarithm is used to simplify the reading of the plots despite the high dynamics.
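A possible implementation of this criterion is sketched below; the permutation part of P is obtained by solving an assignment problem on |A†A*|, and the aggregation of the deviations into a single number (here a mean) is an assumption, as the exact norm is not restated in this excerpt.

import numpy as np
from scipy.optimize import linear_sum_assignment

def mixing_criterion(A_est, A_true):
    G = np.linalg.pinv(A_est) @ A_true
    # Permutation and scale correction P: match each estimated source to a
    # true one, then rescale so that the matched entries equal 1
    rows, cols = linear_sum_assignment(-np.abs(G))
    P = np.zeros_like(G)
    P[cols, rows] = 1.0 / G[rows, cols]
    return np.mean(np.abs(np.abs(P @ G) - np.eye(G.shape[0])))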
The estimation of the current block is performed at each iteration from the residual R_I = X - A_{I^C} S_{I^C}. If the estimation were perfect, the residual would be equal to the part of the data explained by the true sources in the current block indexed by I, which would read R_I = A*_I S*_I, A* and S* being the true matrices. It is nevertheless mandatory to take into account the noise N, as well as a variety of flaws in the estimation, by adding a term E to model the estimation error. This entails the decomposition detailed below: with more sources in I^C, the energy of A_{I^C}, A*_{I^C}, S*_{I^C} and E_{I^C} increases, yielding bigger error terms (A*_{I^C} - A_{I^C}) S*_{I^C} and -A_{I^C} E_{I^C}. Therefore the errors E become higher, deteriorating the results.
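Writing S_{I^C} = S*_{I^C} + E_{I^C}, this decomposition of the residual can be made explicit:

\begin{align*}
R_I &= X - A_{I^C} S_{I^C} \\
    &= A^*_I S^*_I + A^*_{I^C} S^*_{I^C} + N - A_{I^C}\left(S^*_{I^C} + E_{I^C}\right) \\
    &= A^*_I S^*_I + \left(A^*_{I^C} - A_{I^C}\right) S^*_{I^C} - A_{I^C} E_{I^C} + N.
\end{align*}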
Figure 2: Mixing matrix criterion as a function of r. Left (a): for different numbers of sources n. Right (b): for different condition numbers C_d.
Figure 3: Mixing matrix criterion as a function of r for different sparsity degrees.
Figure 4: Number of iterations, in logarithmic scale, as a function of r.
It is designed in two parts: the first columns have relatively spaced Gaussian means while the others have a larger overlap, to simulate compounds for which the physical separation is less discriminative. More precisely, an index m̃ ∈ [1, m] is chosen, with m̃ > m/2 (typically, m̃ = 0.75 m). A set of n/2 indices (m_k)_{k=1...n/2} is then uniformly chosen in [0, m̃] and another set of n/2 indices (m_k)_{k=n/2...n} is chosen in [m̃ + 1, m]. Each column of A is then created as a Gaussian whose mean is m_k. Monte Carlo simulations have been carried out by randomly assigning the sources and the mixing matrix columns. The median over the results of the different experiments will be displayed.
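The two-part construction can be sketched as follows; the width of the Gaussian columns is an assumption, as it is not specified in the text.

import numpy as np

def make_two_part_mixing(m=320, n=40, width=8.0, seed=0):
    rng = np.random.default_rng(seed)
    m_tilde = int(0.75 * m)
    means = np.concatenate([
        rng.uniform(0, m_tilde, size=n // 2),          # spaced means
        rng.uniform(m_tilde + 1, m, size=n - n // 2),  # overlapping means
    ])
    rows = np.arange(m)[:, None]
    A = np.exp(-0.5 * ((rows - means[None, :]) / width) ** 2)
    return A / np.linalg.norm(A, axis=0)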
Figure 5: Example of an A matrix with 8 columns: the first four columns have spaced means, while the last ones are more correlated.
Figure 6: Left: mixing matrix criterion on realistic sources, with and without a non-negativity constraint. Right: example of a retrieved source, which is almost perfectly superimposed on the true source, therefore showing the quality of the results.
Figure 7: Mixing matrix criterion on realistic sources as a function of the SNR, using a non-negativity constraint with r = 10.
1. National Institute of Advanced Industrial Science and Technology (AIST), Spectral database for organic compounds: http://sdbs.db.aist.go.jp
2. Depending on the instrumentation, high SNR values can be reached in such an experiment.
Acknowledgement
This work is supported by the European Community through the grant LENA (ERC StG -contract no. 678282).
Appendix
Definition of proximal operators
The proximal operator of an extended-valued proper and lower semicontinuous convex function f : R^n → (-∞, +∞] is defined as: prox_f(x) = argmin_{y ∈ R^n} { f(y) + (1/2) ||x - y||_2^2 }.
Definition of the soft thresholding operator
The soft thresholding operator S_λ(.) is defined componentwise as: [S_λ(x)]_i = sign(x_i) max(|x_i| - λ, 0).
Definition of the projection of the columns of a matrix M on the ℓ2 ball: each column M^i is mapped to M^i / max(1, ||M^i||_2), leaving it unchanged if it already lies in the unit ℓ2 ball.
Definition of the projection of a matrix M on the positive orthant: [Π_+(M)]_{i,j} = max(M_{i,j}, 0).
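For reference, the two projections can be implemented directly; minimal sketches:

import numpy as np

def project_columns_l2_ball(M):
    # Each column is left unchanged if its l2 norm is at most 1,
    # and rescaled to unit norm otherwise
    return M / np.maximum(np.linalg.norm(M, axis=0), 1.0)

def project_positive_orthant(M):
    # Entrywise projection: negative entries are set to 0
    return np.maximum(M, 0.0)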
01683218 | en | [ "chim.orga" ] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01683218/file/2017-dalpozzo-et-al.pdf | Renato Dalpozzo
Alessandra Lattanzi
Hélène Pellissier
email: [email protected]
Applications of Chiral Three-Membered Rings for Total Synthesis: A Review
Keywords: Asymmetric synthesis, biological activity, chirality, natural products, strained molecules, total synthesis
This review updates recent applications of asymmetric aziridination, azirination, thiirination, epoxidation, and cyclopropanation in the total synthesis of biologically active compounds, including natural products, using chiral substrates or chiral catalysts, covering the literature since 2000. Interest in these synthetic methodologies for chiral three-membered rings has increased in the last decade, dictated either by the biological activities displayed by many naturally occurring products bearing a three-membered unit or by the ring strain of three-membered rings, which makes them useful precursors of many more complex molecules of interest. Classic as well as modern protocols in asymmetric aziridination, azirination, epoxidation, thiirination, and cyclopropanation have been widely applied as key steps in a number of syntheses of important products. Although the use of chiral substrates and auxiliaries is still widespread, particularly in asymmetric aziridination and cyclopropanation, the development of enantioselective catalytic methodologies has witnessed exponential growth during the last decade. The present review is subdivided into three parts, dealing successively with the use of chiral nitrogen-containing three-membered rings, chiral epoxides and thiiranes, and chiral cyclopropanes in total synthesis.
INTRODUCTION
Chiral three-membered rings are useful building blocks in synthesis, as well as important synthetic targets. Interest in synthetic methodologies for their preparation has increased in the last decade, dictated either by the biological activities displayed by many naturally occurring products bearing a three-membered unit or by their usefulness as precursors for accessing more complex molecules of interest [START_REF] Salaun | Cyclopropane derivatives and their diverse biological activities[END_REF]. The goal of the present review is to highlight the major developments and applications of asymmetric three-membered ring formations in total synthesis reported in the last fifteen years. It must be noted that a range of reviews, which will be cited in the respective sections of the present review, have been separately devoted to asymmetric aziridinations, epoxidations, or cyclopropanations [2]. On the other hand, to the best of our knowledge, no previous review compiling all types of chiral three-membered rings and their synthetic applications exists. In 2000, a book dedicated to cyclopropanes in synthesis was published by de Meijere [3], while a special issue published in Chemical Reviews in 2014 was devoted to small heterocycles in synthesis but did not especially focus on the asymmetric total synthesis of bioactive and natural products [START_REF] Andrei | Introduction: Small heterocycles in synthesis[END_REF]. The present review is subdivided into three parts, dealing successively with the use of chiral nitrogen-containing three-membered rings, chiral epoxides and thiiranes, and chiral cyclopropanes in total synthesis. The first part is subdivided into two sections successively devoted to chiral aziridines and chiral azirines. The second part of the review is also subdivided into two sections, dealing successively with chiral epoxides and thiiranes. The third part of the review is subdivided into four sections, treating successively asymmetric Simmons-Smith cyclopropanations as key steps, asymmetric transition-metal decompositions of diazoalkanes as key steps, asymmetric Michael-initiated ring closures as key steps, and miscellaneous asymmetric cyclopropanations as key steps.
CHIRAL NITROGEN-CONTAINING THREE-MEMBERED RINGS IN TOTAL SYNTHESIS
Chiral Aziridines
Aziridines are among the most fascinating heterocyclic intermediates in organic synthesis [START_REF] Murphree | Three-Membered Heterocycles. Structure and Reactivity[END_REF], acting as precursors of many complex molecules, including natural and biologically active products, owing to the high strain incorporated in their skeletons [START_REF] Aggarwal | For reviews concentrating on not especially asymmetric aziridination (and epoxidation)[END_REF]. The last decade has witnessed tremendous activity in the area of discovering new methodologies for their synthesis and transformations [7]. This growing interest is related to their striking chemical properties. The high strain energy associated with the aziridine ring enables easy cleavage of the C-N bond, leading to a series of important nitrogen-containing products [START_REF] Padwa | Intermolecular 1,3-Dipolar Cycloadditions[END_REF]. Obtaining aziridines, especially optically active ones, has become of great importance in organic chemistry for many reasons. Among them are the antitumor, antibacterial and other biological properties associated with a great number of aziridine-containing compounds, such as mitomycins, azinomycins, and epothilones [START_REF] Zalialov | Aziridines and aziridinium ions in the practical synthesis of pharmaceutical intermediates-a perspective[END_REF]. Indeed, as powerful alkylating agents, aziridines have an inherent in vivo potency through their ability to act as DNA cross-linking agents via nucleophilic ring-opening of the aziridine moiety. Structure-activity relationships have shown that the aziridine ring is essential for the antitumor activity, and a vast amount of work has concentrated on synthesizing derivatives of these natural products with increased potency. Various antitumor agents related to mitosanes and mitomycins, for example, have been synthesized and demonstrated to possess activity against a variety of cancers. A number of other synthetic chiral aziridines have also been shown to exhibit further useful biological properties, such as enzyme-inhibitory activities. In addition to these important biological activities related to the aziridine unit, these molecules constitute key chiral building blocks for the easy construction of other types of biologically relevant as well as naturally occurring chiral nitrogen-containing compounds. Chiral aziridines can be prepared by either asymmetric catalytic methods or from chiral substrates. The main approaches to the synthesis of chiral aziridines can be classified as asymmetric nitrene transfer to alkenes, asymmetric carbene transfer to imines such as ylide-mediated aziridinations, asymmetric cyclization reactions through addition/elimination processes such as Gabriel-Cromwell reactions, and miscellaneous asymmetric reactions such as intramolecular substitutions [7].
Asymmetric Aziridinations of Chiral Substrates as Key Steps
Nitrene Transfer to Alkenes
Nitrogen-atom transfer to alkenes is a particularly appealing strategy for the generation of aziridines because of the ready availability of olefinic starting materials and the direct nature of such a process.
The nitrene (or nitrenoid) source for this reaction can be generated from various methodologies, such as the metal-catalyzed reaction of [N-(p-toluenesulfonyl)imino]aryliodinanes [START_REF] Dauban | Iminoiodanes and C-N bond formation in organic synthesis[END_REF]. In 2006, Trost and Dong reported a total synthesis of (+)-agelastatin A, possessing nanomolar activity against several cancer cell lines, which was based on the aziridination of chiral piperazinone 1 (Scheme 1) [START_REF] Trost | New class of nucleophiles for palladium-catalyzed asymmetric allylic alkylation. total synthesis of agelastatin A[END_REF]. This process was performed in the presence of PhI=NTs as the nitrene source and a catalytic amount of copper N-heterocyclic carbene complex 2, providing the corresponding aziridine 3 as the only detected stereoisomer in 52% yield. This chiral aziridine was further converted into the expected (+)-agelastatin A in four additional steps. This natural product is also known to inhibit glycogen synthase kinase-3β, a behaviour that might provide an approach for the treatment of Alzheimer's disease.
In 2003, Dauban and Dodd applied the Ses iminoiodane (PhI=NSes) to the copper-catalyzed aziridination of Δ11-pregnane derivatives to prepare chiral 11,12-aziridino analogues of neuroactive steroids [START_REF] Di Chenna | PhI=NSes mediated aziridination of 11-pregnane derivatives: Synthesis of an 11,12-aziridino analogue of neuroactive steroids[END_REF]. As shown in Scheme 2, the reaction of chiral Δ11-pregnene-3,20-dione 4a or 3-acetoxy-Δ11-pregnen-20-one 4b with PhI=NSes in the presence of CuOTf led to the corresponding 11,12-aziridino steroids 5a-b in moderate yields (45-53%). The 3-acetoxy derivative 5b was further converted, via TASF-mediated removal of the N-Ses blocking group, into N-methyl-11,12-aziridino-3α-hydroxy-5β-pregnan-20-one, which is a conformationally constrained analogue of the endogenous neurosteroid pregnanolone and a structural analogue of the synthetic general anesthetic minaxolone.
In 2005, Wood and Keaney explored the use of rhodium perfluorobutyramide (Rh2(pfm)4) for the aziridination of olefins [START_REF] Keaney | Rhodium perfluorobutyramide (Rh2(pfm)4): A synthetically useful catalyst for olefin aziridinations[END_REF]. The authors found that the treatment of chiral olefin 6 with a trichloroethylsulfamate ester in the presence of a combination of Rh2(pfm)4 with PhI(OAc)2 provided the expected trichloroethoxysulfonylaziridine 7 in good yield and moderate diastereoselectivity (66% de), as shown in Scheme 3. Notably, this product constituted a potent intermediate for the formal synthesis of (+)-kalihinol A.
With the aim of developing a synthesis for the orally active neuraminidase inhibitor (-)-oseltamivir, Trost and Zhang investigated the asymmetric aziridination of chiral diene 8 (Scheme 4) [14]. In this case, the best result for the aziridination was obtained when the reaction was performed in the presence of SesNH2 as the nitrene source, PhI(OPiv)2 as the oxidant, bis[rhodium(α,α,α',α'-tetramethyl-1,3-benzenedipropionate)] [Rh2(esp)2] as the catalyst, and chlorobenzene as the solvent, as shown in Scheme 4. Under these reaction conditions, the corresponding aziridine 9 was obtained as the only detected stereoisomer in 86% yield. This chiral product was further converted into the required (-)-oseltamivir in four additional steps with an overall yield of 30%.
Another methodology to generate nitrenes consists of the in situ oxidation of hydrazine derivatives in the presence of Pb(OAc)4, such as 3-amino-2-ethyl-3,4-dihydroquinazolin-4-one combined with Pb(OAc)4 and hexamethyldisilazide (HMDS) [START_REF] Diaper | The stereoselective synthesis of aziridine analogues of diaminopimelic acid (DAP) and their interaction with dap epimerase[END_REF]. An asymmetric version of this method was developed by these authors by using chiral 3-acetoxyaminoquinazolinones for the aziridination of an unsaturated aminopimelic ester in order to prepare aziridine analogues of diaminopimelic acid, which is an inhibitor of diaminopimelic acid epimerase [START_REF] Diaper | The stereoselective synthesis of aziridine analogues of diaminopimelic acid (DAP) and their interaction with dap epimerase[END_REF]. As shown in Scheme 5, the reaction of chiral alkene 10, performed in the presence of chiral aminoquinazolinone 11 and Pb(OAc)4, led to the expected aziridine 12 in 49% yield along with a moderate diastereoselectivity of 72% de.
Another methodology to prepare aziridines is based on the thermolytic or photolytic decomposition of organic azides [START_REF] Katsuki | Azide compounds: Nitrogen sources for atom-efficient and ecologically benign nitrogen-atom-transfer reactions[END_REF]. In 2008, Tanaka et al. employed this methodology as the key step of a total synthesis of (-)-agelastatin A, a potent antineoplastic agent [START_REF] Yoshimisu | Total synthesis of (-)-agelastatin A[END_REF]. Indeed, the nitrogen functionality of the agelastatin core was installed through thermolytic intramolecular aziridination of chiral azidoformate 13. The tricyclic aziridine 14, obtained as the only detected stereoisomer, was further submitted to a regioselective azidation, leading to trans-diamination of the double bond. The obtained chiral azide was subsequently converted into the expected (-)-agelastatin A (structure in Scheme 1), as shown in Scheme 6.
Earlier in 2006, Lowary et al. reported the synthesis of L-daunosamine and L-ristosamine glycosides based on photoinduced intramolecular aziridinations of acylnitrenes derived from L-rhamnose [START_REF] Mendlik | Synthesis of L-Daunosamine and L-ristosamine glycosides via photoinduced aziridination. Conversion to thioglycosides for use in glycosylation reactions[END_REF]. As shown in Scheme 7, upon exposure to UV light (254 nm), chiral acyl azide 15 was converted to the corresponding aziridine 16 in 79% yield and as the only detected stereoisomer through the generation of a presumed acylnitrene intermediate. This aziridine was further converted into the expected L-daunosamine glycoside derivative. Similarly, the irradiation of a 2:1 α:β anomeric mixture of acyl azide 17 of an L-erythro-hex-2-enopyranoside derivative led to a mixture of the corresponding aziridines 18 in 91% yield (Scheme 7). These products were further separated by chromatography and then converted into important L-ristosamine glycoside derivatives.
Gabriel-Cromwell Reactions
The Gabriel-Cromwell aziridine synthesis involves a nucleophilic addition of a formal nitrene equivalent to a 2-haloacrylate or similar reagent. It involves an initial aza-Michael-type addition of the amine, followed by intramolecular substitution of the halide to close the three-membered ring. For example, this strategy was applied to bromo enone 19 derived from (-)-quinic acid [START_REF] Barros | Aziridines as a protecting and directing group. Stereoselective synthesis of (+)-Bromoxone[END_REF]. The aziridination, performed in the presence of 4-methoxybenzylamine and Cs2CO3 as a base, afforded the corresponding aziridine 20 in good yield (84%) as an 80:20 mixture of diastereomers, as shown in Scheme 8. This reaction constituted the key step of a short synthesis of (+)-bromoxone, the acetate of which showed potent antitumor activity.
In the same context, Dodd et al. have developed a total synthesis of the non-natural enantiomer of polyoxamic acid on the basis of the domino aza-Michael-type addition/elimination reaction of chiral triflate 21 derived from D-ribonolactone [START_REF] Tarrade | Enantiospecific total synthesis of (-)-Polyoxamic acid using 2,3-aziridino--lactone methodology[END_REF]. This triflate reacted with 3,4-dimethoxybenzylamine to provide the corresponding aziridine 22 as the only detected stereoisomer in good yield (Scheme 9). The complete diastereoselectivity of the reaction was explained as a result of an aza-Michael-type addition of 3,4-dimethoxybenzylamine to the face opposite to that of the bulky silyl group at C-5. This aziridine was further converted through six additional steps into the expected (-)-polyoxamic acid with an overall yield of 10%.
Ylide-Mediated Aziridinations
In 2006, Aggarwal et al. developed a total synthesis of the protein kinase C inhibitor (-)-balanol [START_REF] Unthank | The use of vinyl sulfonium salts in the stereocontrolled asymmetric synthesis of epoxide-and aziridinefused heterocycles: Application to the synthesis of (-)-balanol[END_REF]. The key step of this synthesis was the reaction of a diphenyl vinyl sulfonium triflate salt with chiral aminal 23 in the presence of NaH as the base, which led to the corresponding aziridine 24 in moderate yield (68%) and diastereoselectivity (50% de), as shown in Scheme 10. The mechanism of the key step of the synthesis, proceeding through ylide-mediated aziridination, is depicted in Scheme 10.
Intramolecular Substitutions
The asymmetric aziridination based on the use of 1,2-amino alcohols has been applied by several groups to the total synthesis of various biologically active products. A total synthesis of 7-Epi (+)-FR900482, exhibiting antitumor potency equal to that of the natural product (+)-FR900482, was later reported by Trost and O'Boyle, involving the asymmetric aziridination of chiral amino diol 27 (Scheme 12), which was selectively silylated and mesylated [START_REF] Trost | Synthesis of 7-Epi (+)-FR900482: An epimer of comparable anti-cancer activity[END_REF]. The mesylate was then exposed to cesium carbonate, affording the expected aziridine 28 as the only detected stereoisomer in 77% yield, which was further transformed into the expected 7-Epi (+)-FR900482. In 2003, Terashima et al. developed the synthesis of the C1-C17 fragment of the antitumor antibiotic carzinophilin, which involved as a key step the asymmetric aziridination of chiral pyrrolidin-2-ylidenemalonate 29 derived from D-arabinofuranose [START_REF] Hashimoto | Synthetic studies of carzinophilin[END_REF]. As shown in Scheme 13, the treatment of this pyrrolidin-2-ylidenemalonate with KHMDS as base provided the corresponding aziridine 30 as the only detected stereoisomer in 63% yield. The latter was further converted into the expected C1-C17 fragment of carzinophilin. Later, Vedejs et al. reported the synthesis of enantiopure aziridinomitosenes, the key step of which was the asymmetric aziridination of chiral oxazole 1,2-amino alcohol 31 derived from L-serine (Scheme 14) [25]. The reaction yielded the corresponding aziridine 32 as the only detected stereoisomer in 65% yield, which was further converted into aziridinomitosenes, among them the first C6,C7-unsubstituted one, a DNA alkylating agent depicted in Scheme 14. In 2015, Kongkathip et al. reported a novel total synthesis of oseltamivir phosphate (structure of oseltamivir in Scheme 4), which was based on the asymmetric aziridination of chiral 1,2-amino mesylate 33 derived from D-glucose into aziridine 34 as the only detected stereoisomer, as shown in Scheme 15 [START_REF] Kongkathip | A new and efficient asymmetric synthesis of oseltamivir phosphate (Tamiflu) from D-glucose[END_REF]. The complete strategy gave rise to oseltamivir phosphate in 7.2% overall yield.
The last step of an enantioselective synthesis of an aziridinomitosane, reported by Miller et al., was based on the cyclization of chiral tricyclic 1,2-azido alcohol 35 into aziridine 36 as the only detected stereoisomer (Scheme 16) [START_REF] Papaioannou | Enantioselective synthesis of an aziridinomitosane and selective functionalizations of a key intermediate[END_REF]. This process was achieved in two steps with resin-bound PPh 3 , affording the expected enantiopure aziridinomitosane with the trans configuration in moderate yield (Scheme 16).
In the same area, Metzger and Fürmeier reported the first preparation of chiral fat-derived aziridines, with the aim of gaining insight into their biological properties [START_REF] Fürmeier | Fat-Derived aziridines and their -substituted derivatives: Biologically active compounds based on renewable raw materials[END_REF]. The same methodology as described in Scheme 16, based on the use of resin-bound PPh 3 , was applied to the enantiopure azido alcohol 37 derived from chiral methyl vernolate (Scheme 17). Under these reaction conditions, the corresponding unsaturated cis-aziridine 38 was isolated in 75% yield as the only detected stereoisomer, according to the mechanism summarized in Scheme 17. This represented the first enantiomerically pure aziridine based on fats and oils.
In 2010, the cyclization of other chiral 2-azido alcohols was investigated by Coates et al. in the course of synthesizing aziridine analogues of presqualene diphosphate as inhibitors of squalene synthase [START_REF] Koohang | Enantioselective inhibition of squalene synthase by aziridine analogues of presqualene diphosphate[END_REF]. As shown in Scheme 18, the resulting aziridine was subsequently converted into a diphosphate exhibiting squalene synthase inhibitory activity.
Miscellaneous Aziridinations
In 2012, Ishikawa et al. reported the synthesis of (-)-benzolactam-V8, an artificially designed cyclic dipeptide exhibiting strong tumor-promoter activity [START_REF] Khantikaew | Synthesis of (-)-benzolactam-V8 by application of asymmetric aziridination[END_REF]. The key step of the synthesis consisted of the reaction of chiral guanidinium bromide 41 with benzyl (S)-N-(2-formylphenyl)-N-methylvalinate 42 to give the corresponding syn-aziridine 43 in 59% yield and a moderate diastereoselectivity of 36% de (Scheme 19). The latter was further converted into the expected (-)-benzolactam-V8 in five additional steps.
The total synthesis of (-)-oseltamivir, elaborated by Fukuyama et al. in 2007, included the formation of a bicyclic aziridine 44 by rearrangement of chiral allyl carbamate 45 [31]. Thus, treatment of this carbamate with NaOEt resulted in ethanolysis of the N-Boc lactam, dehydrobromination, and aziridine formation, which provided the desired aziridine 44 as the only detected stereoisomer in high yield (87%), as shown in Scheme 20. This aziridine was further converted into the final (-)-oseltamivir (structure in Scheme 4) in four steps.
In 2005, a photocyclization reaction providing chiral aziridines was developed by Mariano et al., starting from chiral pyridinium perchlorate 46 derived from D-glucose [START_REF] Feng | Pyridinium Salt photochemistry in a concise route for synthesis of the trehazolin aminocyclitol, trehazolamine[END_REF]. Irradiation of this substrate in aqueous NaHCO3 produced a mixture of isomeric N-glycosyl bicyclic aziridines, which could be partially separated by chromatography to yield the major aziridine 47 (Scheme 21) as the only detected stereoisomer in 15% yield. This aziridine was subsequently converted into trehazolamine, which is the aminocyclitol core of the potent trehalase inhibitor trehazolin.
Enantioselective Aziridinations as Key Steps
Copper-Catalyzed Nitrene Transfer to Alkenes
In addition to chiral dirhodium catalysts [START_REF] Fruit | Diastereoselective rhodium-catalyzed nitrene transfer starting from chiral sulfonimidamide-derived iminoiodanes[END_REF], the most commonly employed chiral catalyst systems in enantioselective aziridination via nitrene transfer to alkenes are based on copper complexes of chiral bisoxazolines, first reported by Evans et al. in 1991 [34]. In 2007, Cranfill and Lipton reported the use of Evans' bisoxazoline ligand 48 for the asymmetric aziridination of α,β-unsaturated ester 49 in the presence of Cu(OTf)2 and PhINNs (N-(p-nitrophenylsulfonyl)iminophenyliodinane) as the nitrene source [START_REF] Cranfill | Enantio-and diastereoselective synthesis of (R,R)--Methoxytyrosine[END_REF]. This process allowed the corresponding chiral trans-aziridine 50 to be obtained in 89% yield, with 90% de and 94% ee (Scheme 22). This reaction constituted the key step of a total synthesis of (R,R)-β-methoxytyrosine, which is a constituent of several cyclic depsipeptide natural products.
Organocatalyzed Nitrene Transfer to Alkenes
In 2014, Hamada et al. developed a total synthesis of (R)-sumanirole, which exhibits selective dopamine D2 receptor agonist activity [START_REF] Nemoto | Enantioselective synthesis of (R)-Sumanirole using organocatalytic asymmetric aziridination of an , -unsaturated aldehyde[END_REF]. The key step of the synthesis was the aziridination of α,β-unsaturated aldehyde 51 organocatalyzed by chiral diphenylprolinol triethylsilyl ether 52 in the presence of three equivalents of a base, such as NaOAc, allowing the key intermediate aziridine 53 to be obtained in 94% yield and 97% ee (Scheme 23) [START_REF] Arai | Enantioselective aziridination reaction of , -unsaturated aldehydes using an organocatalyst and tert-butyl N-arenesulfonyloxycarbamates[END_REF]. The latter was subsequently converted into the desired (R)-sumanirole.
Carbene Transfer to Imines Through Carbene Methodology
Although most of the catalytic methods for synthesizing chiral aziridines proceed through the transfer of a nitrogen group to an alkene, methods based on the less-studied enantioselective transfer of a carbenoid to an imine have been successfully developed in recent years [38]. For example, the formation of aziridines based on transition metal- or Lewis acid-catalyzed decomposition of diazo compounds in the presence of imines is well established. The synthetic utility of this approach was illustrated by Wulff et al. in the total synthesis of a leukointegrin LFA-1 antagonist, BIRT-377, an agent for the treatment of inflammatory and immune disorders [START_REF] Patwardhan | Highly diastereoselective alkylation of aziridine-2-carboxylate esters: Enantioselective synthesis of LFA-1 antagonist BIRT-377[END_REF]. As shown in Scheme 24, the key step of this synthesis provided chiral cis-aziridine 54, arising from the reaction of ethyl diazoacetate with N-benzhydryl imine 55 in the presence of a combination of B(OPh)3 as Lewis acid and (S)-VAPOL as chiral ligand.
In 2011, the highly efficient asymmetric Wulff aziridination methodology was applied by Chen et al. to develop a total synthesis of the antibacterial florfenicol shown in Scheme 25 [START_REF] Wang | An efficient enantioselective synthesis of florfenicol via asymmetric aziridination[END_REF]. In this case, the authors employed an (R)-VANOL-boroxinate to aziridinate benzhydryl aldimine 56 with ethyl diazoacetate, providing the key chiral aziridine intermediate 57 in 93% yield as a single cis-diastereomer in 85% ee. The latter was further converted into the expected florfenicol in 45% overall yield from commercially available p-(methylsulfonyl)benzaldehyde. In a related synthesis of sphinganines, the reaction of the imine derived from hexadecanal and MEDAM amine with ethyl diazoacetate, using either antipode of the chiral catalyst, led to the almost enantiopure aziridine-2-carboxylates 60 or ent-60, respectively. Access to all four stereoisomers of sphinganine was achieved upon ring-opening of the enantiopure aziridine-2-carboxylate at the C-3 position, through a direct SN2 attack of an oxygen nucleophile, which occurred with inversion of configuration, and by ring expansion of an N-acyl aziridine to an oxazolidinone followed by hydrolysis.
Carbene Transfer to Imines Through Sulfur Ylide-Mediated Aziridinations
Aggarwal has developed an asymmetric aziridination methodology based on the generation of a carbene from diazo decomposition with [Rh2(OAc)4], its association with a chiral sulfide, and subsequent transfer to an imine [START_REF] Aggarwal | Catalytic, asymmetric sulfur ylide-mediated epoxidation of carbonyl compounds: Scope, selectivity, and applications in synthesis[END_REF]. This approach was applied in 2003 to build the taxol side chain with a high degree of enantioselectivity via a trans-aziridine [START_REF] Aggarwal | Asymmetric sulfur ylide mediated aziridination: Application in the synthesis of the side chain of taxol[END_REF]. As shown in Scheme 27, the reaction of N-Ses imine 61 with tosylhydrazone salt 62 derived from benzaldehyde in the presence of a phase-transfer catalyst (PTC), Rh2(OAc)4, and 20 mol% of chiral sulfide 63 provided the corresponding aziridine 64 in 52% yield with an 89:11 trans/cis diastereomeric ratio. The expected trans-aziridine was obtained with an enantiomeric excess of 98% ee and was further converted into the desired final taxol side chain. A catalytic cycle was proposed, involving the decomposition of the diazo compound in the presence of the rhodium complex to yield the metallocarbene. The latter was then transferred to the chiral sulfide, forming a sulfur ylide, which underwent a reaction with the imine to give the expected aziridine, returning the sulfide to the cycle to make it available for further catalysis (Scheme 27).
Chiral Azirines
2H-Azirines constitute the smallest unsaturated nitrogen heterocyclic system, bearing two carbon atoms and one double bond in a three-membered ring. Their stability can be attributed not only to the combined effects of bond shortening and angle compression, but also to the presence of the electron-rich nitrogen atom [44]. The biological applications and the chemistry of these molecules have been widely investigated [8h, 45]. The chemistry of 2H-azirines is related to their ring strain, reactive π-bond, and ability to undergo regioselective ring cleavage. For example, they can act as nucleophiles as well as electrophiles in organic reactions, but they can also react as dienophiles and dipolarophiles in cycloaddition reactions. Therefore, they constitute useful precursors for the synthesis of a variety of nitrogen-containing heterocyclic systems. In particular, 2H-azirines containing a carboxylic ester group are constituents of naturally occurring antibiotics. Several synthetic approaches are available to reach 2H-azirines, such as the Neber rearrangement of oxime sulfonates and elimination reactions of N-substituted aziridines, such as N-sulfinylaziridines or N-chloroaziridines. Several asymmetric versions of these methodologies have recently been applied in total synthesis.
Asymmetric Azirination Through Neber Approaches
In 2000, an asymmetric Neber reaction was developed by Palacios et al. for the preparation of constituents of naturally occurring antibiotics, such as alkyl- and aryl-substituted 2H-azirines bearing a phosphonate group in the 2-position of the ring [START_REF] Palacios | Simple asymmetric synthesis of 2H-Azirines derived from phosphine oxides[END_REF]. As shown in Scheme 28, the Neber reaction of phosphorylated tosyloximes 65 provided the corresponding 2H-azirines 66 in excellent yields along with low to good enantioselectivities of up to 82% ee when employing quinidine as organocatalyst. The scope of this methodology was extended to the synthesis of enantioenriched 2H-azirines derived from phosphonates, albeit with lower enantioselectivities ranging from 20 to 52% ee along with high yields (85-95%) [START_REF] Palacios | Easy and efficient synthesis of enantiomerically enriched 2H-azirines derived from phosphonates[END_REF]. Later, better enantioselectivities of up to 72% ee were reported by the same authors in the synthesis of chiral 2H-azirine phosphonates 67 performed with the same catalyst in the presence of K2CO3 (Scheme 28) [START_REF] Palacios | Asymmetric synthesis of 2H-aziridine phosphonates, and -or -aminophosphonates from enantiomerically enriched 2Hazirines[END_REF]. These three-membered heterocycles constitute important building blocks in the preparation of biologically active compounds of interest in medicinal chemistry, including naturally occurring antibiotics.
Later, Molinski et al. reported a total synthesis of the marine natural antifungal product (-)-Z-dysidazirine, the key step of which was an enantioselective Neber reaction catalyzed by the same organocatalyst, quinidine (Scheme 29) [START_REF] Skepper | Synthesis and antifungal activity of (-)-(Z)-Dysidazirine[END_REF]. The Neber reaction of tosyloxime 69 derived from pentadecyne led to the corresponding key chiral azirine 70 in good yield and moderate enantioselectivity (59% ee). This product was further converted into the expected (-)-Z-dysidazirine through partial hydrogenation using Lindlar's catalyst.
In 2010, the same conditions were applied by these authors to the synthesis of shorter chain analogues 71 and 72 of (-)-Z-dysidazirine, to be evaluated as antifungal agents [START_REF] Skepper | Synthesis and chain-dependent antifungal activity of long-chain 2H-azirine-carboxylate esters related to dysidazirine[END_REF]. As depicted in Scheme 30, these products were obtained in 61 and 86% ee, respectively. Their antifungal activity was found to be comparable to that of (-)-Z-dysidazirine.
The first enantioselective Neber reaction of β-ketoxime sulfonates 73 catalyzed by bifunctional thiourea 74 was reported by Takemoto et al. in 2011 [51]. The reaction was performed with only 5 mol% catalyst loading and 10 equivalents of Na2CO3 to provide the corresponding 2H-azirine carboxylic esters 75 in good yields and moderate to high enantioselectivities of up to 93% ee (Scheme 31). The utility of this novel methodology was demonstrated by an asymmetric synthesis of (-)-Z-dysidazirine.
CHIRAL EPOXIDES AND THIIRANES IN TOTAL SYN-THESIS
Chiral Epoxides
Epoxides are strained three-membered rings of wide importance as versatile synthetic intermediates in the total synthesis of a number of important products. Their strain energy allows easy ring-opening by nucleophiles, providing a range of sulfides, hydrazino alcohols, 1,2-halohydrins, 1,2-cyanohydrins, alcohols, and pharmaceuticals [START_REF] Nemoto | Catalytic asymmetric epoxidation of , -unsaturated amides: Efficient synthesis of -aryl -hydroxy amides using a one-pot tandem catalytic asymmetric epoxidation-Pd-catalyzed epoxide opening process[END_REF][START_REF]Aziridines and Epoxides in Organic Synthesis[END_REF]. Given the considerable number of asymmetric methodologies available to synthesize chiral epoxides, chemists have judiciously applied combinations of asymmetric epoxidation/ring-opening reactions to achieve various total syntheses of natural and biologically active compounds [START_REF] Heravi | Applications of sharpless asymmetric epoxidation in total synthesis[END_REF].
For example, β-adrenergic blocking agents, used for the treatment of hypertension and angina pectoris, can be easily obtained from ring-opening of chiral terminal epoxides, including (S)-propranolol [START_REF] Sonawane | Concise synthesis of twoadrenergic blocking agents in high stereoselectivity using the readily available chiral building block (2S,2'S,2"S)-tris-(2,3-epoxypropyl)-isocyanurate[END_REF] and (R)-dichloroisoproterenol [START_REF] Wei | Asymmetric synthesis of -adrenergic blockers through multistep one-pot transformations involving in situ chiral organocatalyst formation[END_REF]. Carboxypeptidase A, a zinc-containing proteolytic enzyme of physiological importance, has been found to be irreversibly inactivated, with the highest activity among all the possible stereoisomers, by the terminal epoxide (2R,3S)-2-benzyl-3,4-epoxybutanoic acid via SN2-type ring-cleavage [56].
Moreover, [3+2] asymmetric cycloaddition reactions of epoxides have also recently emerged as valuable tools to obtain chiral dihydro- or tetrahydrofuran derivatives, important core structures present in natural products [START_REF] Chen | An asymmetric [3+2] cycloaddition of alkynes with oxiranes by selective C-C bond cleavage of epoxides: Highly efficient synthesis of chiral furan derivatives[END_REF]. Besides the relevance of chiral epoxides in synthesis, a great number of natural products and bioactive compounds exhibit the epoxide subunit in their structure, as exemplified by the gypsy moth sex pheromone (+)-disparlure [58], the antibiotic agent monocillin I [START_REF] Ayer | The isolation, identification, and bioassay of the antifungal metabolites produced by Monocillium nordinii[END_REF], the potent oral hypoglycemic and antiketogenic agent in mammals (R)-methyl palmoxirate [START_REF] Ruano | A new general method to obtain chiral 2-alkylglycidic acid derivatives: Synthesis of methyl (R)-(+)palmoxirate[END_REF], and the anticancer agents ovalicin, fumagillin and epothilones A and B [61]. The epoxidation of alkenes is undoubtedly the most investigated and convenient approach to obtain epoxides [START_REF]Modern Oxidation Methods[END_REF]. In the last decade, total syntheses of many important natural and biologically relevant products have been based on asymmetric metal- or organocatalyzed epoxidations of alkenes, kinetic resolutions of racemic epoxides, asymmetric sulfur ylide-mediated epoxidations of carbonyl compounds, and asymmetric Darzens reactions as key steps.
Asymmetric Metal-Catalyzed Epoxidations as Key Steps
In the last decade, many studies have focused on the development of asymmetric metal-catalyzed procedures for the epoxidation of alkenes [START_REF] For Reviews ; Jacobsen | Comprehensive Asymmetric Catalysis I-III[END_REF]. As a recent example, Echavarren et al. have developed a short total synthesis of three natural aromadendrane sesquiterpenes, namely (-)-epiglobulol and the two diastereomeric (-)-4,7-aromadendranediols, the first step of which was the Katsuki-Sharpless asymmetric epoxidation of (E,E)-farnesol 76 (Scheme 32) [START_REF] Carreras | Gold(I) as an artificial cyclase: Short stereodivergent syntheses of (-)-epiglobulol and (-)-4 ,7 -and (-)-4 ,7 -aromadendranediols[END_REF]. These products, widespread in plant species, are endowed with a variety of antiviral, antibacterial, and antifungal activities [65]. The epoxidation of 76 was performed in the presence of 5 mol% of Ti(Oi-Pr)4/L-DIPT as catalyst system, leading to the corresponding chiral epoxide 77 in 88% yield and 82% ee. The latter was further converted into the expected (-)-epiglobulol and the two diastereomeric (-)-4,7-aromadendranediols.
Very recently, this methodology was also employed by Muthukrishnan et al. as the key step in a simple synthesis of (R)-2-benzylmorpholine, an appetite suppressant agent [START_REF] Viswanadh | An alternate synthesis of appetite suppressant (R)-2-benzylmorpholine employing Sharpless asymmetric epoxidation strategy[END_REF]. As shown in Scheme 33, the Katsuki-Sharpless asymmetric epoxidation of (E)-cinnamyl alcohol 78 led to the corresponding chiral epoxy alcohol 79 in high yield (86%) and excellent enantioselectivity under standard conditions. The latter was further converted into the expected (R)-2-benzylmorpholine in 24% overall yield (Scheme 33). The Katsuki-Sharpless asymmetric epoxidation was also applied by Voight et al. to the total synthesis of the potent antibiotic GSK966587 (Scheme 34) [START_REF] Voight | Target-directed synthesis of antibacterial drug candidate GSK966587[END_REF]. The epoxidation of allylic alcohol 80 smoothly proceeded with 10 mol% of the same metal complex in the presence of cumyl hydroperoxide (CHP) to give the key chiral epoxy alcohol 81 in 81% yield and 90% ee. The synthesis afforded the target compound in eight steps and 25% overall yield.
Zirconium catalysts have been of limited use in the area of asymmetric epoxidation, with a particular focus on homoallylic alcohols. As a rare example, an asymmetric Zr(Ot-Bu)4/bis-hydroxamic acid-catalyzed epoxidation of a homoallylic alcohol was very recently applied as the key step in the synthesis of the tricyclic polar segment of fusarisetin A [START_REF] Kohyama | An enantiocontrolled entry to the tricyclic polar segment of (+)-fusarisetin A[END_REF], which is a fungal metabolite with anticancer activity, owing to its potent inhibition of acinar morphogenesis, cell migration, and cell invasion in MDA-MB-231 cells [START_REF] Jang | an acinar morphogenesis inhibitor from a soil fungus, Fusarium sp. FN080326[END_REF]. The synthesis began with the epoxidation of (Z)-pent-3-en-1-ol 82, which yielded the corresponding key chiral epoxy alcohol 83 in 88% yield with 84% ee (Scheme 35).
In 1990, a pivotal discovery was illustrated by the groups of Jacobsen [START_REF] Zhang | Enantioselective epoxidation of unfunctionalized olefins catalyzed by (Salen)manganese complexes[END_REF] and Katsuki [START_REF] Irie | Catalytic asymmetric epoxidation of unfunctionalized olefins[END_REF], who independently reported the asymmetric epoxidation of a variety of unfunctionalized alkenes with optically pure Mn(III)/salen complexes using readily available PhIO, bleach, H 2 O 2 , oxone as the terminal oxidants, providing good levels of enantioselectivity [START_REF] Krishnan | Recent advances and perspectives in the manganese-catalysed epoxidation reactions[END_REF]. Soon after their discovery, enantioselective epoxidations catalyzed by Jacobsen-Katsuki complexes were employed as key steps in the synthesis of biologically active products, such as anti-HIV compound indinavir (crixivan ® ) [START_REF] Federsel | In: Stereoselective Synthesis of Drugs -An Industrial Perspective[END_REF], and Iks-channel blockers (Scheme 37) [START_REF] Gerlach | Synthesis and activity of novel and selective IKs-channel blockers[END_REF]. In more than a decade, Fuchs et al. applied and optimized this methodology for the asymmetric epoxidation of cyclic dienyl sulfones and triflates to the corresponding chiral monoepoxides [START_REF] Chen | Synthesis of termini-differentiated 6-carbon stereotetrads: An alkylative oxidation strategy for preparation of the C21-C26 segment of apoptolidin[END_REF]. These compounds proved to be very useful intermediates to access a variety of natural products such as (+)-pretazettine alkaloid core led to the key intermediate epoxide 92 in 70% yield and high enantioselectivity of > 90% ee by using catalyst 93 and H 2 O 2 as oxidant. Epoxide 92 was further converted into (+)-pretazettine.
In 2013, Gao et al. reported the total synthesis of potassium channel activator levcromakalim the key step of which was the enantioselective Mn-catalyzed epoxidation of alkene 94 into functionalized epoxide 95 performed in the presence of porphyrininspired ligand 96 bearing chiral oxazoline groups as ligand with 92% yield and 94% ee (Scheme 39) [78].
Up to date, a few examples have been reported on stereoselective epoxidations mediated by chiral zinc complexes. In 2014, Martin et al. reported the epoxidation of trans-, -unsaturated ketone 97 mediated by overstoichiometric amounts of a complex generated from Et 2 Zn and (1R,2R)-N-methylpseudoephedrine 98 in the presence of molecular oxygen as a benign oxidant. This reaction constituted the final step in the total synthesis of citrinadrin A, an alkaloid isolated from marine-derived fungus Penicillium citrinum (Scheme 40) [START_REF] Bian | Enantioselective total syntheses of citrinadins A and B. Stereochemical revision of their assigned structures[END_REF]. Indeed, the asymmetric epoxidation of enone 97 under these conditions led to citrinadrin A in 81% yield and moderate diastereoselectivity (66% de).
In 2005, Shibasaki et al. developed a catalytic system generated in the presence of molecular sieves from Y(Oi-Pr) 3 , (Ph) 3 As=O as additive, and ligand 99 in 1:1:1 ratio, suitable for the asymmetric epoxidation of -aromatic and aliphatic , -unsaturated esters 100 into epoxides 101 with up to 99% ee (Scheme 41) [START_REF] Kakei | Catalytic asymmetric epoxidation of , -unsaturated esters using an yttrium-biphenyldiol complex[END_REF] , -unsaturated amines [START_REF] Nemoto | Catalytic asymmetric epoxidation of , -unsaturated amides: Efficient synthesis of -aryl -hydroxy amides using a one-pot tandem catalytic asymmetric epoxidation-Pd-catalyzed epoxide opening process[END_REF] and imidazolides [START_REF] Nemoto | Catalytic asymmetric synthesis of , -epoxy esters, aldehydes, amides, and , -epoxy -keto esters: Unique reactivity of , -unsaturated carboxylic acid imidazolides[END_REF] have been successfully employed in key epoxidation steps of total syntheses of various biologically active products, such as anticancer agent (+)decursin [START_REF] Nemoto | Enantioselective total syntheses of novel PKC activator (+)-decursin and its derivatives using catalytic asymmetric epoxidation of an enone[END_REF], antifeedant (-)-marmesin [START_REF] Hara | Catalytic asymmetric epoxidation of , -unsaturated phosphane oxides with a Y(O-Pr)3/biphenyldiol complex[END_REF], antifungal agent (+)strictifolione [START_REF] Tosaki | Catalytic asymmetric synthesis of both syn-and anti-3,5-dihydroxy esters: Application to 1,3polyol/ -pyrone natural product synthesis[END_REF], and antidepressant drug fluoxetin [START_REF] Kakei | Efficient synthesis of chiral -and -hydroxy amides: Application to the synthesis of ( )fluoxetine[END_REF] (Scheme 41).
Asymmetric Organocatalyzed Epoxidations as Key Steps
Asymmetric organocatalysis has increasingly become one of the most useful and practical tools to perform synthetic transformations [START_REF]Asymmetric organocatalysis in continuous flow: Opportunities for impacting industrial catalysis[END_REF]. In particular, the organocatalyzed asymmetric epoxidation of olefins is an important part of this field, and a number of these processes have been successfully applied to the total synthesis of various important molecules in the last 15 years. Actually, this topic was reviewed in 2014 by Shi et al. [START_REF] Zhu | Organocatalytic asymmetric epoxidation and aziridination of olefins and their synthetic applications[END_REF]; consequently, only some representative examples will be detailed in this Section. The readers are invited to consult this comprehensive review for complete coverage of the field. Asymmetric phase transfer catalysis (PTC) includes valuable methodologies for building carbon-carbon and carbon-heteroatom bonds [START_REF] Jew | Cinchona-based phase-transfer catalysts for asymmetric synthesis[END_REF]. The mildness and often environmentally friendly reaction conditions of asymmetric PTC reactions make them well suited to large-scale and industrial applications [START_REF] Albanese | Sustainable oxidations under phasetransfer catalysis conditions[END_REF]. In 2006, an interesting application of the cinchona alkaloid-derived, PTC-catalyzed diastereo- and enantioselective epoxidation to access a potent naturally occurring cysteine protease inhibitor, the epoxysuccinyl peptide E-64c, was reported by Lygo et al. [START_REF] Lygo | Stereoselective synthesis of the epoxysuccinyl peptide E-64c[END_REF]. The key epoxidation of α,β-unsaturated amide 102 into the corresponding functionalized epoxide 103 occurred with 70% yield and a moderate diastereoselectivity of 66% de when catalyzed by cinchona alkaloid salt 104, as shown in Scheme 42.
The most efficient class of oxidants to perform the epoxidation of alkenes is that of dioxiranes [START_REF] Curci | A novel approach to the efficient oxygenation of hydrocarbons under mild conditions. Superior oxo transfer selectivity using dioxiranes[END_REF]. In contrast to similar oxidizing agents, such as percarboxylic acids, the development of chiral dioxiranes for asymmetric epoxidations has met with great success over the last decades [START_REF] Padwa | Intermolecular 1,3-Dipolar Cycloadditions[END_REF][START_REF] Wong | Oxidation: Organocatalyzed Asymmetric Epoxidation of Alkenes[END_REF]. In the late 1990s, Shi and co-workers introduced a variety of pseudo-C2-symmetric six-membered carbocyclic ketones derived from quinic acid as stoichiometric precursors of in situ generated chiral dioxiranes, to be used in the asymmetric epoxidation of trans-disubstituted, trisubstituted, terminal, and electron-poor alkenes. Oxone was used in DME under basic buffered conditions at 0 °C in the presence of only 5-10 mol% of the chiral ketone [94]. More recently, a comparable methodology was applied by Chandrasekhar and Kumar to the total synthesis of pladienolide B, a natural anticancer macrolide [START_REF] Kumar | Enantioselective synthesis of pladienolide B and truncated analogues as new anticancer agents[END_REF]. The key step of the synthesis was the epoxidation of homoallylic alcohol 105, performed in the presence of superstoichiometric amounts of ketone 106 and Oxone, to give epoxide 107 in 64% yield and 95% de (Scheme 43).
Biomimetic synthesis based on intramolecular cascade ringopening reaction of chiral epoxides has been also reported by McDonald et al. (Scheme 45) [START_REF] Tong | Total syntheses of durgamone, nakorone, and abudinol B via biomimetic oxa-and carbacyclizations[END_REF]. With the aim of developing a synthetic route to abudinols A and B, oxygenated triterpenoid marine natural products, triene 111 was selectively epoxidized at two of its three carbon-carbon double bonds to give diepoxide 112 in good yield (76%) and high diastereoselectivity of > 90% de when using ketone 106 as the organocatalyst. The allylic electronwithdrawing group proximal to the other carbon-carbon double bond prevented its epoxidation. ent-Nakorone, an oxidative degradation product of abudinols, was then obtained via TMSOTfpromoted cyclization with the propargylsilane nucleophile, followed by ozonolysis.
Conceptually similar chiral iminium/oxaziridinium salt systems have been introduced as epoxidizing agents since the late 1980s [START_REF] Page | Oxaziridinium Salt-Mediated Catalytic Asymmetric Epoxidation[END_REF]. Tetraphenylphosphonium monoperoxysulfate (TPPP) [START_REF] Page | Organocatalysis of asymmetric epoxidation mediated by iminium salts under nonaqueous conditions[END_REF] was employed as the oxidant in combination with organocatalyst 113 in the asymmetric epoxidation of some Z-alkenes to the corresponding epoxides, which were isolated in good yields and good to high enantioselectivities of up to 97% ee (Scheme 46) [START_REF] Page | Asymmetric epoxidation of cis-alkenes mediated by iminium salts: Highly enantioselective synthesis of Levcromakalim[END_REF]. In particular, the epoxidation of a benzopyran substrate into tricyclic epoxide 114 occurred with the highest efficiency and was applied to the synthesis of the antihypertensive agent (-)-cromakalim with high stereoselectivity, via ring-opening of the epoxide unit with pyrrolidone, as shown in Scheme 46. In addition, organocatalyst 113 was used in the key epoxidation step of a concise asymmetric synthesis of the natural coumarin derivatives (-)-lomatin and (+)-trans-khellactone [START_REF] Page | Highly enantioselective total synthesis of (-)-(3 S)lomatin and (+)-(3 S,4 R)-trans-khellactone[END_REF].
Page and co-authors reported the application of another chiral iminium salt-catalyzed epoxidation as the key step in the first highly enantioselective synthesis of (+)-scuteflorin A, one of the numerous pharmacologically active compounds recently isolated from Scutellaria species (Scheme 47) [START_REF] Bartlett | Enantioselective total synthesis of (+)-scuteflorin a using organocatalytic asymmetric epoxidation[END_REF]. A previously reported chiral iminium salt 115 [START_REF] Page | New organocatalysts for the asymmetric catalytic epoxidation of alkenes mediated by chiral iminium salts[END_REF] served as organocatalyst under nonaqueous conditions to epoxidize the starting Z-alkene 116 into the corresponding key enantiopure epoxide 117 in 97% yield. Epoxide hydrolysis/oxidation and esterification completed the synthesis.
The combination of diaryl prolinol 118 as organocatalyst with an alkyl hydroperoxide, such as TBHP, as oxidant in an asymmetric epoxidation system was investigated by Zhao et al., who developed the first asymmetric epoxidation reactions of a variety of trans-disubstituted electron-poor alkenes, such as 119, into the corresponding chiral epoxides, such as 120, in good yields, with complete diastereocontrol and good enantioselectivities [START_REF] Zheng | Highly efficient asymmetric epoxidation of electron-deficient , -enones and related applications to organic synthesis[END_REF]. As an application, the authors developed a short synthesis of the natural products (-)-(5R,6S)-norbalasubramide and (-)-(5R,6S)-balasubramide, which is depicted in Scheme 48. In 2010, Hayashi et al. reported a short synthesis of (R)-methyl palmoxirate, a potent oral hypoglycemic agent, based on the first successful asymmetric epoxidation of aliphatic α-substituted acroleins, which was catalyzed by diphenylprolinol silyl ether 121 with H2O2 as oxidant [START_REF] Bondzic | Asymmetric epoxidation of -substituted acroleins catalyzed by diphenylprolinol silyl ether[END_REF]. Terminal epoxide 122, bearing a quaternary stereocenter, was obtained as the key intermediate in good yield (78%) and high enantioselectivity (up to 92% ee) starting from alkene 123, as shown in Scheme 49.
Kinetic Resolutions of Racemic Epoxides as Key Steps
Besides reactions mediated by metal complexes or organocatalysts, another important tool in asymmetric synthesis to obtain optically active products is the kinetic resolution of racemic molecules [START_REF] Pellissier | Catalytic non-enzymatic kinetic resolution[END_REF]. In the 1990s, Jacobsen et al. demonstrated the exceptional catalytic performance of Cr(III)-salen complexes in the kinetic resolution of racemic terminal monosubstituted and 2,2-disubstituted epoxides by TMSN3, which occurred with complete regioselectivity for the terminal position and very high stereoselectivity factors of up to 280 [107]. Later, these authors reported enhanced reactivity by using dimeric catalyst 124 in the hydrolytic kinetic resolution of terminal epoxides (Scheme 50) [START_REF] White | New oligomeric catalyst for the hydrolytic kinetic resolution of terminal epoxides under solvent-free conditions[END_REF]. A recent synthetic application of kinetic resolution using primary alcohols was illustrated by the same authors in the total synthesis of (+)-reserpine via intermediate enone 125 [START_REF] Rajapaksa | Enantioselective total synthesis of (+)-Reserpine[END_REF]. The kinetic resolution of a racemic epoxide, performed in acetonitrile with 4.5 mol% of catalyst 128 and benzyl alcohol as the nucleophile, afforded the secondary alcohol 126 in 41% yield and 96% ee, which was further converted into enone 125. This product was then coupled with dihydro-β-carboline 127 to access the antipsychotic and antihypertensive drug (+)-reserpine. Rare examples of hydrolytic kinetic resolution of functionalized racemic epoxides bearing two stereocenters have been reported. The groups of Tae and Sudalai independently investigated the hydrolytic kinetic resolution of racemic syn- and anti-2-hydroxy-1-oxiranes in order to obtain both enantiomers of the syn- and anti-epoxides (Scheme 51) [110]. Racemic syn-O-protected terminal epoxide 128 underwent hydrolytic kinetic resolution with chiral cobalt catalyst 129 to give enantioenriched syn-epoxide 128 along with 1,2-diol 130 in excellent conversions and enantioselectivities of up to 98% ee. Epoxide 128 was employed as the starting reagent for the total synthesis of the opposite enantiomer of the natural product (5S,7R)-kurzilactone. Optically active trans-epoxide 131 was produced under comparable reaction conditions, albeit using catalyst ent-129, and was further employed in a concise total synthesis of the cytokine modulator (+)-epi-cytoxazone (Scheme 51).
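For reference, the stereoselectivity factors quoted for such kinetic resolutions are usually not measured directly but computed from the conversion c and the enantiomeric excess of the recovered substrate, ee_SM, via Kagan's equation (a standard relation, not specific to this work):

\[ s = \frac{k_{\mathrm{fast}}}{k_{\mathrm{slow}}} = \frac{\ln\!\big[(1-c)\,(1-\mathrm{ee_{SM}})\big]}{\ln\!\big[(1-c)\,(1+\mathrm{ee_{SM}})\big]} \]

with c and ee_SM expressed as fractions; a factor of 280, as above, indicates almost complete discrimination between the two enantiomers.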
The hydrolytic kinetic resolution methodology performed in the presence of cobalt catalyst ent-129 also constituted the key step of a total synthesis of the natural product (+)-eldanolide, a long-range sex attractant, reported by Sudalai et al. in 2013 [START_REF] Devalankar | Optically pure -butyrolactones and epoxy esters two stereocentered HKR of 3-substituted epoxy esters: A formal synthesis of (-)-paroxetine, Ro 67-8867 and (+)-eldanolide[END_REF]. As shown in Scheme 52, racemic anti-3-methyl epoxy ester 132 led to the corresponding trans-3,4-disubstituted γ-butyrolactone 133 along with unreacted chiral epoxide 132, both isolated in excellent enantioselectivities of 96-97% ee. Epoxide 132 was subsequently converted into the expected natural product (+)-eldanolide. Another advantage of this methodology was the production of chiral 3,4-disubstituted γ-butyrolactones, which constitute important scaffolds endowed with several biological activities.
The hydrolytic kinetic resolution of terminal epoxides was recently applied in the total synthesis of the anti-Parkinson agent safinamide, starting from commercially available benzyl glycidyl ether (Scheme 53) [START_REF] Reddy | A new enantioselective synthesis of the anti-Parkinson agent safinamide[END_REF]. Enantiopure epoxide 134 was obtained in good conversion (46%) via hydrolytic kinetic resolution of the corresponding racemic epoxide in the presence of ent-129 as catalyst, which secured a highly enantioselective approach to safinamide via simple elaboration steps. In addition, the hydrolytic kinetic resolution of racemic epoxides has also been used in the synthesis of other natural and bioactive compounds, such as (S)-timolol [START_REF] Narina | Enantioselective synthesis of (S)-timolol via kinetic resolution of terminal epoxides and dihydroxylation of allylamines[END_REF], decarestrictine D [START_REF] Gupta | An efficient total synthesis of decarestrictine D[END_REF], (+)-isolaurepan [START_REF] Tripathi | A general approach to medium-sized ring ethers via hydrolytic and oxidative kinetic resolutions: Stereoselective syntheses of (-)-cis-lauthisan and (+)-isolaurepan[END_REF], (R)-tuberculostearic acid [START_REF] Dyer | Synthesis and structure of phosphatidylinositol dimannoside[END_REF], (R)-mexiletine [START_REF] Sasikumar | A convenient synthesis of enantiomerically pure (R)-mexiletine using hydrolytic kinetic resolution method[END_REF], neocarazostatin B [START_REF] Czerwonka | First enantioselective total synthesis of neocarazostatin B, determination of its absolute configuration and transformation into carquinostatin A[END_REF], and amprenavir [START_REF] Gadakh | Enantioselective synthesis of HIV protease inhibitor amprenavir via Co-catalyzed HKR of 2-(1-azido-2phenylethyl)oxirane[END_REF].
In 2004, Bartoli, Melchiorre et al. showed that the same cobalt catalyst ent-129 could also catalyze the highly regioselective kinetic resolution of epoxides with carbamates as nucleophiles, in the presence of p-nitrobenzoic acid as an additive, to provide the corresponding enantiopure N-protected 1,2-amino alcohols 135 (Scheme 54) [START_REF] Bartoli | Asymmetric catalytic synthesis of enantiopure N-protected 1,2-amino alcohols[END_REF]. The utility of this novel methodology was demonstrated in a total synthesis of the antihypertensive drug (β-blocker) (S)-propranolol.
Asymmetric Sulfur Ylide-Mediated Epoxidations as Key Steps
An alternative non-oxidative preparation of chiral epoxides relies on the use of chiral sulfur ylides and aldehydes as reactive partners [START_REF] Gnas | Chiral auxiliaries -principles and recent applications[END_REF]. This methodology has been the focus of significant interest to efficiently produce chiral epoxides in the last decade [87a, 122]. For example, Aggarwal et al. developed a total synthesis of SK&F 104353, a leukotriene D4 inhibitor, that was based on the reaction of diastereoisomerically pure O-protected sulfonium salt 136 with the hindered aromatic aldehyde 137, performed in the presence of KOH at low temperature, to afford the corresponding key chiral trans-epoxide 138 in 77% yield and high enantioselectivity (90% ee) [123]. The latter was subsequently converted into the expected SK&F 104353 (Scheme 55). The same group also developed asymmetric sulfur ylide epoxidations of readily available hemiaminals [START_REF] Kokotos | Hemiaminals as substrates for sulfur ylides: Direct asymmetric syntheses of functionalized pyrrolidines and piperidines[END_REF]. As shown in Scheme 56, the reaction of cyclic hemiaminal 139 with chiral sulfonium salt 140 at room temperature in dichloromethane in the presence of a base, such as phosphazene P2 (N,N,N',N'-tetramethyl-N'-(tris(dimethylamino)phosphoranylidene)phosphoric triamide ethylimine), led to the corresponding chiral epoxide 141 in 54% yield. The latter was then successively treated with TMSOTf and Sc(OTf)3 to afford the corresponding chiral piperidine 142 as the only detected diastereomer in 93% ee and 90% yield. The synthetic utility of this novel methodology was demonstrated by its application to the synthesis of the potent neurokinin-1 (NK-1) receptor antagonist CP-122,721, as depicted in Scheme 56.
Most of the recent efforts in asymmetric ylide-mediated epoxidation have focused on the synthesis of readily accessible optically pure sulfides able to improve and extend the scope of the epoxidation, in order to apply this methodology in total synthesis [START_REF] Aggarwal | Catalytic, asymmetric sulfur ylide-mediated epoxidation of carbonyl compounds: Scope, selectivity, and applications in synthesis[END_REF][START_REF] Aggarwal | The complexity of catalysis : Origins of enantio-and diastereocontrol in sulfur ylide mediated epoxidation reactions[END_REF]. An innovative scaffold for the starting sulfide has been reported by Sarabia et al., based on the use of the easily available amino acids L- and D-methionine (Scheme 57) [START_REF] Sarabia | A highly efficient methodology of asymmetric epoxidation based on a novel chiral sulfur ylide[END_REF]. For example, sulfonium salt 143 was readily prepared in four steps and 70% overall yield from L-methionine. The reactions of 143 with various aldehydes were performed under basic conditions to give exclusively the corresponding chiral trans-epoxides 144 in almost diastereoisomerically pure form. Employment of enantiopure aldehydes as reagents enabled the preparation of complex epoxy amides, such as 145, with excellent control of the diastereoselectivity. This compound constituted a useful intermediate for the synthesis of macrolide-type natural products. The same group recently reported an in-depth study on the ring size of the bicyclic core of sulfonium salts of type 143, demonstrating that it was possible to increase the scope of simple and chiral aldehydes employable in the epoxidation, for instance heteroaromatic, vinylic, and hemiacetal aldehydes [START_REF] Sarabia | A highly stereoselective synthesis of glycidic amides based on a new class of chiral sulfonium salts: Applications in asymmetric synthesis[END_REF]. The versatility of this methodology was demonstrated in the total syntheses of bengamide analogues [START_REF] Sarabia | Epi-, epoxy-, and C2-modified Bengamides: Synthesis and biological evaluation[END_REF], a family of marine natural products isolated from sponges exhibiting prominent antitumor, antihelmintic, and antibiotic activities, as well as of the natural product (-)-depudecin [START_REF] García-Ruiz | Stereoselective total synthesis of (-)-Depudecin[END_REF], an antiangiogenic microbial polyketide (Scheme 57). Further applications include the synthesis of the cyclodepsipeptides globomycin and SF-1902 A5 [START_REF] Sarabia | Solid phase synthesis of globomycin and SF-1902 A5[END_REF], and of sphingoid-type bases [START_REF] Sarabia | Exploring the reactivity of chiral glycidic amides for their applications in synthesis of bioactive compounds[END_REF].
Catalytic Asymmetric Darzens Reactions as Key Steps
An important transformation to obtain epoxides bearing an electron-withdrawing group is the Darzens reaction [132]. Since the first asymmetric metal-catalyzed Darzens reaction, reported by North et al. in 2007 [START_REF] Achard | Enantio-and diastereoselective Darzens condensations[END_REF], successful examples of this type of methodology have been developed and applied in total synthesis. For instance, Gong et al. described an enantioselective titanium-catalyzed Darzens reaction as the key step in the total synthesis of the protease inhibitor (-)-bestatin [START_REF] Liu | An asymmetric catalytic Darzens reaction between diazoacetamides and aldehydes generates -glycidic amides with high enantiomeric purity[END_REF]. The reaction was catalyzed by a complex generated in situ from Ti(OiPr)4 and (R)-BINOL, and occurred between diazoacetamide 146 and aldehyde 147 in the presence of molecular sieves (Scheme 58). Complete control of the diastereoselectivity was observed, providing the corresponding cis-glycidic amide 148 in good yield (88%) and high enantioselectivity of 92% ee. This epoxide constituted a useful intermediate in the synthesis of (-)-bestatin.
Chiral Thiiranes
Sulfur-containing compounds are widespread among natural products and biologically active substances [135]. Consequently, great efforts have been devoted to developing stereocontrolled C-S bond-forming procedures [136]. In particular, thiiranes are suitable precursors of numerous products, including biologically active compounds. Several methods for thiirane preparation have been reported [137], among which the most convenient one consists in the conversion of oxiranes into the corresponding thiiranes by an oxygen-sulfur exchange reaction. With this aim, various sulfur reagents have been investigated, such as thiourea 149 [START_REF] Ketcham | Cis-and trans-Stilbene Sulfides[END_REF]. In 2005, an asymmetric version of this methodology was applied by Mobashery et al. to the synthesis of chiral 1,2-(4-phenoxyphenylsulfonylmethyl)thiirane 150, a selective gelatinase inhibitor active against cancer metastasis [START_REF] Lee | Mobashery, S. 1,4-Di(1,2,3-triazol-1-yl)butane as building block for the preparation of the iron(II) Spin-Crossover 2D coordination polymer[END_REF]. The key step of the synthesis consisted in the reaction of the corresponding (S)-epoxide 151 with thiourea 149 to give the expected (R)-thiirane 150 in good yield (84%) and high enantioselectivity (> 90% ee), as shown in Scheme 59.
CHIRAL CYCLOPROPANES IN TOTAL SYNTHESIS
The strained cyclopropane subunit [2a, 140] is present in a range of biologically relevant products, such as terpenes, pheromones, fatty acid metabolites, and unusual amino acids [141], among others. These compounds exhibit a large spectrum of biological properties, including enzyme inhibition and insecticidal, antifungal, herbicidal, antimicrobial, antibiotic, antibacterial, antitumor, and antiviral activities [3,142]. This fact has inspired chemists to find novel approaches for their synthesis [143], and thousands of cyclopropane compounds have already been prepared [144]. In particular, the asymmetric synthesis of cyclopropanes has remained a challenge [2f, 145] ever since members of the pyrethroid class of compounds were demonstrated to be effective insecticides [START_REF] Arlt | Syntheses of pyrethroid acids[END_REF]. In the last decade, many important chiral cyclopropane derivatives have been synthesized according to three principal methodologies: the Simmons-Smith reaction [START_REF] Charette | Simmons-Smith Cyclopropanation Reaction in Organic Reactions[END_REF], the transition-metal-catalyzed decomposition of diazo compounds [2f, 148], and the Michael-initiated ring-closure (MIRC) [149]. In each case, the reactions can start from chiral substrates (or auxiliaries) or can be promoted by chiral catalysts.
Asymmetric Simmons-Smith Cyclopropanations as Key Steps
In the 1950s, Simmons and Smith reported that the reaction of alkenes with diiodomethane, performed in the presence of activated zinc, afforded cyclopropanes in high yields [START_REF] Simmons | A new synthesis of cyclopropanes from olefins[END_REF]. The reactive intermediate is an organozinc species, and the preparation of such species, including RZnCH2I and IZnCH2I compounds and samarium derivatives, was developed in the following years [START_REF] Chan | Relative rates and stereochemistry of the iodomethylzinc iodide methylenation of some hydroxy-and methoxysubstituted cyclic olefins[END_REF]. Ever since, asymmetric versions of the Simmons-Smith reaction [152] have been developed and applied to the synthesis of various biologically active products using either chiral substrates or chiral catalysts.
Using Chiral Substrates
Various asymmetric cyclopropanations of acyclic allylic alcohols have been reported, using the heteroatom as the directing group, through chelation with the zinc reagent. This Simmons-Smith reaction has distinct advantages over the reaction with a simple olefin in relation to the reaction rate and stereocontrol [START_REF] Hoveyda | Substrate-directable chemical reactions[END_REF]. Many asymmetric cyclopropanations of chiral allylic alcohols have been used as key steps in the total synthesis of natural products of biological interest. For instance, Takemoto et al. reported in the 2000s an asymmetric total synthesis of the natural product halicholactone, in which the regio- and stereoselective cyclopropanation of chiral diene 152 into cyclopropane 153, obtained as the only detected stereoisomer, constituted the key step (Scheme 60) [START_REF] Takemoto | Asymmetric total synthesis of halicholactone[END_REF].
In 2006, Smith and Simov developed the total synthesis of the marine diolide (-)-clavosolide A on the basis of the direct Simmons-Smith cyclopropanation of chiral N-methoxyamide 154, providing the corresponding key cyclopropane intermediate 155 in 74% yield and diastereoselectivity of 84% de, as shown in Scheme 61 [START_REF] Smith | Total synthesis of the marine natural product (-)-Clavosolide A. A show case for the Petasis-Ferrier Union/Rearrangement tactic[END_REF].
In 2007, a total synthesis of the two biologically active oxylipins solandelactones E and F was described by White et al. [START_REF] White | Total synthesis of solandelactones E and F, homoeicosanoids from the hydroid Solanderia secunda[END_REF]. The key step was a comparable Simmons-Smith cyclopropanation of chiral N-methoxyamide 156, which provided the corresponding functionalized cyclopropane 157 as the only detected diastereomer in almost quantitative yield, as shown in Scheme 62. The authors confirmed that the structures of the two solandelactones were epimeric at C11.
Brevipolides are extracted from the invasive tropical plant Hyptis brevipes and exhibit interesting pharmacological properties. Recently, Mohapatra et al. developed a highly diastereoselective synthesis of the C1-C12 fragment of brevipolide H (Scheme 63) [START_REF] Mohapatra | Toward the synthesis of brevipolide H[END_REF]. The key step was the cyclopropanation of chiral alkene 158 into 159 in excellent yield (up to 97%) and diastereoselectivity (up to 98% de). A similar reaction was previously reported by Kumaraswamy et al., but with inferior results, in the synthesis of another representative member of the brevipolide family [START_REF] Kumaraswamy | Towards the diastereoselective synthesis of derivative of 11 -epibrevipolide H[END_REF]. In addition to chiral allylic alcohols, chiral acetals bearing an alkene function, such as 160 and 161, have been used in diastereoselective acetal-directed cyclopropanations as key steps of total syntheses of solandelactone E (Scheme 64a, structure in Scheme 62) [START_REF] Davoren | Enantioselective synthesis and structure revision of solandelactone E[END_REF], and of a marine fatty acid metabolite exhibiting lipoxygenase-inhibiting activity (Scheme 64b) [START_REF] Mohapatra | Asymmetric total synthesis of eicosanoid[END_REF], both providing the corresponding cyclopropane derivatives 162 and 163, respectively, in good to excellent yields (72-95%) and as the only detected stereoisomers.
In 2006, standard Simmons-Smith conditions were also applied by Abad et al. to the cyclopropanation of diterpene 164 [START_REF] Abad | A unified synthetic approach to trachylobane-, beyerane-, atisane-and kaurane-type diterpenes[END_REF]. The reaction occurred stereoselectively from the less hindered β-side of the double bond, affording the expected cyclopropane 165 in 94% yield and as the only detected stereoisomer (Scheme 65). This tricyclo[3.2.1.0]octane moiety constituted the key intermediate in the synthesis of biologically interesting trachylobane-, beyerane-, atisane-, and kaurane-type diterpenes.
In 2015, Tori et al. applied similar Simmons-Smith conditions to the last step of a total synthesis of the natural product (+)-crispatanolide starting from chiral alkene 166, as shown in Scheme 66 [START_REF] Nakashima | What is the absolute configuration of (+)-crispatanolide isolated from Makinoa crispate (liverwort)?[END_REF]. Surprisingly, the major product was not the expected (+)-crispatanolide, but a diastereomer, very likely because of the directing effect of the lactone carbonyl group. However, this synthesis likewise allowed the absolute configuration of natural (+)-crispatanolide to be assigned.
In addition, various types of recoverable chiral auxiliaries have been successfully employed in asymmetric Simmons-Smith cyclopropanations as key steps of total syntheses of various natural products. For example, the asymmetric Simmons-Smith cyclopropanation of chiral allylic alcohol 167 led to the corresponding cyclopropanes 168 in high to quantitative yields and as almost single diastereomers (> 95% de) [START_REF] Cheeseman | A novel strategy for the asymmetric synthesis of chiral cyclopropane carboxaldehydes[END_REF]; these were used as key intermediates in the syntheses of several natural products, such as cascarillic acid [START_REF] Cheeseman | A temporary stereocentre approach for the asymmetric synthesis of chiral cyclopropanecarboxaldehydes[END_REF], (-)-clavosolide A (structure in Scheme 61) [START_REF] Son | Enantioselective total synthesis of (-)-Clavosolide B[END_REF], and grenadamide [START_REF] Green | An efficient asymmetric synthesis of grenadamide[END_REF] (Scheme 67).
Using Chiral Catalysts
In 2001, Liu and Ghosh reported the cyclopropanation of cis- and trans-disubstituted allylic alcohols, such as 170 and 171, performed in the presence of chiral dioxaborolane ligand 169, which led to the corresponding chiral cyclopropylmethanols 172 and 173, respectively, in diastereoselectivities of up to > 95% de. These reactions constituted the key steps in the syntheses of biologically active products, such as an epothilone analogue [START_REF] Nicolaou | Chemical synthesis and biological evaluation of cis-and trans-12,13-cyclopropyl and 12,13-cyclobutyl epothilones and related pyridine side chain analogues[END_REF] and (-)-doliculide [START_REF] Ghosh | Total synthesis of antitumor depsipeptide (-)-doliculide[END_REF], as shown in Scheme 68. It must be noted that these results in fact arose from a double induction, since the starting materials were also chiral.
In 2006, (S)-phenylalanine-derived disulfonamide 174 was applied as a chiral ligand to promote the cyclopropanation of a range of 3,3-diaryl-2-propen-1-ols in the presence of Et2Zn and CH2I2, providing the corresponding cyclopropylmethanols with moderate to good enantioselectivities (59-84% ee), as shown in Scheme 69 [START_REF] Miura | Syntheses of (R)-(+)-cibenzoline and analogues via catalytic enantioselective cyclopropanation using (S)phenylalanine-derived disulfonamide[END_REF]. Chiral cyclopropanes 175, 176 and 177, derived from the corresponding allylic alcohols 178, 179, and 180, were further converted into (+)-cibenzoline, an antiarrhythmic agent [START_REF] Koyata | Convenient preparation of optically active cibenzoline and analogues from 3,3-diaryl-2-propen-1-ols[END_REF], (+)-tranylcypromine, a strong monoamine oxidase inhibitor, and (-)-milnacipran, a serotonin-noradrenaline reuptake inhibitor, respectively [START_REF] Ishizuka | Asymmetric syntheses of pharmaceuticals containing a cyclopropane moiety using catalytic asymmetric Simmons-Smith reactions of allylalcohols: Syntheses of optically active tranylcypromine and milnacipran[END_REF].
Asymmetric Transition-Metal-Catalyzed Decomposition of Diazoalkanes as Key Steps
Intermolecular Cyclopropanations

Chiral Substrates
Since the pioneering work of Nozaki and Noyori reported in 1966 [START_REF] Nozaki | Asymmetric induction in carbenoid reaction by means of a dissymmetric copper chelate[END_REF], the transition-metal-catalyzed cyclopropanation of alkenes with diazo compounds has emerged as one of the most highly efficient routes to functionalized cyclopropanes. These reactions have been applied in total synthesis, starting from chiral substrates, but also in the presence of chiral catalysts. As a rare example of a reaction involving a chiral substrate, Snapper et al. developed the asymmetric cyclopropanation of chiral tricyclic alkene 181 as the key step in total syntheses of the natural products pleocarpenene and pleocarpenone [START_REF] Williams | Intramolecular cyclobutadiene cycloaddition/cyclopropanation/thermal rearrangement: An effective strategy for the asymmetric syntheses of pleocarpenene and pleocarpenone[END_REF]. The authors observed high stereochemical control (> 90% de) in the cyclopropanation of 181 with ethyl diazoacetate (EDA), using Cu(acac)2 as the catalyst, to give compound 182, which was followed by a deacetylation reaction (Scheme 70).
Moreover, Oppolzer's chiral sultam 183 was applied for the synthesis of novel melatoninergic agents, as in Scheme 71 [START_REF] Sun | R)-2-(4-Phenylbutyl)Dihydrobenzofuran derivatives as melatoninergic agents[END_REF].
Chiral Catalysts
Chiral copper catalysts are among the most effective catalysts, with the widest reaction scope, for the preparation of the trans-isomers of cyclopropanes. Among them, non-racemic C2-symmetric bidentate bisoxazoline ligands [START_REF] Evans | oxazolines) as chiral ligands in metal-catalyzed asymmetric reactions. Catalytic, asymmetric cyclopropanation of olefins[END_REF] have been used in copper-catalyzed cyclopropanation reactions for more than thirty years (see also the subsection entitled "Copper-catalyzed nitrene transfer to alkenes") [175]. Some of these copper-catalyzed reactions have been included in multistep syntheses of natural products [START_REF] Honma | Development of catalytic asymmetric intramolecular cyclopropanation of -diazo--keto sulfones and applications to natural product synthesis[END_REF]. For example, carbohydrate-based bis(oxazoline) ligand 184 and copper(I) triflate were used in the reaction of non-1-ene 185 and EDA for the total synthesis of unnatural (+)-grenadamide [START_REF] Minuth | Carbohydrate-Derived bis(oxazoline) ligand in the total synthesis of grenadamide[END_REF] (Scheme 72).
In 2012, the cyclopropanation of N-Boc-3-methylindole 186, performed in the presence of bisoxazoline ligand 187 in combination with CuOTf, yielded key building block 188 for the synthesis of the indole alkaloid (-)-desoxyeseroline, which was isolated in 59% overall yield and 96% ee (Scheme 73) [START_REF] Ozoduru | Enantioselective cyclopropanation of indoles: Construction of all-carbon quaternary stereocenters[END_REF]. Moreover, the use of ligand 48 allowed the stereoselective preparation of the tetracyclic core of cryptotrione via key intermediate 189 (Scheme 73); this key cyclopropane intermediate was obtained in 93% yield and 82% de starting from alkene 190 [START_REF] Chen | Stereoselective construction of the tetracyclic core of cryptotrione[END_REF].
Copper-salen catalysts were found to be particularly efficient in the synthesis of chrysanthemate esters, and (1R,3R)-chrysanthemic ester 191 was prepared in the presence of Cu(I)-salen complex 192 in 90% yield with 78:22 dr and enantioselectivities of 91% and 62% ee for the trans- and cis-diastereomers, respectively [START_REF] Suenobu | Reaction pathway and stereoselectivity of asymmetric synthesis of chrysanthemate with the aratani c1symmetric salicylaldimine-copper catalyst[END_REF] (Scheme 74).
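As a quick check on these numbers, a diastereomeric ratio translates into a diastereomeric excess as:

\[ \mathrm{de} = \frac{78 - 22}{78 + 22} \times 100\% = 56\% \]

i.e., the 78:22 dr reported above corresponds to 56% de in favour of the trans-diastereomer.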
Chiral dirhodium carboxamide catalysts were originally developed by Doyle for enantioselective cyclopropanations [181]. In the presence of these catalysts, allylic substrates, and in particular dihydronaphthalene, have a potentially competing pathway to cyclopropanation, such as allylic C-H insertion, and examples exploiting this competing reactivity have been developed. Moreover, chiral ruthenium catalysts have also been applied in the field of catalytic enantioselective cyclopropanation. This approach was employed by Marcin et al. as the key step in a total synthesis of BMS-505130, a selective serotonin reuptake inhibitor [START_REF] Marcin | Catalytic asymmetric diazoacetate cyclopropanation of 1-tosyl-3-vinylindoles. A route to conformationally restricted homotryptamines[END_REF]. As shown in Scheme 76, 1-tosyl-3-vinylindole 193 was cyclopropanated with ethyl diazoacetate in the presence of Nishiyama's catalyst 194 to give the corresponding key cyclopropane 195 in 82% yield and 86% de. The latter was further converted into the expected BMS-505130.
Intramolecular Cyclopropanations

Chiral Substrates
Initially, intramolecular cyclopropanation reactions were performed with appropriate chiral substrates, and generally occurred with stereocontrol, leading to the exclusive formation of one stereoisomeric product. Copper and rhodium complexes are the most popular catalysts for these reactions. For example, Rh2(OAc)4 was shown to promote the last step in the synthesis of terpenes such as dihydromayurone, obtained as the only detected stereoisomer [184]. As shown in Scheme 77, this step proceeded through the intramolecular cyclopropanation of chiral diazoketone 196 with complete diastereoselectivity and moderate yield (57%). On the other hand, a copper-catalyzed intramolecular cyclopropanation, occurring with low diastereoselectivity of 26% de, constituted the key step in the synthesis of the sesquiterpenes (-)-microbiotol and (+)-β-microbiotene starting from chiral diazoketone 197 derived from cyclogeraniol (Scheme 77) [START_REF] Srikrishna | The first enantioselective synthesis of (-)-microbiotol and (+)--microbiotene[END_REF]. The key cyclopropane intermediate 198 was obtained in good yield (75%). In addition, the key step to construct the bicyclic system of another sesquiterpene, (+)-pinguisenol, was based on the diastereoselective copper-catalyzed intramolecular cyclopropanation of diazoketone 199 into cyclopropane 200, obtained as the only detected stereoisomer in moderate yield (52%), followed by regioselective cyclopropane cleavage (Scheme 77) [START_REF] Srikrishna | An enantiospecific approach to pinguisanes from (R)-carvone. Total synthesis of (+)-pinguisenol[END_REF].
Chiral Catalysts
A number of chiral catalysts have been successfully applied to the enantioselective intramolecular cyclopropanation of unsaturated diazoketones as key steps in total synthesis. For example, Nakada et al. investigated the enantioselective copper-catalyzed intramolecular cyclopropanation of α-diazo-β-keto sulfones (Scheme 78) [187]. The success of this methodology was illustrated by its application to the total syntheses of several biologically active products, such as (-)-allocyathin B2 [START_REF] Takano | Synthetic studies on cyathins: Enantioselective total synthesis of (+)-allocyathin B2[END_REF], (-)-malyngolide [START_REF] Miyamoto | A new asymmetric total synthesis of enantiopure (-)-malyngolide[END_REF], and (-)-methyl jasmonate, as illustrated in Scheme 78 [START_REF] Takeda | Asymmetric total synthesis of enantiopure (-)-methyl jasmonate via catalytic asymmetric intramolecular cyclopropanation of -diazo--keto sulfone[END_REF]. Key intermediates 202a-c of these syntheses were produced with up to 97% ee and 93% yield through intramolecular cyclopropanation of the corresponding unsaturated diazoketones 201a-c, catalyzed by a combination of CuOTf with one of the chiral bisoxazoline ligands 203-205.
The same group also developed the enantioselective preparation of tricyclo[4.4.0.0^5,7]dec-2-ene derivatives [START_REF] Ida | Highly enantioselective preparation of tricyclo[4.4.0.0 5,7 ]decene derivatives via catalytic asymmetric intramolecular cyclopropanation reactions of -diazo--keto esters[END_REF] and tricyclo[4.3.0.0]nonenones 206 by using CuOTf combined with chiral ligand 205 [192]. The resulting chiral cyclopropanes 207 were employed as key intermediates in the (formal) total syntheses of natural and biologically active products, such as (+)-busidarasin C and acetoxytubipofuran [START_REF] Ida | Highly enantioselective preparation of tricyclo[4.4.0.0 5,7 ]decene derivatives via catalytic asymmetric intramolecular cyclopropanation reactions of -diazo--keto esters[END_REF], (+)-digitoxigenin [START_REF] Honma | Enantioselective total synthesis of (+)digitoxigenin[END_REF], (-)-platensimycin and (-)-platencin [192], as well as nemorosone, garsubellin A, clusianone, and hyperforin [START_REF] Uetake | Enantioselective approach to polycyclic polyprenylated acylphloroglucinols via catalytic asymmetric intramolecular cyclopropanation[END_REF] (Scheme 79).
A related methodology was also applied to the intramolecular cyclopropanation of various α-diazo-β-oxo-5-hexenyl phosphonates 208 [START_REF] Sawada | Enantioselective total synthesis of (+)-Colletoic acid via catalytic asymmetric intramolecular cyclopropanation of a -Diazo-keto diphenylphosphine oxide[END_REF]. In the presence of a combination of CuBF4 and bisoxazoline ligand 203 (structure in Scheme 78), (1R,5S)-bicyclo[3.1.0]hexane 209 was obtained in good yield (79%) and high enantioselectivity of 91% ee, and was further converted into the natural bioactive product (+)-colletoic acid (Scheme 80).
Moreover, asymmetric Rh2(S-MEPY)4-catalyzed cyclization of allylic diazoacetates led to chiral cyclopropane-fused lactones, as exploited in the total synthesis of (+)-ambruticin S [START_REF] Berberich | Total synthesis of (+)-ambruticin S[END_REF]. Furthermore, the reaction of secondary divinyl diazoacetate 212 led to the corresponding cyclopropane derivative 213 as a 50:50 mixture of two diastereomers, each obtained in 94% ee and quantitative yield (Scheme 81) [START_REF] Martin | Enantio-and diastereoselectivity in the intramolecular cyclopropanation of secondary allylic diazoacetates[END_REF]. This mixture was further employed as a key intermediate in the total syntheses of the natural products tremulenediol A and tremulenolide A [START_REF] Ashfeld | Enantioselective syntheses of tremulenediol A and tremulenolide A[END_REF], and in that of various cyclopropane-derived peptidomimetics [START_REF] Reichelt | Synthesis and properties of cyclopropane-derived peptidomimetics[END_REF].
In 2015, Chanthamath and Iwasa reported the enantioselective intramolecular cyclopropanation of electron-deficient allylic diazoacetate 214, performed in the presence of ruthenium catalyst 215, which provided the corresponding diastereo- and enantiopure cyclopropane-fused γ-lactone 216 in high yield (90%) [START_REF] Nakagawa | II)-Pheox-Catalyzed asymmetric intramolecular cyclopropanation of electron-deficient olefins[END_REF]. The latter was employed as a building block in the total syntheses of the drug DCG-IV and the natural product dysibetaine CPa (Scheme 82).
Asymmetric Michael-Initiated Ring Closures as Key Steps
Michael-initiated ring-closing (MIRC) reactions also constitute highly efficient routes to cyclopropanes. These reactions involve a conjugate addition to an electrophilic alkene, generally producing an enolate, which then undergoes an intramolecular ring closure. A range of asymmetric Michael-initiated ring-closing reactions based on the use of chiral substrates have been applied to the synthesis of important products. As a recent example, Marek et al. developed the MIRC reaction of chiral alkylidene bis(p-tolylsulfoxide) 217 with trimethylsulfoxonium ylide 218, leading to the corresponding chiral bis(p-tolylsulfinyl)cyclopropane 219 with a moderate diastereoselectivity of 72% de, which was further used to prepare enantiomerically enriched polyalkylated cyclopropane derivatives [START_REF] Abramovitch | Convergent preparation of enantiomerically pure polyalkylated cyclopropane derivatives[END_REF]. As illustrated in Scheme 83, this methodology was applied to the synthesis of (9R,10S)-dihydrosterculic acid, a natural fatty acid [START_REF] Palko | A flexible and modular stereoselective synthesis of (9R,10S)-dihydrosterculic acid[END_REF].
Miscellaneous Asymmetric Cyclopropanations as Key Steps
Enantioenriched cyclopropane derivatives, such as 220, can also be efficiently prepared by addition of the dianion of (-)-dimenthyl succinate 221 to bromochloromethane 222 (Scheme 84) [START_REF] Misumi | Simple asymmetric construction of a carbocyclic framework. Direct coupling of dimenthyl succinate and 1,.Omega.-dihalides[END_REF]. This method, providing up to 98% de and 87% yield, was used in the total synthesis of the natural bioactive product callipeltoside [START_REF] Trost | Callipeltoside A: Assignment of absolute and relative configuration by total synthesis[END_REF], and in that of the first peptide nucleic acid (PNA) bearing a cyclopropane, (S,S)-tcprPNA [START_REF] Pokorski | Cyclopropane PNA: Cbservable triplex melting in a PNA constrained with a 3-membered ring[END_REF].
LTMP-induced intramolecular cyclopropanation of unsaturated terminal epoxides provided an efficient and completely stereoselective entry to bicyclo[3.1.0]hexan-2-ols and bicyclo[4.1.0]heptan-2-ols. This methodology was applied to a total synthesis of the natural product (+)-cuparenone, starting from chiral chlorohydrin 223, which was converted into the corresponding bicyclohexanol 224 as an almost single stereoisomer (> 99% de, 97% ee) in 59% yield (Scheme 85) [START_REF] Hodgson | Intramolecular cyclopropanation of unsaturated terminal epoxides and chlorohydrins[END_REF]. In 2015, Cramer et al. reported efficient access to the seven-membered ring of the cyclopropylindolobenzazepine core of the antiviral agent beclabuvir [START_REF] Pedroni | Enantioselective palladium(0)-catalyzed intramolecular cyclopropane functionalization: Access to dihydroquinolones, dihydroisoquinolones and the BMS-791325 ring system[END_REF]. As shown in Scheme 86, TADDOL-based phosphoramidite palladium(0) complex 225 enabled an enantioselective Friedel-Crafts-type C-H functionalization of cyclopropane 226 to give pentacyclic chiral product 227 in 80% yield and 89% ee, thereby constructing the seven-membered ring of the beclabuvir core.
In 1989, the group of Kulinkovich showed that the reaction of esters with a mixture of Ti(O-i-Pr)4 and an excess of a Grignard reagent led to the corresponding substituted cyclopropanols [209]. Later, asymmetric versions of this methodology were developed using either chiral substrates or chiral titanium catalysts. For example, Singh et al. demonstrated that, under Kulinkovich reaction conditions, chiral β-alkoxy ester 228 afforded the corresponding cyclopropanol 229 as the only detected stereoisomer in high yield (87%); this compound constituted the key intermediate for the synthesis of all the stereoisomers of tarchonanthuslactone, a naturally occurring biologically active product (Scheme 87) [START_REF] Baktharaman | Asymmetric synthesis of all the stereoisomers of tarchonanthuslactone[END_REF].
In 2006, a total synthesis of the antitumor agent (-)-irofulven was developed on the basis of the reaction between strained ketene hemithioacetal 230 and methyl pyruvate 231, performed in the presence of chiral bisoxazoline copper catalyst 232 [START_REF] Movassaghi | Enantioselective total synthesis of (-)-acylfulvene and (-)-irofulven[END_REF]. The reaction afforded the corresponding functionalized chiral cyclopropane 233 in both high yield (95%) and enantioselectivity (92% ee), and this product was further converted into the expected (-)-irofulven, as shown in Scheme 88.
CONCLUSION
This review highlighted major total syntheses of biologically active compounds, including natural products, using chiral three-membered rings as key intermediates. Interest in synthetic methodologies for their preparation has increased over the last decades, dictated either by the biological activities displayed by many naturally occurring products bearing a three-membered unit, or by their ring strain, which makes them useful precursors of more complex molecules of interest. Classic as well as modern protocols, such as organocatalyzed reactions, have been applied to make asymmetric aziridination, azirination, epoxidation, thiirination, and cyclopropanation the key steps of a wide number of syntheses of important products. The use of classical methods employing chiral substrates and auxiliaries is still very frequent, particularly for asymmetric aziridination and cyclopropanation. On the other hand, the development of enantioselective catalytic methodologies has witnessed exponential growth during the last decade, in particular in the area of asymmetric organocatalytic epoxidations. The development of new catalytic systems, including organocatalysts and chiral ligands for metal catalysts, for the synthesis of the other three-membered rings is still in its infancy. However, their expansion is expected in the coming years, opening the way to the synthesis of other biologically important products. Undoubtedly, the chemistry of three-membered rings will continue to play a dominant role in the history of total synthesis for many years.
Scheme 1. Synthesis of (+)-agelastatin A.
Scheme 3. Formal synthesis of (+)-kalihinol A.
Scheme 5. Synthesis of an analogue of diaminopimelic acid.
Scheme 6. Synthesis of ( )-agelastatin A.
Scheme 7. Syntheses of L-daunosamine and L-ristosamine derivatives.
Scheme 9. Synthesis of ( )-polyoxamic acid.
Scheme 12. Synthesis of 7-Epi (+)-FR900482.
Scheme 14. Synthesis of an aziridinomitosene.
Scheme 18. Synthesis of a squalene synthase inhibitor.
Scheme 22. Synthesis of (R,R)-β-methoxytyrosine.
Scheme 24. Synthesis of BIRT-377.
Scheme 26. Synthesis of four stereoisomers of sphinganine.
Scheme 27. Synthesis of the taxol side chain.
Scheme 28. Synthesis of constituents of naturally occurring antibiotics.
Scheme 29. Synthesis of (-)-Z-dysidazirine.
Scheme 31. Synthesis of (-)-Z-dysidazirine.
Scheme 33. Synthesis of (R)-2-benzylmorpholine based on Katsuki-Sharpless asymmetric epoxidation.
Scheme 34. Synthesis of GSK966587 based on Katsuki-Sharpless asymmetric epoxidation.
Scheme 36. Syntheses of α-bisabolol and florfenicol through V-catalyzed epoxidations.
Scheme 37. Biologically active products synthesized through Jacobsen/Katsuki epoxidations.
Scheme 41. Syntheses of biologically active products through Shibasaki's epoxidation.
Scheme 47. Synthesis of (+)-scuteflorin A through chiral iminium salt-catalyzed epoxidation.
Scheme 50. Synthesis of (+)-reserpine.
Scheme 51. Syntheses of (5S,7R)-kurzilactone and (+)-epi-cytoxazone.
Scheme 52. Synthesis of (+)-eldanolide.
Scheme 53. Synthesis of safinamide and structures of other products.
Scheme 54. Synthesis of (S)-propranolol.
Scheme 55. Synthesis of SK&F 104353.
Scheme 57. Syntheses of bengamide E and (-)-depudecin.
Scheme 58. Synthesis of (-)-bestatin.
Scheme 60. Synthesis of halicholactone.
Scheme 63. Synthesis of the C1-C12 fragment of brevipolide H.
Scheme 67. Syntheses of cascarillic acid, (-)-clavosolide and grenadamide.
Scheme 69. Syntheses of (+)-cibenzoline, (+)-tranylcypromine and (-)-milnacipran.
Scheme 78. Syntheses of (-)-malyngolide, (-)-methyl jasmonate and (-)-allocyathin B2 through copper-catalyzed cyclopropanations.
Scheme 79. (Formal) syntheses of (+)-busidarasin C, acetoxytubipofuran, (+)-digitoxigenin, (-)-platencin, (-)-platensimycin, nemorosone, garsubellin A, clusianone, and hyperforin through copper-catalyzed cyclopropanations.
Scheme 80. Synthesis of (+)-colletoic acid through copper-catalyzed cyclopropanation.
Scheme 86. Synthesis of the cyclopropylindolobenzazepine core of beclabuvir.
ACKNOWLEDGEMENTS
Declared none.
CONFLICT OF INTEREST
The author(s) confirm that this article content has no conflict of interest.
01767321 | en | [ "info", "info.info-ni" ] | 2024/03/05 22:32:15 | 2015 | https://inria.hal.science/hal-01767321/file/978-3-319-19195-9_8_Chapter.pdf
email: [email protected]
Mirko Viroli
email: [email protected]
Danilo Pianini
email: [email protected]
Jacob Beal
email: [email protected]
Code Mobility Meets Self-organisation: a Higher-order Calculus of Computational Fields
Self-organisation mechanisms, in which simple local interactions result in robust collective behaviors, are a useful approach to managing the coordination of large-scale adaptive systems. Emerging pervasive application scenarios, however, pose an openness challenge for this approach, as they often require flexible and dynamic deployment of new code to the pertinent devices in the network, and safe and predictable integration of that new code into the existing system of distributed self-organisation mechanisms. We approach this problem of combining self-organisation and code mobility by extending "computational field calculus", a universal calculus for specification of self-organising systems, with a semantics for distributed first-class functions. Practically, this allows selforganisation code to be naturally handled like any other data, e.g., dynamically constructed, compared, spread across devices, and executed in safely encapsulated distributed scopes. Programmers may thus be provided with the novel firstclass abstraction of a "distributed function field", a dynamically evolving map from a network of devices to a set of executing distributed processes.
Introduction
In many different ways, our environment is becoming ever more saturated with computing devices. Programming and managing such complex distributed systems is a difficult challenge and the subject of much ongoing investigation in contexts such as cyberphysical systems, pervasive computing, robotic systems, and large-scale wireless sensor networks. A common theme in these investigations is aggregate programming, which aims to take advantage of the fact that the goal of many such systems are best described in terms of the aggregate operations and behaviours, e.g., "distribute the new version of the application to all subscribers", or "gather profile information from everybody in the festival area", or "switch on safety lights on fast and safe paths towards the emergency exit". Aggregate programming languages provide mechanisms for building systems in terms of such aggregate-level operations and behaviours, and a global-to-local mapping that translates such specifications into an implementation in terms of the actions and interactions of individual devices. In this mapping, self-organisation techniques provide an effective source of building blocks for making such systems robust to device faults, network topology changes, and other contingencies. A wide range of such aggregate programming approaches have been proposed [START_REF] Beal | Organizing the aggregate: Languages for spatial computing[END_REF]: most of them share the same core idea of viewing the aggregate in terms of dynamically evolving fields, where a field is a function that maps each device in some domain to a computational value. Fields then become first-class elements of computation, used for tasks such as modelling input from sensors, output to actuators, program state, and the (evolving) results of computation.
Many emerging pervasive application scenarios, however, pose a challenge to these approaches due to their openness. In these scenarios, there is need to flexibly and dynamically deploy new or revised code to pertinent devices in the network, to adaptively shift which devices are running such code, and to safely and predictably integrate it into the existing system of distributed processes. Prior aggregate programming approaches, however, have either assumed that no such dynamic changes of code exist (e.g., [START_REF] Beal | Infrastructure for engineered emergence in sensor/actuator networks[END_REF][START_REF] Viroli | A calculus of computational fields[END_REF]), or else provide no safety guarantees ensuring that dynamically composed code will execute as designed (e.g., [START_REF] Mamei | Programming pervasive and mobile computing applications: The tota approach[END_REF][START_REF] Viroli | Linda in space-time: an adaptive coordination model for mobile ad-hoc environments[END_REF]). Accordingly, our goal in this paper is develop a foundational model that supports both code mobility and the predictable composition of self-organisation mechanisms. Moreover, we aim to support this combination such that these same self-organisation mechanisms can also be applied to manage and direct the deployment of mobile code.
To address the problem in a general and tractable way, we start from the field calculus [START_REF] Viroli | A calculus of computational fields[END_REF], a recently developed minimal and universal [START_REF] Beal | Towards a unified model of spatial computing[END_REF] computational model that provides a formal mathematical grounding for the many languages for aggregate programming. In field calculus, all values are fields, so a natural approach to code mobility is to support fields of first-class functions, just as with first-class functions in most modern programming languages and in common software design patterns such as MapReduce [START_REF] Dean | Mapreduce: simplified data processing on large clusters[END_REF]. By this mechanism, functions (and hence, code) can be dynamically consumed as input, passed around by device-to-device communication, and operated upon just like any other type of program value. Formally, expressions of the field calculus are enriched with function names, anonymous functions, and application of function-valued expressions to arguments, and the operational semantics properly accommodates them with the same core field calculus mechanisms of neighbourhood filtering and alignment [START_REF] Viroli | A calculus of computational fields[END_REF]. This produces a unified model supporting both code mobility and self-organisation, greatly improving over the independent and generally incompatible mechanisms which have typically been employed in previous aggregate programming approaches. Programmers are thus provided with a new first-class abstraction of a "distributed function field": a dynamically evolving map from the network to a set of executing distributed processes.
Section 2 introduces the concepts of higher-order field calculus; Section 3 formalises their semantics; Section 4 illustrates the approach with an example; and Section 5 concludes with a discussion of related and future work.
Fields and First-Class Functions
The defining property of fields is that they allow us to see computation from two different viewpoints. On the one hand, by the standard "local" viewpoint, computation is seen as occurring in a single device, and it hence manipulates data values (e.g., numbers) and communicates such data values with other devices to enable coordination. On the other hand, by the "aggregate" (or "global") viewpoint [START_REF] Viroli | A calculus of computational fields[END_REF], computation is seen as occurring on the overall network of interconnected devices: the data abstraction manipulated is hence a whole distributed field, a dynamically evolving data structure having extent over a subset of the network. This latter viewpoint is very useful when reasoning about aggregates of devices, and will be used throughout this document. Put more precisely, a field value φ may be viewed as a function φ : D → L that maps each device δ in the domain D to an associated data value in the range L. Field computations then take fields as input (e.g., from sensors) and produce new fields as outputs, whose values may change over time (e.g., as inputs change or the computation progresses). For example, the input of a computation might be a field of temperatures, as perceived by sensors at each device in the network, and its output might be a Boolean field that maps to true where temperature is greater than 25 °C, and to false elsewhere.
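Spelled out in this notation, the temperature example reads as follows (a restatement of the sentence above, taking the real numbers as the local values for temperatures):

\[ \varphi_{\mathrm{temp}} : D \to \mathbb{R}, \qquad \varphi_{\mathrm{out}}(\delta) = \big(\varphi_{\mathrm{temp}}(\delta) > 25\big) \in \{\mathrm{true}, \mathrm{false}\} \quad \text{for each device } \delta \in D. \]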
Field Calculus
The field calculus [START_REF] Viroli | A calculus of computational fields[END_REF] is a tiny functional calculus capturing the essential elements of field computations, much as λ-calculus [START_REF] Church | A set of postulates for the foundation of logic[END_REF] captures the essence of functional computation and FJ [START_REF] Igarashi | Featherweight Java: A minimal core calculus for Java and GJ[END_REF] the essence of object-oriented programming. The primitive expressions of field calculus are data values ℓ (Booleans, numbers, and pairs), representing constant fields holding the value ℓ everywhere, and variables x, which are either function parameters or state variables (see the rep construct below). These are composed into programs using a Lisp-like syntax with five constructs: (1) Built-in function call (o e1 ⋯ en): A built-in operator o is a means to uniformly model a variety of "point-wise" operations, i.e. involving neither state nor communication. Examples include simple mathematical functions (e.g., addition, comparison, sine) and context-dependent operators whose result depends on the environment (e.g., the 0-ary operator uid returns the unique numerical identifier δ of the device, and the 0-ary nbr-range operator yields a field where each device maps to a subfield mapping its neighbours to estimates of their current distance from the device). The expression (o e1 ⋯ en) thus produces a field mapping each device identifier δ to the result of applying o to the values at δ of its n ≥ 0 arguments e1, …, en. (2) Function call (f e1 … en): Abstraction and recursion are supported by function definition: functions are declared as (def f(x1 … xn) e) (where elements xi are formal parameters and e is the body), and expressions of the form (f e1 … en) are the way of calling function f passing n arguments.

Fig. 1: Field calculus functions are evaluated over a domain of devices. E.g., in (a) (if x (f (sns)) (g (sns))), the if operation partitions the network into two subdomains, evaluating f where field x is true and g where it is false (both applied to the output of sensor sns). With first-class functions, however, domains must be constructed dynamically based on the identity of the functions stored in the field, as in (b) ((if x f g) (sns)), which implements an equivalent computation.
(3) Time evolution (rep x e 0 e): The "repeat" construct supports dynamically evolving fields, assuming that each device computes its program repeatedly in asynchronous rounds. It initialises state variable x to the result of initialisation expression e 0 (a value or a variable), then updates it at each step by computing e against the prior value of x. For instance, (rep x 0 (+ x 1)) is the (evolving) field counting in each device how many rounds that device has computed. (4) Neighbourhood field construction (nbr e): Device-to-device interaction is encapsulated in nbr, which returns a field φ mapping each neighbouring device to its most recent available value of e (i.e., the information available if devices broadcast the value of e to their neighbours upon computing it). Such "neighbouring" fields can then be manipulated and summarised with built-in operators, e.g., (min-hood (nbr e)) outputs a field mapping each device to the minimum value of e amongst its neighbours.
(5) Domain restriction (if e 0 e 1 e 2 ): Branching is implemented by this construct, which computes e 1 in the restricted domain where e 0 is true, and e 2 in the restricted domain where e 0 is false.
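A single device's view of constructs (3) and (4) can be mimicked locally; the Python sketch below is our own illustration (the neighbour values are made up): it runs (rep x 0 (+ x 1)) for three rounds and summarises a neighbouring field with min-hood:

state = {}                              # per-variable rep state

def rep(name, init, update):
    # initialise on the first round, then evolve the stored value
    state[name] = update(state.get(name, init))
    return state[name]

def min_hood(field):
    # summarise a neighbouring field, as in (min-hood (nbr e))
    return min(field.values())

nbr_e = {"dev2": 5, "dev3": 2}          # hypothetical last values of e at neighbours

for _ in range(3):
    count = rep("x", 0, lambda x: x + 1)   # (rep x 0 (+ x 1))

print(count)             # 3 rounds computed -> 3
print(min_hood(nbr_e))   # -> 2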
Any field calculus computation may thus be viewed as a function f taking zero or more input fields and returning one output field, i.e., having the signature f : (D → L)^k → (D → L). Figure 1a illustrates this concept, showing an example with complementary domains on which two functions are evaluated. This aggregate-level model of computation over fields can then be "compiled" into an equivalent system of local operations and message passing actually implementing the field calculus program on a distributed system [START_REF] Viroli | A calculus of computational fields[END_REF].
Higher-order Field Calculus The higher-order field calculus (HFC) is an extension of the field calculus with embedded first-class functions, with the primary goal of allowing it to handle functions just like any other value, so that code can be dynamically injected, moved, and executed in network (sub)domains. If functions are "first class" in the language, then: (i) functions can take functions as arguments and return a function as result (higher-order functions); (ii) functions can be created "on the fly" (anonymous functions); and (iii) functions can be moved between devices (via the nbr construct). The syntax of the calculus is reported in Fig. 2. Values in the calculus include fields φ, which are produced at run-time and may not occur in source programs; also, local values may be smoothly extended by adding other ground values (e.g., characters) and structured values (e.g., lists). Borrowing syntax from [START_REF] Igarashi | Featherweight Java: A minimal core calculus for Java and GJ[END_REF], the overbar notation denotes metavariables over sequences and the empty sequence is denoted by •. E.g., for expressions, we let e range over sequences of expressions, written e1, e2, … en (n ≥ 0). The differences from the field calculus are as follows: function application expressions (e e) can take an arbitrary expression e instead of just an operator o or a user-defined function name f; anonymous functions can be defined (by syntax (fun (x) e)); and built-in operators, user-defined function names, and anonymous functions are values. This implies that the range of a field can be a function as well. To apply the functions mapped to by such a field, we have to be able to transform the field back into a single aggregate-level function. Figure 1b illustrates this issue, with a simple example of a function call expression applied to a function-valued field with two different values.
How can we evaluate a function call with such a heterogeneous field of functions? It would seem excessive to run a separate copy of function f for every device that has f as its value in the field. At the opposite extreme, running f over the whole domain is problematic for implementation, because it would require devices that may not have a copy of f to help in evaluating f . Instead, we will take a more elegant approach, in which making a function call acts as a branch, with each function in the range applied only on the subspace of devices that hold that function. Formally, this may be expressed as transforming a function-valued field φ into a function f φ that is defined as:
fφ(ψ1, ψ2, …) = ⋃ f∈φ(D) f(ψ1|φ⁻¹(f), ψ2|φ⁻¹(f), …)    (1)
where ψi are the input fields, φ(D) is the set of all functions held as data values by some device in the domain D of φ, and ψi|φ⁻¹(f) is the restriction of ψi to the subspace of only those devices that φ maps to function f. In fact, when the field of functions is constant, this reduces to being precisely equivalent to a standard function call. This means that we can view ordinary evaluation of a function f as equivalent to creating a function-valued field with constant value f, then making a function call applying that field to its argument fields. This elegant transformation is the key insight of this paper, enabling first-class functions to be implemented with a minimal change to the existing semantics while remaining compatible with the prior semantics, thus inheriting its previously established desirable properties.
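Equation (1) suggests a direct implementation: partition the domain by function identity and merge the results. The Python sketch below is our own illustration (the fields and functions are made up) and does exactly that:

# apply a function-valued field phi to input fields, running each
# function f only on the sub-domain phi^-1(f) where phi maps to f
def apply_field(phi, *inputs):
    result = {}
    for f in set(phi.values()):
        sub = [d for d in phi if phi[d] is f]
        restricted = [{d: inp[d] for d in sub} for inp in inputs]
        result.update(f(*restricted))      # f evaluated on its own subdomain
    return result

double = lambda x: {d: 2 * v for d, v in x.items()}
negate = lambda x: {d: -v for d, v in x.items()}
phi = {1: double, 2: double, 3: negate}
print(apply_field(phi, {1: 10, 2: 20, 3: 30}))   # {1: 20, 2: 40, 3: -30}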
3 The Higher-order Field Calculus: Dynamic and Static Semantics

Dynamic Semantics (Big-Step Operational Semantics) As for the field calculus [START_REF] Viroli | A calculus of computational fields[END_REF], devices undergo computation in rounds. In each round, a device sleeps for some time, wakes up, gathers information about messages received from neighbours while sleeping, performs an evaluation of the program, and finally emits a message to all neighbours with information about the outcome of computation before going back to sleep.
The scheduling of such rounds across the network is fair and non-synchronous. This section presents a formal semantics of device computation, which is aimed to represent a specification for any HFC-like programming language implementation. The syntax of the HFC calculus has been introduced in Section 2 (Fig. 2). In the following, we let meta-variable δ range over the denumerable set D of device identifiers (which are numbers). To simplify the notation, we shall assume a fixed program P. We say that "device δ fires", to mean that the main expression of P is evaluated on δ .
We model device computation by a big-step operational semantics where the result of evaluation is a value-tree θ , which is an ordered tree of values, tracking the result of any evaluated subexpression. Intuitively, the evaluation of an expression at a given time in a device δ is performed against the recently-received value-trees of neighbours, namely, its outcome depends on those value-trees. The result is a new value-tree that is conversely made available to δ 's neighbours (through a broadcast) for their firing; this includes δ itself, so as to support a form of state across computation rounds (note that any implementation might massively compress the value-tree, storing only enough information for expressions to be aligned). A value-tree environment Θ is a map from device identifiers to value-trees, collecting the outcome of the last evaluation on the neighbours. This is written δ → θ as short for δ 1 → θ 1 , . . . , δ n → θ n .
The syntax of field values, value-trees and value-tree environments is given in Fig. 3 (top). Figure 3 (middle) defines: the auxiliary functions ρ and π for extracting the root value and a subtree of a value-tree, respectively (further explanations about function π will be given later); the extension of functions ρ and π to value-tree environments; and the auxiliary functions args and body for extracting the formal parameters and the body of a (user-defined or anonymous) function, respectively. The computation that takes place on a single device is formalised by the big-step operational semantics rules given in Fig. 3 (bottom). The derived judgements are of the form δ ;Θ e ⇓ θ , to be read "expression e evaluates to value-tree θ on device δ with respect to the value-tree environment Θ ", where: (i) δ is the identifier of the current device; (ii) Θ is the field of the value-trees produced by the most recent evaluation of (an expression corresponding to) e on δ 's neighbours; (iii) e is a run-time expression (i.e., an expression that may contain field values); (iv) the value-tree θ represents the values computed for all the expressions encountered during the evaluation of e-in particular ρ(θ ) is the resulting value of expression e. The first firing of a device δ after activation or reset is performed with respect to the empty tree environment, while any other firing must consider the outcome of the most recent firing of δ (i.e., whenever Θ is not empty, it includes the value of the most recent evaluation of e on δ )-this is needed to support the stateful semantics of the rep construct.
The operational semantics rules are based on rather standard rules for functional languages, extended so as to be able to evaluate a subexpression e' of e with respect to the value-tree environment Θ' obtained from Θ by extracting the corresponding subtree (when present) in the value-trees in the range of Θ. This process, called alignment, is modelled by the auxiliary function π, defined in Fig. 3 (middle). The function π has two different behaviours (specified by its subscript): πi(θ) extracts the i-th subtree of θ, if it is present; and πℓ,n(θ) extracts the (n+2)-th subtree of θ, if it is present and the root of the (n+1)-th subtree of θ is equal to the local value ℓ.

Fig. 3: Big-step operational semantics for expression evaluation.

Field values, value-trees, and value-tree environments:

φ ::= δ → ℓ (field value)    θ ::= v (θ) (value-tree)    Θ ::= δ → θ (value-tree environment)

Auxiliary functions:

ρ(v (θ)) = v
πi(v (θ1, …, θn)) = θi if 1 ≤ i ≤ n        πi(θ) = • otherwise
πℓ,n(v (θ1, …, θn+2)) = θn+2 if ρ(θn+1) = ℓ        πℓ,n(θ) = • otherwise
For aux ∈ {ρ, πi, πℓ,n}:
  aux(δ → θ) = δ → aux(θ) if aux(θ) ≠ •        aux(δ → θ) = • if aux(θ) = •        aux(Θ, Θ') = aux(Θ), aux(Θ')
args(f) = x if (def f(x) e)        body(f) = e if (def f(x) e)
args((fun (x) e)) = x              body((fun (x) e)) = e

Rules for expression evaluation (judgements δ;Θ ⊢ e ⇓ θ):

[E-LOC]   δ;Θ ⊢ ℓ ⇓ ℓ ()

[E-FLD]   φ' = φ|dom(Θ)∪{δ}   implies   δ;Θ ⊢ φ ⇓ φ' ()

[E-B-APP] δ; πn+1(Θ) ⊢ en+1 ⇓ θn+1    ρ(θn+1) = o    δ; π1(Θ) ⊢ e1 ⇓ θ1  ⋯  δ; πn(Θ) ⊢ en ⇓ θn    v = ε o δ;Θ(ρ(θ1), …, ρ(θn))
          imply   δ;Θ ⊢ (en+1 e1 ⋯ en) ⇓ v (θ1, …, θn+1)

[E-D-APP] δ; πn+1(Θ) ⊢ en+1 ⇓ θn+1    ρ(θn+1) = ℓ    args(ℓ) = x1, …, xn    δ; π1(Θ) ⊢ e1 ⇓ θ1  ⋯  δ; πn(Θ) ⊢ en ⇓ θn    body(ℓ) = e
          δ; πℓ,n(Θ) ⊢ e[x1 := ρ(θ1) … xn := ρ(θn)] ⇓ θn+2    v = ρ(θn+2)
          imply   δ;Θ ⊢ (en+1 e1 ⋯ en) ⇓ v (θ1, …, θn+2)

[E-REP]   ℓ0 = ρ(Θ(δ)) if Θ ≠ ∅, ℓ otherwise    δ; π1(Θ) ⊢ e[x := ℓ0] ⇓ θ1    ℓ1 = ρ(θ1)
          imply   δ;Θ ⊢ (rep x ℓ e) ⇓ ℓ1 (θ1)

[E-NBR]   Θ1 = π1(Θ)    δ;Θ1 ⊢ e ⇓ θ1    φ = ρ(Θ1)[δ → ρ(θ1)]
          imply   δ;Θ ⊢ (nbr e) ⇓ φ (θ1)

[E-THEN]  δ; π1(Θ) ⊢ e ⇓ θ1    ρ(θ1) = true    δ; πtrue,0(Θ) ⊢ e' ⇓ θ2    ℓ = ρ(θ2)
          imply   δ;Θ ⊢ (if e e' e'') ⇓ ℓ (θ1, θ2)

[E-ELSE]  δ; π1(Θ) ⊢ e ⇓ θ1    ρ(θ1) = false    δ; πfalse,0(Θ) ⊢ e'' ⇓ θ2    ℓ = ρ(θ2)
          imply   δ;Θ ⊢ (if e e' e'') ⇓ ℓ (θ1, θ2)
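The value-tree machinery of Fig. 3 is easy to prototype; the Python sketch below is our own illustration (not an artifact of the paper): it implements ρ and the two projections, and rebuilds the value-tree of (+ 1 2) discussed in the next paragraphs:

class VT:
    # a value-tree: a root value and an ordered tuple of subtrees
    def __init__(self, root, *subtrees):
        self.root, self.subtrees = root, subtrees

def rho(t):
    return t.root

def pi(t, i):
    # pi_i: i-th subtree (1-based, as in the paper), or None for "bullet"
    return t.subtrees[i - 1] if i <= len(t.subtrees) else None

def pi_fun(t, l, n):
    # pi_{l,n}: (n+2)-th subtree, provided the (n+1)-th subtree has root l
    ok = len(t.subtrees) >= n + 2 and rho(t.subtrees[n]) == l
    return t.subtrees[n + 1] if ok else None

# value-tree of (+ 1 2): 3 (1 (), 2 (), + ())
theta = VT(3, VT(1), VT(2), VT("+"))
print(rho(theta), rho(pi(theta, 3)))   # 3 +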
Rules [E-LOC] and [E-FLD] model the evaluation of expressions that are either a local value or a field value, respectively. For instance, evaluating the expression 1 produces (by rule [E-LOC]) the value-tree 1 (), while evaluating the expression + produces the value-tree + (). Note that, in order to ensure that domain restriction is obeyed (cf.
Section 2), rule [E-FLD] restricts the domain of the value field φ to the domain of Θ augmented by δ .
Rule [E-B-APP] models the application of built-in functions. It is used to evaluate expressions of the form (e n+1 e 1 • • • e n ) such that the evaluation of e n+1 produces a value-tree θ n+1 whose root ρ(θ n+1 ) is a built-in function o. It produces the value-tree v (θ 1 , . . . , θ n , θ n+1 ), where θ 1 , . . . , θ n are the value-trees produced by the evaluation of the actual parameters e 1 , . . . , e n (n ≥ 0) and v is the value returned by the function.
Rule [E-B-APP] exploits the special auxiliary function ε, whose actual definition is abstracted away. This is such that ε o δ ;Θ (v) computes the result of applying built-in function o to values v in the current environment of the device δ . In particular, we assume that the built-in 0-ary function uid gets evaluated to the current device identifier (i.e., ε uid δ ;Θ () = δ ), and that mathematical operators have their standard meaning, which is independent from δ and Θ (e.g., ε + δ ;Θ (1, 2) = 3). The ε function also encapsulates measurement variables such as nbr-range and interactions with the external world via sensors and actuators. In order to ensure that domain restriction is obeyed, for each built-in function o we assume that:
ε o δ ;Θ (v 1 , • • • , v n ) is defined only if all the field values in v 1 , . . . , v n have domain dom(Θ ) ∪ {δ }; and if ε o δ ;Θ (v 1 , • • • , v n ) returns a field value φ , then dom(φ ) = dom(Θ ) ∪ {δ }.
For instance, evaluating the expression (+ 1 2) produces the value-tree 3 (1 (), 2 (), + ()). The value of the whole expression, 3, has been computed by using rule [E-B-APP] to evaluate the application of the sum operator + (the root of the third subtree of the value-tree) to the values 1 (the root of the first subtree of the value-tree) and 2 (the root of the second subtree of the value-tree). In the following, for sake of readability, we sometimes write the value v as short for the value-tree v (). Following this convention, the value-tree 3 (1 (), 2 (), + ()) is shortened to 3 (1, 2, +).
Rule [E-D-APP] models the application of user-defined or anonymous functions, i.e., it is used to evaluate expressions of the form (e n+1 e 1 • • • e n ) such that the evaluation of e n+1 produces a value-tree θ n+1 whose root ℓ = ρ(θ n+1 ) is a user-defined function name or an anonymous function. It is similar to rule [E-B-APP], however it produces a value-tree which has one more subtree, θ n+2 , which is produced by evaluating the body of the function with respect to the value-tree environment πℓ,n(Θ) containing only the value-trees associated to the evaluation of the body of the same function ℓ.
To illustrate rule [E-REP] (rep construct), as well as computational rounds, we consider program (rep x 0 (+ x 1)) (cf. Section 2). The first firing of a device δ after activation or reset is performed against the empty tree environment. Therefore, according to rule [E-REP], to evaluate (rep x 0 (+ x 1)) means to evaluate the subexpression (+ 0 1), obtained from (+ x 1) by replacing x with 0. This produces the value-tree θ 1 = 1 (1 (0, 1, +)), where root 1 is the overall result as usual, while its sub-tree is the result of evaluating the third argument. Any subsequent firing of the device δ is performed with respect to a tree environment Θ that associates to δ the outcome of the most recent firing of δ. Therefore, evaluating (rep x 0 (+ x 1)) at the second firing means to evaluate the subexpression (+ 1 1), obtained from (+ x 1) by replacing x with 1, which is the root of θ 1 . Hence the results of computation are 1, 2, 3, and so on.
Value-trees also support modelling information exchange through the nbr construct, as of rule [E-NBR]. Consider the program e = (min-hood (nbr (sns-num))), where the 1-ary built-in function min-hood returns the minimum of the values in the range of its field argument, and the 0-ary built-in function sns-num returns the numeric value measured by a sensor. Suppose that the program runs on a network of three fully connected devices δ A , δ B , and δ C where sns-num returns 1 on δ A , 2 on δ B , and 3 on δ C . Considering an initial empty tree-environment ∅ on all devices, we have the following: the evaluation of (sns-num) on δ A yields 1 (sns-num) (by rules [E-LOC] and [E-B-APP], since ε sns-num δ A ;∅ () = 1); the evaluation of (nbr (sns-num)) on δ A yields (δ A → 1) (1 (sns-num)) (by rule [E-NBR]); and the evaluation of e on δ A yields
θ A = 1 ((δ A → 1) (1 (sns-num)), min-hood) (by rule [E-B-APP], since ε min-hood δ A ;∅ ((δ A → 1)) = 1)
. Therefore, after its first firing, device δ A produces the value-tree θ A . Similarly, after their first firing, devices δ B and δ C produce the value-trees
θ B = 2 ((δ B → 2) (2 (sns-num)), min-hood)
θ C = 3 ((δ C → 3) (3 (sns-num)), min-hood)
respectively. Suppose that device δ B is the first device that fires a second time. Then the evaluation of e on δ B is now performed with respect to the value tree environment
Θ B = (δ A → θ A , δ B → θ B , δ C → θ C ) and the evaluation of its subexpressions (nbr (sns-num)) and (sns-num) is performed, respectively, with respect to the following value-tree environments obtained from Θ B by alignment:
Θ'B = π1(ΘB) = (δA → (δA → 1) (1 (sns-num)), δB → ⋯, δC → ⋯)
Θ''B = π1(Θ'B) = (δA → 1 (sns-num), δB → 2 (sns-num), δC → 3 (sns-num))
We have that ε sns-num δB;Θ''B () = 2; the evaluation of (nbr (sns-num)) on δB with respect to Θ'B yields φ (2 (sns-num)) where φ = (δA → 1, δB → 2, δC → 3); and ε min-hood δB;ΘB (φ) = 1. Therefore the evaluation of e on δB produces the value-tree 1 (φ (2 (sns-num)), min-hood). Namely, the computation at device δB after the first round yields 1, which is the minimum of sns-num across neighbours, and similarly for δA and δC. We now present an example illustrating first-class functions. Consider the program ((pick-hood (nbr (sns-fun)))), where the 1-ary built-in function pick-hood returns at random a value in the range of its field argument, and the 0-ary built-in function sns-fun returns a 0-ary function returning a value of type num. Suppose that the program runs again on a network of three fully connected devices δA, δB, and δC where sns-fun returns ℓ0 = (fun () 0) on δA and δB, and returns ℓ1 = (fun () e) on δC, where e = (min-hood (nbr (sns-num))) is the program illustrated in the previous example. Assume that sns-num returns 1 on δA, 2 on δB, and 3 on δC. Then after its first firing, device δA produces the value-tree
θ'A = 0 (ℓ0 ((δA → ℓ0) (ℓ0 (sns-fun)), pick-hood), 0)

where the root of the first subtree of θ'A is the anonymous function ℓ0 (defined above), and the second subtree of θ'A, 0, has been produced by the evaluation of the body 0 of ℓ0. After their first firing, devices δB and δC produce the value-trees

θ'B = 0 (ℓ0 ((δB → ℓ0) (ℓ0 (sns-fun)), pick-hood), 0)
θ'C = 3 (ℓ1 ((δC → ℓ1) (ℓ1 (sns-fun)), pick-hood), θC)

respectively, where θC is the value-tree for e given in the previous example.

Suppose that device δA is the first device that fires a second time. The computation is performed with respect to the value-tree environment

Θ'A = (δA → θ'A, δB → θ'B, δC → θ'C)

and produces the value-tree 1 (ℓ1 (φ (ℓ1 (sns-fun)), pick-hood), θ''A), where

φ = (δA → ℓ1, δC → ℓ1) and θ''A = 1 ((δA → 1, δC → 3) (1 (sns-num)), min-hood),

since, according to rule [E-D-APP], the evaluation of the body e of ℓ1 (which produces the value-tree θ''A) is performed with respect to the value-tree environment πℓ1,0(Θ'A) = (δC → θ'C). Namely, device δA executed the anonymous function ℓ1 received from δC, and this was able to correctly align with the execution of ℓ1 at δC, gathering the values perceived by sns-num: 1 at δA and 3 at δC.
Static Semantics (Type-Inference System) We have developed a variant of the Hindley-Milner type system [START_REF] Damas | Principal type-schemes for functional programs[END_REF] for the HFC calculus. This type system has two kinds of types, local types (the types for local values) and field types (the types for field values), and aims to guarantee the following two properties:
-Type Preservation: If a well-typed expression e has type T and e evaluates to a value-tree θ, then ρ(θ) also has type T.
-Domain Alignment: The domain of every field value arising during the evaluation of a well-typed expression on a device δ consists of δ and of the aligned neighbours.
Alignment is key to guarantee that the semantics correctly relates the behaviour of if, nbr, rep and function application: namely, two fields with different domains are never allowed to be combined. Besides performing standard checks (i.e., in a function application expression (e n+1 e 1 • • • e n ) the arguments e 1 , . . . e n have the expected type; in an if-expression (if e 0 e 1 e 2 ) the condition e 0 has type bool and the branches e 1 and e 2 have the same type; etc.) the type system performs additional checks in order to ensure domain alignment. In particular, the type rules check that:
-In an anonymous function (fun (x) e) the free variables y of e that are not in x have local type. This prevents a device δ from creating a closure e' = (fun (x) e)[y := φ] containing field values φ (whose domain is by construction equal to the subset of the aligned neighbours of δ). The closure e' may lead to a domain alignment error since it may be shifted (via the nbr construct) to another device δ' that may use it (i.e., apply e' to some arguments); and the evaluation of the body of e' may involve use of a field value φ' in φ such that the set of aligned neighbours of δ' is different from the domain of φ'. -In a rep-expression (rep x w e) it holds that x, w and e have (the same) local type.
This prevents a device δ from storing in x a field value φ that may be reused in the next computation round of δ, when the set of aligned neighbours may be different from the domain of φ. -In a nbr-expression (nbr e) the expression e has local type. This prevents the attempt to create a "field of fields" (i.e., a field that maps device identifiers to field values), which is pragmatically often overly costly to maintain and communicate. -In an if-expression (if e 0 e 1 e 2 ) the branches e 1 and e 2 have (the same) local type.
This prevents the if-expression from evaluating to a field value whose domain is different from the subset of the aligned neighbours of δ .
We now illustrate the application of first-class functions using a pervasive computing example. In this scenario, people wandering a large environment (like an outdoor festival, an airport, or a museum) each carry a personal device with short-range pointto-point ad-hoc capabilities (e.g. a smartphone sending messages to others nearby via Bluetooth or Wi-Fi). All devices run a minimal "virtual machine" that allows runtime injection of new programs: any device can initiate a new distributed process (in the form of a 0-ary anonymous function), which the virtual machine spreads to all other devices within a specified range (e.g., 30 meters). For example, a person might inject a process that estimates crowd density by counting the number of nearby devices or a process that helps people to rendezvous with their friends, with such processes likely implemented via various self-organisation mechanisms. The virtual machine then executes these using the first-class function semantics above, providing predictable deployment and execution of an open class of runtime-determined processes.
Virtual Machine Implementation The complete code for our example is listed in Figure 4, with syntax coloring to increase readability: grey for comments, red for field calculus keywords, blue for user-defined functions, and green for built-in operators. In this code, we use the following naming conventions for built-ins: functions sns-* embed sensors that return a value perceived from the environment (e.g., sns-injection-point returns a Boolean indicating whether a device's user wants to inject a function); functions *-hood yield a local value obtained by aggregating over the field value φ in input (e.g., sum-hood sums all values in each neighbourhood); functions *-hood+ behave the same but exclude the value associated with the current device; and built-in functions pair, fst, and snd respectively create a pair of locals and access a pair's first and second component. Additionally, given a built-in o that takes n ≥ 1 locals and returns a local, the built-ins o[*,...,*] are variants of o where one or more inputs are fields (as indicated in the bracket, l for local or f for field), and the return value is a field, obtained by applying operator o in a point-wise manner. For instance, since = compares two locals returning a Boolean, =[f,f] is the operator taking two field inputs and returning a Boolean field where each element is the comparison of the corresponding elements in the inputs, and similarly =[f,l] takes a field and a local and returns a Boolean field where each element is the comparison of the corresponding element of the field in input with the local.

The first two functions in Figure 4 implement frequently used self-organisation mechanisms. Function distance-to, also known as gradient [START_REF] Clement | Self-assembly and self-repairing topologies[END_REF][START_REF] Lin | The gradient model load balancing method[END_REF], computes a field of minimal distances from each device to the nearest "source" device (those mapping to true in the Boolean input field). This is computed by repeated application of the triangle inequality (via rep): at every round, source devices take distance zero, while all others update their distance estimates d to the minimum distance estimate through their neighbours (min-hood+ of each neighbour's distance estimate (nbr d) plus the distance to that neighbour nbr-range); source and non-source are discriminated by mux, a built-in "multiplexer" that operates like an if but, differently from it, always evaluates both branches on every device. Repeated application of this update procedure self-stabilises into the desired field of distances, regardless of any transient perturbations or faults [START_REF] Kutten | Time-adaptive self stabilization[END_REF]. The second self-organisation mechanism, gradcast, is a directed broadcast, achieved by a computation identical to that of distance-to, except that the values are pairs (note that pair[f,f] produces a field of pairs, not a pair of fields), with the second element set to the value of v at the source: min-hood operates on pairs by applying lexicographic ordering, so the second value of the pair is automatically carried along shortest paths from the source. The result is a field of pairs of distance and most recent value of v at the nearest source, of which only the value is returned.
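The relaxation at the heart of distance-to can be pictured on a plain graph; the Python sketch below is our own illustration (not the Protelis code of Figure 4), with a hypothetical three-node network and edge weights:

import math

# hypothetical network: undirected weighted edges and one source
edges = {("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 4.0}
nodes = {"a", "b", "c"}
sources = {"a"}

def neighbours(n):
    for (u, v), w in edges.items():
        if u == n: yield v, w
        if v == n: yield u, w

# every round, sources take 0 and the others apply the triangle inequality,
# mirroring the rep / min-hood+ / nbr-range update of distance-to
dist = {n: math.inf for n in nodes}
for _ in range(len(nodes)):
    for n in nodes:
        if n in sources:
            dist[n] = 0.0
        else:
            dist[n] = min((dist[m] + w for m, w in neighbours(n)),
                          default=math.inf)

print(dist)   # self-stabilises to a = 0.0, b = 1.0, c = 3.0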
The latter two functions in Figure 4 use these self-organisation methods to implement our simple virtual machine. Code mobility is implemented by function deploy, which spreads a 0-ary function g via gradcast, keeping it bounded within distance range from sources, and holding 0-ary function no-op elsewhere. The corresponding field of functions is then executed (note the double parenthesis). The virtual-machine then simply calls deploy, linking its arguments to sensors configuring deployment range and detecting who wants to inject which functions (and using (fun () 0) as no-op function).
In essence, this virtual machine implements a code-injection model much like those used in a number of other pervasive computing approaches (e.g., [START_REF] Mamei | Programming pervasive and mobile computing applications: The tota approach[END_REF][START_REF] Gelernter | Generative communication in linda[END_REF][START_REF] Butera | Programming a Paintable Computer[END_REF])-though of course it has much more limited features, since it is only an illustrative example. With these previous approaches, however, code shares lexical scope and cannot have its network domain externally controlled. Thus, injected code may spread through the network unpredictably and may interact unpredictably with other injected code that it encounters. The extended field calculus semantics that we have presented, however, ensures that injected code moves only within the range specified to the virtual machine and remains lexically isolated from different injected code, so that no variable can be unexpectedly affected by interactions with neighbours.
Simulated Example Application
We further illustrate the application of first-class functions with an example in a simulated scenario. Consider a museum, whose docents monitor their efficacy in part by tracking the number of patrons nearby while they are working. To monitor the number of nearby patrons, each docent's device injects the following anonymous function (of type: () → num):
(fun () (low-pass 0.5 (converge-sum (distance-to (sns-injection-point))
(sns-patron))))
This counts patrons using the function converge-sum defined in Figure 4(bottom), a simple version of another standard self-organisation mechanism [START_REF] Beal | Building blocks for aggregate programming of self-organising applications[END_REF] which operates like an inverse broadcast, summing the values sensed by sns-patron (1 for a patron, 0 for a docent) down the distance gradient back to its source, in this case the docent at the injection point. In particular, each device's local value is summed with those identifying it as their parent (their closest neighbour to the source, breaking ties with device unique identifiers from built-in function uid), resulting in a relatively balanced spanning tree of summations with the source at its root. This very simple version of summation is somewhat noisy on a moving network of devices, so its output is passed through a simple low-pass filter, the function low-pass, also defined in Figure 4(bottom), in order to smooth its output and improve the quality of estimate. Figure 5a shows a simulation of a docent and 250 patrons in a large 100×30 meter museum gallery. Of the patrons, 100 are a large group of school-children moving together past the stationary docent from one side of the gallery to the other, while the rest are wandering randomly. In this simulation, people move at an average 1 m/s, the docent and all patrons carry personal devices running the virtual machine, executing asynchronously at 10 Hz, and communicating via low-power Bluetooth to a range of 10 meters. The simulation was implemented using the ALCHEMIST [START_REF] Pianini | Chemical-oriented simulation of computational systems with Alchemist[END_REF] simulation framework and the Protelis [START_REF] Pianini | Practical aggregate programming with PROTELIS[END_REF] incarnation of field calculus, updated to the extended version of the calculus presented in this paper.
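A rough rendition of this counting pipeline in Python (our own illustration; the spanning tree, patron values and smoothing factor 0.5 are made up) may help fix the idea:

# converge-sum adds each device's local value into its parent towards the
# source; low-pass smooths the noisy per-round estimate
parent = {"b": "a", "c": "a", "d": "b"}     # hypothetical tree towards source "a"
local = {"a": 0, "b": 1, "c": 1, "d": 1}    # 1 per patron, 0 for the docent

def converge_sum(parent, local):
    total = dict(local)
    def depth(n):
        return 0 if n not in parent else 1 + depth(parent[n])
    # fold leaves into their parents first
    for n in sorted(parent, key=depth, reverse=True):
        total[parent[n]] += total[n]
    return total["a"]

def low_pass(alpha, prev, sample):
    return prev + alpha * (sample - prev)

est = 0.0
for _ in range(5):
    est = low_pass(0.5, est, converge_sum(parent, local))
print(est)   # converges towards the true count of 3 patrons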
In this simulation, at time 10 seconds, the docent injects the patron-counting function with a range of 25 meters, and at time 70 seconds removes it. Figure 5a shows two snapshots of the simulation, at times 11 (top) and 35 (bottom) seconds, while Figure 5b compares the estimated value returned by the injected process with the true value. Note that upon injection, the process rapidly disseminates and begins producing good estimates of the number of nearby patrons, then cleanly terminates upon removal.
Conclusion, Related and Future Work
Conceiving emerging distributed systems in terms of computations involving aggregates of devices, and hence adopting higher-level abstractions for system development, is a thread that has recently received a good deal of attention. A wide range of aggregate programming approaches have been proposed, including Proto [START_REF] Beal | Infrastructure for engineered emergence in sensor/actuator networks[END_REF], TOTA [START_REF] Mamei | Programming pervasive and mobile computing applications: The tota approach[END_REF], the (bio)chemical tuple-space model [START_REF] Viroli | Spatial coordination of pervasive services through chemical-inspired tuple spaces[END_REF], Regiment [START_REF] Newton | Region streams: Functional macroprogramming for sensor networks[END_REF], the σ τ-Linda model [START_REF] Viroli | Linda in space-time: an adaptive coordination model for mobile ad-hoc environments[END_REF], Paintable Computing [START_REF] Butera | Programming a Paintable Computer[END_REF], and many others included in the extensive survey of aggregate programming languages given in [START_REF] Beal | Organizing the aggregate: Languages for spatial computing[END_REF]. Those that best support self-organisation approaches to robust and environment-independent computations have generally lacked well-engineered mechanisms to support openness and code mobility (injection, update, etc.). Our contribution has been to develop a core calculus, building on the work presented in [START_REF] Viroli | A calculus of computational fields[END_REF], that smoothly combines for the first time self-organisation and code mobility, by means of the abstraction of "distributed function field". This combination of first-class functions with the domain-restriction mechanisms of field calculus allows the predictable and safe composition of distributed self-organisation mechanisms at runtime, thereby enabling robust operation of open pervasive systems. Furthermore, the simplicity of the calculus enables it to easily serve as both an analytical framework and a programming framework, and we have already incorporated this into Protelis [START_REF] Pianini | Practical aggregate programming with PROTELIS[END_REF], thereby allowing these mechanisms to be deployed both in simulation and in actual distributed systems.
Future plans include consolidation of this work, by extending the calculus and its conceptual framework, to support an analytical methodology and a practical toolchain for system development, as outlined in [START_REF] Beal | Building blocks for aggregate programming of self-organising applications[END_REF]. First, we aim to apply our approach to support various application needs for dynamic management of distributed processes [START_REF] Beal | Dynamically defined processes for spatial computers[END_REF], which may also impact the methods of alignment for anonymous functions. Second, we plan to isolate fragments of the calculus that satisfy behavioural properties such as self-stabilisation, quasi-stabilisation to a dynamically evolving field, or density independence, following the approach of [START_REF] Viroli | A calculus of self-stabilising computational fields[END_REF]. Finally, these foundations can be applied in developing APIs enabling the simple construction of complex distributed applications, building on the work in [START_REF] Beal | Building blocks for aggregate programming of self-organising applications[END_REF] to define a layered library of self-organisation patterns, and applying these APIs to support a wide range of practical distributed applications.
Fig. 2: Syntax of HFC (differences from field calculus are highlighted in grey).

Fig. 4: Virtual machine code (top) and application-specific code (bottom).

Fig. 5: (a) Two snapshots of museum simulation: patrons (grey) are counted (black) within 25 meters of the docent (green). (b) Estimated vs. True Count: estimated number of nearby patrons (grey) vs. actual number (black) in the simulation.
This work has been partially supported by HyVar (www.hyvar-project.eu, this project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644298 -Damiani), by EU FP7 project SAPERE (www.sapere-project.eu, under contract No 256873 -Viroli), by ICT COST Action IC1402 ARVI (www.cost-arvi.eu -Damiani), by ICT COST Action IC1201 BETTY (www.behavioural-types.eu -Damiani), by the Italian PRIN 2010/2011 project CINA (sysma.imtlucca.it/cina -Damiani & Viroli), by Ateneo/CSP project SALT (salt.di.unito.it -Damiani), and by the United States Air Force and the Defense Advanced Research Projects Agency under Contract No. FA8750-10-C-0242 (Beal). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views, opinions, and/or findings
01767327 | en | [ "info", "info.info-ni" ] | 2024/03/05 22:32:15 | 2015 | https://inria.hal.science/hal-01767327/file/978-3-319-19195-9_1_Chapter.pdf | Luca Padovani
Types for Deadlock-Free Higher-Order Programs
Type systems for communicating processes are typically studied using abstract models -e.g., process algebras -that distill the communication behavior of programs but overlook their structure in terms of functions, methods, objects, modules. It is not always obvious how to apply these type systems to structured programming languages. In this work we port a recently developed type system that ensures deadlock freedom in the π-calculus to a higher-order language.
Introduction
In this article we develop a type system that guarantees well-typed programs that communicate over channels to be free from deadlocks. Type systems ensuring this property already exist [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF][START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF], but they all use the π-calculus as the reference language. This choice overlooks some aspects of concrete programming languages, like the fact that programs are structured into compartmentalized blocks (e.g., functions) within which only the local structure of the program (the body of a function) is visible to the type system, and little if anything is know about the exterior of the block (the callers of the function). The structure of programs may hinder some kinds of analysis: for example, the type systems in [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF][START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] enforce an ordering of communication events and to do so they take advantage of the nature of π-calculus processes, where programs are flat sequences of communication actions. How do we reason on such ordering when the execution order is dictated by the reduction strategy of the language rather than by the syntax of programs, or when events occur within a function, and nothing is known about the events that are supposed to occur after the function terminates? We answer these questions by porting the type system in [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] to a higher-order functional language.
To illustrate the key ideas of the approach, let us consider the program send a (recv b) | send b (recv a) (1.1) consisting of two parallel threads. The thread on the left is trying to send the message received from channel b on channel a; the thread on the right is trying to do the opposite. The communications on a and b are mutually dependent, and the program is a deadlock. The basic idea used in [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] and derived from [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF] for detecting deadlocks is to assign each channel a number -which we call level -and to verify that channels are used in order according to their levels. In (1.1) this mechanism requires b to have smaller level than a in the leftmost thread, and a to have a smaller level than b in the rightmost thread. No level assignment can simultaneously satisfy both constraints. In order to perform these checks with a type system, the first step is to attach levels to channel types. We therefore assign the types ![int] m and ?[int] n respectively to a and b in the leftmost thread of (1.1), and ?[int] m and ![int] n to the same channels in the rightmost thread of (1.1). Crucially, distinct occurrences of the same channel have types with opposite polarities (input ? and output !) and equal level. We can also think of the assignments send : ∀ı.![int] ı → int → unit and recv : ∀ı.?[int] ı → int for the communication primitives, where we allow polymorphism on channel levels. In this case, the application send a (recv b) consists of two subexpressions, the partial application send a having type int → unit and its argument recv b having type int. Neither of these types hints at the I/O operations performed in these expressions, let alone at the levels of the channels involved. To recover this information we pair types with effects [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF]: the effect of an expression is an abstract description of the operations performed during its evaluation. In our case, we take as effect the level of channels used for I/O operations, or ⊥ in the case of pure expressions that perform no I/O. So, the judgment
b : ?[int]^n ⊢ recv b : int & n
states that recv b is an expression of type int whose evaluation performs an I/O operation on a channel with level n. As usual, function types are decorated with a latent effect saying what happens when the function is applied to its argument. So,
a : ![int]^m ⊢ send a : int →^m unit & ⊥
states that send a is a function that, applied to an argument of type int, produces a result of type unit and, in doing so, performs an I/O operation on a channel with level m. By itself, send a is a pure expression whose evaluation performs no I/O operations, hence the effect ⊥. Effects help us detecting dangerous expressions: in a call-by-value language an application e 1 e 2 evaluates e 1 first, then e 2 , and finally the body of the function resulting from e 1 . Therefore, the channels used in e 1 must have smaller level than those occurring in e 2 and the channels used in e 2 must have smaller level than those occurring in the body of e 1 . In the specific case of send a (recv b) we have ⊥ < n for the first condition, which is trivially satisfied, and n < m for the second one. Since the same reasoning on send b (recv a) also requires the symmetric condition (m < n), we detect that the parallel composition of the two threads in (1.1) is ill typed, as desired.
It turns out that the information given by latent effects in function types is not sufficient for spotting some deadlocks. To see why, consider the function
f def = λ x.(send a x; send b x)
which sends its argument x on both a and b and where ; denotes sequential composition. The level of a (say m) should be smaller than the level of b (say n), for a is used before b (we assume that communication is synchronous and that send is a potentially blocking operation). The question is, what is the latent effect that decorates the type of f , of the form int → h unit? Consider the two obvious possibilities: if we take h = m, then
recv a | f 3; recv b (1.2)
is well typed because the effect m of f 3 is smaller than the level of b in recv b, which agrees with the fact that f 3 is evaluated before recv b; if we take h = n, then
recv a; f 3 | recv b (1.3)
is well typed for similar reasons. This is unfortunate because both (1.3) and (1.2) reduce to a deadlock. To flag both of them as ill typed, we must refine the type of f to int →^{m,n} unit where we distinguish the smallest level of the channels that occur in the body of f (that is m) from the greatest level of the channels that are used by f when f is applied to an argument (that is n). The first annotation gives information on the channels in the function's closure, while the second annotation is the function's latent effect, as before. So (1.2) is ill typed because the effect of f 3 is the same as the level of b in recv b and (1.3) is ill typed because the effect of recv a is the same as the level of f in f 3.
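The level-based reasoning of this section can be mechanised as constraint solving; the sketch below is our own Python illustration (not part of the type system), where a pair (x, y) records the requirement that the level of x be smaller than the level of y:

def satisfiable(constraints):
    # constraints: pairs (x, y) meaning "level of x < level of y";
    # integer levels exist iff the precedence graph is acyclic
    graph = {}
    for x, y in constraints:
        graph.setdefault(x, set()).add(y)
        graph.setdefault(y, set())
    visited, on_stack = set(), set()
    def acyclic(n):
        if n in on_stack:
            return False
        if n in visited:
            return True
        visited.add(n)
        on_stack.add(n)
        ok = all(acyclic(m) for m in graph[n])
        on_stack.discard(n)
        return ok
    return all(acyclic(n) for n in graph)

# (1.1): the left thread needs b < a, the right one a < b
print(satisfiable([("b", "a"), ("a", "b")]))   # False: no level assignment
print(satisfiable([("b", "a")]))               # True: one thread alone is fine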
In the following, we define a core multithreaded functional language with communication primitives (Section 2), we present a basic type and effect system, extend it to address recursive programs, and state its properties (Section 3). Finally, we briefly discuss closely related work and a few extensions (Section 4). Proofs and additional material can be found in the long version of the paper, on the first author's home page.
Language syntax and semantics
In defining our language, we assume a synchronous communication model based on linear channels. This assumption limits the range of systems that we can model. However, asynchronous and structured communications can be encoded using linear channels: this has been shown to be the case for binary sessions [START_REF] Dardha | Session types revisited[END_REF] and for multiparty sessions to a large extent [10, technical report].
We use a countable set of variables x, y, . . . , a countable set of channels a, b, . . . , and a set of constants k. Names u, . . . are either variables or channels. We consider a language of expressions and processes as defined below:

e ::= k | u | λx.e | e e        (expressions)
P ::= ⟨e⟩ | P | Q | (νa)P       (processes)

We write _ for unused/fresh variables. Constants include the unitary value (), the integer numbers m, n, . . . , as well as the primitives fix, fork, new, send, recv whose semantics will be explained shortly. Processes are either threads ⟨e⟩, or the restriction (νa)P of a channel a with scope P, or the parallel composition P | Q of processes. The notions of free and bound names are as expected, given that the only binders are λ's and ν's. We identify terms modulo renaming of bound names and we write fn(e) (respectively, fn(P)) for the set of names occurring free in e (respectively, in P).
The reduction semantics of the language is given by two relations, one for expressions, another for processes. We adopt a call-by-value reduction strategy, for which we need to define reduction contexts E , . . . and values v, w, . . . respectively as:
E ::= [ ] | E e | v E        v, w ::= k | a | λx.e | send v
The reduction relation -→ for expressions is defined by standard rules
(λx.e)v -→ e{v/x}        fix λx.e -→ e{fix λx.e/x}
and closed under reduction contexts. As usual, e{e'/x} denotes the capture-avoiding substitution of e' for the free occurrences of x in e.
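These two rules are straightforward to animate; the following Python sketch is our own toy evaluator (the term representation is illustrative, and capture-avoidance is ignored for brevity):

def subst(e, x, v):
    # naive substitution e{v/x} over tagged tuples
    tag = e[0]
    if tag == "var":  return v if e[1] == x else e
    if tag == "lam":  return e if e[1] == x else ("lam", e[1], subst(e[2], x, v))
    if tag == "app":  return ("app", subst(e[1], x, v), subst(e[2], x, v))
    if tag == "fix":  return ("fix", subst(e[1], x, v))
    return e                                     # constants

def is_value(e):
    return e[0] in ("lam", "const")

def step(e):
    if e[0] == "app":
        f, a = e[1], e[2]
        if not is_value(f): return ("app", step(f), a)   # context E e
        if not is_value(a): return ("app", f, step(a))   # context v E
        return subst(f[2], f[1], a)                      # (lam x.e) v -> e{v/x}
    if e[0] == "fix":                                    # fix lam x.e -> e{fix lam x.e/x}
        return subst(e[1][2], e[1][1], ("fix", e[1]))
    raise ValueError("stuck")

ident = ("lam", "x", ("var", "x"))
print(step(("app", ident, ("const", 42))))   # ('const', 42)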
Table 1. Reduction semantics of expressions and processes.
⟨E[send a v]⟩ | ⟨E'[recv a]⟩ -a→ ⟨E[()]⟩ | ⟨E'[v]⟩
⟨E[fork v]⟩ -τ→ ⟨E[()]⟩ | ⟨v ()⟩
⟨E[new ()]⟩ -τ→ (νa)⟨E[a]⟩   if a ∉ fn(E)
e -→ e'  implies  ⟨e⟩ -τ→ ⟨e'⟩
P -ℓ→ P'  implies  P | Q -ℓ→ P' | Q
P -ℓ→ Q and ℓ ≠ a  imply  (νa)P -ℓ→ (νa)Q
P -a→ Q  implies  (νa)P -τ→ Q
P ≡ P', P' -ℓ→ Q' and Q' ≡ Q  imply  P -ℓ→ Q
The reduction relation of processes (Table 1) has labels ℓ, . . . that are either a channel name a, signalling that a communication has occurred on a, or the special symbol τ denoting any other reduction. There are four base reductions for processes: a communication occurs between two threads when one is willing to send a message v on a channel a and the other is waiting for a message from the same channel; a thread that contains a subexpression fork v spawns a new thread that evaluates v(); a thread that contains a subexpression new() creates a new channel; the reduction of an expression causes a corresponding τ-labeled reduction of the thread in which it occurs. Reduction for processes is then closed under parallel compositions, restrictions, and structural congruence. The restriction of a disappears as soon as a communication on a occurs: in our model channels are linear and can be used for one communication only; structured forms of communication can be encoded on top of this simple model (see Example 2 and [5]). Structural congruence is defined by the standard rules rearranging parallel compositions and channel restrictions, where ⟨()⟩ plays the role of the inert process.
We conclude this section with two programs written using a slightly richer language equipped with let bindings, conditionals, and a few additional operators. All these constructs either have well-known encodings or can be easily accommodated.

Example 1 (parallel Fibonacci). A function computing Fibonacci numbers in parallel can be defined along the following lines:

let fibo = fix λfibo.λn.λc.
  if n < 2 then send c n
  else let a = new() in let b = new() in
    (fork λ_.fibo (n-1) a); (fork λ_.fibo (n-2) b);
    send c (recv a + recv b)

The fresh channels a and b are used to collect the results from the recursive, parallel invocations of fibo. Note that expressions are intertwined with I/O operations. It is relevant to ask whether this version of fibo is deadlock free, namely if it is able to reduce until a result is computed without blocking indefinitely on an I/O operation.
Example 2 (signal pipe). In this example we implement a function pipe that forwards signals received from an input stream x to an output stream y:

1  let cont = λx.let c = new() in (fork λ_.send x c); c
2  in let pipe = fix λpipe.λx.λy.pipe (recv x) (cont y)
Note that this pipe is only capable of forwarding handshaking signals. A more interesting pipe transmitting actual data can be realized by considering data types such as records and sums [START_REF] Dardha | Session types revisited[END_REF]. The simplified realization we consider here suffices to illustrate a relevant family of recursive functions that interleave actions on different channels.
Since linear channels are consumed after communication, each signal includes a continuation channel on which the subsequent signals in the stream will be sent/received. In particular, cont x sends a fresh continuation c on x and returns c, so that c can be used for subsequent communications, while pipe x y sends a fresh continuation on y after it has received a continuation from x, and then repeats this behavior on the continuations. The program below connects two pipes, along the lines of:

3  in let a = new() in let b = new() in
4  (fork λ_.pipe a b); pipe b (cont a)

Even if the two pipes realize a cyclic network, we will see in Section 3 that this program is well typed and therefore deadlock free. Forgetting cont on line 4 or not forking the send on line 1, however, produces a deadlock.
Type and effect system
We present the features of the type system gradually, in three steps: we start with a monomorphic system (Section 3.1), then we introduce level polymorphism required by Examples 1 and 2 (Section 3.2), and finally recursive types required by Example 2 (Section 3.3). We end the section studying the properties of the type system (Section 3.4).
Core types
Let L def= Z ∪ {⊥, ⊤} be the set of channel levels ordered in the obvious way (⊥ < n < ⊤ for every n ∈ Z); we use ρ, σ, … to range over L and we write ρ ⊓ σ (respectively, ρ ⊔ σ) for the minimum (respectively, the maximum) of ρ and σ. Polarities p, q, … are non-empty subsets of {?, !}; we abbreviate {?} and {!} with ? and ! respectively, and {?, !} with #. Types t, s, … are defined by

t, s ::= B | p[t]^n | t →^{ρ,σ} s

where basic types B, … include unit and int. The type p[t]^n denotes a channel with polarity p and level n. The polarity describes the operations allowed on the channel: ? means input, ! means output, and # means both input and output. Channels are linear resources: they can be used once according to each element in their polarity. The type t →^{ρ,σ} s denotes a function with domain t and range s. The function has level ρ (its closure contains channels with level ρ or greater) and, when applied, it uses channels with level σ or smaller. If ρ = ⊤, the function has no channels in its closure; if σ = ⊥, the function uses no channels when applied. We write → as an abbreviation for →^{⊤,⊥}, so → denotes pure functions not containing and not using any channel.
Recall from Section 1 that levels are meant to impose an order on the use of channels: roughly, the lower the level of a channel, the sooner the channel must be used. We extend the notion of level from channel types to arbitrary types: basic types have level ⊤ because there is no need to use them as far as deadlock freedom is concerned; the level of functions is written in their type. Formally, the level of t, written |t|, is defined as:
|B| def= ⊤        |p[t]^n| def= n        |t →^{ρ,σ} s| def= ρ        (3.1)
Levels can be used to distinguish linear types, denoting values (such as channels) that must be used to guarantee deadlock freedom, from unlimited types, denoting values that have no effect on deadlock freedom and may be disregarded. We say that t is linear if |t| ∈ Z; we say that t is unlimited, written un(t), if |t| = ⊤.
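As a concrete reading of definition (3.1), the sketch below is our own Python illustration (Python's ±infinity stand in for the levels ⊤ and ⊥); it computes |t| and the un(t) predicate:

TOP, BOTTOM = float("inf"), float("-inf")    # stand-ins for the top/bottom levels

class Basic:                                  # B
    pass

class Chan:                                   # p[t]^n
    def __init__(self, polarity, payload, level):
        self.polarity, self.payload, self.level = polarity, payload, level

class Fun:                                    # t ->^{rho,sigma} s
    def __init__(self, dom, cod, level, latent):
        self.dom, self.cod, self.level, self.latent = dom, cod, level, latent

def level(t):
    # |B| = TOP, |p[t]^n| = n, |t ->^{rho,sigma} s| = rho
    if isinstance(t, Basic):
        return TOP
    return t.level

def unlimited(t):
    return level(t) == TOP

print(level(Chan({"!"}, Basic(), 3)), unlimited(Basic()))   # 3 True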
Below are the type schemes of the constants that we consider. Some constants have many types (constraints are on the right); we write types(k) for the set of types of k.
() : unit        n : int        fix : (t → t) → t
fork : (unit →^{ρ,σ} unit) → unit
new : unit → #[t]^n              (n < |t|)
recv : ?[t]^n →^{⊤,n} t          (n < |t|)
send : ![t]^n → t →^{n,n} unit   (n < |t|)
The type of (), of the numbers, and of fix are ordinary. The primitive new creates a fresh channel with the full set # of polarities and arbitrary level n. The primitive recv takes a channel of type ?[t]^n, blocks until a message is received, and returns the message. The primitive itself contains no free channels in its closure (hence the level ⊤) because the only channel it manipulates is its argument. The latent effect is the level of the channel, as expected. The primitive send takes a channel of type ![t]^n, a message of type t, and sends the message on the channel. Note that the partial application send a is a function whose level and latent effect are both the level of a. Note also that in new, recv, and send the level of the message must be greater than the level of the channel: since levels are used to enforce an order on the use of channels, this condition follows from the observation that a message cannot be used until after it has been received, namely after the channel on which it travels has been used. Finally, fork accepts a thunk with arbitrary level ρ and latent effect σ and spawns the thunk into an independent thread (see Table 1). Note that fork is a pure function with no latent effect, regardless of the level and latent effect of the thunk. This phenomenon is called effect masking [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF], whereby the effect of evaluating an expression becomes unobservable: in our case, fork discharges effects because the thunk runs in parallel with the code executing the fork.
We now turn to the typing rules. A type environment Γ is a finite map u1 : t1, …, un : tn from names to types. We write ∅ for the empty type environment, dom(Γ) for the domain of Γ, and Γ(u) for the type associated with u in Γ; we write Γ1, Γ2 for the union of Γ1 and Γ2 when dom(Γ1) ∩ dom(Γ2) = ∅. We also need a more flexible way of combining type environments. In particular, we make sure that every channel is used linearly by distributing different polarities of a channel to different parts of the program. To this aim, following [START_REF] Kobayashi | Linearity and the pi-calculus[END_REF], we define a partial combination operator + between types:
t + t def= t  if un(t)        p[t]^n + q[t]^n def= (p ∪ q)[t]^n  if p ∩ q = ∅        (3.2)
that we extend to type environments, thus:
Γ + Γ def = Γ , Γ if dom(Γ ) ∩ dom(Γ ) = / 0 (Γ , u : t) + (Γ , u : s) def = (Γ + Γ ), u : t + s (3.3)
For example, we have
(x : int, a : ![int] n ) + (a : ?[int] n ) = x : int, a : #[int] n
, so we might have some part of the program that (possibly) uses a variable x of type int along with channel a for sending an integer and another part of the program that uses the same channel a but this time for receiving an integer. The first part of the program would be typed in the environment x : int, a : ![int] n and the second one in the environment a : ?[int] n . Overall, the two parts would be typed in the environment x : int, a : #[int] n indicating that a is used for both sending and receiving an integer.
We Table 2. Core typing rules for expressions and processes.
Typing of expressions
[T-NAME] Γ , u : t u : t & ⊥ un(Γ ) Γ 1 + Γ 2 P | Q [T-NEW] Γ , a : #[t] n P Γ (νa)P
We are now ready to discuss the core typing rules, shown in Table 2. Judgments of the form Γ e : t & ρ denote that e is well typed in Γ , it has type t and effect ρ; judgments of the form Γ P simply denote that P is well typed in Γ .
Axioms [T-NAME] and [T-CONST] are unremarkable: as in all substructural type systems the unused part of the type environment must be unlimited. Names and constants have no effect (⊥); they are evaluated expressions that do not use (but may contain) channels.
In rule [T-FUN], the effect ρ caused by evaluating the body of the function becomes the latent effect in the arrow type of the function and the function itself has no effect. The level of the function is determined by that of the environment Γ in which the function is typed. Intuitively, the names in Γ are stored in the closure of the function; if any of these names is a channel, then we must be sure that the function is eventually used (i.e., applied) to guarantee deadlock freedom. In fact, |Γ | gives a slightly more precise information, since it records the smallest level of all channels that occur in the body of the function. We have seen in Section 1 why this information is useful. A few examples:
the identity function λ x.x has type int → ,⊥ int in any unlimited environment; the function λ _.a has type unit→ n,⊥ ![int] n in the environment a : ![int] n ; it contains channel a with level n in its closure (whence the level n in the arrow), but it does not use a for input/output (whence the latent effect ⊥); it is nonetheless well typed because a, which is a linear value, is returned as result; the function λ x.send x 3 has type ![int] n → ,n unit; it has no channels in its closure but it performs an output on the channel it receives as argument; the function λ x.(recv a + x) has type int → n,n int in the environment a : ?[int] n ; note that neither the domain nor the codomain of the function mention any channel, so the fact that the function has a channel in its closure (and that it performs some I/O) can only be inferred from the annotations on the arrow; the function λ x.send x (recv a) has type ![int] n+1 → n,n+1 unit in the environment a : ![int] n ; it contains channel a with level n in its closure and performs input/output operations on channels with level n + 1 (or smaller) when applied.
Rule [T-APP] deals with applications e 1 e 2 . The first thing to notice is the type environments in the premises for e 1 and e 2 . Normally, these are exactly the same as the type environment used for the whole application. In our setting, however, we want to distribute polarities in such a way that each channel is used for exactly one communication. For this reason, the type environment Γ 1 + Γ 2 in the conclusion is the combination of the type environments in the premises. Regarding effects, τ i is the effect caused by the evaluation of e i . As expected, e 1 must result in a function of type t → ρ,σ s and e 2 in a value of type t. The evaluation of e 1 and e 2 may however involve blocking I/O operations on channels, and the two side conditions make sure that no deadlock can arise. To better understand them, recall that reduction is call-by-value and applications e 1 e 2 are evaluated sequentially from left to right. Now, the condition τ 1 < |Γ 2 | makes sure that any I/O operation performed during the evaluation of e 1 involves only channels whose level is smaller than that of the channels occurring free in e 2 (the free channels of e 2 must necessarily be in Γ 2 ). This is enough to guarantee that the functional part of the application can be fully evaluated without blocking on operations concerning channels that occur later in the program. In principle, this condition should be paired with the symmetric one τ 2 < |Γ 1 | making sure that any I/O operation performed during the evaluation of the argument does not involve channels that occur in the functional part. However, when the argument is being evaluated, we know that the functional part has already been reduced a value (see the definition of reduction contexts in Section 2). Therefore, the only really critical condition to check is that no channels involved in I/O operations during the evaluation of e 2 occur in the value of e 1 . This is expressed by the condition τ 2 < ρ, where ρ is the level of the functional part. Note that, when e 1 is an abstraction, by rule [T-FUN] ρ coincides with |Γ 1 |, but in general ρ may be greater than |Γ 1 |, so the condition τ 2 < ρ gives better accuracy. The effect of the whole application e 1 e 2 is, as expected, the combination of the effects of evaluating e 1 , e 2 , and the latent effect of the function being applied. In our case the "combination" is the greatest level of any channel involved in the application. Below are some examples:
-(λ x.x) a is well typed, because both λ x.x and a are pure expressions whose effect is ⊥, hence the two side conditions of [T-APP] are trivially satisfied; -(λ x.x) (recv a) is well typed in the environment a : ?[int] n : the effect of recv a is n (the level of a) which is smaller than the level of the function; send a (recv a) is ill typed in the environment a : #[int] n because the effect of evaluating recv a, namely n, is the same as the level of send a;
-(recv a) (recv b) is well typed in the environment a : ?[int → int] 0 , b : ?[int] 1 . The effect of the argument is 1, which is not smaller than the level of the environment a : ?[int → int] 0 used for typing the function. However, 1 is smaller than , which is the level of the result of the evaluation of the functional part of the application. This application would be illegal had we used the side condition
τ 2 < |Γ 1 | in [T-APP].
The typing rules for processes are standard: [T-PAR] splits contexts for typing the processes in parallel, [T-NEW] introduces a new channel in the environment, and [T-THREAD] types threads. The effect of threads is ignored: effects are used to prevent circular dependencies between channels used within the sequential parts of the program (i.e., within expressions); circular dependencies that arise between parallel threads are indirectly detected by the fact that each occurrence of a channel is typed with the same level (see the discussion of (1.1) in Section 1).
Level polymorphism
Looking back at Example 1, we notice that fibo n c may generate two recursive calls with two corresponding fresh channels a and b. Since the send operation on c is blocked by recv operations on a and b (line 5), the level of a and b must be smaller than that of c. Also, since expressions are evaluated left-to-right and recv a + recv b is syntactic sugar for the application (+) (recv a) (recv b), the level of a must be smaller than that of b. Thus, to declare fibo well typed, we must allow different occurrences of fibo to be applied to channels with different levels. Even more critically, this form of level polymorphism of fibo is necessary within the definition of fibo itself, so it is an instance of polymorphic recursion [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF].
The core typing rules in Table 2 do not support level polymorphism. Following the previous discussion on fibo, the idea is to realize level polymorphism by shifting levels in types. We define level shifting as a type operator ⇑ n , thus:
⇑ n B def = B ⇑ n p[t] m def = p[⇑ n t] n+m ⇑ n (t → ρ,σ s) def = ⇑ n t → n+ρ,n+σ ⇑ n s (3.4)
where + is extended from integers to levels so that n+ = and n+⊥ = ⊥. The effect of ⇑ n t is to shift all the finite level annotations in t by n, leaving and ⊥ unchanged. Now, we have to understand in which cases we can use a value of type ⇑ n t where one of type t is expected. More specifically, when a value of type ⇑ n t can be passed to a function expecting an argument of type t. This is possible if the function has level . We express this form of level polymorphism with an additional typing rule for applications:
[T-APP-POLY] Γ 1 e 1 : t → ,σ s & τ 1 Γ 2 e 2 : ⇑ n t & τ 2 Γ 1 + Γ 2 e 1 e 2 : ⇑ n s & (n + σ ) τ 1 τ 2 τ 1 < |Γ 2 | τ 2 <
This rule admits an arbitrary mismatch n between the level the argument expected by the function and that of the argument supplied to the function. The type of the application and the latent effect are consequently shifted by the same amount n.
Soundness of [T-APP-POLY
] can be intuitively explained as follows: a function with level has no channels in its closure. Therefore, the only channels possibly manipulated by the function are those contained in the argument to which the function is applied or channels created within the function itself. Then, the fact that the argument has level n + k rather than level k is completely irrelevant. Conversely, if the function has channels in its closure, then the absolute level of the argument might have to satisfy specific ordering constraints with respect to these channels (recall the two side conditions in [T-APP]). Since level polymorphism is a key distinguishing feature of our type system, and one that accounts for much of its expressiveness, we elaborate more on this intuition using an example. Consider the term fwd def = λ x.λ y.send y (recv x) which forwards on y the message received from x. The derivation . . .
[T-APP] y : ![int] 1 send y : int → 1,1 unit & ⊥ . . . [T-APP] x : ?[int] 0 recv x : int & 0 [T-APP] x : ?[int] 0 , y : ![int] 1 send y (recv x) : unit & 1 [T-FUN] x : ?[int] 0 λ y.send y (recv x) : ![int] 1 → 0,1 unit & ⊥ [T-FUN] fwd : ?[int] 0 → ![int] 1 → 0,1 unit & ⊥
does not depend on the absolute values 0 and 1, but only on the level of x being smaller than that of y, as required by the fact that the send operation on y is blocked by the recv operation on x. Now, consider an application fwd a, where a has type ?[int] 2 . The mismatch between the level of x (0) and that of a (2) is not critical, because all the levels in the derivation above can be uniformly shifted up by 2, yielding a derivation for
fwd : ?[int] 2 → ![int] 3 → 2,3 unit & ⊥
This shifting is possible because fwd has no free channels in its body (indeed, it is typed in the empty environment). Therefore, using [T-APP-POLY], we can derive
a : ?[int] 2 fwd a : ![int] 3 → 2,3 unit & ⊥
Note that (fwd a) is a function having level 2. This means that (fwd a) is not level polymorphic and can only be applied, through [T-APP], to channels with level 3. If we allowed (fwd a) to be applied to a channel with level 2 using [T-APP-POLY] we could derive
a : #[int] 2 fwd a a : unit & 2
which reduces to a deadlock. Example 3. To show that the term in Example 1 is well typed, consider the environment
Γ def = fibo : int → ![int] 0 → ,0 unit, n : int, c : ![int] 0
In the proof derivation for the body of fibo, this environment is eventually enriched with the assignments a : #[int] -2 and b : #[int] -1 . Now we can derive . . .
[T-APP] Γ fibo (n -2) : ![int] 0 → ,0 unit & ⊥ [T-NAME] a : ![int] -2 a : ![int] -2 & ⊥ [T-APP-POLY] Γ , a : ![int] -2 fibo (n -2) a : unit & -2
where the application fibo (n -2) a is well typed despite the fact that fibo (n -2) expects an argument of type ![int] 0 , while a has type ![int] -2 . A similar derivation can be obtained for fibo (n -1) b, and the proof derivation can now be completed.
Recursive types
Looking back at Example 2, we see that in a call pipe x y the channel recv x is used in the same position as x. Therefore, according to [T-APP-POLY], recv x must have the same type as x, up to some shifting of its level. Similarly, channel c is both sent on y and then used in the same position as y, suggesting that c must have the same type as y, again up to some shifting of its level. This means that we need recursive types in order to properly describe x and y. Instead of adding explicit syntax for recursive types, we just consider the possibly infinite trees generated by the productions for t shown earlier. In light of this broader notion of types, the inductive definition of type level (3.1) is still well founded, but type shift (3.4) must be reinterpreted coinductively, because it has to operate on possibly infinite trees. The formalities, nonetheless, are well understood.
It is folklore that, whenever infinite types are regular (that is, when they are made of finitely many distinct subtrees), they admit finite representations either using type variables and the familiar µ notation, or using systems of type equations [START_REF] Courcelle | Fundamental properties of infinite trees[END_REF]. Unfortunately, a careful analysis of Example 2 suggests that -at least in principle -we also need non-regular types. To see why, let a and c be the channels to which (recv x) and (cont y) respectively evaluate on line 2 of the example. Now:
x must have smaller level than a since a is received from x (cf. the types of recv).
y must have smaller level than c since c is sent on y (cf. the types of send).
x must have smaller level than y since x is used in the functional part of an application in which y occurs in the argument (cf. line 2 and [T-APP-POLY]).
Overall, in order to type pipe in Example 2 we should assign x and y the types t n and s n that respectively satisfy the equations
t n = ?[t n+2 ] n s n = ![t n+3 ] n+1 (3.5)
Unfortunately, these equations do not admit regular types as solutions. We recover typeability of pipe with regular types by introducing a new type constructor
t ::= • • • t n
that wraps types with a pending shift: intuitively t n and ⇑ n t denote the same type, except that in t n the shift ⇑ n on t is pending. For example, ?[int] 0 1 and ?[int] 2 -1 are both possible wrappings of ?
[int] 1 , while int → 0,⊥ ![int] 0 is the unwrapping of int → 1,⊥ ![int] 1 -1 .
To exclude meaningless infinite types such as • • • n n n we impose a contractiveness condition requiring every infinite branch of a type to contain infinite occurrences of channel or arrow constructors. To see why wraps help finding regular representations for otherwise non-regular types, observe that the equations
t n = ?[ t n 2 ] n s n = ![ t n+1 2 ] n+1 (3.6)
denote -up to pending shifts -the same types as the ones in (3.5), with the key difference that (3.6) admit regular solutions and therefore finite representations. For example, t n could be finitely represented as a familiar-looking µα.?[ α 2 ] n term. We should remark that t n and ⇑ n t are different types, even though the former is morally equivalent to the latter: wrapping is a type constructor, whereas shift is a type operator. Having introduced a new constructor, we must suitably extend the notions of type level (3.1) and type shift (3.4) we have defined earlier. We postulate
| t n | def = n + |t| ⇑ n t m def = ⇑ n t m
in accordance with the fact that • n denotes a pending shift by n (note that | • | extended to wrappings is well defined thanks to the contractiveness condition).
We also have to define introduction and elimination rules for wrappings. To this aim, we conceive two constants, wrap and unwrap, having the following type schemes:
wrap : ⇑ n t → t n unwrap : t n → ⇑ n t
We add wrap v to the value forms. Operationally, we want wrap and unwrap to annihilate each other. This is done by enriching reduction for expressions with the axiom and we are now able to find a typing derivation for it that uses regular types. In particular, we assign cont the type s n → s n+2 and pipe the type t n → s n → n, unit where t n and s n are the types defined in (3.6). Note that cont is a pure function because its effects are masked by fork and that pipe has latent effect since it loops performing recv operations on channels with increasing level. Because of the side conditions in [T-APP] and [T-APP-POLY], this means that pipe can only be used in tail position, which is precisely what happens above and in Example 2.
unwrap (wrap v) -→ v
Properties
To formulate subject reduction, we must take into account that linear channels are consumed after communication (last but one reduction in Table 1). This means that when a process P communicates on some channel a, a must be removed from the type environment used for typing the residual of P. To this aim, we define a partial operation Γthat removes from Γ , when is a channel. Formally:
Theorem 1 (subject reduction). If Γ P and P - -→ Q, then Γ - Q where Γ -τ def = Γ and (Γ , a : #[t] n ) -a def = Γ .
Note that Γa is undefined if a ∈ dom(Γ ). This means that well-typed programs never attempt at using the same channel twice, namely that channels in well-typed programs are indeed linear channels. This property has important practical consequences, since it allows the efficient implementation (and deallocation) of channels [START_REF] Kobayashi | Linearity and the pi-calculus[END_REF].
Deadlock freedom means that if the program halts, then there must be no pending I/O operations. In our language, the only halted program without pending operations is (structurally equivalent to) () . We can therefore define deadlock freedom thus: Definition 1. We say that P is deadlock free
if P τ - -→ * Q -→ implies Q ≡ () .
As usual, τ --→ * is the reflexive, transitive closure of τ --→ and Q -→ means that Q is unable to reduce further. Now, every well-typed, closed process is free from deadlocks: Theorem 2 (soundness). If / 0 P, then P is deadlock free.
Theorem 2 may look weaker than desirable, considering that every process P (even an ill-typed one) can be "fixed" and become part of a deadlock-free system if composed in parallel with the diverging thread fix λ x.x . It is not easy to state an interesting property of well-typed partial programs -programs that are well typed in uneven environments -or of partial computations -computations that have not reached a stable (i.e., irreducible) state. One might think that well-typed programs eventually use all of their channels. This property is false in general, for two reasons. First, our type system does not ensure termination of well-typed expressions, so a thread like send a (fix λ x.x) never uses channel a, because the evaluation of the message diverges. Second, there are threads that continuously generate (or receive) new channels, so that the set of channels they own is never empty; this happens in Example 2. What we can prove is that, assuming that a well-typed program does not internally diverge, then each channel it owns is eventually used for a communication or is sent to the environment in a message. To formalize this property, we need a labeled transition system describing the interaction of programs with their environment. Labels π, . . . We formalize the assumption concerning the absence of internal divergences as a property that we call interactivity. Interactivity is a property of typed processes, which we write as pairs Γ P, since the messages exchanged between a process and the environment in which it executes are not arbitrary in general.
Definition 2 (interactivity). Interactivity is the largest predicate on well-typed processes such that Γ P interactive implies Γ P and:
1. P has no infinite reduction P Clause [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF] says that an interactive process does not internally diverge: it will eventually halt either because it terminates or because it needs interaction with the environment in which it executes. Clause (2) states that internal reductions preserve interactivity. Clause (3) states that a process with a pending output on a channel a must reduce to an interactive process after the output is performed. Finally, clause (4) states that a process with a pending input on a channel a may reduce to an interactive process after the input of a particular message v is performed. The definition looks demanding, but many conditions are direct consequences of Theorem 1. The really new requirements besides well typedness are convergence of P (1) and the existence of v (4). It is now possible to prove that well-typed, interactive processes eventually use their channels.
1 -→ P 1 2 -→ P 2 3 -→ • • • , and 2. if P -→ Q, then Γ -Q is interactive, and 3. if P a!v -→ Q and Γ = Γ , a : ![t] n , then Γ Q is interactive for some Γ ⊆ Γ ,
Theorem 3 (interactivity). Let Γ P be an interactive process such that a ∈ fn(P). Then P
π 1 -→ P 1 π 2 -→ • • • π n
-→ P n for some π 1 , . . . , π n such that a ∈ fn(P n ).
Concluding remarks
We have demonstrated the portability of a type system for deadlock freedom of πcalculus processes [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] to a higher-order language using an effect system [START_REF] Amtoft | Type and Effect Systems: Behaviours for Concurrency[END_REF]. We have shown that effect masking and polymorphic recursion are key ingredients of the type system (Examples 1 and 2), and also that latent effects must be paired with one more annotation -the function level. The approach may seem to hinder program modularity, since it requires storing levels in types and levels have global scope. In this respect, level polymorphism (Section 3.2) alleviates this shortcoming of levels by granting them a relative -rather than absolute -meaning at least for non-linear functions.
Other type systems for higher-order languages with session-based communication primitives have been recently investigated [START_REF] Gay | Linear type theory for asynchronous session types[END_REF][START_REF] Wadler | Propositions as sessions[END_REF][START_REF] Bono | Polymorphic Types for Leak Detection in a Session-Oriented Functional Language[END_REF]. In addition to safety, types are used for estimating bounds in the size of message queues [START_REF] Gay | Linear type theory for asynchronous session types[END_REF] and for detecting memory leaks [START_REF] Bono | Polymorphic Types for Leak Detection in a Session-Oriented Functional Language[END_REF]. Since binary sessions can be encoded using linear channels [START_REF] Dardha | Session types revisited[END_REF], our type system can address the same family of programs considered in these works with the advantage that, in our case, well-typed programs are guaranteed to be deadlock free also in presence of session interleaving. For instance, the pipe function in Example 2 interleaves communications on two different channels. The type system described by Wadler [START_REF] Wadler | Propositions as sessions[END_REF] is interesting because it guarantees deadlock freedom without resorting to any type annotation dedicated to this purpose. In his case the syntax of (well-typed) programs prevents the modeling of cyclic network topologies, which is a necessary condition for deadlocks. However, this also means that some useful program patterns cannot be modeled. For instance, the program in Example 2 is ill typed in [START_REF] Wadler | Propositions as sessions[END_REF].
The type system discussed in this paper lacks compelling features. Structured data types (records, sums) have been omitted for lack of space; an extended technical report [START_REF] Padovani | Types for Deadlock-Free Higher-Order Concurrent Programs[END_REF] and previous works [START_REF] Padovani | Type Reconstruction for the Linear π-Calculus with Composite and Equi-Recursive Types[END_REF][START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] show that they can be added without issues. The same goes for non-linear channels [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF], possibly with the help of dedicated accept and request primitives as in [START_REF] Gay | Linear type theory for asynchronous session types[END_REF]. True polymorphism (with level and type variables) has also been studied in the technical report [START_REF] Padovani | Types for Deadlock-Free Higher-Order Concurrent Programs[END_REF]. Its impact on the overall type system is significant, especially because level and type constraints (those appearing as side conditions in the type schemes of constants, Section 3.1) must be promoted from the metatheory to the type system. The realization of level polymorphism as type shifting that we have adopted in this paper is an interesting compromise between impact and flexibility. Our type system can also be relaxed with subtyping: arrow types are contravariant in the level and covariant in the latent effect, whereas channel types are invariant in the level. Invariance of channel levels can be relaxed refining levels to pairs of numbers as done in [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF]. This can also improve the accuracy of the type system in some cases, as discussed in [START_REF] Padovani | Deadlock and Lock Freedom in the Linear π-Calculus[END_REF] and [START_REF] Carbone | Progress as compositional lock-freedom[END_REF]. It would be interesting to investigate which of these features are actually necessary for typing concrete functional programs using threads and communication/synchronization primitives.
Type reconstruction algorithms for similar type systems have been defined [START_REF] Padovani | Type Reconstruction for the Linear π-Calculus with Composite and Equi-Recursive Types[END_REF][START_REF] Padovani | Type Reconstruction Algorithms for Deadlock-Free and Lock-Free Linear π-Calculi[END_REF]. We are confident to say that they scale to type systems with arrow types and effects.
e
::= k u λ x.e ee P, Q ::= e (νa)P P | Q Expressions comprise constants k, names u, abstractions λ x.e, and applications e 1 e 2 .
Example 1 (
1 parallel Fibonacci function). The fibo function below computes the n-th number in the Fibonacci sequence and sends the result on a channel c: fix λfibo.λn.λc.if n ≤ 1 then send c n else let a = new() and b = new() in (fork λ_.fibo (n -1) a); (fork λ_.fibo (n -2) b); send c (recv a + recv b)
3 4 (
4 let a = new() and b = new() in fork λ_.pipe a b); (fork λ_.pipe b (cont a))
extend the function | • | to type environments so that |Γ | def = u∈dom(Γ ) |Γ (u)| with the convention that | / 0| = ; we write un(Γ ) if |Γ | = .
Example 4 .
4 We suitably dress the code in Example 2 using wrap and unwrap: 1 let cont = λx.let c = new() in (fork λ_.send x (wrap c)); c in 2 let pipe = fix λpipe.λx.λy.pipe (unwrap (recv x)) (cont y)
of transitions are defined by π ::= a?e a!v and the transition relation π -→ extends reduction with the rules a ∈ bn(C ) C [send a v] a!v -→ C [()] a ∈ bn(C ) fn(e) ∩ bn(C ) = / 0 C [recv a] a?e -→ C [e] where C ranges over process contexts C ::= E | (C | P) | (P | C ) | (νa)C . Messages of input transitions have the form a?e where e is an arbitrary expression instead of a value. This is just to allow a technically convenient formulation of Definition 2 below.
and Γ = Γ , a : ?[t] n , then Γ Q{v/x} is interactive for some v and Γ ⊇ Γ such that n < |Γ \ Γ |.
[T-CONST] Γ k : t & ⊥ un(Γ ) t ∈ types(k) [T-FUN] Γ , x : t e : s & ρ Γ λ x.e : t → |Γ |,ρ s & ⊥ [T-APP] Γ 1 e 1 : t → ρ,σ s & τ 1 Γ 2 e 2 : t & τ 2 Γ 1 + Γ 2 e 1 e 2 : s & σ τ 1 τ 2 τ 1 < |Γ 2 | τ 2 < ρTyping of processes[T-THREAD] Γ e : unit & ρ Γ e [T-PAR] Γ 1 P Γ 2 Q
Acknowledgments. The authors are grateful to the reviewers for their detailed comments and useful suggestions. The first author has been supported by Ateneo/CSP project SALT, ICT COST Action IC1201 BETTY, and MIUR project CINA. |
01767335 | en | [
"info",
"info.info-ni"
] | 2024/03/05 22:32:15 | 2015 | https://inria.hal.science/hal-01767335/file/978-3-319-19195-9_3_Chapter.pdf | Ritwika Ghosh
email: <[email protected]
Sayan Mitra
email: mitras>@illinois.edu
A Strategy for Automatic Verification of Stabilization of Distributed Algorithms
Automatic verification of convergence and stabilization properties of distributed algorithms has received less attention than verification of invariance properties. We present a semi-automatic strategy for verification of stabilization properties of arbitrarily large networks under structural and fairness constraints. We introduce a sufficient condition that guarantees that every fair execution of any (arbitrarily large) instance of the system stabilizes to the target set of states. In addition to specifying the protocol executed by each agent in the network and the stabilizing set, the user also has to provide a measure function or a ranking function. With this, we show that for a restricted but useful class of distributed algorithms, the sufficient condition can be automatically checked for arbitrarily large networks, by exploiting the small model properties of these conditions. We illustrate the method by automatically verifying several well-known distributed algorithms including linkreversal, shortest path computation, distributed coloring, leader election and spanning-tree construction.
Introduction
A system is said to stabilize to a set of states X * if all its executions reach some state in X * [START_REF] Dolev | Self-stabilization[END_REF]. This property can capture common progress requirements like absence of deadlocks and live-locks, counting to infinity, and achievement of selfstabilization in distributed systems. Stabilization is a liveness property, and like other liveness properties, it is generally impossible to verify automatically. In this paper, we present sufficient conditions which can be used to automatically prove stabilization of distributed systems with arbitrarily many participating processes.
A sufficient condition we propose is similar in spirit to Tsitsiklis' conditions given in [START_REF] Johnn | On the stability of asynchronous iterative processes[END_REF] for convergence of iterative asynchronous processes. We require the user to provide a measure function, parameterized by the number of processes, such that its sub-level sets are invariant with respect to the transitions and there is a progress making action for each state. 1 Our point of departure is a non-interference condition that turned out to be essential for handling models of distributed systems. Furthermore, in order to handle non-deterministic communication patterns, our condition allows us to encode fairness conditions and different underlying communication graphs.
Next, we show that these conditions can be transformed to a forall-exists form with a small model property. That is, there exists a cut-off number N 0 such that if the condition(s) is(are) valid in all models of sizes up to N 0 , then it is valid for all models. We use the small model results from [START_REF] Johnson | A small model theorem for rectangular hybrid automata networks[END_REF] to determine the cutoff parameter and apply this approach to verify several well-known distributed algorithms.
We have a Python implementation based on the sufficient conditions for stabilization we develop in Section 3. We present precondition-effect style transition systems of algorithms in Section 4 and they serve as pseudo-code for our implementation. The SMT-solver is provided with the conditions for invariance, progress and non-interference as assertions. We encode the distributed system models in Python and use the Z3 theorem-prover module [START_REF] Moura | Z3: An efficient smt solver[END_REF] provided by Python to check the conditions for stabilization for different model sizes.
We have used this method to analyze a number of well-known distributed algorithms, including a simple distributed coloring protocol, a self-stabilizing algorithm for constructing a spanning tree of the underlying network graph, a link-reversal routing algorithm, and a binary gossip protocol. Our experiments suggest that this method is effective for constructing a formal proof of stabilization of a variety of algorithms, provided the measure function is chosen carefully. Among other things, the measure function should be locally computable: changes from the measure of the previous state to that of the current state only depend on the vertices involved in the transition. It is difficult to determine whether such a measure function exists for a given problem. For instance, consider Dijkstra's self-stabilizing token ring protocol [START_REF] Edsger | Self-stabilization in spite of distributed control[END_REF]. The proof of correctness relies on the fact that the leading node cannot push for a value greater than its previous unique state until every other node has the same value. We were unable to capture this in a locally computable measure function because if translated directly, it involves looking at every other node in the system.
Related Work
The motivation for our approach is from the paper by John Tsitsiklis on convergence of asynchronous iterative processes [START_REF] Johnn | On the stability of asynchronous iterative processes[END_REF], which contains conditions for convergence similar to the sufficient conditions we state for stabilization. Our use of the measure function to capture stabilization is similar to the use of Lyapunov functions to prove stability as explored in [START_REF] Oliver | Exploitation of lyapunov theory for verifying self-stabilizing algorithms[END_REF], [START_REF] Oehlerking | Towards automatic convergence verification of self-stabilizing algorithms[END_REF] and [START_REF] Oliver | A new verification technique for self-stabilizing distributed algorithms based on variable structure systems and lyapunov theory[END_REF]. In [START_REF] Dhama | A tranformational approach for designing scheduler-oblivious self-stabilizing algorithms[END_REF], Dhama and Theel present a progress monitor based method of designing self-stabilizing algorithms with a weakly fair scheduler, given a self-stabilizing algorithm with an arbitrary, possibly very restrictive scheduler. They also use the existence of a ranking function to prove convergence under the original scheduler. Several authors [START_REF] Ghosh | Distributed systems: an algorithmic approach[END_REF] employ functions to prove termination of distributed algorithms, but while they may provide an idea of what the measure function can be, in general they do not translate exactly to the measure functions that our verification strategy can employ. The notion of fairness we have is also essential in dictating what the measure function should be, while not prohibiting too many behaviors. In [START_REF] Oehlerking | Towards automatic convergence verification of self-stabilizing algorithms[END_REF], the assumption of serial execution semantics is compatible with our notions of fair executions.
The idea central to our proof method is the small model property of the sufficient conditions for stabilization. The small model nature of certain invariance properties of distributed algorithms (eg. distributed landing protocols for small aircrafts as in [START_REF] Umeno | Safety verification of an aircraft landing protocol: A refinement approach[END_REF]) has been used to verify them in [START_REF] Johnson | Invariant synthesis for verification of parameterized cyber-physical systems with applications to aerospace systems[END_REF]. In [START_REF] Emerson | Reducing model checking of the many to the few[END_REF], Emerson and Kahlon utilize a small model argument to perform parameterized model checking of ring based message passing systems.
Preliminaries
We will represent distributed algorithms as transition systems. Stabilization is a liveness property and is closely related to convergence as defined in the works of Tsitsiklis [START_REF] Johnn | On the stability of asynchronous iterative processes[END_REF]; it is identical to the concept of region stability as presented in [START_REF] Sridhar | Abstraction refinement for stability[END_REF]. We will use measure functions in our definition of stabilization. A measure function on a domain provides a mapping from that domain to a well-ordered set. A well-ordered set W is one on which there is a total ordering <, such that there is a minimum element with respect to < on every non-empty subset of W . Given a measure function C : A → B, there is a partition of A into sub level-sets. All elements of A which map to the same element b ∈ B under C are in the same sub level-set L b .
We are interested in verifying stabilization of distributed algorithms independent of the number of participating processes or nodes. Hence, the transition systems are parameterized by N -the number of nodes. Given a non-negative integer N , we use [N ] to denote a set of indices {1, 2, . . . , N }. Definition 1. For a natural number N and a set Q, a transition system A(N ) with N nodes is defined as a tuple (X,A,D) where a) X is the state space of the system. If the state space of of each node is Q, X = Q N . b) A is a set of actions. c) D : X × A → X is a transition function, that maps a system-state action pair to a system-state.
For any x ∈ X , the i th component of x is the state of the i th node and we refer to it as x[i]. Given a transition system A(N ) = (X , A, D) we refer to the state obtained by the application of the action a on a state x ∈ X i.e, D(x, a), by a(x).
An execution of A(N ) records a particular run of the distributed system with N nodes. Formally, an execution α of A(N ) is a (possibly infinite) alternating sequence of states and actions x 0 , a 1 , x 1 , . . ., where each x i ∈ X and each a i ∈ A such that D(x i , a i+1 ) = x i+1 . Given that the choice of actions is nondeterministic in the execution, it is reasonable to expect that not all executions may stabilize. For instance, an execution in which not all nodes participate, may not stabilize. Definition 2. A fairness condition F for A(N ) is a finite collection of subsets of actions {A i } i∈I , where I is a finite index set. An action-sequence σ = a 1 , a 2 , . . . is F-Fair if every A i in F is represented in σ infinitely often, that is,
∀ A ∈ F, ∀i ∈ N, ∃k > i, a k ∈ A .
For instance, if the fairness condition is the collection of all singleton subsets of A, then each action occurs infinitely often in an execution. This notion of fairness is similar to action based fairness constraints in temporal logic model checking [START_REF] Huth | Logic in Computer Science: Modelling and reasoning about systems[END_REF]. The network graph itself enforces whether an action is enabled: every pair of adjacent nodes determines a continuously enabled action. An execution is strongly fair, if given a set of actions A such that all actions in A are infinitely often enabled; some action in A occurs infinitely often in the it. An F-fair execution is an infinite execution such that the corresponding sequence of actions is F-fair. Definition 3. Given a system A(N ), a fairness condition F, and a set of states
X * ⊆ X , A(N ) is said to F-stabilize to X * iff for any F-fair execution α = x 0 , a 1 , x 1 , a 2 , . . ., there exists k ∈ N such that x k ∈ X * . X * is called a stabilizing set for A and F.
It is different from the definition of self-stabilization found in the literature [START_REF] Dolev | Self-stabilization[END_REF], in that the stabilizing set X * is not required to be an invariant of A(N ). We view proving the invariance of X * as a separate problem that can be approached using one of the available techniques for proving invariance of parametrized systems in [START_REF] Johnson | A small model theorem for rectangular hybrid automata networks[END_REF], [START_REF] Johnson | Invariant synthesis for verification of parameterized cyber-physical systems with applications to aerospace systems[END_REF].
Example 1. (Binary Gossip) We look at binary gossip in a ring network composed of N nodes. The nodes are numbered clockwise from 1, and nodes 1 and N are also neighbors. Each node has one of two states : {0, 1}. A pair of neighboring nodes communicates to exchange their values, and the new state is set to the binary Or (∨) of the original values. Clearly, if all the interactions happen infinitely often, and the initial state has at least one node state 1, this transition system stabilizes to the state x = 1 N . The set of actions is specified by the set of edges of the ring. We first represent this protocol and its transitions using a standard precondition-effect style notation similar to one used in [START_REF] Mitra | A verification framework for hybrid systems[END_REF].
Automaton Gossip[N : N] type indices : [N ] type values : {0, 1} variables x[ indices → values ] transitions step (i: indices , j : indices ) pre True eff x[i] = x[j] = x[i] ∨ x[j] measure func C : x → Sum(x)
The above representation translates to the transition system A(N ) = (X , A, D) where 1. The state space of each node is
Q = {0, 1}, i.e X = {0, 1} N . 2. The set of actions is A = {step(i, i + 1) | 1 ≤ i < N } ∪ {(N, 1)}. 3. The transition function is D(x, step(i, j)) = x where x [i] = x [j] = x[i] ∨ x[j].
We define the stabilizing set to be X * = {1 N }, and the fairness condition is
F = {{(i, i+1} | 1 < i < N }∪{1, N },
which ensures that all possible interactions take place infinitely often. In Section 3 we will discuss how this type of stabilization can be proven automatically with a user-defined measure function.
3 Verifying Stabilization
A Sufficient Condition for Stabilization
We state a sufficient condition for stabilization in terms of the existence of a measure function. The measure functions are similar to Lyapunov stability conditions in control theory [START_REF] Hassan | Nonlinear systems[END_REF] and well-founded relations used in proving termination of programs and rewriting systems [START_REF] Dershowitz | Termination of rewriting[END_REF].
Theorem 1. Suppose A(N ) = X , A, D is a transition system parameterized by N , with a fairness condition F, and let X * be a subset of X . Suppose further that there exists a measure function C : X → W , with minimum element ⊥ such that the following conditions hold for all states x ∈ X:
-(invariance) ∀ a ∈ A, C(a(x)) ≤ C(x), -(progress) ∃ A x ∈ F, ∀a ∈ A x , C(x) =⊥⇒ C(a(x)) < C(x), -(noninterference) ∀a, b ∈ A, C(a(x)) < C(x) ⇒ C(a(b(x))) < C(x), and
-(minimality) C(x) = ⊥ ⇒ x ∈ X * . Then, A[N ] F-stabilizes to X * .
Proof. Consider an F-fair execution α = x 0 a 1 x 1 . . . of A(N ) and let x i be an arbitrary state in that execution. If C(x i ) = ⊥, then by minimality, we have x i ∈ X * . Otherwise, by the progress condition we know that there exists a set of actions
A xi ∈ F and k > i, such that a k ∈ A xi , and C(a k (x i )) < C(x i ).
We perform induction on the length of the sub-sequence
x i a i+1 x i+1 . . . a k x k and prove that C(x k ) < C(x i ). For any sequence β of intervening actions of length n, C(a k (x i )) < C(x i ) ⇒ C(a k (β(x i ))) < C(x i ).
The base case of the induction is n = 0, which is trivially true. By induction hypothesis we have: for any j < n, with length of β equal to j,
C(a k (β(x i )) < C(x i ).
We have to show that for any action b ∈ A,
C(a k (β(b(x i ))) < C(x i ).
There are two cases to consider. If C(b(x i )) < C(x i ) then the result follows from the invariance property. Otherwise, let x = b(x i ). From the invariance of b we have C(x ) = C(x i ). From the noninterference condition we have
C(a(b(x i )) < C(x i ),
which implies that C(a(x )) < C(x ). By applying the induction hypothesis to x we have the required inequality C(a k (β(b(x i ))) < C(x i ). So far we have proved that either a state x i in an execution is already in the stabilizing set, or there is a state
x k , k > i such that C(x k ) < C(x i ).
Since < is a well-ordering on C(X ), there cannot be an infinite descending chain. Thus
∃j(j > i ∧ C(j) = ⊥).
By minimality , x j ∈ X * . By invariance again, we have F-stabilization to X *
We make some remarks on the conditions of Theorem 1. It requires the measure function C and the transition system A(N ) to satisfy four conditions. The invariance condition requires the sub-level sets of C to be invariant with respect to all the transitions of A(N ). The progress condition requires that for every state x for which the measure function is not already ⊥, there exists a fair set of actions A x that takes x to a lower value of C.
The minimality condition asserts that C(x) drops to ⊥ only if the state is in the stabilizing set X * . This is a part of the specification of the stabilizing set.
The noninterference condition requires that if a results in a decrease in the value of the measure function at state x, then application of a to another state x that is reachable from x also decreases the measure value below that of x. Note that it doesn't necessarily mean that a decreases the measure value at x , only that either x has measure value less than x at the time of application of a or it drops after the application. In contrast, the progress condition of Theorem 1 requires that for every sub-level set of C there is a fair action that takes all states in the sub-level set to a smaller sub-level set.
To see the motivation for the noninterference condition, consider a sub-level set with two states x 1 and x 2 such that b(x 1 ) = x 2 , a(x 2 ) = x 1 and there is only one action a such that C(a(x 1 )) < C(x 1 ). But as long as a does not occur at x 1 , an infinite (fair) execution x 1 bx 2 ax 1 bx 2 . . . may never enter a smaller sub-level set.
In our examples, the actions change the state of a node or at most a small set of nodes while the measure functions succinctly captures global progress conditions such as the number of nodes that have different values. Thus, it is often impossible to find actions that reduce the measure function for all possible states in a level-set. In Section 4, we will show how a candidate measure function can be checked for arbitrarily large instances of a distributed algorithm, and hence, lead to a method for automatic verification of stabilization.
Automating Stabilization Proofs
For finite instances of a distributed algorithm, we can use formal verification tools to check the sufficient conditions in Theorem 1 to prove stabilization. For transition systems with invariance, progress and noninterference conditions that can be encoded appropriately in an SMT solver, these checks can be performed automatically. Our goal, however, is to prove stabilization of algorithms with an arbitrary or unknown number of participating nodes. We would like to define a parameterized family of measure functions and show that ∀N ∈ N, A(N ) satisfies the conditions of Theorem 1. This is a parameterized verification problem and most of the prior work on this problem has focused on verifying invariant properties (see Section 1 for related works). Our approach will be based on exploiting the small model nature of the logical formulas representing these conditions.
Suppose we want to check the validity of a logical formula of the form ∀ N ∈ N, φ(N ). Of course, this formula is valid iff the negation ∃ N ∈ N, ¬φ(N ) has no satisfying solution. In our context, checking if ¬φ(N ) has a satisfying solution over all integers is the (large) search problem of finding a counter-example. That is, a particular instance of the distributed algorithm and specific values of the measure function for which the conditions in Theorem 1 do not hold. The formula ¬φ(N ) is said to have a small model property if there exists a cut-off value N 0 such that if there is no counter-example found in any of the instances A(1), A(2), . . . , A(N 0 ), then there are no counter-examples at all. Thus, if the conditions of Theorem 1 can be encoded in such a way that they have these small model properties then by checking them over finite instances, we can infer their validity for arbitrarily large systems.
In [START_REF] Johnson | A small model theorem for rectangular hybrid automata networks[END_REF], a class of ∀∃ formulas with small model properties were used to check invariants of timed distributed systems on arbitrary networks. In this paper, we will use the same class of formulas to encode the sufficient conditions for checking stabilization. We use the following small model theorem as presented in [START_REF] Johnson | A small model theorem for rectangular hybrid automata networks[END_REF]:
Theorem 2. Let Γ (N ) be an assertion of the form ∀i 1 , . . . , i k ∈ [N ]∃j 1 , . . . , m ∈ [N ], φ(i 1 , . . . , i k , j 1 , . . . , j m )
where φ is a quantifier-free formula involving the index variables, global and local variables in the system. Then, ∀N ∈ N : Γ (N ) is valid iff for all n ≤ N 0 = (e + 1)(k + 2), Γ (n) is satisfied by all models of size n, where e is the number of index array variables in φ and k is the largest subscript of the universally quantified index variables in Γ (N ).
Computing the Small Model Parameter
Computing the small model parameter N 0 for verifying a stability property of a transition system first requires expressing all the conditions of Theorem 1 using formulas which have the structure specified by Theorem 2. There are a few important considerations while doing so.
Translating the sufficient conditions In their original form, none of the conditions of Theorem 1 have the structure of ∀∃-formulas as required by Theorem 2. For instance, a leading ∀x ∈ X quantification is not allowed by Theorem 2, so we transform the conditions into formulas with implicit quantification. Take for instance the invariance condition: ∀x ∈ X , ∀a ∈ A, (C(a(x)) ≤ C(x)). Checking the validity of the invariance condition is equivalent to checking the satisfiability of ∀a ∈ A, (a(x) = x ⇒ C(x ) ≤ C(x)), where x and x are free variables, which are checked over all valuations. Here we need to check that x and x are actually states and they satisfy the transition function. For instance in the binary gossip example, we get
Invariance : ∀x ∈ X , ∀a ∈ A, C(a(x)) ≤ C(x) is verified as ∀a ∈ A, x = a(x) ⇒ C(x ) ≤ C(x). ≡ ∀i, j ∈ [N ], x = step(i, j)(x) ⇒ Sum(x ) ≤ Sum(x). Progress : ∀x ∈ X , ∃a ∈ A, C(x) = ⊥ ⇒ C(a(x)) < C(x) is verified as C(x) = 0 ⇒ ∃i, j ∈ [N ], x = step(i, j)(x) ∧ Sum(x) < Sum(x). Noninterference : ∀x ∈ X , ∀a, b ∈ A, (C(a(x)) < C(x) ≡ C(a(b(x))) < C(x)) is verified as ∀i, j, k, l ∈ [N ], x = step(i, j)(x) ∧ x = step(k, l)(x) ∧x = step(i, j)(x ) ⇒ (C(x ) < C(x) ⇒ C(x ) < C(x)).
Interaction graphs In distributed algorithms, the underlying network topology dictates which pairs of nodes can interact, and therefore the set of actions. We need to be able to specify the available set of actions in a way that is in the format demanded by the small-model theorem. In this paper we focus on specific classes of graphs like complete graphs, star graphs, rings, k-regular graphs, and k-partite complete graphs, as we know how to capture these constraints using predicates in the requisite form. For instance, we use edge predicates E(i, j) : i and j are node indices, and the predicate is true if there is an undirected edge between them in the interaction graph. For a complete graph, E(i, j) = true. In the Binary Gossip example, the interaction graph is a ring, and
E(i, j) = (i < N ∧ j = i + 1) ∨ (i > 1 ∧ j = i -1) ∨ i = 1 ∧ j = N ).
If the graph is a d-regular graph, we express use d arrays, reg 1 , . . . , reg d , where ∃i, reg i [k] = l if there is an edge between k and l, and i = j
≡ reg i [k] = reg j [k]
. This only expresses that the degree of each vertex is d, but there is no information about the connectivity of the graph. For that, we can have a separate index-valued array which satisfies certain constraints if the graph is connected. These constraints need to be expressed in a format satisfying the small model property as well. Other graph predicates can be introduced based on the model requirements, for instance, P arent(i, j), Child(i, j), Direction(i, j). In our case studies we verify stabilization under the assumption that all pairs of nodes in E interact infinitely often. For the progress condition, the formula simplifies to ∃a ∈ A, C(x) = ⊥ ⇒ C(a(x)) < C(x)). More general fairness constraints can be encoded in the same way as we encode graph constraints.
Case studies
In this section, we will present the details of applying our strategy to various distributed algorithms. We begin by defining some predicates that are used in our case studies. Recall that we want wanted to check the conditions of Theorem 1 using the transformation outlined in Section 3.3 involving x, x etc., representing the states of a distributed system that are related by the transitions. These conditions are encoded using the following predicates, which we illustrate using the binary gossip example given in Section 2:
-isState(x) returns true iff the array variable x represents a state of the system. In the binary gossip example, isState(x
) = ∀i ∈ [N ], x[i] = 0 ∨ x[i] = 1.
-isAction(a) returns true iff a is a valid action for the system. Again, for the binary gossip example isAction(step(i, j)) = True for all i, j ∈ [N ] in the case of a complete communication graph. -isTransition(x, step(i, j), x ) returns true iff the state x goes to x when the transition function for action step(i, j) is applied to it. In case of the binary gossip example, isTransition(x, step(i, j), x ) is
(x [j] = x [i] = x[i] ∨ x[j]) ∧ (∀p, p / ∈ {i, j} ⇒ x[p] = x [p]).
-Combining the above predicates, we define P (x, x , i, j) as
isState(x) ∧ isState(x ) ∧ isTransition(x, step(i, j), x ) ∧ isAction(step(i, j)).
Using these constructions, we rewrite the conditions of Theorem 1 as follows:
Invariance : ∀i, j, P (x, x , i, j) ⇒ C(x ) ≤ C(x).
(1)
Progress : C(x) = ⊥ ⇒ ∃i, j, P (x, x , i, j) ∧ C(x ) < C(x). (2)
Noninterference : ∀p, r, s, t, P (x, x , p, q) ∧ P (x, x , s, t) ∧ P (x , x , p, q)
⇒ (C(x ) < C(x) ⇒ C(x ) < C(x)). (3)
Minimality : C(x) = ⊥ ⇒ x ∈ X * . (4)
Graph Coloring
This algorithm colors a given graph in d + 1 colors, where d is the maximum degree of a vertex in the graph [START_REF] Ghosh | Distributed systems: an algorithmic approach[END_REF]. Two nodes are said to have a conflict if they have the same color. A transition is made by choosing a single vertex, and if it has a conflict with any of its neighbors, then it sets its own state to be the least available value which is not the state of any of its neighbours. We want to verify that the system stabilizes to a state with no conflicts. The measure function is chosen as the set of pairs with conflicts.
Automaton Coloring[N : N] type indices : [N ] type values : {1, . . . , N } variables x[ indices → values ] transitions internal step (i: indices ) pre ∃j ∈ [N ](E(j, i) ∧ x[j] = x[i]) eff x[i] = min(values \{c | j ∈ [N ] ∧ E(i, j) ∧ x[j] = c}) measure func C : x → {(i, j) | E(i, j) ∧ x[i] = x[j]}
Here, the ordering on the image of the measure function is set inclusion.
Invariance : ∀i ∈ [N ], P (x, x , i) ⇒ C(x ) ⊆ C(x). (From (1)) ≡ ∀i, j, k ∈ [N ], P (x, x , i) ⇒ ((j, k) ∈ C(x ) ⇒ (j, k) ∈ C(x)). ≡ ∀i, j, k ∈ [N ], P (x, x , i) ⇒ (E(j, k) ∧ x[j] = x[k] ⇒ x [j] = x [k]).
(E is the set of edges in the underlying graph)
Progress : ∃m ∈ [N ], C(x) = ∅ ⇒ C(step(m)(x)) < C(x). ≡ ∀i, j ∈ [N ], ∃m, n ∈ [N ], (E(i, j) ∧ x[i] = x[j]) ∨ (P (x, x , m) ∧ E(m, n) ∧ x[m] = x[n] ∧ x [m] = x [n]).
Noninterference : ∀q, r, s, t ∈ [N ], (P (x, x , q) ∧ P (x, x , s) ∧ P (x , x , q))
⇒ (E(q, r) ∧ x[q] = x[r] ∧ x [q] = x [r] ⇒ E(s, t) ∧(x [s] = x [t] ⇒ x [s] = x [t]) ∧ x [r] = x [q])).
(from (3 and expansion of ordering)
Minimality : C(x) = ∅ ⇒ x ∈ X * .
From the above conditions, using Theorem 2 N 0 is calculated to be 24.
Leader Election
This algorithm is a modified version of the Chang-Roberts leader election algorithm [START_REF] Ghosh | Distributed systems: an algorithmic approach[END_REF]. We apply Theorem 1 directly by defining a straightforward measure function. The state of each node in the network consists of a) its own uid, b) the index and uid of its proposed candidate, and c) the status of the election according to the node (0 : the node itself is elected, 1 : the node is not the leader, 2 : the node is still waiting for the election to finish). A node i communicates its state to its clockwise neighbor j (i + 1 if i < N , 0 otherwise) and if the UID of i's proposed candidate is greater than j, then j is out of the running. The proposed candidate for each node is itself to begin with. When a node gets back its own index and uid, it sets its election status to 0. This status, and the correct leader identity propagates through the network, and we want to verify that the system stabilizes to a state where a leader is elected. The measure function is the number of nodes with state 0.
Automaton LeaderElection[N : N]
  type indices : [N]
  variables uid[indices → [N]] candidate[indices → [N]] leader[indices → {0, 1, 2}]
  transitions
    internal step(i : indices, j : indices)
      pre leader[i] ≠ 1 ∧ uid[candidate[i]] > uid[candidate[j]]
        eff leader[j] = 1 ∧ candidate[j] = candidate[i]
      pre leader[j] = 2 ∧ candidate[i] = j
        eff leader[j] = 0 ∧ candidate[j] = j
      pre leader[i] = 0
        eff leader[j] = 1 ∧ candidate[j] = i
  measure func C : x → Sum(x.leader[i])
The function Sum() represents the sum of all elements in the array, and it can be updated when a transition happens by just looking at the interacting nodes. We encode the sufficient conditions for stabilization of this algorithm using the strategy outlined in Section 3.2.
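As an illustration, one ring interaction can be modelled as below (a sketch: the first guard is read as leader[i] ≠ 1, following the automaton above, and the node arrays are plain Python lists).

def step(uid, candidate, leader, i):
    j = (i + 1) % len(uid)                         # clockwise neighbour
    if leader[i] != 1 and uid[candidate[i]] > uid[candidate[j]]:
        leader[j], candidate[j] = 1, candidate[i]  # j is out of the running
    elif leader[j] == 2 and candidate[i] == j:
        leader[j], candidate[j] = 0, j             # j got its own uid back: elected
    elif leader[i] == 0:
        leader[j], candidate[j] = 1, i             # the leader's identity spreads
    return leader, candidate

# Only leader[j] may change in a transition, so Sum(leader) can be maintained
# incrementally from the interacting nodes, as noted above.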
Invariance: ∀i, j ∈ [N], P(x, x′, i, j) ⇒ (Sum(x′.leader) ≤ Sum(x.leader)).
≡ ∀i, j ∈ [N], P(x, x′, i, j) ⇒ (Sum(x.leader) - x.leader[i] - x.leader[j] + x′.leader[i] + x′.leader[j] ≤ Sum(x.leader)). (difference only due to interacting nodes)
≡ ∀i, j ∈ [N], P(x, x′, i, j) ⇒ (x′.leader[i] + x′.leader[j] ≤ x.leader[i] + x.leader[j]).
Progress: ∃m, n ∈ [N], Sum(x.leader) ≠ N - 1 ⇒ Sum(step(m, n)(x).leader) < Sum(x.leader).
≡ ∀p ∈ [N], x.leader[p] = 2 ⇒ ∃m, n ∈ [N], (P(x, x′, m, n) ∧ E(m, n) ∧ x′.leader[m] + x′.leader[n] < x.leader[m] + x.leader[n]). (one element still waiting for the election to end)
Noninterference: ∀q, r, s, t ∈ [N], P(x, x′, q, r) ∧ P(x, x″, s, t) ∧ P(x″, x‴, q, r)
⇒ (x′[q] + x′[r] < x[q] + x[r] ⇒ (x‴[q] + x‴[r] + x‴[s] + x‴[t] < x″[q] + x″[r] + x″[s] + x″[t])).
(expanding out Sum)
Minimality: C(x) = N - 1 ⇒ x ∈ X*.
From the above conditions, using Theorem 2, N₀ is calculated to be 35.
Shortest Path
This algorithm computes the shortest path to every node in a graph from a root node. It is a simplified version of the Chandy-Misra shortest path algorithm [START_REF] Ghosh | Distributed systems: an algorithmic approach[END_REF].
We are allowed to distinguish the nodes with indices 1 or N in the formula structure specified by Theorem 2. The state of a node represents its distance from the root node. The root node (index 1) has state 0. Each pair of neighboring nodes communicates their states to each other, and if one of them has a lesser value v while the other's value exceeds v + 1, then the one with the larger value updates its state to v + 1. This stabilizes to a state where all nodes have the shortest distance from the root stored in their state. We don't have an explicit value of ⊥ for the measure function here, but it can be seen that we don't need it in this case. Let the interaction graph be a d-regular graph. The measure function is the sum of distances.
Automaton ShortestPath[N : N]
  type indices : [N]
  variables x[indices → N]
  transitions
    internal step(i : indices, j : indices)
      pre x[j] > x[i] + 1
        eff x[j] = x[i] + 1
      pre x[i] = 0
        eff x[j] = 1
  measure func C : x → Sum(x[i])
The ordering on the image of the measure function is the usual one on natural numbers.
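Executed on a concrete instance, the relaxation looks as follows (a sketch with an arbitrary 3-cycle and node 0 playing the root here; the assertion witnesses that the measure Sum strictly decreases at each step).

E = {(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)}   # a 3-cycle (example graph)
x = {0: 0, 1: 9, 2: 9}                                 # root 0, large initial guesses

changed = True
while changed:
    changed = False
    for i, j in E:
        if x[j] > x[i] + 1:                 # the pre of the first transition
            before = sum(x.values())
            x[j] = x[i] + 1                 # the eff
            assert sum(x.values()) < before # Progress: the measure decreases
            changed = True
print(x)                                    # shortest distances: {0: 0, 1: 1, 2: 1}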
Invariance: ∀i, j ∈ [N], P(x, x′, i, j) ⇒ Sum(x′) ≤ Sum(x).
≡ ∀i, j ∈ [N], P(x, x′, i, j) ⇒ Sum(x) - x[i] - x[j] + x′[i] + x′[j] ≤ Sum(x).
≡ ∀i, j ∈ [N], P(x, x′, i, j) ⇒ x′[i] + x′[j] ≤ x[i] + x[j].
Progress: ∃m, n ∈ [N], C(x) ≠ ⊥ ⇒ P(x, x′, m, n) ∧ Sum(x′) < Sum(x).
≡ ∀k, l ∈ [N], (E(k, l) ⇒ x[k] ≤ x[l] + 1) ∨ ∃m, n ∈ [N] (P(x, x′, m, n) ∧ E(m, n) ∧ x[m] + x[n] > x′[m] + x′[n]).
(C(x) = ⊥ if there is no pair of neighboring vertices more than 1 distance apart from each other)
Noninterference: ∀q, r, s, t ∈ [N], P(x, x′, q, r) ∧ P(x, x″, s, t) ∧ P(x″, x‴, q, r)
⇒ (x′[q] + x′[r] < x[q] + x[r] ⇒ (x‴[q] + x‴[r] + x‴[s] + x‴[t] < x″[q] + x″[r] + x″[s] + x″[t])).
Minimality: C(x) = ⊥ ⇒ x ∈ X*. ≡ ∀i, j, (E(i, j) ⇒ x[i] - x[j] ≤ 1) ⇒ x ∈ X*. (definition)
N₀ is 7(d + 1) where the graph is d-regular.
Link Reversal
We describe the full link reversal algorithm as presented by Gafni and Bertsekas in [START_REF] Eli | Distributed algorithms for generating loopfree routes in networks with frequently changing topology[END_REF], where, given a directed graph with a distinguished sink vertex, it outputs a graph in which there is a path from every vertex to the sink. There is a distinguished sink node (index N). Any other node which detects that it has only incoming edges reverses the direction of all its edges with its neighbours. We use the vector of reversal distances (the least number of edges that must be reversed for a node to have a path to the sink) for termination. The states store the reversal distances, and the measure function is the identity.
Automaton LinkReversal[N : N]
  type indices : [N]
  variables x[indices → N]
  transitions
    internal step(i : indices)
      pre i ≠ N ∧ ∀j ∈ [N] (E(i, j) ⇒ direction(i, j) = -1)
      eff ∀j ∈ [N] (E(i, j) ⇒ Reverse(i, j)) ∧ x(i) = min(x(j))
  measure func C : x → x
The ordering on the image of the measure function is component-wise comparison:
V₁ < V₂ ⇔ ∀i (V₁[i] < V₂[i])
We mentioned earlier that the image of C has a well-ordering. That is a condition formulated with the idea of continuous spaces in mind. The proposed ordering for this problem works because the image of the measure function is discrete and has a lower bound (specifically, 0^N). We elaborate a bit on P here, because it needs to include the condition that the reversal distances are calculated accurately. The node N has reversal distance 0. Any other node has reversal distance rd(i) = min(rd(j_1), . . . , rd(j_m), rd(k_1) + 1, . . . , rd(k_n) + 1), where j_p (p = 1 . . . m) are the nodes to which it has outgoing edges, and k_q (q = 1 . . . n) are the nodes it has incoming edges from. P also needs to include the condition that, in a transition, the reversal distances of no nodes other than the transitioning nodes change.
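A direct fixpoint computation of this recurrence (a sketch, run on an arbitrary three-node example digraph) reads:

def reversal_distances(nodes, edges, sink):
    # edges is a set of directed pairs (u, v); rd follows the recurrence above
    rd = {n: 0 if n == sink else float('inf') for n in nodes}
    changed = True
    while changed:
        changed = False
        for i in nodes:
            if i == sink:
                continue
            cands  = [rd[j] for (u, j) in edges if u == i]       # outgoing edges
            cands += [rd[k] + 1 for (k, v) in edges if v == i]   # incoming edges
            if cands and min(cands) < rd[i]:
                rd[i], changed = min(cands), True
    return rd

print(reversal_distances({1, 2, 3}, {(3, 1), (3, 2), (2, 3)}, sink=3))
# {1: 1, 2: 0, 3: 0}: node 1 must reverse one edge, node 2 already reaches the sink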
The interaction graph in this example is complete.
Invariance: ∀i, j ∈ [N], P(x, x′, i) ⇒ x′[j] ≤ x[j]. (ordering)
Progress: ∃m ∈ [N], C(x) ≠ ⊥ ⇒ C(step(m)(x)) < C(x).
≡ ∀n ∈ [N], (x[n] = 0) ∨ ∃m ∈ [N] (P(x, x′, m) ∧ x′[m] < x[m]).
Noninterference: ∀i, j ∈ [N], P(x, x′, i) ∧ P(x′, x″, j) ∧ P(x″, x‴, i)
⇒ (x′[i] < x[i] ⇒ x‴[i] < x″[i]). (decreasing measure)
Minimality: C(x) = 0^N ⇒ x ∈ X*.
From the above conditions, using Theorem 2, N₀ is calculated to be 21.
Experiments and Discussion
We verified that instances of the aforementioned systems with sizes less than the small model parameter N₀ satisfy the four conditions (invariance, progress, non-interference, minimality) of Theorem 1 using the Z3 SMT solver [START_REF] Moura | Z3: An efficient smt solver[END_REF]. The models are checked by symbolic execution. The interaction graphs were complete graphs in all the experiments. In Figure 1, the x-axis represents the problem instance sizes, and the y-axis is the log of the running time (in seconds) for verifying Theorem 1 for the different algorithms. We observe that the running times grow rapidly with the increase in the model sizes. For the binary gossip example, the program completes in ∼17 seconds for a model size 7, which is the N₀ value. In the case of link reversal, for a model size 13, the program completes in ∼30 minutes. We have used complete graphs in all our experiments, but as we mentioned earlier in Section 3.2, we can encode more general graphs as well. This method is a general approach to automated verification of stabilization properties of distributed algorithms under specific fairness constraints and structural constraints on graphs. The small-model nature of the conditions to be verified is crucial to the success of this approach. We saw that many distributed graph algorithms, routing algorithms and symmetry-breaking algorithms can be verified using the techniques discussed in this paper. The problem of finding a suitable measure function which satisfies Theorem 2 is indeed a non-trivial one in itself; however, for the problems we study, the natural measure function of the algorithms seems to work.
Fig. 1. Instance size vs log₁₀(T), where T is the running time in seconds
A sub-level set of a function comprises all points in the domain which map to the same value or less.
Benoit Claudel
Quentin Sabah
Jean-Bernard Stefani
Simple Isolation for an Actor Abstract Machine
Introduction
Motivations. The actor model of concurrency [START_REF] Agha | Actors: A Model of Concurrent Computation in Distributed Systems[END_REF], where isolated sequential threads of execution communicate via buffered asynchronous message-passing, is an attractive alternative to the model of concurrency adopted e.g. for Java, based on threads communicating via shared memory. The actor model is both more congruent to the constraints of increasingly distributed hardware architectures -be they local as in multicore chips, or global as in the world-wide web -, and more adapted to the construction of long-lived dynamic systems, including dealing with hardware and software faults, or supporting dynamic update and reconfiguration, as illustrated by the Erlang system [START_REF] Armstrong | [END_REF]. Because of this, we have seen in the recent years renewed interest in implementing the actor model, be that at the level of experimental operating systems as in e.g. Singularity [START_REF] Fahndrich | Language Support for Fast and Reliable Messagebased Communication in Singularity OS[END_REF], or in language libraries as in e.g. Java [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF] and Scala [START_REF] Haller | Actors that unify threads and events[END_REF].
When combining the actor model with an object-oriented programming model, two key questions to consider are the exact semantics of message passing, and its efficient implementation, in particular on multiprocessor architectures with shared physical memory. To be efficient, an implementation of message passing on a shared memory architecture ought to use data transfer by reference, where the only data exchanged is a pointer to the part of the memory that contains the message. However, with data transfer by reference, enforcing the share-nothing semantics of actors becomes problematic: once an arbitrary memory reference is exchanged between sender and receiver, how do you ensure the sender can no longer access the referenced data? Usual responses to this question typically involve restricting the shape of messages, and controlling references (usually through a reference uniqueness scheme [START_REF] Minsky | Towards alias-free pointers[END_REF]) by various means, including runtime support, type systems and other static analyses, as in Singularity [START_REF] Fahndrich | Language Support for Fast and Reliable Messagebased Communication in Singularity OS[END_REF], Kilim [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF], Scala actors [START_REF] Haller | Capabilities for uniqueness and borrowing[END_REF], and SOTER [START_REF] Negara | Inferring ownership transfer for efficient message passing[END_REF].
Contributions. In this paper, we study a point in the actor model design space which, despite its simplicity, has never, to our knowledge, been explored before. It features a very simple programming model that places no restriction on the shape and type of messages, and does not require special types or annotations for references, yet still enforces the share nothing semantics of the actor model. Specifically, we introduce an actor abstract machine, called Siaam. Siaam is layered on top of a sequential object-oriented abstract machine, has actors running concurrently using a shared heap, and enforces strict actor isolation by means of run-time barriers that prevent an actor from accessing objects that belong to a different actor. The contributions of this paper can be summarized as follows. We formally specify the Siaam model, building on the Jinja specification of a Java-like sequential language [START_REF] Klein | A machine-checked model for a java-like language, virtual machine, and compiler[END_REF]. We formally prove, using the Coq proof assistant, the strong isolation property of the Siaam model. We describe our implementation of the Siaam model as a modified Jikes RVM [16]. We present a novel static analysis, based on a combination of points-to, alias and liveness analyses, which is used both for improving the run-time performance of Siaam programs, and for providing useful debugging support for programmers. Finally, we evaluate the performance of our implementation and of our static analysis.
Outline. The paper is organized as follows. Section 2 presents the Siaam machine and its formal specification. Section 3 presents the formal proof of its isolation property. Section 4 describes the implementation of the Siaam machine. Section 5 presents the Siaam static analysis. Section 6 presents an evaluation of the Siaam implementation and of the Siaam analysis. Section 7 discusses related work and concludes the paper. Because of space limitations, we present only some highlights of the different developments. Interested readers can find all the details in the second author's PhD thesis [START_REF] Sabah | SIAAM: Simple Isolation for an Abstract Actor Machine[END_REF], which is available online along with the Coq code [25].
Siaam: model and formal specification
Informal presentation. Siaam combines actors and objects in a programming model with a single shared heap. Actors are instances of a special class. Each actor is equipped with at least one mailbox for queued communication with other actors, and has its own logical thread of execution that runs concurrently with other actor threads. Every object in Siaam belongs to an actor, which we call its owner. An object has a unique owner. Each actor is its own owner. At any point in time the ownership relation forms a partition of the set of objects. A newly created object has its owner set to that of the actor of the creating thread.
Siaam places absolutely no restriction on the references between objects, including actors. In particular objects with different owners may reference each other. Siaam also places no constraint on what can be exchanged via messages: the contents of a message can be an arbitrary object graph, defined as the graph of objects reachable (following object references in object fields) from a root object specified when sending a message. Message passing in Siaam has a zero-copy semantics, meaning that the object graph of a message is not copied from the sender actor to the receiver actor; only the reference to the root object of a message is communicated. An actor is only allowed to send objects it owns⁴, and it cannot send itself as part of a message content.
Mail system. Each actor may have zero, one, or several mailboxes from which it can retrieve messages at will. Mailboxes are created dynamically and may be communicated without restriction. Any actor of the system may send messages through a mailbox. However, each mailbox is associated with a receiver actor, such that only the receiver may retrieve messages from it.
Actors. The local state of each actor is represented by an object, and the associated behaviour is a method of that object. The behaviour method is free to implement any algorithm; the actor terminates when that method returns. Here we deviate from the classical definition of an actor, where actors "react" to received communications: Siaam's actors are more active, in the sense that they can arbitrarily choose when to receive a message, and from which mailbox. It is nonetheless possible to replicate Agha's actor model with the Siaam actor model and conversely: fixing a unique mailbox for each Siaam actor and writing an infinite-loop behaviour that processes messages one by one yields Agha's model.
Fig. 1. Ownership and ownership transfer in Siaam
Figure 1 illustrates ownership and ownership transfer in Siaam. On the left side (a) is a configuration of the heap and the ownership relation where each actor, presented in gray, owns the objects that are part of the same dotted convex hull. Directed edges are heap references. On the right side (b), the objects 1, 2, 3 have been transferred from a to b, and object 1 has been attached to the data structure maintained in b's local state. The reference from a to 1 has been preserved, but actor a is no longer allowed to access the fields of 1, 2, 3.
To ensure isolation, Siaam enforces the following invariant: an object o (in fact an executing thread) can only access fields of an object that has the same owner as o; any attempt to access the fields of an object with a different owner than the caller raises a run-time exception. To enforce this invariant, message exchange in Siaam involves twice changing the owner of all objects in a message contents graph: when a message is enqueued in a receiver mailbox, the owner of objects in the message contents is changed atomically to a null owner ID that is never assigned to any actor; when the message is dequeued by the receiver actor, the owner of objects in the message contents is changed atomically to the receiver actor. This scheme prevents pathological situations where an object passed in a message m may be sent in another message m′ by the receiver actor without the latter having dequeued (and hence actually received) message m. Since Siaam does not modify object references in any way, the sender actor can still have references to objects that have been sent, but any attempt from this sender actor to access them will raise an exception.
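The invariant and the two-step transfer can be made concrete with a small executable model (a Python sketch of the semantics, not the virtual machine implementation: the owner table, the reachability computation and the barrier are all simplified assumptions).

class OwnerException(Exception):
    pass

def read_field(current_actor, obj, field, owner):
    # owner: dict from id(obj) to the owning actor (a simplified ownership table)
    if owner[id(obj)] is not current_actor:   # the run-time owner-check barrier
        raise OwnerException
    return getattr(obj, field)

def reachable(root):
    # the message contents: everything reachable from the root through object fields
    seen, stack = [], [root]
    while stack:
        o = stack.pop()
        if any(o is s for s in seen):
            continue
        seen.append(o)
        stack += [v for v in vars(o).values() if hasattr(v, '__dict__')]
    return seen

def send(sender, root, owner):
    graph = reachable(root)
    if any(owner[id(o)] is not sender for o in graph):
        raise OwnerException                  # mixed ownership: the send fails
    for o in graph:
        owner[id(o)] = None                   # withdrawn to the null owner

def receive(receiver, root, owner):
    for o in reachable(root):
        owner[id(o)] = receiver               # acquired upon dequeue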
Siaam: model and formal specification. The formal specification of the Siaam model defines an operational semantics for the Siaam language, in the form of a reduction semantics. The Siaam language is a Java-like language, for its sequential part, extended with special classes with native methods corresponding to operations of the actor model, e.g. sending and receiving messages. The semantics is organized in two layers, the single-actor semantics and the global semantics. The single-actor semantics deals with evolutions of individual actors, and reduces actor-local state. The global semantics maintains a global state not directly accessible from the single-actor semantics. In particular, the effect of reading or updating object fields by actors belongs to the single-actor semantics, but whether it is allowed is controlled by the global semantics. Communications are handled by the global semantics.
The single actor semantics extends the Jinja formal specification in HOL of the reduction semantics of a (purely sequential) Java-like language [START_REF] Klein | A machine-checked model for a java-like language, virtual machine, and compiler[END_REF]⁵. Jinja gives a reduction semantics for its Java-like language via judgments of the form P ⊢ ⟨e, (lv, h)⟩ → ⟨e′, (lv′, h′)⟩, which means that in presence of program P (a list of class declarations), expression e with a set of local variables lv and a heap h reduces to expression e′ with local variables lv′ and heap h′.
We extend Jinja judgments for our single-actor semantics to take the form P, w ⊢ ⟨e, (lv, h)⟩ -wa→ ⟨e′, (lv′, h′)⟩, where e, lv corresponds to the local actor state, h is the global heap, w is the identifier of the current actor (owner), and wa is the actor action requested by the reduction. Actor actions embody the Siaam model per se. They include creating new objects (with their initial owner), including actors and mailboxes, checking the owner of an object, sending and receiving messages. For instance, successfully accessing an object field is governed by rule Read in Figure 2. Jinja objects are pairs (C, fs) of the object class name C and the field table fs. A field table is a map holding a value for each field of an object, where fields are identified by pairs (F, D) of the field name F and the name D of the declaring class. The premisses of rule Read retrieve the object referenced by a from the heap (hp s a = Some (C, fs), where hp is the projection function that retrieves the heap component of a local actor state, and the heap itself is an association table modelled as a function that given an object reference returns an object), and the value v held in field F. In the conclusion of rule Read, reading the field F from a returns the value v, with the local state s (local variables and heap) unchanged. The actor action OwnerCheck a True indicates that object a has the current actor as its owner. Apart from the addition of the actor action label, rule Read is directly lifted from the small-step semantics of Jinja in [START_REF] Klein | A machine-checked model for a java-like language, virtual machine, and compiler[END_REF]. In the case of field access, the rule Read is naturally complemented with rule ReadX, that raises an exception if the owner check fails, and which is specific to Siaam.
The global semantics is defined by the rule Global in Figure 2. The judgment, written P ⊢ s → s′, means that in presence of program P, global state s reduces to global state s′. The global state (xs, ws, ms, h) of a Siaam program execution comprises four components: the actor table xs, an ownership relation ws, the mailbox table ms, and a shared heap h. The projection functions acs, ows, mbs, shp return respectively the actor table, the ownership relation, the mailbox table, and the shared heap component of the global state. The actor table associates an actor identifier to an actor local state consisting of a pair e, lv of expression and local variables. The rule Global reduces the global state by applying a single step of the single-actor semantics for actor w. In the premises of the rule, the shared heap shp s and the current local state x (expression and local variables) for w are retrieved from the global state. The actor can reduce to x′ with new shared heap h′ and perform the action wa. ok_act tests the actor action precondition against s. If it is satisfiable, upd_act applies the effects of wa to the global state, yielding the new tuple of state components (xs′, ws′, ms′, _), where the heap component is left unchanged. The new state s′ is assembled from the new mailbox table, the new ownership relation, the new heap from the single-actor reduction and the new actor table where the state for actor w is updated with its new local state x′.
Siaam: Proof of isolation
The key property we expect the Siaam model to uphold is the strong isolation (or share nothing) property of the actor model, meaning actors can only exchange information via message passing. We have formalized this property and proved it using the Coq proof assistant (v8.4) [START_REF]Coq development team[END_REF]. We present in this section some key elements of the formalization and proof, using excerpts from the Coq code. The formalization uses an abstraction of the operational semantics presented in the previous section. Specifically, we abstract away from the single-actor semantics. The local state of an actor is abstracted as being just a table of local variables (no expression), which may change in obvious ways: adding or removing a local variable, changing the value held by a local variable. The formalization (which we call Abstract Siaam) is thus a generalization of the Siaam operational semantics.
A message is just a pair consisting of a message identifier and a reference to a root object. A value can be either the null value (vnull), the mark value (vmark), an integer (vnat), a boolean (vbool), an object reference, an actor id or a mailbox id. The special mark value is simply a distinct value used to formalize the isolation property.
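The corresponding basic Coq definitions, recovered from the accompanying development (with record-field separators normalized), package a configuration from its four components and define messages, queues and mailboxes; the constructor and field names (mkcf, acs, ows, mbs, shp, mkmb, own, msgs) are the ones used in the rules below.

Record conf : Type := mkcf { acs : actors; ows : owners; mbs : mailboxes; shp : heap }.
Definition message := prod msgid addr.
Definition queue := list message.
Record mbox : Type := mkmb { own : aid; msgs : queue }.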
Abstract Siaam: Transition rules. Evolutions of a Siaam system are modeled in Abstract Siaam as transitions between configurations, which are in turn governed by transition rules. Each transition rule in Abstract Siaam corresponds to an instance of the Global rule in the Siaam operational semantics, specialized for dealing with a given actor action. For instance, the rule governing field access, which abstracts the global semantics reduction picking the OwnerCheck a True action offered by a Read reduction of the single-actor semantics (cf. Figure 2) carrying the identifier of actor e, and accessing field f of object o referenced by a, is defined as follows:
Inductive redfr : conf → aid → conf → Prop :=
| redfr_step : ∀ (c1 c2 : conf) (e : aid) (l1 l2 : locals) (i j : vid) (v w : value)
               (a : addr) (o : object) (f : fid),
    set_In (e, l1) (acs c1) →
    set_In (i, w) l1 →
    set_In (j, vadd a) l1 →
    set_In (a, o) (shp c1) →
    set_In (f, v) o →
    set_In (a, Some e) (ows c1) →
    v_compat w v →
    l2 = up_locals i v l1 →
    c2 = mkcf (up_actors e l2 (acs c1)) (ows c1) (mbs c1) (shp c1) →
    c1 =fr e ⇒ c2
where " t '=fr' a '⇒' t' " := (redfr t a t').
The conclusion of the rule, c1 =fr e ⇒ c2, states that configuration c1 can evolve into configuration c2 by actor e doing a field access fr. The premises of the rule are the obvious ones: e must designate an actor of c1; the table l1 of local variables of actor e must have two local variables i and j, one holding a reference a to the accessed object (set_In (j, vadd a) l1), the other some value w (set_In (i, w) l1) compatible with that read in the accessed object field (v_compat w v); a must point to an object o in the heap of c1 (set_In (a, o) (shp c1)), which must have a field f holding some value v (set_In (f, v) o); and actor e must be the owner of object o for the field access to succeed (set_In (a, Some e) (ows c1)). The final configuration c2 has the same ownership relation, mailbox table and shared heap as the initial one c1, but its actor table is updated with the new local state of actor e (c2 = mkcf (up_actors e l2 (acs c1)) (ows c1) (mbs c1) (shp c1)), where variable i now holds the read value v (l2 = up_locals i v l1).
Another key instance of the Abstract Siaam transition rules is the rule presiding over message send:
Inductive redsnd : conf → aid → conf → Prop :=
| redsnd_step : ∀ (c1 c2 : conf) (e : aid) (a : addr) (l : locals) (ms : msgid)
                (mi : mid) (mb mb' : mbox) (owns : owners),
    set_In (e, l) (acs c1) →
    set_In (vadd a) (values_from_locals l) →
    trans_owner_check (shp c1) (ows c1) (Some e) a = true →
    set_In (mi, mb) (mbs c1) →
    not (set_In ms (msgids_from_mbox mb)) →
    Some owns = trans_owner_update (shp c1) (ows c1) None a →
    mb' = mkmb (own mb) ((ms, a) :: (msgs mb)) →
    c2 = mkcf (acs c1) owns (up_mboxes mi mb' (mbs c1)) (shp c1) →
    c1 =snd e ⇒ c2
where " t '=snd' a '⇒' t' " := (redsnd t a t').
The conclusion of the rule, c1 =snd e ⇒ c2, states that configuration c1 can evolve into configuration c2 by actor e doing a message send snd. The premises of the rule expect the owner of the objects reachable from the root object (referenced by a) of the message to be e; this is checked with the function trans_owner_check: trans_owner_check (shp c1) (ows c1) (Some e) a = true. When placing the message in the mailbox mb of the receiver actor, the owner of all the reachable objects is set to None; this is done with the function trans_owner_update: Some owns = trans_owner_update (shp c1) (ows c1) None a. Placing the message with id ms and root object referenced by a in the mailbox is just a matter of queuing it in the mailbox message queue: mb' = mkmb (own mb) ((ms, a) :: (msgs mb)).
The transition rules of Abstract Siaam also include a rule governing silent transitions, i.e. transitions that abstract from local actor state reductions that elicit no change on other elements of a configuration (shared heap, mailboxes, ownership relation, other actors). The latter are just modelled as transitions arbitrarily modifying a given actor's local variables, with no acquisition of object references that were previously unknown to the actor.
Isolation proof. The Siaam model ensures that the only means of information transfer between actors is message exchange. We can formalize this isolation property using mark values. We call an actor a clean if its local variables do not hold a mark, and if all objects reachable from a and belonging to a hold no mark in their fields. An object o is reachable from an actor a if a has a local variable holding o's reference, or if, recursively, an object o' is reachable from a which holds o's reference in one of its fields. The isolation property can now be characterized as follows: a clean actor in any configuration remains clean during an evolution of the configuration if it never receives any message. In Coq:
Theorem ac_isolation : ∀ (c1 c2 : conf) (a1 a2 : actor),
  wf_conf c1 →
  set_In a1 (acs c1) →
  ac_clean (shp c1) a1 (ows c1) →
  c1 =@ (fst a1) ⇒* c2 →
  Some a2 = lookup_actor (acs c2) (fst a1) →
  ac_clean (shp c2) a2 (ows c2).
The theorem states that, in any well-formed configuration c1, an actor a1 which is clean (ac_clean (shp c1) a1 (ows c1)) remains clean in any evolution of c1 that does not involve a reception by a1. This is expressed as c1 =@ (fst a1) ⇒* c2 and ac_clean (shp c2) a2 (ows c2), where fst a1 just extracts the identifier of actor a1, and a2 is the descendant of actor a1 in the evolution (it has the same actor identifier as a1: Some a2 = lookup_actor (acs c2) (fst a1)). The relation =@ a ⇒*, which represents evolutions not involving a message receipt by actor a, is defined as the reflexive and transitive closure of the relation =@ a ⇒, which is a one-step evolution not involving a receipt by a. The isolation theorem is really about transfer of information between actors, the mark denoting a distinguished bit of information held by an actor. At first sight it appears to say nothing about ownership, but notice that a clean actor a is one such that all objects that belong to a are clean, i.e. hold no mark in their fields. Thus a corollary of the theorem is that, in absence of message receipt, actor a cannot acquire an object from another actor (if that was the case, transferring the ownership of an unclean object would result in actor a becoming unclean).
A well-formed configuration is a configuration where each object in the heap has a single owner, all identifiers are indeed unique, where mailboxes hold messages sent by actors in the actor table, and all objects referenced by actors (directly or indirectly, through references in object fields) belong to the heap. To prove theorem ac_isolation, we first prove that well-formedness is an invariant in any configuration evolution:
Theorem red_preserves_wf : ∀ (c1 c2 : conf), c1 ⇒ c2 → wf_conf c1 → wf_conf c2.
The theorem red_preserves_wf is proved by induction on the derivation of the assertion c1 ⇒ c2. To prove the different cases, we rely mostly on simple reasoning with sets, and a few lemmas characterizing the correctness of table manipulation functions, of the trans_owner_check function which verifies that all objects reachable from the root object in a message have the same owner, and of the trans_owner_update function which updates the ownership table during message transfers. Using the invariance of well-formedness, theorem ac_isolation is proved by induction on the derivation of the assertion c1 =@ (fst a1) ⇒ * c2. To prove the different cases, we rely on several lemmas dealing with reachability and cleanliness.
The last theorem, live_mark, is a liveness property that shows that the isolation property is not vacuously true. It states that marks can flow between actors during execution. In Coq:
Theorem live_mark : ∃ (c1 c2 : conf)(ac1 ac2 : actor), c1 ⇒ * c2 ∧ set_In ac1 (acs c1) ∧ ac_clean (shp c1) ac1 (ows c1) ∧ Some ac2 = lookup_actor (acs c2) (fst ac1) ∧ ac_mark (shp c2) ac2 (ows c2).
Siaam: Implementation
We have implemented the Siaam abstract machine as a modified Jikes RVM [16]. Specifically, we extended the Jikes RVM bytecode and added a set of core primitives supporting the ownership machinery, which are used to build trusted APIs implementing particular programming models. The Siaam programming model is available as a trusted API that implements the formal specification presented in Section 2. On top of the Siaam programming model, we implemented the ActorFoundry API as described in [START_REF] Karmani | Actor frameworks for the JVM platform: a comparative analysis[END_REF], which we used for some of our evaluation. Finally we implemented a trusted event-based actor programming model on top of the core primitives, which can dispatch thousands of lightweight actors over pools of threads, and enables building high-level APIs similar to Kilim with Siaam's ownership-based isolation.
Bytecode. The standard Java VM instructions are extended to include: a modified object creation instruction New, which creates an object on the heap and sets its owner to that of the creating thread; modified field read and write access instructions getfield and putfield with owner check; modified array load and store instructions aload and astore with owner check.
Virtual machine core. Each heap object and each thread of execution have an owner reference, which points to an object implementing the special Owner interface. A thread can only access objects belonging to the Owner instance referenced by its owner reference. Core primitives include operations to retrieve and set the owner of the current thread, to retrieve the owner of an object, to withdraw and acquire ownership over objects reachable from a given root object. In the Jikes RVM, objects are represented in memory by a sequence of bytes organized into a leading header section and the trailing scalar object's fields or array's length and elements. We extended the object header with two reference-sized words, OWNER and LINK. The OWNER word stores a reference to the object owner, whereas the LINK word is introduced to optimize the performance of object graph traversal operations.
Contexts. Since the Jikes RVM is fully written in Java, threads seamlessly execute application bytecode and the virtual machine internal bytecode. We have introduced a notion of execution context in the VM to avoid subjecting VM bytecode to the owner-checking mechanisms. A method in the application context is instrumented with all the isolation mechanisms whereas methods in the VM context are not. If a method can be in both contexts, it must be compiled in two versions, one for each context. When a method is invoked, the context of the caller is used to deduce which version of the method should be called. The decision is taken statically when the invoke instruction is compiled.
Ownership transfer. Central to the performance of the Siaam virtual machine are the operations implementing ownership transfer, withdraw and acquire. In the formal specification, owner-checking an object graph and updating the owner of objects in the graph is done atomically (see e.g. the message send transition rule in Section 3). However, implementing the withdraw operation as an atomic operation would be costly. Furthermore, an implementation of ownership transfer must minimize graph traversals. We have implemented an iterative algorithm for withdraw that chains the objects that are part of a message through their LINK word. The list thus obtained is maintained as long as the message exists, so that the acquire operation can efficiently traverse the objects of the message.
The algorithm leverages specialized techniques, initially introduced in the Jikes RVM to optimize the reference scanning phase during garbage collection [START_REF] Garner | A comprehensive evaluation of object scanning techniques[END_REF], to efficiently enumerate the reference offsets for a given base object.
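The traversal can be pictured with the following sketch (Python standing in for the VM internals; rollback on a failed check, synchronization, and the real per-class reference-offset enumeration, abstracted here as references(o), are omitted).

class OwnerException(Exception):
    pass

def withdraw(sender, root, owner, link, references):
    head, stack = None, [root]
    while stack:
        o = stack.pop()
        if id(o) in link:                 # already chained, hence already visited
            continue
        if owner[id(o)] is not sender:
            raise OwnerException          # mixed ownership: the send is refused
        owner[id(o)] = None               # withdrawn to the null owner
        link[id(o)], head = head, o       # chain o through its LINK slot
        stack += references(o)
    return head                           # the chain is kept while the message lives

def acquire(receiver, head, owner, link):
    while head is not None:               # walk the LINK chain, no heap traversal
        owner[id(head)] = receiver
        head = link[id(head)]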
Siaam: Static Analysis
We describe in this section some elements of the Siaam static analysis used to optimize away owner-checking on field read and write instructions. The analysis is based on the observation that an instruction accessing an object's field does not need owner-checking if the object accessed belongs to the executing actor. Any object that has been allocated or received by an actor and has not been passed to another actor ever since belongs to that actor. The algorithm returns an under-approximation of the owner-checking removal opportunities in the analyzed program. Considering a point in the program, we say an object (or a reference to an object) is safe when it always belongs to the actor executing that point, regardless of the execution history. Conversely, we say an object is unsafe when sometimes it does not belong to the current actor. We extend the denomination to instructions that would respectively access a safe object or an unsafe object. A safe instruction will never throw an OwnerException, whereas an unsafe instruction might.
Analysis. The Siaam analysis is structured in two phases. First the safe dynamic references analysis employs a local must-alias analysis to propagate owner-checked references along the control-flow edges. It is optionally refined with an inter-procedural pass propagating safe references through method arguments and returned values. Then the safe objects analysis tracks safe runtime objects along call-graph and method control-flow edges by combining an inter-procedural points-to analysis and an intra-procedural live variable analysis. Both phases depend on the transferred abstract objects analysis that propagates unsafe abstract objects from the communication sites downward the call graph edges.
By combining results from the two phases, the algorithm computes conservative approximations of unsafe runtime objects and safe variables at any control-flow point in the program. The owner-check elimination for a given instruction s accessing the reference in variable V proceeds as illustrated in Figure 3. First the unsafe objects analysis is queried to know whether V may point to an unsafe runtime object at s. If not, the instruction can skip the owner-check for V. Otherwise, the safe references analysis is consulted to know whether the reference in variable V is considered safe at s, thanks to dominating owner-checks of the reference in the control-flow graph.
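In Python, the decision of Figure 3 amounts to the following sketch (the two query functions stand for the two analyses' interfaces; the names are illustrative, not the actual API).

def can_eliminate_owner_check(s, v, may_point_to_unsafe, is_safe_reference):
    if not may_point_to_unsafe(s, v):   # safe objects analysis: V never unsafe at s
        return True
    return is_safe_reference(s, v)      # safe references analysis: dominating checks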
The Siaam analysis makes use of several standard intra- and inter-procedural program analyses: a call-graph representation, an inter-procedural points-to analysis, an intra-procedural liveness analysis, and an intra-procedural must-alias analysis. Each of these analyses exists in many different variants offering various tradeoffs between result accuracy and algorithmic complexity, but regardless of the implementation, they provide a rather standard querying interface. Our analysis is implemented as a framework that can make use of different instances of these analyses.
Implementations. The intra-procedural safe references analysis which is part of the Siaam analysis has been included in the Jikes RVM optimizing compiler. Despite its relative simplicity and its very conservative assumptions, it efficiently eliminates about half of the owner-check barriers introduced by application bytecode and the standard library for the benchmarks we have tested (see Section 6). The safe references analysis and the safe objects analysis from the Siaam analysis have been implemented in their inter-procedural versions as an offline tool written in Java. The tool interfaces with the Soot analysis framework [23], which provides the program representation, the call graph, the inter-procedural pointer analysis, the must-alias analysis and the liveness analysis we use.
Programming assistant. The Siaam programming model is quite simple, requiring no programmer annotation, and placing no constraint on messages. However, it may generate hard-to-understand runtime exceptions due to failed owner-checks. The Siaam analysis is therefore used as the basis of a programming assistant that helps application developers understand why a given program statement is potentially unsafe and may throw an ownership exception at runtime. The Siaam analysis guarantees that there will be no false negative, but to limit the amount of false positives it is necessary to use a combination of the most accurate standard (points-to, must-alias and liveness) analyses. The programming assistant tracks a program P backward, starting from an unsafe statement s with a non-empty set of unverified ownership preconditions (as given by the ok_act function in Section 2), trying to find every program point that may explain why a given precondition is not met at s. For each unsatisfied precondition, the assistant can exhibit the shortest execution paths that result in an exception being raised at s. An ownership precondition may comprise requirements that a variable or an object be safe. When a requirement is not satisfied before s, it raises one or several questions of the form "why is x unsafe before s?". The assistant traverses the control-flow backward, looks for immediate answers at each statement reached, and propagates the questions further if necessary, until all questions have found an answer.
Siaam Implementation. We present first an evaluation of the overall performance of our Siaam implementation based on the DaCapo benchmark suite [START_REF] Blackburn | The DaCapo benchmarks: Java benchmarking development and analysis[END_REF], representative of various real industrial workloads. These applications use regular Java. The bytecode is instrumented with Siaam's owner-checks and all threads share the same owner. With this benchmark we measure the overhead of the dynamic ownership machinery, encompassing the object owner initialization and the owner-checking barriers, plus the allocation and collection costs linked to the object header modifications.
We benchmarked five configurations. no siaam is the reference Jikes RVM without modifications. opt designates the modified Jikes RVM with JIT owner-check elimination. noopt designates the modified Jikes RVM without JIT owner-check elimination. sopt is the same as opt but the application bytecode has safety annotations issued by the offline Siaam static analysis tool. Finally soptnc is the same as sopt without owner-check barriers for the standard library bytecode. We executed the 2010-MR2 version of the DaCapo benchmarks, with two workloads, the default and the large. Table 1 shows the results for the DaCapo 2010-MR2 runs. The results were obtained using a machine equipped with an Intel Xeon W3520 2.67 GHz processor. The execution time results are normalized with respect to the no-siaam configuration for each program of the suite: lower is better. The geometric mean summarizes the typical overhead for each configuration. The opt figures in Table 1 show that the modified virtual machine including JIT barrier elimination has an overhead of about 30% compared to the not-isolated reference. The JIT elimination improves performance by about 20% compared to the noopt configuration. When the bytecode is annotated by the whole-program static analysis the performance is 10% to 20% better than with the runtime-only optimization. However, the DaCapo benchmarks use the Java reflection API to load classes and invoke methods, meaning our static analysis was not able to process all the bytecode with the best precision. We can expect better results with other programs for which the call graph can be entirely built with precision. Moreover we used for the benchmarks a context-insensitive, flow-insensitive pointer analysis, meaning the Siaam analysis could be even more accurate with sensitive standard analyses. Finally the standard library bytecode is not annotated by our tool; it is only treated by the JIT elimination optimization. The soptnc configuration provides a good indication of what the full optimization would yield. The results show an overhead (w.r.t. application) with a mean of 15%, which can be considered an acceptable price to pay for the simplicity of developing isolated programs with Siaam.
The Siaam virtual machine consumes more heap space than the unmodified Jikes RVM due to the duplication of the standard library used by both the virtual machine and the application, and because of the two words we add in every object's header. The average object size in the DaCapo benchmarks is 62 bytes, so our implementation increases it by 13%. We have measured a 13% increase in the full garbage collection time, which accounts for the tracing of the two additional references and the memory compaction.
Siaam Analysis. We compare the efficiency of the Siaam whole-program analysis to the SOTER algorithm, which is closest to ours. Table 2 contains the results that we obtained for the benchmarks reported in [START_REF] Negara | Inferring ownership transfer for efficient message passing[END_REF], which use ActorFoundry programs. For each analyzed application we give the total number of owner-checking barriers and the total number of message passing sites in the bytecode. The columns "Ideal safe" show the expected number of safe sites for each criterion. The column "Siaam safe" gives the result obtained with the Siaam analysis. The analysis execution time is given in the third main column. The last column compares the ratio to ideal for both SOTER and Siaam. Our analysis outperforms SOTER significantly. SOTER relies on an inter-procedural live analysis and a points-to analysis to infer message passing sites where a by-reference semantics can apply safely. Given an argument a_i of a message passing site s in the program, SOTER computes the set of objects passed by a_i and the set of objects transitively reachable from the variables live after s. If the intersection of these two sets is empty, SOTER marks a_i as eligible for by-reference argument passing; otherwise it must use the default by-value semantics. The weakness of this pessimistic approach is that among the live objects, a significant part won't actually be accessed in the control-flow after s. On the other hand, Siaam does care about objects being actually accessed, which is a stronger evidence criterion for incriminating message passing sites. Although Siaam's algorithm wasn't designed to optimize out by-value message passing, it is perfectly adapted for that task. For each unsafe instruction detected by the algorithm, there is one or several guilty dominating message passing sites. Our diagnosis algorithm tracks back the application control-flow from the unsafe instruction to the incriminated message passing sites. These sites represent a subset of the sites where SOTER cannot optimize out by-value argument passing.
Related Work and Conclusion
Enforcing isolation between different groups of objects, programs or threads in presence of a shared memory has been much studied in the past two decades. Although we cannot give here a full survey of the state of the art (a more in-depth analysis is available in [START_REF] Sabah | SIAAM: Simple Isolation for an Abstract Actor Machine[END_REF]), we can point out three different kinds of related work: those relying on type annotations to ensure isolation, those relying on run-time mechanisms, and those relying on static analyses. Much work has been done on controlling aliasing and encapsulation in object-oriented languages and systems, in a concurrent context or not. Much of the work in these areas relies on some sort of reference uniqueness, which eliminates object sharing by making sure that there is only one reference to an object at any time, e.g. [START_REF] Clarke | External uniqueness is unique enough[END_REF][START_REF] Haller | Capabilities for uniqueness and borrowing[END_REF][START_REF] Hogg | Islands: aliasing protection in object-oriented languages[END_REF][START_REF] Minsky | Towards alias-free pointers[END_REF][START_REF] Müller | Ownership transfer in universe types[END_REF]. All these systems restrict the shape of object graphs or the use of references in some way. In contrast, Siaam makes no such restriction. A number of systems rely on run-time mechanisms for achieving isolation, most using either deep-copy or special message heaps for communication, e.g. [START_REF] Czajkowski | Multitasking without compromise: A virtual machine evolution[END_REF][START_REF] Fahndrich | Language Support for Fast and Reliable Messagebased Communication in Singularity OS[END_REF][START_REF] Geoffray | I-JVM: a java virtual machine for component isolation in osgi[END_REF][START_REF] Gruber | Ownership-based isolation for concurrent actors on multicore machines[END_REF]. Of these, O-Kilim [START_REF] Gruber | Ownership-based isolation for concurrent actors on multicore machines[END_REF], which builds directly on the PhD work of the first author of this paper [START_REF] Claudel | Mécanismes logiciels de protection mémoire[END_REF], is the closest to Siaam: it places no constraint on transferred object graphs, but at the expense of a complex programming model and no programmer support, in contrast to Siaam. Finally several works develop static analyses for efficient concurrency or ownership transfer, e.g. [START_REF] Carlsson | Message analysis for concurrent programs using message passing[END_REF][START_REF] Negara | Inferring ownership transfer for efficient message passing[END_REF][START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF]. Kilim [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF] relies in addition on type annotations to ensure tree-shaped messages. The SOTER [START_REF] Negara | Inferring ownership transfer for efficient message passing[END_REF] analysis is closest to the Siaam analysis and has been discussed in the previous section.
With its annotation-free programming model, which places no restriction on object references and message shape, we believe Siaam to be unique compared to other approaches in the literature. In addition, we have not found an equivalent of the formal proof of isolation we have conducted for Siaam. Our evaluations demonstrate that the Siaam approach to isolation is perfectly viable: it suffers only from a limited overhead in performance and memory consumption, and our static analysis can significantly improve the situation. The one drawback of our programming model, raising possibly hard-to-understand runtime exceptions, is greatly alleviated by the use of the Siaam analysis in a programming assistant.
Figure 1.1: (a) Three ownership domains with their respective actor in gray. (b) The configuration after the ownership of objects 1, 2, 3 was transferred from actor a to actor b.
Figures 1.2 to 1.4 feature some examples of valid and invalid message starting objects. In configuration (a) all the objects but the actor 0 may be employed as the starting object.
Figure 5.1: Owner-check elimination decision diagram. The left-most question is answered by the safe objects analysis. The right-most question is answered by the safe references analysis.
Fig. 3. Owner-check elimination decision diagram.
Table 1. DaCapo benchmarks
workload:        default                         large
Benchmark    opt  noopt  sopt  soptnc      opt  noopt  sopt  soptnc
antlr 1.20 1.32 1.09 1.11 1.21 1.33 1.11 1.10
bloat 1.24 1.41 1.17 1.05 1.40 1.59 1.14 0.96
hsqldb 1.24 1.36 1.09 1.06 1.45 1.60 1.29 1.10
jython 1.52 1.73 1.41 1.24 1.45 1.70 1.45 1.15
luindex 1.25 1.46 1.09 1.05 1.25 1.43 1.09 1.03
lusearch 1.31 1.45 1.17 1.18 1.33 1.49 1.21 1.21
pmd 1.32 1.37 1.29 1.24 1.34 1.44 1.39 1.30
xalan 1.24 1.39 1.33 1.35 1.29 1.41 1.38 1.40
geometric mean 1.28 1.43 1.20 1.16 1.34 1.50 1.25 1.15
Table 2. ActorFoundry analyses.
             Owner-check sites              Message passing sites             ratio to Ideal
             Sites  Ideal safe  Siaam safe  Sites  Ideal safe  Siaam safe  Time (sec)  Siaam  SOTER
ActorFoundry
threadring 24 24 24 8 8 8 0.1 100% 100%
(1) concurrent 99 99 99 15 12 10 0.1 98% 58%
(2) copymessages 89 89 84 22 20 15 0.1 91% 56%
performance 54 54 54 14 14 14 0.2 100% 86%
pingpong 28 28 28 13 13 13 0.1 100% 89%
refmessages 4 4 4 6 6 6 0.1 100% 67%
Benchmarks
chameneos 75 75 75 10 10 10 0.1 100% 33%
fibonacci 46 46 46 13 13 13 0.2 100% 86%
leader 50 50 50 10 10 10 0.1 100% 17%
philosophers 35 35 35 10 10 10 0.2 100% 100%
pi 31 31 31 8 8 8 0.1 100% 67%
shortestpath 147 147 147 34 34 34 1.2 100% 88%
Synthetic
quicksortCopy 24 24 24 8 8 8 0.2 100% 100%
(3) quicksortCopy2 56 56 51 10 10 5 0.1 85% 75%
Real world
clownfish 245 245 245 102 102 102 2.2 100% 68%
(4) rainbow fish 143 143 143 83 82 82 0.2 99% 99%
swordfish 181 181 181 136 136 136 1.7 100% 97%
⁴ Siaam enforces the constraint that all objects reachable from a message root object have the same owner, namely the sending actor. If the constraint is not met, sending the message fails. However, this constraint, which makes for a simple design, is just a design option. An alternative would be to consider that a message's contents consist of all the objects reachable from the root object which have the sending actor as their owner. This alternate semantics would not change the actual mechanics of the model and the strong isolation enforced by it.
⁵ Jinja, as described in [15], only covers a subset of the Java language. It does not have class member qualifiers, interfaces, generics, or concurrency.
M Beneke
A Bharucha
A Hryczuk
S Recksiegel
P Ruiz-Femenía
The last refuge of mixed wino-Higgsino dark matter
We delineate the allowed parameter and mass range for a wino-like dark matter particle containing some Higgsino admixture in the MSSM by analysing the constraints from diffuse gamma-rays from the dwarf spheroidal galaxies, galactic cosmic rays, direct detection and cosmic microwave background anisotropies. A complete calculation of the Sommerfeld effect for the mixed-neutralino case is performed. We find that the combination of direct and indirect searches poses significant restrictions on the thermally produced wino-Higgsino dark matter with correct relic density. For µ > 0 nearly the entire parameter space considered is excluded, while for µ < 0 a substantial region is still allowed, provided conservative assumptions on astrophysical uncertainties are adopted.
Introduction
Many remaining regions in the parameter space of the Minimal Supersymmetric Standard Model (MSSM), which yield the observed thermal relic density for neutralino dark matter, rely on very specific mechanisms, such as Higgs-resonant annihilation in the so-called funnel region, or sfermion co-annihilation. In [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF] we identified new regions, where the dark matter particle is a mixed (as opposed to pure) wino, has mass in the TeV region, and yields the observed relic density. These new regions are driven to the correct relic abundance by the proximity of the resonance of the Sommerfeld effect due to electroweak gauge boson exchange. In such situations, the annihilation cross section is strongly velocity dependent, and the present-day annihilation cross section is expected to be relatively large, potentially leading to observable signals in indirect searches for dark matter (DM). On the other hand, a substantial Higgsino fraction of a mixed dark matter particle leads to a large, potentially observable dark matter-nucleon scattering cross section.
In this paper we address the question of which part of this region survives the combination of direct and indirect detection constraints. For the latter we consider diffuse gamma-rays from the dwarf spheroidal galaxies (dSphs), galactic cosmic rays (CRs) and cosmic microwave background (CMB) anisotropies. These have been found to be the most promising channels for detecting or excluding the pure-wino DM model [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF]. Stronger limits can be obtained only from the non-observation of the gamma-line feature and to a lesser extent from diffuse gamma-rays both originating in the Galactic Centre (GC). Indeed, it has been shown [START_REF] Cohen | Wino Dark Matter Under Siege[END_REF][START_REF] Fan | In Wino Veritas? Indirect Searches Shed Light on Neutralino Dark Matter[END_REF] that the pure-wino model is ruled out by the absence of an excess in these search channels, unless the galactic dark matter profile develops a core, which remains a possibility. Since the viability of wino-like DM is a question of fundamental importance, we generally adopt the weaker constraint in case of uncertainty, and hence we take the point of view that wino-like DM is presently not excluded by gamma-line and galactic diffuse gamma-ray searches. Future results from the Čerenkov Telescope Array (CTA) are expected to be sensitive enough to resolve this issue (see e.g. [START_REF] Roszkowski | Prospects for dark matter searches in the pMSSM[END_REF][START_REF] Lefranc | Dark Matter in γ lines: Galactic Center vs dwarf galaxies[END_REF]), and will either observe an excess in gamma-rays or exclude the dominantly wino DM MSSM parameter region discussed in the present paper.
Imposing the observed relic density as a constraint, the pure-wino DM model has no free parameters and corresponds to the limit of the MSSM when all other superpartner particles and non-standard Higgs bosons are decoupled. Departing from the pure wino in the MSSM introduces many additional dimensions in the MSSM parameter space and changes the present-day annihilation cross section, branching ratios (BRs) for particular primary final states, and the final gamma and CR spectra leading to a modification of the limits. The tools for the precise computation of neutralino dark matter (co-) annihilation in the generic MSSM when the Sommerfeld enhancement is operative have been developed in [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF][START_REF] Hellmann | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos II. P-wave and next-to-next-to-leading order S-wave coefficients[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF] and applied to relic density computations in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF][START_REF] Beneke | Heavy neutralino relic abundance with Sommerfeld enhancements -a study of pMSSM scenarios[END_REF]. The present analysis is based on an extension of the code to calculate the annihilation cross sections for all exclusive two-body final states separately, rather than the inclusive cross section.
Further motivation for the present study is provided by the spectrum of the cosmic antiproton-to-proton ratio reported by the AMS-02 collaboration [START_REF] Aguilar | Antiproton Flux, Antiproton-to-Proton Flux Ratio, and Properties of Elementary Particle Fluxes in Primary Cosmic Rays Measured with the Alpha Magnetic Spectrometer on the International Space Station[END_REF], which appears to be somewhat harder than expected from the commonly adopted cosmic-ray propagation models. In [START_REF] Ibe | Wino Dark Matter in light of the AMS-02 2015 Data[END_REF] it has been shown that pure-wino DM can improve the description of this data. Although our understanding of the background is insufficient to claim the existence of a dark matter signal in antiprotons, it is nevertheless interesting to check whether the surviving mixed-wino DM regions are compatible with antiproton data. The outline of this paper is as follows. In Section 2 we summarize the theoretical input, beginning with a description of the dominantly-wino MSSM parameter region satisfying the relic-density constraint, then providing some details on the computation of the DM annihilation rates to primary two-body final states. Section 3 supplies information about the implementation of the constraints from diffuse gamma-rays from the dSphs, galactic CRs, direct detection and the CMB, and about the data employed for the analysis. The results of the indirect detection analysis are presented in Section 4 as constraints in the plane of the two most relevant parameters of the MSSM, the wino mass parameter M_2 and |µ| - M_2, where µ is the Higgsino mass parameter. In Section 5 the indirect detection constraints are combined with those from the non-observation of dark matter-nucleon scattering. For the case of µ < 0 we demonstrate the existence of a mixed wino-Higgsino region satisfying all constraints, while for µ > 0 we show that essentially no parameter space remains. Section 6 concludes.
CR fluxes from wino-like dark matter
Dominantly-wino DM with thermal relic density in the MSSM
In [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF] the Sommerfeld corrections to the relic abundance computation for TeV-scale neutralino dark matter in the full MSSM have been studied. The ability to perform the computations for mixed dark matter at a general MSSM parameter space point [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF][START_REF] Hellmann | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos II. P-wave and next-to-next-to-leading order S-wave coefficients[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF][START_REF] Beneke | Heavy neutralino relic abundance with Sommerfeld enhancements -a study of pMSSM scenarios[END_REF] revealed a large neutralino mass range with the correct thermal relic density, which opens mainly due to the proximity of the resonance of the Sommerfeld effect and its dependence on MSSM parameters. In this subsection we briefly review the dominantly-wino parameter region identified in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF], which will be studied in this paper. "Dominantly-wino" or "wino-like" here refers to a general MSSM with non-decoupled Higgs bosons, sfermions, bino and Higgsinos as long as the mixed neutralino dark matter state is mainly wino. We also require that its mass is significantly larger than the electroweak scale.
The well-investigated pure-wino model refers to the limit in this parameter space when all particles other than the triplet wino are decoupled. Despite the large number of parameters needed to specify a particular MSSM completely, in the dominantly-wino region the annihilation rates depend strongly only on a subset of parameters. These are the wino, bino and Higgsino mass parameters M_2, M_1 and µ, respectively, which control the neutralino composition and the chargino-neutralino mass difference, and the common sfermion mass parameter M_sf. In this work we assume that the bino is much heavier than the wino, that is, the lightest neutralino is a mixed wino-Higgsino. Effectively, a value of |M_1| larger than M_2 by a few hundred GeV is enough to decouple the bino in the TeV region.¹ The wino mass parameter determines the lightest neutralino (LSP) mass, and the difference |µ| - M_2 the wino-Higgsino admixture. In the range M_2 = 1 - 5 TeV considered here, the relation m_LSP ≈ M_2 remains accurate to a few GeV when some Higgsino fraction is added to the LSP state, and values of |µ| - M_2 ≳ 500 GeV imply practically decoupled Higgsinos. Increasing the Higgsino component of the wino-like LSP lowers its coupling to charged gauge bosons, to which wino-like neutralinos annihilate predominantly, and therefore increases the relic density. Larger mixings also imply that the mass difference between the lightest chargino and neutralino increases, which generically reduces the size of the Sommerfeld enhancement of the annihilation cross section. These features are apparent in the contours of constant relic density in the |µ| - M_2 vs. M_2 plane for the wino-Higgsino case shown in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF], which are almost straight for large |µ| - M_2, but bend to lower values of m_LSP as |µ| - M_2 is reduced. A representative case is reproduced in Fig. 1. The contours also bend towards lower M_2 when sfermions become lighter, as they mediate the t- and u-channel annihilation into SM fermions, which interferes destructively with the s-channel annihilation, effectively lowering the co-annihilation cross section. By choosing small values of M_sf (but larger than 1.25 m_LSP to prevent sfermion co-annihilation, not treated by the present version of the code), LSP masses as low as 1.7 TeV are seen to give the correct thermal density, to be compared with the pure-wino result, m_LSP ≈ 2.8 TeV.
For M 2 > 2.2 TeV a resonance in the Sommerfeld-enhanced rates is present, which extends to larger M 2 values as the Higgsino fraction is increased. The enhancement of the cross section in the vicinity of the resonance makes the contours of constant relic density cluster around it and develop a peak that shifts m LSP to larger values. In particular, the largest value of M 2 , which gives the correct thermal relic density, is close to 3.3 TeV, approximately 20% higher than for the pure-wino scenario. The influence of the less relevant MSSM Higgs mass parameter M A is also noticeable when the LSP contains some Higgsino admixture, which enhances the couplings to the Higgs (and Z) bosons in s-channel annihilation. This is more pronounced if M A is light enough such that final states containing heavy Higgs bosons are kinematically accessible. The corresponding increase in the annihilation cross section results in positive shifts of around 100 to 250 GeV in the value of M 2 giving the correct relic density on decreasing M A from 10 TeV to 800 GeV. In summary, a large range of lightest neutralino masses, 1.7 -3.5 TeV, provides the correct relic density for the mixed wino-Higgsino state as a consequence of the Sommerfeld corrections.
The MSSM parameter points considered in this paper have passed standard collider, flavour and theoretical constraints as discussed in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF]. In the dominantly-wino parameter space, most of the collider and flavour constraints are either satisfied automatically or receive MSSM corrections that are suppressed or lie within the experimental and theoretical uncertainties. Ref. [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF] further required compatibility with direct dark matter detection constraints by imposing that the DM-nucleon spin-independent cross section was less than twice the LUX limits reported at the time of publication [START_REF] Akerib | First results from the LUX dark matter experiment at the Sanford Underground Research Facility[END_REF]. This did not affect the results significantly, see Fig. 1, as in most of the parameter space of interest the scattering cross section was predicted to be much above those limits. Recently the LUX collaboration has presented a new limit, stronger than the previous one by approximately a factor of four [START_REF] Akerib | Results from a search for dark matter in LUX with 332 live days of exposure[END_REF], potentially imposing more severe constraints on the dominantly-wino neutralino region of the MSSM parameter space. The details of the implementation of the limits from indirect detection searches for the mixed wino, which were not included in our previous analysis, and from the new LUX results are given in Section 3.

Figure 1: Contours of constant relic density in the M_2 vs. (µ - M_2) plane for µ > 0, as computed in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF]. The (green) band indicates the region within 2σ of the observed dark matter abundance. Parameters are as given in the header, and the trilinear couplings are set to A_i = 8 TeV for all sfermions except for that of the stop, which is fixed by the Higgs mass value. The black solid line corresponds to the old LUX limit [START_REF] Akerib | First results from the LUX dark matter experiment at the Sanford Underground Research Facility[END_REF] on the spin-independent DM-nucleon cross section, which excludes the shaded area below this line. Relaxing the old LUX limit by a factor of two to account for theoretical uncertainties eliminates the direct detection constraint on the shown parameter space region.
Branching fractions and primary spectra
The annihilation of wino-like DM produces highly energetic particles, which subsequently decay, fragment and hadronize into stable SM particles, producing the CR fluxes.
The primary particles can be any of the SM particles, and the heavy MSSM Higgs bosons, H^0, A^0 and H^±, when they are kinematically accessible. We consider neutralino dark matter annihilation into two primary particles. The number of such exclusive two-body channels is 31, and the corresponding neutralino annihilation cross sections are computed including Sommerfeld loop corrections to the annihilation amplitude as described in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF]. As input for this calculation we need to provide the tree-level exclusive annihilation rates of all neutral neutralino and chargino pairs, since through Sommerfeld corrections the initial LSP-LSP state can make transitions to other virtual states with heavier neutralinos or a pair of conjugated charginos, which subsequently annihilate into the primaries. The neutralino and chargino tree-level annihilation rates in the MSSM have been derived analytically in [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF], and including v² corrections in [START_REF] Hellmann | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos II. P-wave and next-to-next-to-leading order S-wave coefficients[END_REF], in the form of matrices, where the off-diagonal entries refer to the interference of the short-distance annihilation amplitudes of different neutralino/chargino two-particle states into the same final state. For the present analysis the annihilation matrices have been generalized to vectors of matrices, such that the components of the vector refer to the 31 exclusive final states. The large number of different exclusive final states can be implemented without an increase in the CPU time for the computation relative to the inclusive case.
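To make this concrete, the following minimal sketch illustrates the bookkeeping with placeholder inputs; the variable names and dimensions are ours and do not reproduce the interface of the actual Sommerfeld code:

```python
import numpy as np

# n_states coupled two-particle states (e.g. chi0chi0, chi+chi-) and
# n_channels = 31 exclusive two-body final states.
n_states, n_channels = 4, 31

# One short-distance annihilation matrix Gamma[I] per exclusive channel I;
# off-diagonal entries encode the interference between different
# two-particle states annihilating into the same final state I.
rng = np.random.default_rng(1)
A = rng.normal(size=(n_channels, n_states, n_states))
Gamma = np.einsum('cik,cjk->cij', A, A)   # positive semi-definite placeholders

# Sommerfeld corrections enter through the two-particle wave function psi,
# solved for once per (mass, velocity) point and reused for all channels.
psi = rng.normal(size=n_states) + 1j * rng.normal(size=n_states)  # placeholder

# Exclusive Sommerfeld-corrected rates: sigma_I v ~ psi^dagger Gamma[I] psi,
# so the exclusive computation costs essentially no more CPU time than the
# inclusive one, where Gamma would be summed over I first.
sigma_v = np.real(np.einsum('i,cij,j->c', psi.conj(), Gamma, psi))
Br = sigma_v / sigma_v.sum()   # branching fractions into the 31 channels
```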
Since the information about the exclusive annihilation rates only enters through the (short-distance) annihilation matrices, the two-particle wave-functions that account for the (long-distance) Sommerfeld corrections only need to be computed once. By contrast, since the v² corrections to the annihilation of DM in the present Universe are very small, they can be neglected, which results in a significant reduction in the time needed to compute the annihilation matrices.² It further suffices to compute the present-day annihilation cross section for a single dark matter velocity, and we choose v = 10⁻³ c. The reason for this choice is that the Sommerfeld effect saturates for very small velocities, and the velocity dependence is negligible for velocities smaller than 10⁻³ c. The energy spectrum dN_f/dx of a stable particle f at production per DM annihilation can be written as
$$\frac{dN_f}{dx} \;=\; \sum_I \mathrm{Br}_I \, \frac{dN_{I\to f}}{dx}\,, \qquad (1)$$
where x = E_f/m_LSP, and dN_{I→f}/dx represents the contribution from each two-body primary final state I with branching fraction Br_I to the spectrum of f after the decay, fragmentation and hadronization processes have taken place. We compute Br_I from our MSSM Sommerfeld code as described above and use the tables for dN_{I→f}/dx provided with the PPPC4DMID code [START_REF] Cirelli | PPPC 4 DM ID: A Poor Particle Physicist Cookbook for Dark Matter Indirect Detection[END_REF], which include the leading logarithmic electroweak corrections through the electroweak fragmentation functions [START_REF] Ciafaloni | Weak Corrections are Relevant for Dark Matter Indirect Detection[END_REF]. Two comments regarding the use of the spectra provided by the PPPC4DMID code are in order. The code only considers primary pairs I of a particle together with its antiparticle, both assumed to have the same energy spectrum. For wino-like DM there exist primary final states with different species, i.e. I = ij with j ≠ ī, such as Zγ and Zh^0. In this case, we compute the final number of particles f produced from that channel as one half of the sum of those produced by channels I = i ī and I = j j̄. This is justified, since the fragmentation of particles i and j is independent. A second caveat concerns the heavy MSSM Higgs bosons that can be produced for sufficiently heavy neutralinos. These are not considered to be primary channels in the PPPC4DMID code, which only deals with SM particles. A proper treatment of these primaries would first account for the decay modes of the heavy Higgs bosons, and then consider the fragmentation and hadronization of the SM multi-particle final state in an event generator. Instead of a full treatment, we replace the charged Higgs H^± by a longitudinally polarized W^± boson, and the neutral heavy Higgses H^0, A^0 by the light Higgs h^0 when computing the spectra in x. This approximation is not very well justified. However, the branching ratios of the dominantly-wino neutralino to final states with heavy Higgses are strongly suppressed, and we could equally have set them to zero without a noticeable effect on our results.
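Schematically, the assembly of eq. (1) then proceeds as in the sketch below; the table shapes and numbers are placeholders rather than the actual PPPC4DMID interface, and 'Zh' stands for the mixed Z h^0 channel treated by the half-sum rule:

```python
import numpy as np

# Placeholder stand-ins for the PPPC4DMID tables dN_{I->f}/dx on a grid of
# x = E_f / m_LSP; the spectral shapes below are illustrative only.
x = np.logspace(-4.0, 0.0, 200)
dNdx = {
    'WW': 10.0 * (1.0 - x) ** 3 / x,
    'ZZ': 9.0 * (1.0 - x) ** 3 / x,
    'hh': 8.0 * (1.0 - x) ** 3 / x,
}

def mixed_channel(i, j):
    """Spectrum for a mixed primary final state I = ij with j != ibar
    (e.g. Z h0): half the sum of the i-ibar and j-jbar spectra, justified
    because the two primaries fragment independently."""
    return 0.5 * (dNdx[i] + dNdx[j])

# Branching fractions Br_I from the Sommerfeld code (placeholder values).
Br = {'WW': 0.80, 'ZZ': 0.15, 'Zh': 0.05}

# Eq. (1): branching-fraction-weighted sum over primary channels.
dNf = Br['WW'] * dNdx['WW'] + Br['ZZ'] * dNdx['ZZ'] \
      + Br['Zh'] * mixed_channel('ZZ', 'hh')
```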
The branching fractions of primary final states obtained from our code are shown in the left panel of Fig. 2 as a function of the Higgsino fraction for a wino-like LSP with 2 TeV mass. The pure wino annihilates mostly to W + W -and to a lesser extent to other pairs of gauge bosons, including the loop-induced photon final state, which is generated by the Sommerfeld correction. The annihilation to fermions is helicity or p-wave suppressed. The suppression is lifted only for the t t final state as the Higgsino admixture increases, in which case this final state becomes the second most important. Except for this channel, the dominant branching fractions are largely independent of the Higgsino fraction. The annihilation to W + W -is always dominant and above 75%.
The final spectra of photons, positrons and antiprotons per annihilation at production for small (solid lines) and large (dashed lines) Higgsino mixing are plotted in the right panel of Fig. 2. The spectra in these two extreme cases are very similar, because W + W - is the dominant primary final state largely independent of the wino-Higgsino composition, and also the number of final stable particles produced by the sub-dominant primary channels do not differ significantly from each other. The inset in the right-hand plot shows that the relative change between the mixed and pure wino case varies from about +40% to about -40% over the considered energy range. Concerning the variation with respect to the DM mass, the most important change is in the total annihilation cross section, not in the spectra dN f /dx. The branching ratios Br I to primaries depend on the LSP mass in the TeV regime only through the Sommerfeld corrections, which can change the relative size of the different channels. However, since for wino-like neutralinos annihilation into W + W -dominates the sum over I in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF], the dependence of the final spectra on m LSP is very mild.
Indirect and direct searches
In this section we discuss our strategy for determining the constraints on mixed-wino dark matter from various indirect searches. While the analysis follows that for the pure wino [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF], here we focus on the most relevant search channels: the diffuse gamma-ray emission from dSphs, antiprotons and positron CRs, and the CMB. Moreover, since we consider wino-like DM with a possibly significant Higgsino admixture, we implement the direct detection constraints as well.

Figure 2: Left: branching fractions of the primary annihilation channels as a function of the Higgsino fraction [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF]. Right: comparison of the p̄, e⁺ and gamma-ray spectra dN/dlog E_k per annihilation at production of a 50% mixed wino-Higgsino (dashed) to the pure-wino (solid) model, for M_2 = 2 TeV, M_1 = 4.02 TeV, M_sf = 30 TeV, tan β = 15. The gamma-line component is not shown. In the inset at the bottom of the plot the relative differences between the two spectra are shown.
Charged cosmic rays
Propagation
The propagation of charged CRs in the Galaxy is best described within the diffusion model with possible inclusion of convection. In this framework the general propagation equation takes the form [17]
$$\frac{\partial N_i}{\partial t} - \nabla\cdot\left(D_{xx}\nabla - \mathbf{v}_c\right)N_i + \frac{\partial}{\partial p}\left(\dot p - \frac{p}{3}\,\nabla\cdot\mathbf{v}_c\right)N_i - \frac{\partial}{\partial p}\,p^2 D_{pp}\,\frac{\partial}{\partial p}\,\frac{N_i}{p^2}$$
$$=\; Q_i(p,r,z) + \sum_{j>i} c\beta\, n_{\mathrm{gas}}(r,z)\,\sigma_{ij} N_j - c\beta\, n_{\mathrm{gas}}(r,z)\,\sigma_{\mathrm{in}} N_i - \sum_{j<i}\frac{N_i}{\tau_{i\to j}} + \sum_{j>i}\frac{N_j}{\tau_{j\to i}}\,, \qquad (2)$$
where N_i(p, r, z) is the number density of the i-th particle species with momentum p and corresponding velocity v = cβ, written in cylindrical coordinates (r, z), σ_in the inelastic scattering cross section, σ_ij the production cross section of species i by the fragmentation of species j, and τ_{i→j}, τ_{j→i} are the lifetimes related to decays of i and production from heavier species j, respectively. We solve (2) with the help of the DRAGON code [START_REF] Evoli | Cosmic-Ray Nuclei, Antiprotons and Gamma-rays in the Galaxy: a New Diffusion Model[END_REF], assuming cylindrical symmetry and no convection, v_c = 0. With the galacto-centric radius r, the height from the Galactic disk z and the rigidity R = pc/Ze, we adopt the following form of the spatial diffusion coefficient:
$$D_{xx}(R, r, z) = D_0\,\beta^{\eta}\left(\frac{R}{R_0}\right)^{\!\delta} e^{|z|/z_d}\, e^{(r - r_\odot)/r_d}\,. \qquad (3)$$
The momentum-space diffusion coefficient, also referred to as reacceleration, is related to it via $D_{pp} D_{xx} = p^2 v_A^2/9$, where the Alfvén velocity v_A represents the characteristic velocity of a magnetohydrodynamic wave. The free parameters are the normalization D_0, the spectral indices η and δ, the parameters setting the radial scale r_d and thickness z_d of the diffusion zone, and finally v_A. We fix the normalization at R_0 = 3 GV. The diffusion coefficient is assumed to grow with r, as the large-scale galactic magnetic field gets weaker far away from the galactic center.
The source term is assumed to have the form
$$Q_i(R, r, z) = f_i(r, z)\left(\frac{R}{R_i}\right)^{\!-\gamma_i}, \qquad (4)$$
where f_i(r, z) parametrizes the spatial distribution of supernova remnants, normalized at R_i, and γ_i is the injection spectral index for species i. For protons and Helium we modify the source term to accommodate two breaks in the power-law, as strongly indicated by observations. Leptons lose energy very efficiently, so very energetic leptons must be produced locally, while we neither observe nor expect many local sources of TeV-scale leptons. This motivates multiplying (4) by an additional exponential cut-off in energy, e^{-E/E_c}, with E_c set to 50 TeV for the electron and positron injection spectra. We employ the gas distribution n_gas derived in [START_REF] Tavakoli | Three Dimensional Distribution of Atomic Hydrogen in the Milky Way[END_REF][START_REF] Pohl | 3D Distribution of Molecular Gas in the Barred Milky Way[END_REF] and adopt the standard force-field approximation [START_REF] Gleeson | Solar Modulation of Galactic Cosmic Rays[END_REF] to describe the effect of solar modulation. The modulation potential is assumed to be a free parameter of the fit and is allowed to be different for different CR species.
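The ingredients entering eqs. (3) and (4) are compact enough to spell out explicitly. The sketch below uses illustrative parameter values (not the fitted Thin/Med/Thick numbers of Table 1) and includes the reacceleration relation and the force-field modulation:

```python
import numpy as np

def D_xx(R, r, z, beta, D0=2.0e28, eta=0.4, delta=0.5,
         R0=3.0, z_d=4.0, r_d=20.0, r_sun=8.3):
    """Spatial diffusion coefficient of eq. (3) in cm^2/s; R in GV,
    r and z in kpc. The numerical values are illustrative only."""
    return (D0 * beta**eta * (R / R0)**delta
            * np.exp(abs(z) / z_d) * np.exp((r - r_sun) / r_d))

def D_pp(p, v_A, Dxx):
    """Reacceleration coefficient from D_pp * D_xx = p^2 v_A^2 / 9
    (consistent units for p and v_A assumed)."""
    return p**2 * v_A**2 / (9.0 * Dxx)

def Q_lepton(R, f_spatial, R_i=3.0, gamma=2.65, E_c=5.0e4):
    """Injection spectrum of eq. (4) for leptons, multiplied by the
    exponential cut-off exp(-E/E_c) with E_c = 50 TeV; for relativistic
    leptons E [GeV] ~ R [GV]."""
    return f_spatial * (R / R_i)**(-gamma) * np.exp(-R / E_c)

def force_field(E, flux_IS, phi, Z=1, A=1, m=0.938):
    """Standard force-field solar modulation of an interstellar flux
    flux_IS(kinetic energy per nucleon): shift by Phi = |Z| phi / A and
    rescale by the ratio of momenta squared."""
    Phi = abs(Z) * phi / A
    E_IS = E + Phi
    return flux_IS(E_IS) * E * (E + 2.0 * m) / (E_IS * (E_IS + 2.0 * m))
```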
Background models
In [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF] 11 benchmark propagation models with varying diffusion-zone thickness, from z_d = 1 kpc to z_d = 20 kpc, were identified by fitting to the B/C, proton, Helium, electron and e⁺ + e⁻ data. Since then the AMS-02 experiment has provided CR spectra with unprecedented precision, which necessitates modifications of the above benchmark models. Following the same procedure as in [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF] we choose three representative models, which give a reasonable fit to the AMS-02 data, denoted Thin, Med and Thick, corresponding to the previous z_d = 1 kpc, z_d = 4 kpc and z_d = 10 kpc models.³ The relevant parameters are given in Table 1. In Fig. 3 we show the fit to the B/C and the AMS-02 proton data [START_REF] Oliva | AMS results on light nuclei: Measurement of the cosmic rays boron-to-carbon ration with AMS-02[END_REF][START_REF] Haino | Precision measurement of he flux with AMS[END_REF][START_REF] Aguilar | Precision Measurement of the Proton Flux in Primary Cosmic Rays from Rigidity 1 GV to 1.8 TV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF] and superimpose the older data from PAMELA [START_REF] Adriani | PAMELA Measurements of Cosmic-ray Proton and Helium Spectra[END_REF][START_REF] Adriani | Measurement of boron and carbon fluxes in cosmic rays with the PAMELA experiment[END_REF]. In all these cases, as well as for the lepton data [START_REF] Accardo | High Statistics Measurement of the Positron Fraction in Primary Cosmic Rays of 0.5-500 GeV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF][START_REF] Aguilar | Precision Measurement of the (e + + e -) Flux in Primary Cosmic Rays from 0.5 GeV to 1 TeV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF], the measurements used in the fits were from AMS-02 results only.

³ […] models were optimized for pre-AMS data and are based on a semi-analytic diffusion model. Since we rely on the full numerical solution of the diffusion equation, we follow the benchmark models of [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF]. This comes at the expense of no guarantee that the chosen models really provide the minimal and maximal number of antiprotons. However, as in this work we are not interested in setting precise limits from antiproton data, we consider this approach adequate.

Table 1: Benchmark propagation models. Columns: Model; z_d [kpc]; δ; D_0/10²⁸ [cm² s⁻¹]; v_A [km s⁻¹]; η; γ^p_1/γ^p_2/γ^p_3; R^p_{0,1} [GV]; γ^He_1/γ^He_2/γ^He_3; R^He_{0,1} [GV] (the numerical entries did not survive extraction). The radial length is always r_d = 20 kpc and convection is neglected, v_c = 0. The second break in the proton injection spectra is at 300 GV. For primary electrons we use a broken power-law with spectral indices 1.6/2.65 and a break at 7 GV, while for heavier nuclei we assumed one power-law with index 2.25. R^i_{0,1} refer to the positions of the first and second break, respectively, and γ^i_{1,2,3} to the power-law indices in the three regions separated by the two breaks. The propagation parameters were obtained by fitting to B/C, proton and He data and cross-checked with antiproton data, while the primary electrons were obtained from the measured electron flux.

Figure 3: Comparison of the benchmark propagation models: B/C (left) and protons (right). The fit was performed exclusively to the AMS-02 [START_REF] Oliva | AMS results on light nuclei: Measurement of the cosmic rays boron-to-carbon ration with AMS-02[END_REF][START_REF] Haino | Precision measurement of he flux with AMS[END_REF][START_REF] Aguilar | Precision Measurement of the Proton Flux in Primary Cosmic Rays from Rigidity 1 GV to 1.8 TV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF] measurements, while the other data sets are shown only for comparison: PAMELA [START_REF] Adriani | PAMELA Measurements of Cosmic-ray Proton and Helium Spectra[END_REF][START_REF] Adriani | Measurement of boron and carbon fluxes in cosmic rays with the PAMELA experiment[END_REF], HEAO-3 [START_REF] Engelmann | Charge composition and energy spectra of cosmic-ray for elements from Be to NI -Results from HEAO-3-C2[END_REF], CREAM [START_REF] Ahn | Measurements of cosmic-ray secondary nuclei at high energies with the first flight of the CREAM balloon-borne experiment[END_REF], CRN [START_REF] Swordy | Relative abundances of secondary and primary cosmic rays at high energies[END_REF], ACE [START_REF] George | Elemental composition and energy spectra of galactic cosmic rays during solar cycle 23[END_REF].
In the fit we additionally assumed that the normalization of the secondary CR antiprotons can freely vary by 10% with respect to the result given by the DRAGON code. This is motivated by the uncertainty in the antiproton production cross sections. The impact of this and other uncertainties has been studied in detail in e.g. [START_REF] Kappl | AMS-02 Antiprotons Reloaded[END_REF][START_REF] Evoli | Secondary antiprotons as a Galactic Dark Matter probe[END_REF][START_REF] Giesen | AMS-02 antiprotons, at last! Secondary astrophysical component and immediate implications for Dark Matter[END_REF].
As we will show below, the DM contribution to the lepton spectra is of much less importance for constraining the parameter space of interest; therefore, we do not discuss the lepton backgrounds explicitly. All details of the implementation of the lepton limits closely follow [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF], updated to the published AMS-02 data [START_REF] Accardo | High Statistics Measurement of the Positron Fraction in Primary Cosmic Rays of 0.5-500 GeV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF][START_REF] Aguilar | Precision Measurement of the (e + + e -) Flux in Primary Cosmic Rays from 0.5 GeV to 1 TeV with the Alpha Magnetic Spectrometer on the International Space Station[END_REF].
Diffuse gamma-rays from dSphs
Recently the Fermi-LAT and MAGIC collaborations released limits from the combination of their stacked analyses of 15 dwarf spheroidal galaxies [START_REF] Ahnen | Limits to dark matter annihilation cross-section from a combined analysis of MAGIC and Fermi-LAT observations of dwarf satellite galaxies[END_REF]. Here we use the results of this analysis to constrain the parameter space of the mixed wino-Higgsino neutralino. To this end we compute all exclusive annihilation cross sections for present-day DM annihilation in the halo and take a weighted average of the limits provided by the experimental collaborations. As discussed in Section 2.2, the TeV-scale wino-like neutralino annihilates predominantly into W⁺W⁻, ZZ and t t, with much smaller rates into leptons and the lighter quarks. In the results from [START_REF] Ahnen | Limits to dark matter annihilation cross-section from a combined analysis of MAGIC and Fermi-LAT observations of dwarf satellite galaxies[END_REF] only the W⁺W⁻, b b, µ⁺µ⁻ and τ⁺τ⁻ final states are given. However, as the predicted spectrum and number of photons from a single annihilation are not significantly different for the hadronic or leptonic final states, we adopt the approximation that the limits from annihilation into ZZ are the same as from W⁺W⁻, while those from t t and c c are the same as from b b. The differences in the number of photons produced between these annihilation channels in the relevant energy range are at most of order O(20%) for W⁺W⁻ vs. ZZ and t t vs. b b. Comparing b b to light quarks, the differences can rise up to a factor of 2; however, due to helicity suppression these channels have negligible branching fractions. Hence, the adopted approximation is expected to be very good, and the corresponding uncertainty is significantly smaller than that related to the astrophysical properties of the dSphs (parametrised by the J-factors).
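One simple way to realize such a weighted average, sketched here under our own assumptions since the exact combination formula is not spelled out above, is to demand that the branching-fraction-weighted sum of signal-to-limit ratios stays below unity, which yields a harmonic-mean combined limit:

```python
# Hypothetical per-channel 95% C.L. limits <sigma v>_lim,I at a fixed DM
# mass (placeholder numbers, not the published Fermi-LAT+MAGIC values).
limits = {'WW': 1.0e-25, 'bb': 8.0e-26, 'tautau': 4.0e-26}   # cm^3/s

# Channel mapping adopted in the text: ZZ -> WW, tt and cc -> bb.
alias = {'ZZ': 'WW', 'tt': 'bb', 'cc': 'bb'}

def combined_limit(Br):
    """A point is excluded when sum_I Br_I * <sigma v> / <sigma v>_lim,I > 1,
    so the effective limit is the weighted harmonic mean of the channel limits."""
    inv = sum(br / limits[alias.get(ch, ch)] for ch, br in Br.items())
    return 1.0 / inv

print(combined_limit({'WW': 0.78, 'ZZ': 0.13, 'tt': 0.09}))
```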
CMB constraints
The annihilation of dark matter at times around recombination can affect the recombination history of the Universe by injecting energy into the pre-recombination photon-baryon plasma and into the post-recombination gas and background radiation, which has consequences for the power and polarization spectra of the CMB [START_REF] Padmanabhan | Detecting dark matter annihilation with CMB polarization: Signatures and experimental prospects[END_REF][START_REF] Galli | CMB constraints on Dark Matter models with large annihilation cross-section[END_REF][START_REF] Slatyer | CMB Constraints on WIMP Annihilation: Energy Absorption During the Recombination Epoch[END_REF]. In particular, it can result in the attenuation of the temperature and polarization power spectra, more so on smaller scales, and in a shift of the TE and EE peaks. These effects can be traced back to the increased ionization fraction and baryon temperature, resulting in a broadening of the surface of last scattering, which suppresses perturbations on scales less than the width of this surface. Therefore the CMB temperature and polarization angular power spectra can be used to infer upper bounds on the annihilation cross section of dark matter into a certain final state for a given mass. When Majorana dark matter particles annihilate, the rate at which energy E is released per unit volume V can be written as
$$\frac{dE}{dt\,dV}(z) = \rho_{\mathrm{crit}}^2\,\Omega^2\,(1+z)^6\, p_{\mathrm{ann}}(z) \qquad (5)$$
where ρ_crit is the critical density of the Universe today, and experiment provides constraints on p_ann(z), which describes the effects of the DM. These effects are found to be well enough accounted for when the z dependence of p_ann(z) is neglected, such that a limit is obtained for the constant p_ann. The latest 95% C.L. upper limit on p_ann was obtained by Planck [START_REF] Ade | Planck 2015 results. XIII. Cosmological parameters[END_REF], and we adopt their most significant limit 3.4 × 10⁻²⁸ cm³ s⁻¹ GeV⁻¹ from the combination of TT, TE, EE + lowP + lensing data. The constant p_ann can further be expressed via
$$p_{\mathrm{ann}} = \frac{1}{M_\chi}\, f_{\mathrm{eff}}\, \sigma v \,, \qquad (6)$$
where f_eff, parametrizing the fraction of the rest mass energy that is injected into the plasma or gas, must then be calculated in order to extract bounds on the DM annihilation cross section in the recombination era. In our analysis, for f_eff we use the quantities f^I_{eff,new} from [START_REF] Madhavacheril | Current Dark Matter Annihilation Constraints from CMB and Low-Redshift Data[END_REF] for a given primary annihilation channel I. We then extract the upper limit on the annihilation cross section at the time of recombination by performing a weighted average over the contributing annihilation channels, as done for the indirect detection limits discussed in Section 3.2. As the Sommerfeld effect saturates before this time, σv at recombination is the same as the present-day cross section. In the future the cross section bound can be improved by almost an order of magnitude, until p_ann is ultimately limited by cosmic variance.
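In practice the extraction amounts to inverting eq. (6) with a branching-fraction-weighted f_eff; a sketch with placeholder f_eff values (the analysis uses the tabulated f^I_{eff,new} instead):

```python
P_ANN_LIM = 3.4e-28   # cm^3 s^-1 GeV^-1, Planck TT,TE,EE+lowP+lensing

# Per-channel f_eff; illustrative numbers standing in for the f_eff,new^I tables.
f_eff = {'WW': 0.13, 'ZZ': 0.13, 'tt': 0.15, 'bb': 0.15}

def sigma_v_cmb_limit(m_chi_GeV, Br):
    """Upper bound on sigma v at recombination from p_ann = f_eff sigma v / M_chi;
    this equals the present-day bound since the Sommerfeld effect has saturated."""
    f_eff_avg = sum(br * f_eff[ch] for ch, br in Br.items())
    return P_ANN_LIM * m_chi_GeV / f_eff_avg   # cm^3/s

print(sigma_v_cmb_limit(2000.0, {'WW': 0.8, 'ZZ': 0.2}))   # ~5e-24 cm^3/s
```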
Direct detection
Direct detection experiments probe the interaction of the dark matter particle with nucleons. For the parameter space of interest here, the bounds on spin-independent interactions, sensitive to the t-channel exchange of the Higgs bosons and to s-channel sfermion exchange, are more constraining than those on spin-dependent interactions. The coupling of the lightest neutralino to a Higgs boson requires both a Higgsino and a gaugino component, and is therefore dependent on the mixing. Note that the relative size of the Higgs Yukawa couplings means that the contribution due to the Higgs coupling to strange quarks dominates the result.
In the pure-wino limit, when the sfermions are decoupled and the coupling to the Higgs bosons vanishes, the direct detection constraints are very weak as the elastic scattering takes place only at the loop level [START_REF] Hisano | Direct Detection of Electroweak-Interacting Dark Matter[END_REF]. Allowing for a Higgsino admixture and/or non-decoupled sfermions introduces tree-level scattering processes mediated by Higgs or sfermion exchange. Direct detection experiments have recently reached the sensitivity needed to measure such low scattering cross sections and with the new data released by the LUX [START_REF] Akerib | Results from a search for dark matter in LUX with 332 live days of exposure[END_REF] and PandaX [START_REF] Tan | Dark Matter Results from First 98.7-day Data of PandaX-II Experiment[END_REF] collaborations, a portion of the discussed parameter space is now being probed.
In the analysis below we adopt the LUX limits [START_REF] Akerib | Results from a search for dark matter in LUX with 332 live days of exposure[END_REF], being the strongest in the neutralino mass range we consider. In order to be conservative, in addition to the limit presented by the collaboration we consider a weaker limit obtained by multiplying by a factor of two. This factor of two takes into account the two dominant uncertainties affecting the spin-independent cross section, i.e. the local relic density of dark matter and the strange quark content of the nucleon. The former, ρ = 0.3 ± 0.1 GeV/cm³, results in an uncertainty of 50% [START_REF] Bovy | On the local dark matter density[END_REF], while the latter contributes an uncertainty on the cross section of about 20% [START_REF] Dürr | Lattice computation of the nucleon scalar quark contents at the physical point[END_REF]; in combination these weaken the bounds by a factor of two (denoted as ×2 on the plots). For the computation of the spin-independent scattering cross section for every model point we use micrOMEGAs [START_REF] Belanger | Indirect search for dark matter with micrOMEGAs2.4[END_REF][START_REF] Belanger | micrOMEGAs 3: A program for calculating dark matter observables[END_REF]. Note that the Sommerfeld effect does not influence this computation and the tree-level result is expected to be accurate enough.
Since only mixed Higgsino-gaugino neutralinos couple to Higgs bosons, the limits are sensitive to the parameters affecting the mixing. To be precise, for the case that the bino is decoupled (|M_1| ≫ M_2, |µ|) and |µ| - M_2 ≫ m_Z, the couplings of the Higgs bosons h, H to the lightest neutralino are proportional to
$$c_h = m_Z c_W\, \frac{M_2 + \mu \sin 2\beta}{\mu^2 - M_2^2}\,, \qquad c_H = -\, m_Z c_W\, \frac{\mu \cos 2\beta}{\mu^2 - M_2^2}\,, \qquad (7)$$
where c_W ≡ cos θ_W, and it is further assumed that M_A is heavy such that c_{h,H} can be computed in the decoupling limit cos(α - β) → 0. When tan β increases, the light Higgs coupling c_h decreases for µ > 0 and increases for µ < 0. On the other hand the coupling c_H increases in magnitude with tan β for both µ > 0 and µ < 0, but is positive when µ > 0 and negative for µ < 0. In addition, in the decoupling limit the coupling of the light Higgs to down-type quarks is SM-like, and the heavy Higgses couple to down-type quarks proportionally to tan β. The sfermion contribution is dominated by the gauge coupling of the wino-like component of the neutralino to the sfermion and the quarks. We remark that for the parameter range under consideration there is destructive interference between the amplitudes for the Higgs and sfermion-exchange diagrams for µ > 0, and for µ < 0 when [49]
$$\frac{m_H^2\,(1 - 2/t_\beta)}{m_h^2} < t_\beta\,, \qquad (8)$$
provided M_2 < |µ| and t_β ≡ tan β ≫ 1. For these cases lower values of the sfermion masses reduce the scattering cross section.
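A short numerical evaluation of eq. (7) makes the sign structure explicit; the sketch below uses the decoupling-limit formulas only, and the chosen mass values are illustrative:

```python
import numpy as np

MZ = 91.1876                   # GeV
CW = np.sqrt(1.0 - 0.2312)     # cos(theta_W) from sin^2(theta_W) = 0.2312

def higgs_couplings(M2, mu, tan_beta):
    """Decoupling-limit couplings of eq. (7) of h and H to the mixed
    wino-Higgsino LSP (bino decoupled)."""
    s2b = 2.0 * tan_beta / (1.0 + tan_beta**2)        # sin(2 beta)
    c2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)   # cos(2 beta)
    den = mu**2 - M2**2
    c_h = MZ * CW * (M2 + mu * s2b) / den
    c_H = -MZ * CW * mu * c2b / den
    return c_h, c_H

for mu in (2200.0, -2200.0):   # GeV, with M2 = 2 TeV and tan(beta) = 15
    print(mu, higgs_couplings(2000.0, mu, 15.0))
# For mu > 0 both couplings come out with the same sign; for mu < 0 they
# carry opposite signs, which is the origin of the destructive h-H
# interference discussed below.
```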
In Fig. 4 we show the resulting limits from LUX data in the |µ| -M 2 vs. M 2 plane for different choices of t β , M A , M sf , and the sign of µ. The above discussion allows us to understand the following trends observed:
• On decreasing t_β and M_A the direct detection bound becomes stronger for positive µ and weaker for negative µ. Note that for µ < 0 the cross section decreases, and the bound weakens, due to the destructive interference between the h and H contributions as the relative sign between the couplings c_h and c_H changes.

• The direct detection bound weakens for less decoupled sfermions when there is destructive interference between the t-channel Higgs-exchange and s-channel sfermion-exchange diagrams. This always occurs for µ > 0, while for µ < 0 one requires small heavy Higgs masses. For instance, for t_β = 15 the maximum value of M_A giving destructive interference is slightly above 500 GeV, while for t_β = 30 one needs M_A < 700 GeV.

Figure 4: Limits from the 2016 LUX data in the M_2 vs. |µ| - M_2 plane for different choices of t_β, M_A and M_sf; the recoverable curve labels indicate variations of M_A (0.5, 0.8, 1 TeV), t_β (10, 15, 30) and M_sf (1.25 M_2, 6, 12, 30 TeV) around the reference choice M_sf = 30 TeV, M_A = 10 TeV, t_β = 30. Where not stated, the parameter choices correspond to those for the black line. The area below the lines is excluded. The left panel shows the case of µ > 0, while the right that of µ < 0.
Since we consider a point in the |µ| - M_2 vs. M_2 plane to be excluded only if it is excluded for every (allowed) value of the other MSSM parameters, the bounds from direct detection experiments are weakest for µ < 0 in combination with low values of M_sf, M_A and tan β, and for µ > 0 in combination with high values of M_A and tan β but low values of M_sf.
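The resulting decision rule can be summarized in a few lines; in the sketch below the parameter grids are our own choice, and sigma_si and lux_limit stand for wrappers around micrOMEGAs and the tabulated LUX bound:

```python
import itertools

# Hypothetical scan grids over the less relevant MSSM parameters.
M_A_grid  = [500.0, 800.0, 1000.0, 10000.0]    # GeV
tanb_grid = [5.0, 15.0, 30.0]

def point_excluded(M2, mu, sigma_si, lux_limit, relax=2.0):
    """Exclude (M2, mu) only if sigma_SI exceeds the (conservatively
    relaxed) LUX bound for every allowed (M_A, tan_beta, M_sf) choice."""
    for M_A, tanb in itertools.product(M_A_grid, tanb_grid):
        for M_sf in (1.25 * M2, 30000.0):      # avoid sfermion co-annihilation
            if sigma_si(M2, mu, M_A, tanb, M_sf) < relax * lux_limit(M2):
                return False                   # one surviving choice suffices
    return True
```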
Results: indirect detection and CMB limits
In this section we first determine the region of the |µ| - M_2 vs. M_2 plane which satisfies the relic density constraint and is allowed by the gamma-ray limits from dwarf spheroidals, the positron limits from AMS-02, and the CMB limits.⁴ We also determine the regions preferred by fits to AMS-02 antiproton results. Over a large part of the considered |µ| - M_2 vs. M_2 plane, the observed relic density can be obtained for some value of the sfermion masses and other MSSM parameters. For the remaining region of the plane, where the relic density constraint is not fulfilled for thermally produced neutralino dark matter, we consider both the case where the dark matter density is that observed throughout the plane, in which case it cannot be produced thermally, and the case where it is always thermally produced, for which the neutralino relic density does not always agree with that observed, and the limits must be rescaled for each point in the plane by the relic density calculated accordingly. That the neutralino dark matter is not thermally produced, or that it only constitutes a part of the total dark matter density, are both viable possibilities.
We then consider various slices through this plane for fixed values of |µ| - M_2, and show the calculated present-day annihilation cross section as a function of M_2 ≈ m_{χ_1^0} together with the same limits and preferred regions as above, both for the case that the limits are and are not rescaled according to the thermal relic density.
Limits on mixed-wino DM
In this section we present our results on the limits from indirect searches for wino-like DM in the MSSM, assuming the relic density is as observed. That is, for most parameter points the DM must be produced non-thermally or an additional mechanism for late entropy production is at play. We show each of the considered indirect search channels separately in the |µ| -M 2 vs. M 2 plane (including both µ > 0 and µ < 0), superimposing on this the contours of the correct relic density for three choices of the sfermion mass. Note that while the indirect detection limits are calculated for M sf = 8 TeV, the effect of the choice of sfermion mass on them is minimal, and therefore we display only the relic density contours for additional values of M sf .
In Fig. 5 we show the exclusions from dSphs, e⁺, and the CMB separately in the |µ| - M_2 vs. M_2 plane. For the positrons we show two limits, obtained on assuming the Thin and Thick propagation models described in Section 3.1.2. We see that the most relevant exclusions come from the diffuse gamma-ray searches from dSphs. Here we show three lines corresponding to the limit on the cross section assuming the Navarro-Frenk-White profile in dSphs, and rescaling this limit up and down by a factor 2. This is done in order to estimate the effect of the uncertainty in the J-factors. For instance, the recent reassessment [START_REF] Ullio | A critical reassessment of particle Dark Matter limits from dwarf satellites[END_REF] of the J-factor for Ursa Minor inferred from observational data suggests 2 to 4 times smaller limits than those commonly quoted. In order to provide conservative bounds, we adopt the weakest of the three as the reference limit. We then compare (lower right plot) this weakest limit from dSphs to the preferred region obtained on fitting to the AMS-02 antiproton results, showing the results for both Thin and Thick propagation models.⁵

⁴ […] in our analysis, because for the DM models under consideration, the strongest lepton limits arise from energies below about 100 GeV, in particular from the observed positron fraction (see Fig. 7 of [START_REF] Hryczuk | Indirect Detection Analysis: Wino Dark Matter Case Study[END_REF]).

⁵ The actual analysis was finalized before the recent antiproton results were published [START_REF] Aguilar | Antiproton Flux, Antiproton-to-Proton Flux Ratio, and Properties of Elementary Particle Fluxes in Primary Cosmic Rays Measured with the Alpha Magnetic Spectrometer on the International Space Station[END_REF] and hence was based on earlier data presented by the AMS collaboration [START_REF] Kounine | Latest results from the alpha magnetic spectrometer: positron fraction and antiproton/proton ratio, presentation at[END_REF]. This is expected to have a small effect on the antiproton fit presented in this work, with no significant consequences for the overall results.

Figure 5: Exclusions from dSphs, e⁺ and the CMB in the |µ| - M_2 vs. M_2 plane, together with the comparison of the weakest dSphs limit to the regions preferred by the antiproton fits (four panels; the detailed panel labels were lost in extraction).

Figure 6: The antiproton-to-proton ratio: background propagation models (left) and comparison of three DM models with relic density within the observational range and assuming the "Med" propagation (right). The shown data are from AMS-02 [START_REF] Kounine | Latest results from the alpha magnetic spectrometer: positron fraction and antiproton/proton ratio, presentation at[END_REF] and PAMELA [START_REF] Adriani | Measurement of the flux of primary cosmic ray antiprotons with energies of 60-MeV to 350-GeV in the PAMELA experiment[END_REF].

We find that there are parts of the mixed wino-Higgsino and dominantly wino neutralino parameter space both below and above the Sommerfeld resonance region, where
the relic density is as observed and which are compatible with the non-observation of dark matter signals in indirect searches. In the lower right plot of Fig. 5 we see that these further overlap with the regions preferred by fits to the antiproton results. In the smaller region above the resonance, this overlap occurs when the sfermions are decoupled, and hence corresponds to an almost pure-wino case, whereas below the resonance the overlap region is spanned by varying the sfermion masses from 1.25M 2 to being decoupled. The latter region requires substantial Higgsino-mixing of the wino, and extends from M 2 = 1.7 TeV to about 2.5 TeV, thus allowing dominantly-wino dark matter in a significant mass range. Let us comment on the improvement of the fit to the antiproton measurements found for some choices of the parameters. In Fig. 6 we show examples of antiproton-to-proton ratio fits to the data from the background models (left) and including the DM component (right). Although the propagation and antiproton production uncertainties can easily resolve the apparent discrepancy of the background models vs. the observed data [START_REF] Kappl | AMS-02 Antiprotons Reloaded[END_REF][START_REF] Evoli | Secondary antiprotons as a Galactic Dark Matter probe[END_REF][START_REF] Giesen | AMS-02 antiprotons, at last! Secondary astrophysical component and immediate implications for Dark Matter[END_REF], it is nevertheless interesting to observe that the spectral shape of the DM component matches the observed data for viable mixed-wino dark matter particles.
Indirect search constraints on the MSSM parameter space
In this section we present our results for the limits from indirect searches on wino-like DM, assuming the relic abundance is always the thermally produced one. In other words, for the standard cosmological model, these constitute the limits on the parameter space of the MSSM, since even if the neutralino does not account for all of the dark matter, its thermal population can give large enough signals to be seen in indirect searches. In this case a parameter-space point is excluded if
$$(\sigma v)^0_{\mathrm{th}} \;>\; \left(\frac{\Omega h^2|_{\mathrm{obs}}}{\Omega h^2|_{\mathrm{thermal}}}\right)^{\!2} (\sigma v)^0_{\mathrm{exp\,lim}} \qquad (9)$$
where (σv)^0_th is the theoretically predicted present-day cross section and (σv)^0_exp lim the limit quoted by the experiment. This is because the results presented by the experiments assume the DM particle to account for the entire observed relic density. Therefore, if one wishes to calculate the limits for dark matter candidates which only account for a fraction of the relic density, one needs to rescale the bounds by the square of the ratio of the observed relic density Ωh²|_obs to the thermal relic density Ωh²|_thermal. Viewed from another perspective, the results below constitute astrophysical limits on a part of the MSSM parameter space which is currently inaccessible to collider experiments, with the only assumption that there was no significant entropy production in the early Universe after the DM freeze-out. In Fig. 7, as in the previous subsection, we show the exclusions from dSphs, e⁺, and the CMB individually in the |µ| - M_2 vs. M_2 plane. The limits are calculated as for Fig. 5. We then compare the weakest limit from dSphs to the preferred region obtained on fitting to the AMS-02 antiproton results, where we show the results for both Thin and Thick propagation models. Again we find that parameter regions exist where the relic density is correct and which are not excluded by indirect searches. The marked difference between the previous and present results is that in Fig. 7 the region of the plots for lower M_2 is not constrained by the indirect searches, because in this region the thermal relic density is well below the measured value and therefore the searches for relic neutralinos are much less sensitive. In the lower right plot of Fig. 7 we see that the unconstrained regions overlap with the regions preferred by fits to the antiproton results. While the limits themselves do not depend on the sfermion mass, the thermal relic density does, and therefore the rescaling of the limits via (9) induces a dependence on the sfermion mass. Therefore the intersection of the lines of correct relic density for M_sf = 8 TeV with the preferred region from antiproton searches is not meaningful, and we do not show them in the plots.
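The rescaling logic of eq. (9) amounts to a two-line test; in the following sketch the helper names and numerical inputs are ours, with relic densities and cross sections to be supplied by the Sommerfeld code:

```python
OMEGA_H2_OBS = 0.1188   # observed relic density (Planck, approximate)

def excluded(sigma_v_th, sigma_v_exp_lim, omega_h2_thermal):
    """Exclusion test of eq. (9): the quoted experimental limit is weakened
    by (Omega_obs/Omega_thermal)^2 when the thermal population is
    underabundant."""
    rescale = (OMEGA_H2_OBS / omega_h2_thermal) ** 2
    return sigma_v_th > rescale * sigma_v_exp_lim

# A thermal relic at half the observed density must overshoot the quoted
# limit by a factor of four before the point is excluded:
print(excluded(3.0e-25, 1.0e-25, 0.5 * OMEGA_H2_OBS))   # False: 3e-25 < 4e-25
```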
Limits on the present-day cross section for fixed |µ| -M 2
In order to understand how the limits and the present-day annihilation cross section depend on the mass of the DM candidate, we take slices of the |µ| - M_2 vs. M_2 plane for fixed values of |µ| - M_2, and plot (σv)^0 (black) as a function of M_2, which is approximately equal to the LSP mass m_{χ_1^0} in the range shown in Figs. 8 and 9. As in Figs. 5 and 7 we show the limits from dSphs (brown), positrons (blue dashed) and the CMB (magenta dot-dashed), along with the preferred regions from antiproton searches (pale green) adopting the Thin and Thick propagation models. We consider three choices of |µ| - M_2: a very mixed neutralino LSP, |µ| - M_2 = 50 GeV where µ is negative, a mixed case |µ| - M_2 = 220 GeV where µ is positive, and an almost pure-wino scenario, |µ| - M_2 = 1000 GeV. The blue shaded region indicates where the relic density can correspond to the observed value by changing M_sf. For Fig. 8 we adopt the unrescaled limit, that is, two sections of Fig. 5. In the case of the very mixed wino-Higgsino shown in the upper panel there is a wide range of neutralino masses for which the black curve lies below the conservative dSphs limit and simultaneously within the range of correct relic density spanned by the variation of the sfermion mass. This is different for the almost pure-wino scenario shown in the lower panel, where only a small mass region survives the requirement that the conservative dSphs limit is respected and the observed relic density is predicted. Moreover, in this mass region the sfermions must be almost decoupled. Fig. 9 shows two cases of mixed wino-Higgsino dark matter, which exhibit similar features, but now for the case of assumed thermal relic density, such that the limits are rescaled.

Figure 8: Present-day annihilation cross section (σv)^0 as a function of M_2 ≈ m_{χ_1^0} for the Higgsino admixture |µ| - M_2 as indicated. This is compared with exclusion limits from dSphs (brown), positrons (blue dashed) and the CMB (magenta dot-dashed), along with the preferred regions from antiproton searches (pale green) adopting the Thin and Thick models. We also show the dSphs exclusion limits multiplied and divided by 2 (brown), the weaker of which is the thicker line. The observed relic density is assumed. The blue shaded region indicates where the relic density can correspond to the observed value by changing M_sf.

Figure 9: Same as Fig. 8, but the thermal relic density is assumed and the limits are rescaled according to (9). Note the different value of |µ| - M_2 in the lower plot compared to the previous figure. The black-dashed vertical line indicates where the relic density is equal to that observed for the sfermion mass value M_sf = 8 TeV.
It is evident from both figures that for lower values of |µ| - M 2 , larger regions in M 2 can provide both the correct relic density and present-day cross section below the dSphs bounds. We also see that while the correct relic density can be attained at the Sommerfeld resonance, the mass regions compatible with indirect search constraints typically lie below the Sommerfeld resonance, as was evident from Figs. 5 and 7.
Results: including direct detection limits
We have seen in the previous section that there is a sizeable mixed wino-Higgsino MSSM parameter space where the lightest neutralino has the correct relic abundance and evades indirect detection constraints. A significant Higgsino fraction might, however, be in conflict with the absence of a direct detection signal. In this section we therefore combine the exclusion limits from indirect searches studied in the previous section with those coming from the latest LUX results for direct detection, in order to determine the allowed mixed wino-Higgsino or dominantly-wino dark matter parameter space. To this end we first determine the maximal region in this space that passes relic density and indirect detection limits in the following way. For a given |µ| - M 2 we identify two points in M A , M sf and tan β within the considered parameter ranges, i.e. M A ∈ {0.5 TeV, 10 TeV}, M sf ∈ {1.25M 2 , 30 TeV} and tan β ∈ {5, 30}, corresponding to maximal and minimal values of M 2 , for which the relic density matches the observed value. Two distinct areas of parameter space arise: the first is larger and corresponds to a mixed wino-Higgsino, whereas the second is narrower and corresponds approximately to the pure wino. The relic density criterion therefore defines one (almost pure wino) or two (mixed wino-Higgsino) sides of the two shaded regions, shown in Figs. 10 and 11, corresponding to the pure and mixed wino. The dSphs limit defines the other side in the almost pure-wino region, while the remaining sides of the mixed wino-Higgsino area are determined by the dSphs limit (upper), the condition |µ| - M 2 = 0, and the antiproton search (the arc on the lower side of the mixed region beginning at M 2 ≈ 1.9 TeV). We recall that we consider the central dSphs limit and those obtained by rescaling up and down by a factor of two; the shading in grey within each region is used to differentiate between these three choices.
Next we consider the exclusion limits in the M 2 vs. |µ|-M 2 plane from the 2016 LUX results, which have been obtained as outlined in Section 3.4. As discussed there, the sign of µ can strongly influence the strength of the direct detection limits and consequently the allowed parameter space for mixed wino-Higgsino DM. We therefore consider the two cases separately.
µ > 0
Out of the two distinct regions described above, the close-to-pure wino and the mixed wino-Higgsino, only the former survives after imposing the direct detection constraints; see Fig. 10. If conservative assumptions are adopted for the direct detection and dSphs limits, a small triangle at the top of the mixed region is still allowed. The fact that the direct detection constraints mainly impact the mixed rather than the pure wino region was discussed in Section 3.4, and follows from the fact that the Higgs bosons only couple to mixed gaugino-Higgsino neutralinos.
Figure 10: Shaded areas denote the maximal region in the M 2 vs. |µ| - M 2 plane for µ > 0 where the relic density is as observed and the limit from dSphs diffuse gamma searches is respected within the parameter ranges considered. The darker the grey region, the more stringent is the choice of the bound, as described in the text. The grey lines mark the weakest possible limit of the region excluded by the 2016 LUX results and the same limit weakened by a factor of two, as indicated. The limit from the previous LUX result is the dotted line. The different bounds are calculated at different parameter sets p1, p2 and p3, as indicated.
Note that the direct detection limits presented on the plot are for the choice of MSSM parameters giving the weakest possible constraints. This is possible because the boundaries of the maximal region allowed by indirect searches do not depend as strongly on the parameters governing the wino-Higgsino mixing as the spin-independent scattering cross section does. The only exceptions are the boundaries of the mixed-wino region, arising from the relic density constraint, which indeed depend strongly on M sf . However, as varying these boundaries does not significantly change the allowed region, since it is mostly in the part excluded by the LUX data, we choose to display the LUX bound for a value of M sf different from that defining these boundaries. Therefore, all in all, the case of the mixed wino-Higgsino with µ > 0 is verging on being excluded by a combination of direct and indirect searches, when imposing that the lightest neutralino accounts for the entire thermally produced dark matter density of the Universe. Note, however, that the small close-to-pure wino region is not affected by direct detection constraints.
Figure 11: Maximal region in the M 2 vs. |µ| - M 2 plane for µ < 0, obtained as in Fig. 10. The limit from the 2016 LUX result weakened by a factor of two is not visible within the ranges considered in the plot. The different bounds are calculated at different parameter sets p1, p2 and p3, as indicated.
µ < 0
When µ < 0 the spin-independent cross section decreases, particularly for smaller values of tan β. This allows for parameter choices with small |µ| - M 2 giving viable neutralino DM, in agreement with the direct detection constraint. Indeed, for appropriate parameter choices the direct detection limits are too weak to constrain any of the relevant regions of the studied parameter space. In particular, the weakest possible limits correspond to M sf = 1.25M 2 , M A = 0.5 TeV and tan β = 15. Note that for M A = 0.5 TeV a significantly lower value of tan β would be in conflict with constraints from heavy Higgs searches at the LHC. The result of varying M A , M sf and tan β is a sizeable mass region for viable mixed-wino dark matter in the MSSM, ranging from M 2 = 1.6 to 3 TeV, as shown in Fig. 11. The parameter |µ| - M 2 for the Higgsino admixture varies from close to 0 GeV to 210 GeV below the Sommerfeld resonance, and from 200 GeV upwards above, when the most conservative dSphs limit (shown in light grey) is adopted.
We note that in determining the viable mixed-wino parameter region we did not include the diffuse gamma-ray and gamma line data from observations of the Galactic center, since the more conservative assumption of a cored dark matter profile would not provide a further constraint. However, future gamma data, in particular CTA observations of the Galactic center, are expected to increase the sensitivity to the parameter region in question to the extent (cf. [START_REF] Roszkowski | Prospects for dark matter searches in the pMSSM[END_REF]) that either dominantly-wino neutralino dark matter would be seen, or the entire plane shown in Fig. 11 would be excluded even for a cored profile.
Conclusions
This study was motivated by the wish to delineate the allowed parameter (in particular mass) range for a wino-like dark matter particle in the MSSM, only allowing some mixing with the Higgsino. More generically, this corresponds to the case where the dark matter particle is the lightest state of a heavy electroweak triplet with potentially significant doublet admixture and the presence of a scalar mediator. The Sommerfeld effect is always important in the TeV mass range, where the observed relic density can be attained, and has been included in this study extending previous work [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF][START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF][START_REF] Beneke | Heavy neutralino relic abundance with Sommerfeld enhancements -a study of pMSSM scenarios[END_REF]. Our main results are summarized in Figs. 10 and 11, which show the viable parameter region for the dominantly-wino neutralino for the cases µ > 0 and µ < 0, respectively. After imposing the collider and flavour constraints (both very weak), we considered the limits from diffuse gamma-rays from the dwarf spheroidal galaxies (dSphs), galactic cosmic rays and cosmic microwave background anisotropies. We also calculated the antiproton flux in order to compare with the AMS-02 results. The choice of indirect search constraints is influenced by the attitude that the fundamental question of the viability of wino-like dark matter should be answered by adopting conservative assumptions on astrophysical uncertainties. The non-observation of an excess of diffuse gamma-rays from dSphs then provides the strongest limit.
It turns out that in addition to these indirect detection bounds, the direct detection results have a significant impact on the parameter space, particularly for the µ > 0 case, where the mixed Higgsino-wino region is almost ruled out as shown in Fig. 10. In the µ < 0 case the limits are weaker, as seen in Fig. 11, and a sizeable viable region remains. Note that the region of the |µ| - M 2 vs. M 2 plane constrained by direct detection is complementary to that constrained by indirect detection. Therefore, while for µ > 0 (almost) the entire mixed region is ruled out, for µ < 0 there is a part of parameter space with M 2 = 1.7 to 2.7 TeV which is in complete agreement with all current experimental constraints.
Let us conclude by commenting on the limits from line and diffuse photon spectra from the Galactic center. If a cusped or mildly cored DM profile were assumed, the H.E.S.S. observations of diffuse gamma emission [START_REF] Collaboration | Search for dark matter annihilations towards the inner Galactic halo from 10 years of observations with H.E.S.S[END_REF] would exclude nearly the entire parameter space considered in this paper, leaving only a very narrow region with close to maximal wino-Higgsino mixing. The limits from searches for a line-like feature [START_REF] Abramowski | Search for Photon-Linelike Signatures from Dark Matter Annihilations with H[END_REF] would be even stronger, leaving no space for mixed-wino neutralino DM. However, a cored DM profile remains a possibility, and hence we did not include the H.E.S.S. results. In other words, adopting a less conservative approach, one would conclude that not only the pure-wino limit of the MSSM, but also the entire parameter region of the dominantly-wino neutralino, even with very large Higgsino or bino admixture, was in strong tension with the indirect searches. Therefore, the forthcoming observations by CTA should either discover a signal of, or definitively exclude, the dominantly-wino neutralino.
Figure 1: Contours of constant relic density in the M 2 vs. (µ - M 2 ) plane for µ > 0, as computed in [START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF]. The (green) band indicates the region within 2σ of the observed dark matter abundance. Parameters are as given in the header, and the trilinear couplings are set to A i = 8 TeV for all sfermions except for that of the stop, which is fixed by the Higgs mass value. The black solid line corresponds to the old LUX limit [START_REF] Akerib | First results from the LUX dark matter experiment at the Sanford Underground Research Facility[END_REF] on the spin-independent DM-nucleon cross section, which excludes the shaded area below this line. Relaxing the old LUX limit by a factor of two to account for theoretical uncertainties eliminates the direct detection constraint on the shown parameter space region.
Figure 2: Left: Branching fractions of present-day wino-like neutralino annihilation vs. the Higgsino fraction for decoupled M A and sfermions. |Z 31 | 2 + |Z 41 | 2 refers to the Higgsino fraction of the lightest neutralino in the convention of [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos I. General framework and S-wave annihilation[END_REF]. Right: Comparison of p, e + and gamma-ray spectra per annihilation at production of a 50% mixed wino-Higgsino (dashed) to the pure-wino (solid) model. The gamma-line component is not shown. In the inset at the bottom of the plot the relative differences between the two spectra are shown.
Figure 4: Direct detection limits for different choices of the MSSM parameters, assuming the neutralino is completely responsible for the measured dark matter density of the Universe. Where not stated, the parameter choices correspond to those for the black line. The area below the lines is excluded. The left panel shows the case of µ > 0, the right that of µ < 0.
Figure 5: Results in the M 2 vs. |µ| - M 2 plane. Left: limits from dSphs (upper) and the CMB (lower). The shaded regions are excluded, different shadings correspond to the DM profile uncertainty. Right: the region excluded by AMS-02 leptons (upper), and the best fit contours for antiprotons (lower), where the green solid lines show the Thin and Thick propagation models, while the dotted lines around them denote the 1σ confidence intervals. Contours where the observed relic density is obtained for the indicated value of the sfermion mass are overlaid.
Figure 6: The antiproton-to-proton ratio: background propagation models (left) and comparison of three DM models with relic density within the observational range and assuming the "Med" propagation (right). The shown data is from AMS-02 [START_REF] Kounine | Latest results from the alpha magnetic spectrometer: positron fraction and antiproton/proton ratio, presentation at[END_REF] and PAMELA [START_REF] Adriani | Measurement of the flux of primary cosmic ray antiprotons with energies of 60-MeV to 350-GeV in the PAMELA experiment[END_REF].
Figure 7: Results in the M 2 vs. |µ| - M 2 plane for the case where the limits are rescaled according to the thermal relic density for a given point in the plane. Details are as in Fig. 5.
Figure 8: The predicted present-day annihilation cross section (σv) 0 (black) is shown as a function of M 2 ∼ m χ 0 1 for the Higgsino admixture |µ| - M 2 as indicated. This is compared with exclusion limits from dSphs (brown), positrons (blue dashed) and the CMB (magenta dot-dashed), along with the preferred regions from antiproton searches (pale green) adopting the Thin and Thick models. We also show the dSphs exclusion limits multiplied and divided by 2 (brown), the weaker of which is the thicker line. The observed relic density is assumed. The blue shaded region indicates where the relic density can correspond to the observed value by changing M sf .
Figure 9: As in Fig. 8, but the thermal relic density is assumed and the limits are rescaled according to [START_REF] Beneke | Non-relativistic pair annihilation of nearly mass degenerate neutralinos and charginos III. Computation of the Sommerfeld enhancements[END_REF]. Note the different value of |µ| - M 2 in the lower plot compared to the previous figure. The black-dashed vertical line indicates where the relic density is equal to that observed for the sfermion mass value M sf = 8 TeV.
Allowing for significant bino admixture leads to other potentially interesting, though smaller regions, as described in[START_REF] Beneke | Relic density of wino-like dark matter in the MSSM[END_REF].
Since we also computed the relic density for every parameter point, which requires including the v 2 -corrections, we did not make use of this simplification in the present analysis.
We loosely follow here the widely adopted MIN, MED, MAX philosophy [START_REF] Donato | Antiprotons in cosmic rays from neutralino annihilation[END_REF], choosing models with as large a variation in the DM-originated antiproton flux as possible. However, the MIN, MED, MAX
For the combined e + + e - flux, several earlier observations provide data extending to higher energies than the AMS-02 experiment, though with much larger uncertainties. We do not include these data
Moving the lower limit M A = 500 GeV to 800 GeV would result in a barely noticeable change to the boundaries marked by p2.
Acknowledgements
We thank A. Ibarra for comments on the manuscript, and A. Goudelis and V. Rentala for helpful discussions. This work is supported in part by the Gottfried Wilhelm Leibniz programme of the Deutsche Forschungsgemeinschaft (DFG) and the Excellence Cluster "Origin and Structure of the Universe" at Technische Universität München. AH is supported by the University of Oslo through the Strategic Dark Matter Initiative (SDI). We further gratefully acknowledge that part of this work was performed using the facilities of the Computational Center for Particle and Astrophysics (C2PAP) of the Excellence Cluster. |
01767476 | en | ["info"] | 2024/03/05 22:32:15 | 2016 | https://inria.hal.science/hal-01767476/file/433330_1_En_13_Chapter.pdf | Peter Csaba Ölveczky
Formalizing and Validating the P-Store Replicated Data Store in Maude
P-Store is a well-known partially replicated transactional data store that combines wide-area replication, data partition, some fault tolerance, serializability, and limited use of atomic multicast. In addition, a number of recent data store designs can be seen as extensions of P-Store. This paper describes the formalization and formal analysis of P-Store using the rewriting logic framework Maude. As part of this work, this paper specifies group communication commitment and defines an abstract Maude model of atomic multicast, both of which are key building blocks in many data store designs. Maude model checking analysis uncovered a non-trivial error in P-Store; this paper also formalizes a correction of P-Store whose analysis did not uncover any flaw.
Introduction
Large cloud applications-such as Google search, Gmail, Facebook, Dropbox, eBay, online banking, and card payment processing-are expected to be available continuously, even under peak load, congestion in parts of the network, server failures, and during scheduled hardware or software upgrades. Such applications also typically manage huge amounts of (potentially important user) data. To achieve the desired availability, the data must be replicated across geographically distributed sites, and to achieve the desired scalability and elasticity, the data store may have to be partitioned across multiple partitions.
Designing and validating cloud storage systems are hard, as the design must take into account wide-area asynchronous communication, concurrency, and fault tolerance. The use of formal methods during the design and validation of cloud storage systems has therefore been advocated recently [START_REF] Newcombe | How Amazon Web Services uses formal methods[END_REF][START_REF] Ölveczky | Design and validation of cloud computing data stores using formal methods[END_REF]. In [START_REF] Newcombe | How Amazon Web Services uses formal methods[END_REF], engineers at the world's largest cloud computing provider, Amazon Web Services, describe the use of TLA+ during the development of key parts of Amazon's cloud infrastructure, and conclude that the use of formal methods at Amazon has been a success. They report, for example, that: (i) "formal methods find bugs in system designs that cannot be found though any other technique we know of"; (ii) "formal methods [...] give good return on investment"; (iii) "formal methods are routinely applied to the design of complex real-world software, including public cloud services"; (iv) formal methods can analyze "extremely rare" combination of events, which the engineer cannot do, as "there are too many scenarios to imagine"; and (v) formal methods allowed Amazon to "devise aggressive optimizations to complex algorithms without sacrificing quality."
This paper describes the application of the rewriting-logic-based Maude language and tool [START_REF] Clavel | All About Maude[END_REF] to formally specify and analyze the P-Store data store [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF]. P-Store is a well-known partially replicated transactional data store that provides both serializability and some fault tolerance (e.g., transactions can be validated even when some nodes participating in the validation are down).
Members of the University of Illinois Center for Assured Cloud Computing have used Maude to formally specify and analyze complex industrial cloud storage systems such as Google's Megastore and Apache Cassandra [START_REF] Grov | Formal modeling and analysis of Google's Megastore in Real-Time Maude[END_REF][START_REF] Liu | Formal modeling and analysis of Cassandra in Maude[END_REF]. Why is formalizing and analyzing P-Store interesting? First, P-Store is a well-known data store design in its own right with many good properties that combines widearea replication, data partition, some fault tolerance, serializability, and limited use of atomic multicast. Second, a number of recent data store designs can be seen as extensions and variations of P-Store [START_REF] Sovran | Transactional storage for georeplicated systems[END_REF][START_REF] Ardekani | Non-monotonic snapshot isolation: Scalable and strong consistency for geo-replicated transactional systems[END_REF][START_REF] Ardekani | G-DUR: a middleware for assembling, analyzing, and improving transactional protocols[END_REF]. Third, it uses atomic multicast to order concurrent transactions. Fourth, it uses "group communication" for atomic commit. The point is that both atomic multicast and group communication commit are key building blocks in cloud storage systems (see, e.g., [START_REF] Ardekani | G-DUR: a middleware for assembling, analyzing, and improving transactional protocols[END_REF]) that have not been formalized in previous work. Indeed, one of the main contributions of this paper is an abstract Maude model of atomic multicast that allows any possible ordering of message reception consistent with atomic multicast.
I have modeled (both versions of) P-Store, and performed model checking analysis on small system configurations. Maude analysis uncovered some significant errors in the supposedly-verified P-Store algorithm, like read-only transactions never getting validated in certain cases. An author of the original P-Store paper [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] confirmed that I had indeed found a nontrivial mistake in their algorithm and suggested a way of correcting the mistake. Maude analysis of the corrected algorithm did not find any error. I also found that a key assumption was missing from the paper, and that an important definition was very easy to misunderstand because of how it was phrased in English. All this emphasizes the need for a formal specification and formal analysis in addition to the standard prose-and-pseudo-code descriptions and informal correctness proofs.
The rest of the paper is organized as follows. Section 2 gives a background on Maude. Section 3 defines an abstract Maude model of the atomic multicast "communication primitive." Section 4 gives an overview of P-Store. Sections 5 and 6 present the Maude model and the Maude analysis, respectively, of P-Store, and Section 7 describes a corrected version of P-Store. Section 8 discusses some related work, and Section 9 gives some concluding remarks.
Due to space limitations, only parts of the specifications and analyses are given. I refer to the longer report [START_REF] Ölveczky | Formalizing and validating the P-Store replicated data store in Maude[END_REF] for more details. Furthermore, the executable Maude specifications of P-Store, together with analysis commands, are available at http://folk.uio.no/peterol/WADT16.
Preliminaries: Maude
Maude [START_REF] Clavel | All About Maude[END_REF] is a rewriting-logic-based formal language and simulation and model checking tool. A Maude module specifies a rewrite theory (Σ, E ∪ A, R), where:
- Σ is an algebraic signature; that is, a set of declarations of sorts, subsorts, and function symbols.
- (Σ, E ∪ A) is a membership equational logic theory, with E a set of possibly conditional equations and membership axioms, and A a set of equational axioms such as associativity, commutativity, and identity. The theory (Σ, E ∪ A) specifies the system's state space as an algebraic data type.
- R is a set of labeled conditional rewrite rules l : t −→ t' if u_1 = v_1 ∧ · · · ∧ u_m = v_m specifying the system's local transitions. The rules are universally quantified by the variables in the terms, and are applied modulo the equations E ∪ A.

I briefly summarize the syntax of Maude and refer to [START_REF] Clavel | All About Maude[END_REF] for more details. Operators are introduced with the op keyword: op f : s1 ... sn -> s. They can have user-definable syntax, with underbars '_' marking the argument positions, and equational attributes, such as assoc, comm, and id, stating, for example, that the operator is associative and commutative and has a certain identity element. Equations and rewrite rules are introduced with, respectively, keywords eq, or ceq for conditional equations, and rl and crl. The mathematical variables in such statements are declared with the keywords var and vars, or can be introduced on the fly having the form var:sort. An equation f(t1, ..., tn) = t with the owise ("otherwise") attribute can be applied to a term f(...) only if no other equation with left-hand side f(u1, ..., un) can be applied. A class declaration class C | att 1 : s 1 , ... , att n : s n .
declares a class C with attributes att 1 to att n of sorts s 1 to s n . An object of class C is represented as a term < O : C | att 1 : val 1 , ..., att n : val n > of sort Object, where O, of sort Oid, is the object's identifier, and where val 1 to val n are the current values of the attributes att 1 to att n . A message is a term of sort Msg.
The state is a term of the sort Configuration, and is a multiset made up of objects and messages. Multiset union for configurations is denoted by a juxtaposition operator (empty syntax) that is declared associative and commutative, so that rewriting is multiset rewriting supported directly in Maude.
The dynamic behavior of concurrent object systems is axiomatized by specifying each of its transition patterns by a rewrite rule. For example, the rule

rl [l] : m(O,w) < O : C | a1 : x, a2 : O', a3 : z >
      => < O : C | a1 : x + w, a2 : O', a3 : z > m'(O',x) .

defines a family of transitions in which a message m(O,w) is read and consumed by an object O of class C, the attribute a1 of O is increased by w, and a new message m'(O',x) is sent to the object O'. A subclass inherits all the attributes and rules of its superclasses.
Formal Analysis in Maude. A Maude module is executable under some conditions, such as the equations being confluent and terminating, modulo the structural axioms, and the theory being coherent [START_REF] Clavel | All About Maude[END_REF]. Maude provides a range of analysis methods, including simulation for prototyping, search for reachability analysis, and LTL model checking. This paper uses Maude's search command
(search [[n]] t0 =>* pattern [such that cond ] .)
which uses a breadth-first strategy to search for at most n states that are reachable from the initial state t 0 , match the pattern pattern (a term with variables), and satisfy the (optional) condition cond . If '[n]' is omitted, then Maude searches for all solutions. If the arrow '=>!' is used instead of '=>*', then Maude searches for final states; i.e., states that cannot be further rewritten.
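For instance, the analysis in Section 6 searches for all final states reachable from an initial state such as init4 below; a typical command for this (a minimal sketch, not necessarily the exact command used) is:

(search init4 =>! C:Configuration .)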
Atomic Multicast in Maude
Messages that are atomically multicast from (possibly) different nodes in a distributed system must be read in (pairwise) the same order: if nodes n 3 and n 4 both receive the atomically multicast messages m 1 and m 2 , they must receive (more precisely: "be served") m 1 and m 2 in the same order. Note that m 2 may be read before m 1 even if m 2 is atomically multicast after m 1 . Atomic multicast is typically used to order events in a distributed system. In distributed data stores like P-Store, atomic multicast is used to order (possibly conflicting) concurrent transactions: When a node has finished its local execution of a transaction, it atomically multicasts a validation request to other nodes (to check whether the transaction can commit). The validation requests therefore impose an order on concurrent transactions.
Atomic multicast does not necessarily provide a global order of all events. If each of the messages m 1 , m 2 , and m 3 is atomically multicast to two of the receivers A, B, and C, then A can read m 1 before m 2 , B can read m 2 before m 3 , and C can read m 3 before m 1 . These reads satisfy the pairwise total order requirement of atomic multicast, since there is no conflict between any pair of receivers. Nevertheless, atomic multicast has failed to globally order the messages m 1 , m 2 , and m 3 . If atomic multicast is used to impose something resembling a global order (e.g., on transactions), it should also satisfy the following uniform acyclic order property: the relation < on (atomic-multicast) messages is acyclic, where m < m holds if there exists a node that reads m before m . Atomic multicast is an important concept in distributed systems, and there are a number of well-known algorithms for achieving atomic multicast [START_REF] Guerraoui | Genuine atomic multicast in asynchronous distributed systems[END_REF]. To model P-Store, which uses atomic multicast, I could of course formalize a specific algorithm for atomic multicast and include it in a model of P-Store. Such a solution would, however, have a number of disadvantages, including:
1. Messy non-modular specifications. Atomic multicast algorithms involve some complexity, including maintaining Lamport clocks during system execution, keeping buffers of received messages that cannot be served, and so on. This solution could also easily yield a messy non-modular specification that fails to separate the specification of P-Store from that of atomic multicast.
2. Increased state space. Running a distributed algorithm concurrently with P-Store would also lead to much larger state spaces during model checking analyses, since the states generated by the rewrites of the atomic multicast algorithm itself would also contribute new states.
3. Lack of generality. Implementing a particular atomic multicast algorithm might exclude behaviors possible with other algorithms. That would mean that model checking analysis might not cover all possible behaviors of P-Store, but only those possible with the selected atomic multicast algorithm.
I therefore instead define, for each of the two "versions" of atomic multicast, a general atomic multicast primitive, which allows all possible ways of reading messages that are consistent with the selected version of atomic multicast. In particular, such a solution will not add states during model checking analysis.
Atomic Multicast in Maude: "User Interface"
To define an atomic multicast primitive, the system maintains a "table" of read and sent-but-unread atomic-multicast messages for each node. This table must be consulted before reading an atomic-multicast message, to check whether it can be read/served already, and must be updated when the message is read.
The "user interface" of my atomic multicast "primitive" is as follows:
- Atomically multicasting a message. A node n that wants to atomically multicast a message m to a set of nodes {n1, . . . , nk} just "sends" the "message" atomic-multicast m from n to ns, where ns is the receiver set {n1, . . . , nk}.
- Reading an atomic-multicast message. Before a node o reads such a message, it must check that the read is allowed in the current global read order (the function okToRead defined below); when o reads the message, the global atomic-multicast table must be updated accordingly (the update function below).
- The user must add the term [emptyAME] (denoting the "empty" atomic multicast table) to the initial state.
Maude Specification of Atomic Multicast
To keep track of atomic-multicast messages sent and received, the state contains a global atomic-multicast table with a record am-entry(o, read, unread) for each node o, storing the list read of atomic-multicast messages already read by o and the set unread of messages atomically multicast to o but not yet read. The "wrapper" used for atomic multicast takes as arguments the message (content), the sender's identifier, and the (identifiers of the) set of receivers:

op atomic-multicast_from_to_ : MsgCont Oid OidSet -> Configuration .

The equation

eq (atomic-multicast MC from O to OS) [AM-ENTRIES] =
   (distribute MC from O to OS) [insert(MC, OS, AM-ENTRIES)] .

"distributes" such an atomic-multicast msg from o to o1 ... on message by: (1) "dissolving" the above multicast message into a set of messages (msg msg from o to o1) ... (msg msg from o to on), one for each receiver ok in the set {o1, . . . , on}; and (2) adding, for each receiver ok, the message (content) msg to the set unread_k of unread atomic-multicast messages in the atomic-multicast table.

The update function, which updates the atomic-multicast table when O reads a message MC, just moves MC from the set of unread messages to the end of the list of read messages in O's record in the table.
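As a small illustration of the wrapper (the object names and the empty-list/empty-set constructors in the resulting table are my guesses, not syntax taken from the paper):

*** One equational step: the multicast dissolves into point-to-point
*** messages, and m is recorded as unread at both receivers.
(atomic-multicast m from n1 to (n2 , n3)) [emptyAME]
=
(msg m from n1 to n2) (msg m from n1 to n3)
   [am-entry(n2, nil, m) am-entry(n3, nil, m)]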
The expression okToRead(mc, o, amTable) is used to check whether the object o can read the atomic-multicast message mc with the given global atomicmulticast table amTable. The function okToRead is defined differently depending on whether atomic multicast must satisfy the uniform acyclic order requirement.
okToRead for Pairwise Total Order Atomic Multicast. The following equations define okToRead by first characterizing the cases when the message cannot be read; the last equation uses Maude's owise construct to specify that the message can be read in all other cases:

vars MC MC2 : MsgContent . vars MCS MCS2 : MsgContSet .
vars MCL MCL2 MCL3 MCL4 : MsgContList .

eq okToRead(MC, O, [am-entry(O, MCL, MCS MC MC2)
                    am-entry(O2, MCL2 :: MC2 :: MCL3 :: MC :: MCL4, MCS2) AM-ENTRIES]) = false .
eq okToRead(MC, O, [am-entry(O, MCL, MCS MC MC2)
                    am-entry(O2, MCL2 :: MC2 :: MCL4, MCS2 MC) AM-ENTRIES]) = false .
eq okToRead(MC, O, [AM-ENTRIES]) = true [owise] .

In the first equation, O wants to read MC, and its AM-entry shows that O has not read message MC2. However, another object O2 has already read MC2 before MC, which implies that O cannot read MC. In the second equation some object O2 has read MC2 and has MC in its set of unread atomic-multicast messages, which implies that O cannot read MC yet (it must read MC2 first).
okToRead for Uniform Acyclic Order Atomic Multicast. To define atomic multicast which satisfies the uniform acyclic order requirement, the above definition must be generalized to consider the induced relation < instead of pairwise reads. The above definition checks whether a node o can read a message m 1 by checking whether it has some other unread message m 2 pending such that reading m 1 before m 2 would conflict with the m 1 /m 2 -reading order of another node. This happens if another node has read m 2 before reading m 1 , or if it has read m 2 and has m 1 pending (which implies that eventually, that object would read m 2 before m 1 ). In the more complex uniform acyclic order setting, that solution must be generalized to check whether reading m 1 before any other pending message m 2 would violate the current or the (necessary) future "global order." That is, is there some m 1 elsewhere that has been read or must eventually be read after m 2 somewhere? If so, node o obviously cannot read m 1 at the moment.
The function receivedAfter takes a set of messages and the global AM-table as arguments, and computes the <*-closure of the original set of messages; i.e., the messages that cannot be read before the original set of messages:

op receivedAfter : MsgContSet AM-Table -> MsgContSet .

ceq receivedAfter(MC MCS, [am-entry(O2, MCL :: MC :: MC2 :: MCL2, MCS2) AM-ENTRIES]) =
    receivedAfter(MC MCS MC2, [am-entry(O2, MCL :: MC :: MC2 :: MCL2, MCS2) AM-ENTRIES])
    if not (MC2 in MCS) .

In the above equation, there is a message MC in the current set of messages in the closure. Furthermore, the global atomic-multicast table shows that some node O2 has read MC2 right after reading MC, and MC2 is not yet in the closure. Therefore, MC2 is added to the closure.
In the following equation, there is a message MC in the closure; furthermore, some object O2 has already read MC. This implies that all unread messages MCS2 of O2 must eventually be read after MC, and hence they are added to the closure:

ceq receivedAfter(MC MCS, [am-entry(O2, MCL2 :: MC :: MCL4, MCS2) AM-ENTRIES]) =
    receivedAfter(MC MCS MCS2, [am-entry(O2, MCL2 :: MC :: MCL4, emptyMsgContSet) AM-ENTRIES])
    if MCS2 =/= emptyMsgContSet .
eq receivedAfter(MCS, AM-TABLE) = MCS [owise] .
The function okToRead can then be defined as expected: O can read the pending message MC if MC is not (forced to be) read after any other pending message (in the set MCS):

eq okToRead(MC, O, [am-entry(O, MCL, MCS MC) AM-ENTRIES]) =
   not (MC in receivedAfter(MCS, [am-entry(O, MCL, MCS) AM-ENTRIES])) .

I have model-checked both specifications of atomic multicast on a number of scenarios and found no deadlocks or inconsistent multicast read orders.
P-Store
P-Store [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] is a partially replicated data store for wide-area networks developed by Schiper, Sutra, and Pedone that provides transactions with serializability. P-Store executes transactions optimistically: the execution of a transaction T at site s (which may involve remote reads of data items not replicated at s) proceeds without worrying about conflicting concurrent transactions at other sites. After the transaction T has finished executing, a certification process is executed to check whether or not the transaction T was in conflict with a concurrent transaction elsewhere, in which case T might have to be aborted. More precisely, in the certification phase the site s atomically multicasts a request to certify T to all sites storing data accessed by T . These sites then perform a voting procedure to decide whether T can commit or has to be aborted.
P-Store has a number of attractive features: (i) it is a genuine protocol: only the sites replicating data items accessed by a transaction T are involved in the certification of T ; and (ii) P-Store uses atomic multicast at most once per transaction. Another issue in the certification phase: in principle, the sites certify the transactions in the order in which the certification requests are read. However, if for some reason the certification of the first transaction in a site's certification queue takes a long time (maybe because other sites involved in the voting are still certifying other transactions), then the certification of the next transaction in line will be delayed accordingly, leading to the dreaded convoy effect. P-Store has an "advanced" version that tries to mitigate this problem by allowing a site to start the certification also of other transactions in its certification queue, as long as they are not in a possible conflict with "older" transactions in that queue.
The authors of [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] claim that they have proved the P-Store algorithm correct.
P-Store in Detail
This section summarizes the description of P-Store in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF].
System Model and Assumptions. A database is a set of triples (k, v, ts), where k is a key, v its value, and ts its time stamp. Each site holds a partial copy of the database, with Items(s) denoting the keys replicated at site s. I do not consider failures in this paper (as failure treatment is not described in the algorithms in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF]). A transaction T is a sequence of read and write operations, and is executed locally at site proxy(T ). Items(T ) is the set of keys read or written by T ; WReplicas(T ) and Replicas(T ) denote the sites replicating a key written, respectively read or written, by T . A transaction T "is local iff for any site s in Replicas(T ), Items(T ) ⊆ Items(s); otherwise, T is global." Each site ensures order-preserving serializability of its local executions of transactions. As already mentioned, P-Store assumes access to an atomic multicast service that guarantees uniform acyclic order.
Executing a Transaction. While a transaction T is executing (at site proxy(T )), a read on key k is executed at some site that stores k; k and the item time stamp ts read are stored as a pair (k, ts) in T 's read set T.rs. Every write of value v to key k is stored as a pair (k, v) in T 's set of updates T.up. If T reads a key that was previously updated by T , the corresponding value in T.up is returned.
When T has finished executing, it can be committed immediately if T is read-only and local. Otherwise, we need to run the certification protocol, which also propagates T 's updates to the other (write-) replicating sites.
If the certification process, described next, decides that T can commit, all sites in WReplicas(T ) apply T 's updates. In any case, proxy(T ) is notified about the outcome (commit or abort) of the certification.

Certification Phase. When T is submitted for certification, T is atomically multicast to all sites storing keys read (to check for stale reads) or written (to propagate the updates) by T . When a site s reads such a request, it checks whether the values read by T are up-to-date by comparing their versions against those currently stored in the database. If they are the same, T passes the certification test; otherwise T fails at s. The site s may not replicate all keys read by T and therefore may not be able to certify T . In this case there is a voting phase where each site s replicating keys read by T sends the result of its local certification test to all sites s w replicating a key written by T . A site s w can decide on T 's outcome when it has received (positive) votes from a voting quorum for T , i.e., a set of sites that together replicate all keys read by T . If some site votes "no," the transaction must be aborted. The pseudo-code description of this certification algorithm in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] is shown in Fig. 1.
As already mentioned, a site does not start the certification of another transaction until it is done certifying the first transaction in its certification queue. To avoid the convoy effect that this can lead to, the paper [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] also describes a version of P-Store where different transactions in a site's certification queue can be certified concurrently as long as they do not read-write conflict.
Formalizing P-Store in Maude
I have formalized both versions of P-Store (i.e., with and without sites initiating multiple concurrent certifications) in Maude, and present parts of the formalization of the simpler version. The executable specifications of both versions, with analysis commands, are available at http://folk.uio.no/peterol/WADT16, and the longer report [START_REF] Ölveczky | Formalizing and validating the P-Store replicated data store in Maude[END_REF] provides more detail.
Class Declarations
Transactions. Although the actual values of keys in the databases are sometimes ignored during analysis of distributed data stores, I choose for purposes of illustration to represent the concrete values of keys (or data items). This should not add new states that would slow down the model checking analysis.
A transaction (sometimes also called a transaction request) is modeled as an object of the class Transaction; a sketch of the declaration is given after this paragraph. The operations attribute denotes the list of read and write operations that remain to be executed. Such an operation is either a read operation x := read k, where x is a "local variable" that stores the value of the (data item with) key k read by the operation, or a write operation write(k, expr ), where expr in our case is a simple arithmetic expression involving the transaction's local variables. waitRemote(k, x) is an "internal operation" denoting that the transaction execution is awaiting the value of a key k (to be assigned to the local variable x) which is not replicated by the transaction's proxy. An operation list is a list of such operations, with list concatenation denoted by juxtaposition. destination denotes the (identity of the) proxy of the transaction; that is, the site that should execute the transaction. The readSet attribute denotes the ','-separated set of pairs versionRead(k, version), each such pair denoting that the transaction has read version version of the key k. The writeSet attribute denotes the write set of the transaction as a map (k 1 |-> val 1 ), ..., (k n |-> val n ). The status attribute denotes the commit state of the transaction, which is either commit, abort, or undecided. Finally, localVars is a map from the transaction's local variables to their current values.
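The class declaration itself did not survive extraction; the following is a sketch consistent with the attribute descriptions above (the sort names are my guesses):

*** Sketch of the Transaction class; sort names are assumptions.
class Transaction | operations : OperationList,
                    destination : Oid,
                    readSet : ReadSet,
                    writeSet : WriteSet,
                    status : TransStatus,
                    localVars : LocalVars .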
Replicas. A replicating site (or site or replica) stores parts of the database, executes the transactions for which it is the proxy, and takes part in the certification of other transactions. A replica is formalized as an object instance of the subclass Replica, sketched after this paragraph. The datastore attribute represents the replica's local database as a set < key 1 , val 1 , ver 1 > , . . . , < key l , val l , ver l > of triples < key i , val i , ver i > denoting a version of the data item with key key i , value val i , and version number ver i . The attributes executing, submitted, committed, and aborted denote the transactions executed by the replica and which are/have been, respectively, currently executing, submitted for certification, committed, and aborted. The queue holds the certification queue of transactions to be certified by the replica (in collaboration with other replicas). transToCertify contains data used for the certification of the first element in the certification queue (in the simpler algorithm), and decidedTranses show the status (aborted/committed) of the transactions that have previously been (partly) certified by the replica.
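Again, the declaration is lost above; a sketch consistent with the attribute descriptions and the initial state below (the sort names and the omitted superclass are my guesses):

*** Sketch of the Replica class; sort names are assumptions.
class Replica | datastore : DataStore,
                executing : Configuration,
                submitted : Configuration,
                committed : Configuration,
                aborted : Configuration,
                queue : TransList,
                transToCertify : CertificationRecord,
                decidedTranses : TransStatusSet .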
Clients. Finally, I add an "interface/application layer" to the P-Store specification in the form of clients that send transactions to be executed by P-Store:
class Client | txns : ObjectList, pendingTrans : TransIdSet .
txns denotes the list of transaction (objects) that the client wants P-Store to execute, and pendingTrans is either the empty set or (the identity of) the transaction the client has submitted to P-Store but whose execution is not yet finished.
Initial State.
The following shows an initial state init4 (with some parts replaced by '...') used in the analysis of P-Store. This system has: two clients, c1 and c2, that want P-Store to execute the two transactions t1 and t2; three replicating sites, r1, r2, and r3; and three data items/keys x, y, and z. Transaction t1 wants to execute the operations (xl :=read x) (yl :=read y) at replica r1, while transaction t2 wants to execute write(y, 5) write(x, 8) at replica r2. The initial state also contains the empty atomic multicast table and the table which assigns to each key the sites replicating this key. Initially, the value of each key is [2] and its version is 1. Site r2 replicates both x and y.

[emptyAME]
[replicatingSites(x, r2) ;; replicatingSites(y, (r2 , r3)) ;; replicatingSites(z, r1)]
< c1 : Client | txns : < t1 : Transaction | operations : ((xl :=read x) (yl :=read y)),
                         destination : r1, readSet : emptyReadSet, status : undecided,
                         writeSet : emptyWriteSet, localVars : (xl |-> [0] , yl |-> [0]) >,
                pendingTrans : empty >
< c2 : Client | txns : < t2 : Transaction | operations : (write(y, 5) write(x, 8)),
                         destination : r2, ... >, pendingTrans : empty >
< r1 : Replica | datastore : (< z, [2], 1 >), committed : none, aborted : none,
                 executing : none, submitted : none, queue : emptyTransList,
                 transToCertify : noTrans, decidedTranses : noTS >
< r2 : Replica | datastore : ((< x, [2], 1 >) , (< y, [2], 1 >)), ... >
< r3 : Replica | datastore : (< y, [2], 1 >), ... > .
Local Execution of a Transaction
The execution of a transaction has two phases. In the first phase, the transaction is executed locally by its proxy: the transaction performs its reads and writes, but the database is not updated; instead, the reads are recorded in the transaction's read set, and its updates are stored in the writeSet attribute.
The second phase is the certification (or validation) phase, when all appropriate nodes together decide whether or not the transaction can be committed or must be aborted. If it can be committed, the replicas update their databases.
This section specifies the first phase, which starts when a client without pending transactions sends its next transaction to its proxy. I do not show the variable declarations (see [START_REF] Ölveczky | Formalizing and validating the P-Store replicated data store in Maude[END_REF]), but follow the convention that variables are written with (all) capital letters. P-Store assumes that the local executions of multiple transactions on a site are equivalent to some serialized executions. I model this assumption by executing the transactions one-by-one: a replica can only receive a transaction request if its set of currently executing transactions is empty (none).

There are three cases to consider when executing a read operation X :=read K: (i) the transaction has already written to key K; (ii) the transaction has not written K and the proxy replicates K; or (iii) the key K has not been read and the proxy does not replicate K. I only show the specification for case (i). I do not know what version number should be associated to the read, and I choose not to add the item to the read set. (The paper [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] does not describe what to do in this case; the problem disappears if we make the common assumption that a transaction always reads a key before updating it.) As an effect, the local variable X gets the value V. Write operations are easy: evaluate the expression EXPR to write and add the update to the transaction's writeSet. Sketches of these three rules are given below.
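The rules themselves did not survive extraction; the following are minimal sketches consistent with the class declarations above. The message name executeTrans and the helpers eval and insert are assumptions, not the paper's verbatim rules:

*** Sketch: a replica starts executing a received transaction request
*** only when no transaction is currently executing (none).
rl [receiveTrans] :
   (msg executeTrans(< TID : Transaction | AttS >) from C to RID)
   < RID : Replica | executing : none >
=> < RID : Replica | executing : < TID : Transaction | AttS > > .

*** Sketch of read case (i): the transaction has already written key K,
*** so the read returns the value V recorded in its own write set.
rl [readOwnWrite] :
   < RID : Replica | executing : < TID : Transaction |
        operations : ((X :=read K) OPLIST),
        writeSet : ((K |-> V) , WS), localVars : VARS > >
=> < RID : Replica | executing : < TID : Transaction |
        operations : OPLIST,
        writeSet : ((K |-> V) , WS), localVars : insert(X, V, VARS) > > .

*** Sketch: a write evaluates EXPR over the local variables and records
*** the update in the transaction's write set.
rl [executeWrite] :
   < RID : Replica | executing : < TID : Transaction |
        operations : (write(K, EXPR) OPLIST),
        writeSet : WS, localVars : VARS > >
=> < RID : Replica | executing : < TID : Transaction |
        operations : OPLIST,
        writeSet : insert(K, eval(EXPR, VARS), WS), localVars : VARS > > .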
Certification Phase
When all the transaction's operations have been executed by the proxy, the proxy's next step is to try to commit the transaction. If the transaction is readonly and local, it can be committed directly; otherwise, it must be submitted to the certification protocol. Some colleagues and I all found the definition of local in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] (and quoted in Section 4) to be quite ambiguous. We thought that "for any site s in Replicas(T ), Items(T ) ⊆ Items(s)" means either "for each site s . . . " or that proxy(T ) replicates all items in T . The first author of [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF], Nicolas Schiper, told me that it actually means "for some s . . . ." In hindsight, we see that this is also a valid interpretation of the definition of local . To avoid misunderstanding, it is probably good to avoid the phrase "for any" and use either "for each" or "for some."
If the transaction T cannot be committed immediately, it is submitted for certification by atomically multicasting a certification request, carrying the transaction's identity TID, read set RS, and write set WS, to all replicas storing keys read or updated by T (lines 9-10 in Fig. 1). According to lines 7-8 in Fig. 1, a replica's local certification succeeds if, for each key in the transaction's read set that is replicated by the replica in question, the transaction read the same version stored by the replica:

op certificationOk : ReadSet DataStore -> Bool .
eq certificationOk((versionRead(K, VERSION) , READSET), (< K, V, VERSION2 > , DS)) =
   (VERSION == VERSION2) and certificationOk(READSET, (< K, V, VERSION2 > , DS)) .
eq certificationOk(RS, DS) = true [owise] .
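The rule performing this multicast is omitted above; a minimal sketch, reusing the atomic-multicast wrapper from Section 3 (the message name certify, the guard notLocalReadOnly, and the helper replicas, computing the sites replicating the items read or written, are assumptions):

*** Sketch: submit a finished, non-trivially-committable transaction for
*** certification by atomically multicasting a certify request.
crl [submit] :
   < RID : Replica | executing : < TID : Transaction | operations : nil,
        readSet : RS, writeSet : WS, AttS >, submitted : SUBMITTED >
=> < RID : Replica | executing : none,
        submitted : (SUBMITTED < TID : Transaction | operations : nil,
            readSet : RS, writeSet : WS, AttS >) >
   (atomic-multicast certify(TID, RS, WS) from RID to replicas(RS, WS))
   if notLocalReadOnly(TID, RS, WS, RID) .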
If the transaction to certify is not local, the certifying sites must together decide whether or not the transaction can be committed. Each certifying site therefore checks whether the transaction passes the local certification test, and sends the outcome of this test to the other certifying sites (lines 13 and 19-22); a sketch of this rule is given below. If the local certification fails, the site sends an abort vote to the other write replicas and also notifies the proxy of the outcome. Otherwise, the site sends a commit vote to all other sites replicating an item written by the transaction. The voting phase ends when there is a voting quorum; that is, when the voting sites together replicate all keys read by the transaction. This means that a certifying site must keep track of the votes received during the certification of a transaction. The set of sites from which the site has received a (positive) vote is the fourth parameter of the certify record it maintains for each transaction. If a site receives a positive vote, it stores the sender of the vote (lines 23-27). If a site receives a negative vote, it decides the fate of the transaction and notifies the proxy if it replicates an item written by the transaction (lines 28-29).
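A minimal sketch of the vote-sending step, reusing the distribute operator from Section 3 to send plain point-to-point messages (the message name vote and the helper wreplicas are assumptions; the okToRead guard, the table update, and the certification queue are elided):

*** Sketch: upon serving a certification request for a non-local
*** transaction, run the local test and send the outcome (a vote)
*** to all sites replicating a key written by the transaction.
rl [sendVote] :
   (msg certify(TID, RS, WS) from RID2 to RID)
   < RID : Replica | datastore : DS >
=> < RID : Replica | datastore : DS >
   (distribute vote(TID, RID, certificationOk(RS, DS)) from RID to wreplicas(WS)) .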
If a write replica has received positive votes from a voting quorum (lines 23-27 and 29), the transaction can be committed, and the write replica applies the updates and notifies the proxy. The rule modeling the behavior when a site has received votes from a voting quorum RIDS for transaction TID, and the rule by which the proxy of transaction TID receives the outcome from one or more sites in TID's certification set (the abort case is similar), are sketched below.
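Minimal sketches of these two rules (the message names committed and finished, and the helpers votingQuorum, applyUpdates, proxy, and client are assumptions; the certify record with the voter set RIDS as fourth parameter follows the description above):

*** Sketch: commit once the recorded voters RIDS form a voting quorum,
*** i.e., together replicate all keys read by the transaction; apply
*** the updates and notify the proxy.
crl [quorumCommit] :
   < RID : Replica | datastore : DS,
        transToCertify : certify(TID, RS, WS, RIDS), decidedTranses : TSES >
=> < RID : Replica | datastore : applyUpdates(WS, DS),
        transToCertify : noTrans,
        decidedTranses : (TSES ; trans(TID, commit)) >
   (msg committed(TID) from RID to proxy(TID))
   if votingQuorum(RIDS, RS) .

*** Sketch: the proxy records the outcome and notifies the client.
rl [receiveCommit] :
   (msg committed(TID) from RID to RID2)
   < RID2 : Replica | submitted : (SUBMITTED < TID : Transaction | AttS >),
        committed : COMMITTED >
=> < RID2 : Replica | submitted : SUBMITTED,
        committed : (COMMITTED < TID : Transaction | AttS >) >
   (msg finished(TID, commit) from RID2 to client(TID)) . *** notify client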
6 Formal Analysis of P-Store
In the absence of failures, P-Store is supposed to guarantee serializability of the committed transactions, and that a decision (commit/abort) is made on all transactions.
To analyze P-Store, I search for all final states (i.e., states that cannot be further rewritten) reachable from a given initial state, and inspect the result. This analysis therefore also discovers undesired deadlocks. In the future, I should instead automatically check serializability, possibly using the techniques in [START_REF] Grov | Formal modeling and analysis of Google's Megastore in Real-Time Maude[END_REF], which add to the state a "serialization graph" that is updated whenever a transaction commits, and then check whether the graph has cycles.
The search for final states reachable from state init4 in Section 5.1 yields a state which shows that t1's proxy is not notified about the outcome of the certification (see [START_REF] Ölveczky | Formalizing and validating the P-Store replicated data store in Maude[END_REF] for details). The problem seems to be line 29 in the algorithm in Fig. 1: only sites replicating items written by transaction T (WReplicas(T )) send the outcome of the certification to T 's proxy. It is therefore not surprising that the outcome of the read-only transaction t1 does not reach t1's proxy.
The transactions in init4 are local. What about non-local transactions? The initial state init5 is the same as init4 in Section 5.1, except that item y is only replicated at site r3, which means that t1 and t2 become non-local transactions.
Searching for final states reachable from init5 yields a state in which the certification process cannot reach a decision on the outcome of transaction t1: the fate of t1 is not decided, and both r2 and r3 are stuck in their certification process. The problem seems to be lines 22 and 23 in the P-Store certification algorithm: why are only write replicas involved in sending and receiving votes during the certification? Shouldn't both read and write replicas vote? Otherwise, it is impossible to certify non-local read-only transactions, such as t1 in init5.
Analysis of the Updated Specification. I have analyzed the corrected specification on five small initial configurations (3 sites, 3 data items, 2 transactions, 4 operations). All the final states were correct: the committed transactions were indeed serializable.
The Advanced Algorithm. I have also specified and successfully analyzed the (corrected) version of P-Store where multiple transactions can be certified concurrently. It is beyond the scope of this paper to describe that specification.
Related Work
Different communication forms/primitives have been defined in Maude, including wireless broadcast that takes into account the geographic location of nodes and the transmission strength/radius [START_REF] Ölveczky | Formal modeling, performance estimation, and model checking of wireless sensor network algorithms in Real-Time Maude[END_REF], as well as wireless broadcast in mobile systems [START_REF] Liu | Modeling and analyzing mobile ad hoc networks in Real-Time Maude[END_REF]. However, I am not aware of any model of atomic multicast in Maude. Maude has been applied to a number of industrial and academic cloud storage systems, including Google's Megastore [START_REF] Grov | Formal modeling and analysis of Google's Megastore in Real-Time Maude[END_REF], Apache Cassandra [START_REF] Liu | Formal modeling and analysis of Cassandra in Maude[END_REF], and UC Berkeley's RAMP [START_REF] Liu | Formal modeling and analysis of RAMP transaction systems[END_REF]. However, that work did not address issues like atomic multicast and group communication commit.
Lamport's TLA+ has also been used to specify and model check large industrial cloud storage systems like S3 at Amazon [START_REF] Newcombe | How Amazon Web Services uses formal methods[END_REF] and the academic TAPIR transaction protocol targeting large-scale distributed storage systems.
On the validation of P-Store and similar designs, P-Store itself has been proved to be correct using informal "hand proofs" [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF]. However, such hand proofs do not generate precise specifications of the systems and tend to be error-prone and rely on missing assumptions, as I show in this paper. I have not found any model checking validation of related designs, such as Jessy [START_REF] Ardekani | Non-monotonic snapshot isolation: Scalable and strong consistency for geo-replicated transactional systems[END_REF] and Walter [START_REF] Sovran | Transactional storage for georeplicated systems[END_REF].
Concluding Remarks
Cloud computing relies on partially replicated wide-area data stores to provide the availability and elasticity required by cloud systems. P-Store is a well-known such data store that uses atomic multicast, group communication commitment, concurrent certification of independent transactions, etc. Furthermore, many other partially replicated data stores are extensions and variations of P-Store.
I have formally specified and analyzed P-Store in Maude. Maude reachability analysis uncovered a number of errors in P-Store that were confirmed by one of the P-Store developers: both read and write replicas need to participate in the certification of transactions; write replicas are not enough. I have specified the proposed fix of P-Store, whose Maude analysis did not uncover any error.
Another main contribution of this paper is a general and abstract Maude "primitive" for both variations of atomic multicast.
One important advantage claimed by proponents of formal methods is that even precise-looking informal descriptions tend to be ambiguous and contain missing assumptions. In this paper I have pointed at a concrete case of ambiguity in a precise-looking definition, and at a crucial missing assumption in P-Store.
This work took place in the context of the University of Illinois Center for Assured Cloud Computing, within which we want to identify key building blocks of cloud storage systems, so that they can be built and verified in a modular way by combining such building blocks in different ways. Some of those building blocks are group communication commitment, certification, and atomic multicast. In the nearer term, this work should simplify the analysis of other state-of-the-art data stores, such as Walter and Jessy, that can be seen as extensions of P-Store.
The analysis was performed using reachability analysis; in the future, one should also be able to specify the desired consistency property "directly."
rl [l] : m(O,w) < O : C | a1 : x, a2 : O', a3 : z > => < O : C | a1 : x + w, a2 : O', a3 : z > m'(O',x) .
op atomic-multicast_from_to_ : MsgCont Oid OidSet -> Configuration .

The equation

eq (atomic-multicast MC from O to OS) [AM-ENTRIES]
 = (distribute MC from O to OS) [insert(MC, OS, AM-ENTRIES)] .

models the atomic multicast of a message content msg from o to a set of receivers: (1) by generating the messages (msg msg from o to o1) ... (msg msg from o to on), one for each receiver o_k in the set {o_1, . . . , o_n}; and (2) by adding, for each receiver o_k, the message (content) msg to the set unread_k of unread atomic-multicast messages in the atomic-multicast table.
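To make this two-step bookkeeping concrete, the following is a minimal sketch in Python (not Maude; all class and method names here are hypothetical illustrations, not part of the specification):

class AtomicMulticastTable:
    # Sketch of the atomic-multicast table: multicasting adds the message
    # to every receiver's unread set; delivery moves it to the receiver's
    # ordered delivered list. The okToRead condition that enforces the
    # acyclic-order property is only stubbed here.
    def __init__(self, receivers):
        self.delivered = {r: [] for r in receivers}   # delivery order matters
        self.unread = {r: set() for r in receivers}

    def multicast(self, msg, receivers):
        for r in receivers:
            self.unread[r].add(msg)

    def ok_to_read(self, msg, r):
        # placeholder: a faithful model must also check, as the okToRead
        # equations below do, that delivering msg at r cannot create a
        # cycle in the global delivery order across receivers
        return msg in self.unread[r]

    def deliver(self, msg, r):
        if self.ok_to_read(msg, r):
            self.unread[r].remove(msg)
            self.delivered[r].append(msg)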
vars MC MC2 : MsgContent . vars MCS MCS2 : MsgContSet . vars MCL MCL2 MCL3 MCL4 : MsgContList . eq okToRead(MC, O, [am-entry(O, MCL, MCS MC MC2) am-entry(O2, MCL2 :: MC2 :: MCL3 :: MC :: MCL4, MCS2) AM-ENTRIES]) = false . eq okToRead(MC, O, [am-entry(O, MCL, MCS MC MC2) am-entry(O2, MCL2 :: MC2 :: MCL4, MCS2 MC) AM-ENTRIES]) = false . eq okToRead(MC, O, [AM-ENTRIES]) = true [owise] .
ceq receivedAfter(MC MCS, [am-entry(O2, MCL :: MC :: MC2 :: MCL2, MCS2) AM-ENTRIES]) = receivedAfter(MC MCS MC2, [am-entry(O2, MCL :: MC :: MC2 :: MCL2, MCS2) AM-ENTRIES]) if not (MC2 in MCS) .
ceq receivedAfter(MC MCS, [am-entry(O2, MCL2 :: MC :: MCL4, MCS2) AM-ENTRIES]) = receivedAfter(MC MCS MCS2, [am-entry(O2, MCL2 :: MC :: MCL4, emptyMsgContSet) AM-ENTRIES]) if MCS2 =/= emptyMsgContSet .
eq okToRead(MC, O, [am-entry(O, MCL, MCS MC) AM-ENTRIES]) = not (MC in receivedAfter(MCS, [am-entry(O, MCL, MCS) AM-ENTRIES])) .
Fig. 1. The P-Store certification algorithm in [14].
[replicatingSites(x, r2) ;; replicatingSites(y, (r2 , r3)) ;; replicatingSites(z, r1)] < c1 : Client | txns : < t1 : Transaction | operations : ((xl :=read x) (yl :=read y)), destination : r1, readSet : emptyReadSet, status : undecided, writeSet : emptyWriteSet, localVars : (xl |-> [0] , yl |-> [0]) >, pendingTrans : empty > < c2 : Client | txns : < t2 : Transaction | operations : (write(y, 5) write(x, 8)), destination : r2, ... > pendingTrans : empty > < r1 : Replica | datastore : (< z, [2], 1 >), committed : none, aborted : none, executing : none, submitted : none, queue : emptyTransList, transToCertify : noTrans, decidedTranses : noTS > < r2 : Replica | datastore : ((< x, [2], 1 >) , (< y, [2], 1 >)), ... > < r3 : Replica | datastore : (< y, [2], 1 >), ... > .
rl [sendTxn] : < C : Client | pendingTrans : empty, txns : < TID : Transaction | destination : RID > ; TXNS > => < C : Client | pendingTrans : TID, txns : TXNS > (msg executeTrans(< TID : Transaction | >) from C to RID) .
rl [receiveTxn] : (msg executeTrans(< TID : Transaction | >) from C to RID) < RID : Replica | executing : none > => < RID : Replica | executing : < TID : Transaction | > > .
rl [executeRead1] : < RID : Replica | executing : < TID : Transaction | operations : (X :=read K) OPLIST, writeSet : (K |-> V), WS, localVars : VARS > > => < RID : Replica | executing : < TID : Transaction | operations : OPLIST, localVars : insert(X, V, VARS) > > .
rl [executeWrite] : < RID : Replica | executing : < TID : Transaction | operations : write(K, EXPR) OPLIST, localVars : VARS, writeSet : WS > > => < RID : Replica | executing : < TID : Transaction | operations : OPLIST, writeSet : insert(K, eval(EXPR, VARS), WS) > > .
rl [readCommit] : (msg commit(TID) from RID2 to RID) < RID : Replica | submitted : < TID : Transaction | >, committed : TRANSES > => < RID : Replica | submitted : none, committed : (TRANSES < TID : Transaction | >) > done(TID) .
Maude> (search init5 =>! C:Configuration .) ... Solution 4 ... < r1 : Replica | submitted : < t1 : Transaction | localVars :(xl |->[8], yl |->[5]), operations : nil, readSet : versionRead(x,2), versionRead(y,2), ... > , transToCertify : noTrans > < r2 : Replica | committed : < t2 : Transaction | writeSet : (x |-> [8], y |-> [5]), ... >, datastore : < x,[8],2 >, decidedTranses : transStatus(t2,commit), transToCertify : certify(t1,r1,(versionRead(x,2),versionRead(y,2)), emptyWriteSet,r2) , ... > < r3 : Replica | aborted : none, committed : none, datastore : < y,[5],2 >, decidedTranses : transStatus(t2,commit), transToCertify : certify(t1, r1, ..., emptyWriteSet, r3) , ... >
defines a family of transitions in which a message m, with parameters O and w, is read and consumed by an object O of class C, the attribute a1 of the object
O is changed to x + w, and a new message m'(O',x) is generated. Attributes
whose values do not change and do not affect the next state of other attributes
or messages, such as a3, need not be mentioned in a rule. Likewise, attributes
that are unchanged, such as a2, can be omitted from right-hand sides of rules.
crl [commit/submit2] :
< RID : Replica | executing :
< TID : Transaction | operations : nil, readSet : RS, writeSet : WS >,
submitted : TRANSES >
REPLICA-TABLE
=>
< RID : Replica | executing : none, submitted : TRANSES < TID : Transaction | > >
REPLICA-TABLE
(atomic-multicast certify(TID, RS, WS) from RID
to replicas((keys(RS) , keys(WS)), REPLICA-TABLE))
if WS =/= emptyWriteSet or not localTrans(keys(RS), REPLICA-TABLE) .
Nicolas Schiper confirmed that the errors pointed out in Section 6 are indeed errors in P-Store. He also suggested the fix alluded to in Section 6: replace WReplicas(T ) with Replicas(T ) in lines 22, 23, and 29. The Maude specification of the proposed correction is given in http://folk.uio.no/peterol/WADT16/.

Missing Assumptions. One issue seems to remain: why can read-only local transactions be committed without certification? Couldn't such transactions have read stale values? Nicolas Schiper kindly explained that local read-only transactions are handled in a special way (all values are read from the same site and some additional concurrency control is used to ensure serializability), but admitted that this is indeed not mentioned anywhere in their paper. My specifications consider the algorithm as given in [START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF], without taking the unstated assumptions into account, and also subject the local read-only transactions to certification.
7 Fixing P-Store
An equational condition ui = wi can also be a matching equation, written ui:= wi, which instantiates the variables in ui to the values that make ui = wi hold, if any.
Operationally, a term is reduced to its E-normal form modulo A before a rewrite rule is applied.
The paper[START_REF] Schiper | P-Store: Genuine partial replication in wide area networks[END_REF] does not specify whether a replica stores multiple versions of a key.
Acknowledgments. I would like to thank Nicolas Schiper for quick and friendly replies to my questions about P-Store, the anonymous reviewers for helpful comments, and Si Liu and José Meseguer for valuable discussions about P-Store and atomic multicast.
This work was partially supported by AFOSR/AFRL Grant FA8750-11-2-0084 and NSF Grant CNS 14-09416.
https://hal.science/hal-01677442v2/file/clough_t_splineshal2.pdf

Tom Lyche
Jean-Louis Merrien (email: [email protected])
Simplex-Splines on the Clough-Tocher Element
Keywords: Triangle Mesh, Piecewise polynomials, Interpolation, Simplex Splines, Marsden-like Identity
We propose a simplex spline basis for a space of C^1 cubics on the Clough-Tocher split on a triangle. The 12 elements of the basis give a nonnegative partition of unity. We derive two Marsden-like identities, three quasi-interpolants with optimal approximation order, and prove L∞ stability of the basis. The conditions for a C^1 junction to neighboring triangles are simple and similar to the C^1 conditions for the cubic Bernstein polynomials on a triangulation. The simplex spline basis can also be linked to the Hermite basis to solve the classical interpolation problem on the Clough-Tocher split.
Introduction
Piecewise polynomials over triangles have applications in several branches of the sciences, ranging from finite element analysis to surfaces in computer aided design and other engineering problems. For many of these applications, piecewise linear C^0 surfaces do not suffice. In some cases, we need smoother surfaces for modeling, or higher degrees to increase the approximation order. To obtain C^1 smoothness on an arbitrary triangulation, one needs piecewise quintic polynomials, [START_REF] Lai | Spline Functions on Triangulations[END_REF]. We can use lower degrees if we are willing to split each triangle into a number of subtriangles. Examples are the Clough-Tocher split (CT), [START_REF] Clough | Finite element stiffness matrices for analysis of plate bending[END_REF] and the Powell-Sabin 6 and 12-splits (PS6, PS12), [START_REF] Powell | Piecewise quadratic approximation on triangles[END_REF]. The
Here we construct a B-spline basis for one triangle in the coarse triangulation and connect to neighboring triangles using Bernstein-Bézier techniques. This was done for PS12 using C 1 quadratics, [START_REF] Cohen | A B-spline-like basis for the Powell-Sabin 12-split based on simplex splines[END_REF], and C 2 and C 3 quintics, [START_REF] Lyche | A Hermite interpolatory subdivision scheme for C 2 -quintics on the Powell-Sabin 12-split[END_REF][START_REF] Lyche | Stable Simplex Spline Bases for C 3 Quintics on the Powell-Sabin 12-Split[END_REF]. These bases, consisting of simplex splines (see for example [START_REF] Micchelli | On a numerically efficient method for computing multivariate B-splines[END_REF] for a general introduction), all share attractive properties of univariate B-splines such as In this paper we consider the full 12 dimensional space of C 1 cubics on the CT-split. We will define a simplex spline basis for this split and show that it has all the B-spline and Bernstein-Bézier properties mentioned above.
The CT-split is interesting for many reasons. To obtain a space of C 1 piecewise polynomials of degree at most 3 on an arbitrary triangulation, we only need to divide each triangle into 3 subtriangles, while 6 and 12 subtriangles are needed for PS6 and PS12. Moreover, the approximation order of the space S 3 of piecewise C 1 cubics on CT is 4 and this is at least as good as for the spaces S 6 and S 12 of piecewise cubics on PS6 and piecewise quadratics on PS12. The degrees of freedom for S 6 are values and gradients of the vertices of the coarse triangulation while for S 3 and S 12 we need in addition cross boundary derivatives at the midpoint of the edges, see Figure 1 (left). For further comparisons of these three spaces see Section 6.6 in [START_REF] Lai | Spline Functions on Triangulations[END_REF].
This paper is organized as follows: In the remaining part of the introduction, we review some properties of CT, introduce our notation and recall the main properties of simplex splines. In Section 2, we construct a cubic simplex spline basis for CT, from which, in Section 3, we derive two Marsden identities and then, in Section 4, three quasi-interpolants, and show L ∞ stability of the basis. In Section 5, conditions to ensure C 0 and C 1 continuity through an edge between two triangles are derived. The conversion between the simplex spline basis and the Hermite basis for CT is considered in Section 6. Lagrange and Hermite interpolation on triangulations using C 1 cubics, quartics and higher degrees have also been considered in [START_REF] Davydov | Interpolation by Splines on Triangulations[END_REF]. We end the paper with numerical examples of interpolation on a triangulation.
The Clough-Tocher split
To describe this split, let T := p 1 , p 2 , p 3 be a nondegenerate triangle in R 2 . Using the barycenter p T := (p 1 + p 2 + p 3 )/3 we can split T into three subtriangles T 1 := p T , p 2 , p 3 , T 2 := p T , p 3 , p 1 and T 3 := p T , p 1 , p 2 . On T we consider the space
S 1 3 ( ) := {f ∈ C 1 (T ) : f |T i is a polynomial of degree at most 3, i = 1, 2, 3}.
(1) This is a linear space of dimension 12, [START_REF] Lai | Spline Functions on Triangulations[END_REF]. Indeed, each element in the space can be determined uniquely by specifying values and gradients at the 3 vertices and cross boundary derivatives at the midpoint of the edges, see Figure 1,(right).
We associate the half open edges

[p_i, p_T) := {(1 - t) p_i + t p_T : 0 ≤ t < 1}, i = 1, 2, 3,

with subtriangles of T as follows:

[p_1, p_T) ∈ T_2, [p_2, p_T) ∈ T_3, [p_3, p_T) ∈ T_1, (2)

and we somewhat arbitrarily assume p_T ∈ T_2.
Notation
We let N be the set of natural numbers and N 0 := N ∪ {0} the set of nonnegative integers. For a given degree d ∈ N 0 , the space of polynomials of total degree at most d will be denoted by P d . The Bernstein polynomials of degree d on T are given by
B d ijk (p) := B d ijk (β 1 , β 2 , β 3 ) := d! i!j!k! β i 1 β j 2 β k 3 , i, j, k ∈ N 0 , i+j +k = d, (3)
where p ∈ R 2 and β 1 , β 2 , β 3 , given by
p = β 1 p 1 + β 2 p 2 + β 3 p 3 , β 1 + β 2 + β 3 = 1, (4)
are the barycentric coordinates of p. The set
B d := {B d ijk : i, j, k ∈ N 0 , i + j + k = d} (5)
is a partition of unity basis for P d . The points
p d ijk := ip 1 + jp 2 + kp 3 d , i, j, k ∈ N 0 , i + j + k = d, (6)
are called the domain points of B^d relative to T. In this paper, we will order the cubic Bernstein polynomials by going counterclockwise around the boundary, starting at p_1 with B^3_300 and ending with B^3_111, see Figure 2:

{B_1, B_2, . . . , B_10} := {B^3_300, B^3_210, B^3_120, B^3_030, B^3_021, B^3_012, B^3_003, B^3_102, B^3_201, B^3_111}. (7)
The corresponding domain points (6), in the same order, are

{p*_1, . . . , p*_10} := { p_1, (2p_1 + p_2)/3, (p_1 + 2p_2)/3, p_2, (2p_2 + p_3)/3, (p_2 + 2p_3)/3, p_3, (2p_3 + p_1)/3, (p_3 + 2p_1)/3, p_T }. (8)
The partial derivatives of a bivariate function f = f(x_1, x_2) are denoted ∂^{1,0} f := ∂f/∂x_1 and ∂^{0,1} f := ∂f/∂x_2, and ∂_u f := (u_1 ∂^{1,0} + u_2 ∂^{0,1}) f is the derivative in the direction u := (u_1, u_2).
1 S (x) := 1, if x ∈ S, 0, otherwise.
By the association (2), we note that for any x ∈ T
1 T 1 (x) + 1 T 2 (x) + 1 T 3 (x) = 1 T (x). (9)
We write #K for the number of elements in a sequence K.
Bivariate simplex splines
In this section we recall some basic properties of simplex splines.
For n ∈ N, d ∈ N 0 , let m := n + d and k 1 , . . . , k m+1 ∈ R n be a sequence of points called knots. The multiplicity of a knot is the number of times it occurs in the sequence. Let σ = k 1 , . . . , k m+1 with vol m (σ) > 0 be a simplex in R m whose projection π : R m → R n onto the first n coordinates satisfies π(k i ) = k i , for i = 1, . . . , m + 1.
With [K] := [k 1 , . . . , k m+1 ], the unit integral simplex spline M [K] can be defined geometrically by
M[K] : R^n → R, M[K](x) := vol_{m-n}( σ ∩ π^{-1}(x) ) / vol_m(σ).
For properties of M [K] and proofs see for example [START_REF] Micchelli | On a numerically efficient method for computing multivariate B-splines[END_REF]. Here, we mention:
• If n = 1 then M [K]
is the univariate B-spline of degree d with knots K, normalized to have integral one.
• In general M [K] is a nonnegative piecewise polynomial of total degree d and support K .
• For d = 0 we have

M[K](x) := 1/vol_n(⟨K⟩) if x ∈ ⟨K⟩°, and 0 if x ∉ ⟨K⟩. (10)
• The value of M[K] on the boundary of ⟨K⟩ has to be dealt with separately, see below.
• If vol n ( K ) = 0 then M [K] can be defined either as identically zero or as a distribution.
We will deal with the bivariate case n = 2, and for our purpose it is convenient to work with area normalized simplex splines, [START_REF] Lyche | Stable Simplex Spline Bases for C 3 Quintics on the Powell-Sabin 12-Split[END_REF]. They are defined by Q[K](x) = 0 for all x ∈ R 2 if vol 2 ( K ) = 0, and otherwise
Q T [K] = Q[K] := vol 2 (T ) d+2 2 M [K], (11)
where T in general is some subset of R 2 , and in our case will be the triangle T := p 1 , p 2 , p 3 . The knot sequence is [p 1 , p 2 , p 3 , p T ] taken with multiplicities. Using properties of M [K] and [START_REF] Powell | Piecewise quadratic approximation on triangles[END_REF], we obtain the following for
Q[K]:
• It is a piecewise polynomial of degree d = #K -3 with support K
• knot lines are the lines in the complete graph of K
• local smoothness: across a knot line, Q[K] ∈ C^{d+1-µ}, where d is the degree and µ is the number of knots on that knot line, including multiplicities

• differentiation formula: ∂_u Q[K] = d Σ_{j=1}^{d+3} a_j Q[K \ k_j], for any u ∈ R^2 and any a_1, . . . , a_{d+3} such that Σ_j a_j k_j = u and Σ_j a_j = 0 (A-recurrence)

• recurrence relation: Q[K](x) = Σ_{j=1}^{d+3} b_j Q[K \ k_j](x), for any x ∈ R^2 and any b_1, . . . , b_{d+3} such that Σ_j b_j k_j = x and Σ_j b_j = 1 (B-recurrence)

• knot insertion formula: Q[K] = Σ_{j=1}^{d+3} c_j Q[K ∪ y \ k_j], for any y ∈ R^2 and any c_1, . . . , c_{d+3} such that Σ_j c_j k_j = y and Σ_j c_j = 1 (C-recurrence)
• degree zero: From (10) and (11) we obtain for d = 0

Q[K](x) := vol_2(T)/vol_2(⟨K⟩) if x ∈ ⟨K⟩°, and 0 if x ∉ ⟨K⟩. (12)
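To make the B-recurrence concrete, here is a minimal Python sketch (an illustration added in this rewrite, not code from the paper) that evaluates Q[K] at a generic point x, i.e. a point not on any knot line, by recursing down to the degree-zero case (12); it checks the result against the l = 0 case of (13) below, Q[p_1, p_1, p_2, p_2, p_3, p_3] = B^3_111 = 6 β_1 β_2 β_3:

import itertools
import numpy as np

def area2(a, b, c):
    # twice the signed area of the triangle <a, b, c>
    return (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])

def barycentric(x, a, b, c):
    # solve b1*a + b2*b + b3*c = x with b1 + b2 + b3 = 1
    A = np.array([[a[0], b[0], c[0]], [a[1], b[1], c[1]], [1.0, 1.0, 1.0]])
    return np.linalg.solve(A, np.array([x[0], x[1], 1.0]))

def Q(K, x, T, tol=1e-12):
    # area-normalized simplex spline Q_T[K](x) via the B-recurrence;
    # at degree 0, points on the boundary of the support count as outside
    if len(K) == 3:
        A = area2(*K)
        if abs(A) < tol:                       # degenerate support
            return 0.0
        bc = barycentric(x, *K)
        return abs(area2(*T)) / abs(A) if np.all(bc > tol) else 0.0
    # pick any three affinely independent knots and write x in their
    # barycentric coordinates; the remaining coefficients b_j are zero
    for idx in itertools.combinations(range(len(K)), 3):
        if abs(area2(K[idx[0]], K[idx[1]], K[idx[2]])) > tol:
            break
    else:                                      # all knots collinear
        return 0.0
    bc = barycentric(x, K[idx[0]], K[idx[1]], K[idx[2]])
    return sum(w * Q(K[:j] + K[j+1:], x, T) for w, j in zip(bc, idx))

p1, p2, p3 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
T = [p1, p2, p3]
x = (0.31, 0.27)                               # generic interior point
b1, b2, b3 = barycentric(x, p1, p2, p3)
assert abs(Q([p1, p1, p2, p2, p3, p3], x, T) - 6*b1*b2*b3) < 1e-10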
2 A simplex spline basis for the Clough-Tocher split
In this section we determine and study a basis of C^1 cubic simplex splines on the Clough-Tocher split on a triangle. For fixed x ∈ T we use the simplified notation

Q(i, j, k, l) := Q[p_1^{[i]}, p_2^{[j]}, p_3^{[k]}, p_T^{[l]}](x), i, j, k, l ∈ N_0, i + j + k + l ≥ 3,

where p_m^{[n]} denotes that p_m is repeated n times. When one of the integers i, j, k, l is zero we have

Lemma 1 For i, j, k, l ∈ N_0, i + j + k + l = d ≥ 0 and x ∈ T with barycentric coordinates β_1, β_2, β_3 we have

i = 0: Q(0, j+1, k+1, l+1) = d!/(j! k! l!) (β_2 - β_1)^j (β_3 - β_1)^k (3β_1)^l Q(0, 1, 1, 1),
j = 0: Q(i+1, 0, k+1, l+1) = d!/(i! k! l!) (β_1 - β_2)^i (β_3 - β_2)^k (3β_2)^l Q(1, 0, 1, 1),
k = 0: Q(i+1, j+1, 0, l+1) = d!/(i! j! l!) (β_1 - β_3)^i (β_2 - β_3)^j (3β_3)^l Q(1, 1, 0, 1),
l = 0: Q(i+1, j+1, k+1, 0) = d!/(i! j! k!) β_1^i β_2^j β_3^k Q(1, 1, 1, 0) = B^d_ijk(x), (13)

where the constant simplex splines are given by

Q(0, 1, 1, 1) = 3·1_{T_1}(x), Q(1, 0, 1, 1) = 3·1_{T_2}(x), Q(1, 1, 0, 1) = 3·1_{T_3}(x), Q(1, 1, 1, 0) = 1_T(x). (14)
Proof: Suppose i = 0. The first equation in (13) holds for d = 0. Suppose it holds for d - 1 and let j + k + l = d. Let β^{023}_m, m = 0, 2, 3 be the barycentric coordinates of x with respect to T_1 = ⟨p_0, p_2, p_3⟩, where p_0 := p_T. By the B-recurrence

Q(0, j+1, k+1, l+1) = β^{023}_2 Q(0, j, k+1, l+1) + β^{023}_3 Q(0, j+1, k, l+1) + β^{023}_0 Q(0, j+1, k+1, l).

It is easily shown that

β^{023}_2 = β_2 - β_1, β^{023}_3 = β_3 - β_1, β^{023}_0 = 3β_1.

Therefore, by the induction hypothesis

Q(0, j+1, k+1, l+1) = ( (d-1)! / (j! k! l!) ) (j + k + l) (β^{023}_2)^j (β^{023}_3)^k (β^{023}_0)^l Q(0, 1, 1, 1).

Since j + k + l = d we obtain the first equation in (13). The next two equations in (13) follow similarly using

β^{031}_1 = β_1 - β_2, β^{031}_3 = β_3 - β_2, β^{031}_0 = 3β_2, β^{012}_1 = β_1 - β_3, β^{012}_2 = β_2 - β_3, β^{012}_0 = 3β_3.

Using the B-recurrence repeatedly, we obtain the first equality for l = 0. The values of the constant simplex splines are a consequence of (12).
Remark 2 For i = 0 we note that the expression d!/(j! k! l!) (β_2 - β_1)^j (β_3 - β_1)^k (3β_1)^l in (13) is a Bernstein polynomial on T_1. Similar remarks hold for j, k = 0.

The set

C1 := { Q(i, j, k, l) ∈ S^1_3(Δ) : Q(i, j, k, l) ≠ 0 } (15)

of all nonzero simplex splines that can be used in a basis for S^1_3(Δ) contains precisely the following 13 simplex splines.
Lemma 3 We have

C1 = { Q(i, j, k, 0) : i, j, k ∈ N, i + j + k = 6 } ∪ { Q(2, 2, 1, 1), Q(1, 2, 2, 1), Q(2, 1, 2, 1) }.
Proof: For l = 0 it follows from Lemma 1 that Q(i, j, k, 0) ∈ S^1_3(Δ) for all i + j + k = 6. Consider next l = 1. By the local smoothness property, C^1 smoothness implies that each of i, j, k can be at most 2. But then Q(2, 2, 1, 1), Q(1, 2, 2, 1), Q(2, 1, 2, 1) are the only possibilities. Now if l = 2 then i + j + k = 4 implies that one of i, j, k must be at least 2 and we cannot have C^1 smoothness. Similarly l > 2 is not feasible.

Recall that S^1_3(Δ) is a linear space of dimension 12, [START_REF] Clough | Finite element stiffness matrices for analysis of plate bending[END_REF]. Thus, in order to obtain a possible basis for this space, we need to choose 12 of the 13 elements in C1. Since C1 contains the 10 cubic Bernstein polynomials we have to include at least two of Q(2, 2, 1, 1), Q(1, 2, 2, 1), Q(2, 1, 2, 1). We also want a symmetric basis and therefore we have to include all of them. But then one of the Bernstein polynomials has to be excluded. To see which one to exclude, we insert the knot p_3 = -p_1 - p_2 + 3p_T into Q(2, 2, 1, 1) and use the C-recurrence to obtain

Q(2, 2, 1, 1) = -Q(1, 2, 2, 1) - Q(2, 1, 2, 1) + 3 Q(2, 2, 2, 0),

or by (13)

Q(2, 2, 1, 1) + Q(1, 2, 2, 1) + Q(2, 1, 2, 1) = 3 B^3_111(x). (16)

Thus, in order to have symmetry and hopefully obtain 12 linearly independent functions, we see that B^3_111 is the one that should be excluded. We obtain the following simplex spline basis for S^1_3(Δ).
Theorem 4 (CTS-basis) The 12 simplex splines S_1, . . . , S_12, where

S_j(x) := B_j(x), with B_j given by (7), j = 1, . . . , 9,

S_10(x) := (1/3) Q(2, 2, 1, 1) = (B^3_210 - B^3_300) 1_{T_1} + (B^3_120 - B^3_030) 1_{T_2} + (B^3_111 - B^3_102 - B^3_012 + 2 B^3_003) 1_{T_3},

S_11(x) := (1/3) Q(1, 2, 2, 1) = (B^3_111 - B^3_210 - B^3_201 + 2 B^3_300) 1_{T_1} + (B^3_021 - B^3_030) 1_{T_2} + (B^3_012 - B^3_003) 1_{T_3},

S_12(x) := (1/3) Q(2, 1, 2, 1) = (B^3_201 - B^3_300) 1_{T_1} + (B^3_111 - B^3_120 - B^3_021 + 2 B^3_030) 1_{T_2} + (B^3_102 - B^3_003) 1_{T_3}, (17)

form a partition of unity basis for the space S^1_3(Δ) given by (1). This basis, which we call the CTS-basis, is the only symmetric simplex spline basis for S^1_3(Δ). On the boundary of T the functions S_10, S_11, S_12 have the value zero, while the elements of {S_1, S_2, . . . , S_9} reduce to zero, or to univariate Bernstein polynomials.
Proof: By Lemma 1, it follows that the Bernstein polynomials B 1 , . . . , B 9 are cubic simplex splines, and the previous discussion implies that the functions in (17), apart from scaling, are the only candidates for a symmetric simplex spline basis for S 1 3 ( ).
We can find the explicit form of S_10, S_11, S_12 using the B- or C-recurrence (see definitions at the end of Section 1). Consider the C-recurrence. Inserting p_1 twice and using p_1 = -p_2 - p_3 + 3p_T and (13) we find

Q(2, 2, 1, 1) = -Q(3, 1, 1, 1) - Q(3, 2, 0, 1) + 3 Q(3, 2, 1, 0)
 = Q(4, 0, 1, 1) + Q(4, 1, 0, 1) - 3 Q(4, 1, 1, 0) - Q(3, 2, 0, 1) + 3 Q(3, 2, 1, 0)
 = (β_1 - β_2)^3 Q(1, 0, 1, 1) + (β_1 - β_3)^3 Q(1, 1, 0, 1) - 3β_1^3 Q(1, 1, 1, 0) - 3(β_1 - β_3)^2 (β_2 - β_3) Q(1, 1, 0, 1) + 9β_1^2 β_2 Q(1, 1, 1, 0)
 = (β_1 - β_2)^3 Q(1, 0, 1, 1) + [(β_1 - β_3)^3 - 3(β_1 - β_3)^2 (β_2 - β_3)] Q(1, 1, 0, 1) + 3β_1^2 (3β_2 - β_1) Q(1, 1, 1, 0). (18)

Using (9) and Lemma 1, we can write 3 Q(1, 1, 1, 0) = Q(0, 1, 1, 1) + Q(1, 0, 1, 1) + Q(1, 1, 0, 1), so that

Q(2, 2, 1, 1) = β_1^2 (3β_2 - β_1) Q(0, 1, 1, 1) + [(β_1 - β_2)^3 + β_1^2 (3β_2 - β_1)] Q(1, 0, 1, 1) + [(β_1 - β_3)^2 (β_1 - 3β_2 + 2β_3) + β_1^2 (3β_2 - β_1)] Q(1, 1, 0, 1)
 = (3β_1^2 β_2 - β_1^3) Q(0, 1, 1, 1) + (3β_1 β_2^2 - β_2^3) Q(1, 0, 1, 1) + (6β_1 β_2 β_3 - 3β_1 β_3^2 - 3β_2 β_3^2 + 2β_3^3) Q(1, 1, 0, 1). (19)

By symmetry we obtain

Q(1, 2, 2, 1) = (6β_1 β_2 β_3 - 3β_1^2 β_2 - 3β_1^2 β_3 + 2β_1^3) Q(0, 1, 1, 1) + (3β_2^2 β_3 - β_2^3) Q(1, 0, 1, 1) + (3β_2 β_3^2 - β_3^3) Q(1, 1, 0, 1),
Q(2, 1, 2, 1) = (3β_1^2 β_3 - β_1^3) Q(0, 1, 1, 1) + (6β_1 β_2 β_3 - 3β_1 β_2^2 - 3β_2^2 β_3 + 2β_2^3) Q(1, 0, 1, 1) + (3β_1 β_3^2 - β_3^3) Q(1, 1, 0, 1). (20)

The formulas for S_10, S_11 and S_12 in (17) now follow from (19) and (20) using (3) and (14).
By the partition of unity for Bernstein polynomials we find
Σ_{j=1}^{12} S_j(x) = Σ_{i+j+k=3} B^3_ijk(x) = 1, x ∈ T.
It is well known that B 3 ijk reduces to univariate Bernstein polynomials or zero on the boundary of T .
Clearly S j ∈ C(R 2 ), j = 10, 11, 12, since no edge contains more than 4 knots. This follows from general properties of simplex splines. By the local support property they must therefore be zero on the boundary. It also follows that S j ∈ C 1 (T ), j = 10, 11, 12, since no interior knot line contains more than 3 knots.
It remains to show that the 12 functions S_j, j = 1, . . . , 12 are linearly independent on T. Suppose that Σ_{j=1}^{12} c_j S_j(x) = 0 for all x ∈ T and let (β_1, β_2, β_3) be the barycentric coordinates of x. On the edge ⟨p_1, p_2⟩, where β_3 = 0, the functions S_j, j = 5, . . . , 12 vanish, and thus

Σ_{j=1}^{12} c_j S_j(x) = c_1 B^3_300(x) + c_2 B^3_210(x) + c_3 B^3_120(x) + c_4 B^3_030(x) = 0.
On p 1 , p 2 this is a linear combination of linearly independent univariate Bernstein polynomials and we conclude that c 1 = c 2 = c 3 = c 4 = 0. Similarly c j = 0 for j = 5, . . . , 9. It remains to show that S 10 , S 11 and S 12 are linearly independent on T . For x ∈ T o 3 and β 3 = 0 we find
∂S_10/∂β_3 |_{β_3=0} = 6β_1 β_2 ≠ 0, ∂S_j/∂β_3 |_{β_3=0} = 0, j = 11, 12.
We deduce that c 10 = 0 and similarly c 11 = c 12 = 0 which concludes the proof.
In Figure 3 we show graphs of the functions S 10 , S 11 , S 12 .
Two Marsden identities and representation of polynomials
We give both a barycentric and a Cartesian Marsden-like identity.
Theorem 5 (Barycentric Marsden-like identity) For u := (u_1, u_2, u_3), β := (β_1, β_2, β_3) ∈ R^3 with β_i ≥ 0, i = 1, 2, 3 and β_1 + β_2 + β_3 = 1 we have

(β^T u)^3 = u_1^3 S_1(β) + u_1^2 u_2 S_2(β) + u_1 u_2^2 S_3(β) + u_2^3 S_4(β) + u_2^2 u_3 S_5(β) + u_2 u_3^2 S_6(β) + u_3^3 S_7(β) + u_1 u_3^2 S_8(β) + u_1^2 u_3 S_9(β) + u_1 u_2 u_3 ( S_10(β) + S_11(β) + S_12(β) ). (21)
Proof: By the multinomial expansion we obtain
(β_1 u_1 + β_2 u_2 + β_3 u_3)^3 = Σ_{i+j+k=3} (3!/(i! j! k!)) (β_1 u_1)^i (β_2 u_2)^j (β_3 u_3)^k = Σ_{i+j+k=3} u_1^i u_2^j u_3^k B^3_ijk(β).
Using B 3 111 = S 10 + S 11 + S 12 and the ordering in Theorem 4 we obtain (21).
Corollary 6 For l, m, n ∈ N 0 with l + m + n ≤ 3 we have an explicit representation for lower degree Bernstein polynomials in terms of the CTSbasis (17).
B^{l+m+n}_{lmn} = C(3, l+m+n)^{-1} [ C(3,l) C(0,m) C(0,n) S_1 + C(2,l) C(1,m) C(0,n) S_2 + C(1,l) C(2,m) C(0,n) S_3 + C(0,l) C(3,m) C(0,n) S_4 + C(0,l) C(2,m) C(1,n) S_5 + C(0,l) C(1,m) C(2,n) S_6 + C(0,l) C(0,m) C(3,n) S_7 + C(1,l) C(0,m) C(2,n) S_8 + C(2,l) C(0,m) C(1,n) S_9 + C(1,l) C(1,m) C(1,n) ( S_10 + S_11 + S_12 ) ], (22)

where C(r, s) denotes the binomial coefficient, with C(0, 0) := 1 and C(r, s) := 0 if s > r.
Proof: Differentiating, for any d ∈ N_0, (β_1 u_1 + β_2 u_2 + β_3 u_3)^d a total of l, m, n times with respect to u_1, u_2, u_3, respectively, and setting u_1 = u_2 = u_3 = 1, we find

d!/(d - l - m - n)! β_1^l β_2^m β_3^n = Σ_{i+j+k=d} i(i-1)···(i-l+1) j···(j-m+1) k···(k-n+1) B^d_ijk,

and by a rescaling

B^{l+m+n}_{lmn} = C(d, l+m+n)^{-1} Σ_{i+j+k=d} C(i,l) C(j,m) C(k,n) B^d_ijk, l + m + n ≤ d. (23)

Using (23) with d = 3, we obtain (22).
As an example, we find
B^1_100 = (1/3) ( 3S_1 + 2S_2 + S_3 + S_8 + 2S_9 + S_10 + S_11 + S_12 ).
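This identity is easy to verify numerically; the following short check (reusing B3 and S10_S11_S12 from the Python sketch after Theorem 4, and recalling from the ordering (7) that S_1, S_2, S_3 = B^3_300, B^3_210, B^3_120 and S_8, S_9 = B^3_102, B^3_201) confirms it at random points:

import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    b = rng.dirichlet([1.0, 1.0, 1.0])
    rhs = (3*B3(3,0,0,b) + 2*B3(2,1,0,b) + B3(1,2,0,b)
           + B3(1,0,2,b) + 2*B3(2,0,1,b) + sum(S10_S11_S12(b))) / 3
    assert abs(rhs - b[0]) < 1e-12             # B^1_100 = beta_1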
Theorem 7 (Cartesian Marsden-like identity) We have
(1 + x^T v)^3 = Σ_{j=1}^{12} ψ_j(v) S_j(x), x ∈ T, v ∈ R^2, (24)

where the dual polynomials in Cartesian form are given by

ψ_j(v) := Π_{l=1}^{3} (1 + d_{j,l}^T v), j = 1, . . . , 12, v ∈ R^2. (25)
Here the dual points d_j := [d_{j,1}, d_{j,2}, d_{j,3}] are given as follows:

d_1 = (p_1, p_1, p_1), d_2 = (p_1, p_1, p_2), d_3 = (p_1, p_2, p_2), d_4 = (p_2, p_2, p_2), d_5 = (p_2, p_2, p_3), d_6 = (p_2, p_3, p_3), d_7 = (p_3, p_3, p_3), d_8 = (p_1, p_3, p_3), d_9 = (p_1, p_1, p_3), d_10 = d_11 = d_12 = (p_1, p_2, p_3). (26)
The domain points p*_j in (8) are the coefficients of x in terms of the CTS-basis:

x = Σ_{j=1}^{12} p*_j S_j(x), (27)

where p*_10 = p*_11 = p*_12 = p_T.
Proof: We apply (21) with β 1 , β 2 , β 3 the barycentric coordinates of x and
u_i = 1 + p_i^T v, i = 1, 2, 3. Then

β_1 u_1 + β_2 u_2 + β_3 u_3 = β_1 + β_2 + β_3 + β_1 p_1^T v + β_2 p_2^T v + β_3 p_3^T v = 1 + x^T v,
and ( 24), ( 25), (26) follow from (21). Taking partial derivatives in (24) with respect to v,
(∂_{v_1}, ∂_{v_2}) (1 + x^T v)^3 = 3x (1 + x^T v)^2 = Σ_{j=1}^{12} (∂_{v_1}, ∂_{v_2}) ψ_j(v) S_j(x),

where

(∂_{v_1}, ∂_{v_2}) ψ_j(v) := d_{j,1} (1 + d_{j,2}^T v)(1 + d_{j,3}^T v) + d_{j,2} (1 + d_{j,1}^T v)(1 + d_{j,3}^T v) + d_{j,3} (1 + d_{j,1}^T v)(1 + d_{j,2}^T v).
Setting v = 0 we obtain (27). Note that the domain point p T for B 3 111 has become a triple domain point for the CTS-basis.
Following the proof of (27) we can give explicit representations of all the monomials x r y s spanning P 3 . We do not give details here.
Three quasi-interpolants
We consider three quasi-interpolants on S 1 3 ( ). They all use functionals based on point evaluations and the third one will be used to estimate the L ∞ condition number of the CTS-basis.
To start, we consider the following polynomial interpolation problem on T . Find g ∈ P 3 such that g(p * i ) = f i , where f := [f 1 , . . . , f 10 ] T is a vector of given real numbers and the p * i given by ( 8) are the domain points for the cubic Bernstein basis.
Using the ordering (7), we write g in the form g = Σ_{j=1}^{10} c_j B_j and obtain the linear system

Σ_{j=1}^{10} c_j B_j(p*_i) = f_i, i = 1, . . . , 10, or in matrix form Ac = f, for the unknown coefficient vector c := [c_1, . . . , c_10]^T. Since B_10(p*_i) = B^3_111(p*_i) = 0 for i = 1, . . . , 9, the coefficient matrix A is block triangular:

A = [ A_1 0 ; A_2 A_3 ], (28)
and if A_1 and A_3 are nonsingular then

A^{-1} = [ A_1^{-1} 0 ; -A_3^{-1} A_2 A_1^{-1} A_3^{-1} ] = [ B_1 0 ; B_2 B_3 ]. (29)
Using the barycentric form of the domain points in (8) we find

A_2 = [1, 3, 3, 1, 3, 3, 1, 3, 3]/27, A_3 = B^3_111(1/3, 1/3, 1/3) = 2/9,

A_1 := (1/27) ×
[ 27  0  0  0  0  0  0  0  0
   8 12  6  1  0  0  0  0  0
   1  6 12  8  0  0  0  0  0
   0  0  0 27  0  0  0  0  0
   0  0  0  8 12  6  1  0  0
   0  0  0  1  6 12  8  0  0
   0  0  0  0  0  0 27  0  0
   1  0  0  0  0  0  8 12  6
   8  0  0  0  0  0  1  6 12 ] ∈ R^{9×9} (30)

and

B_1 := A_1^{-1} = (1/6) ×
[  6   0   0   0   0   0   0   0   0
  -5  18  -9   2   0   0   0   0   0
   2  -9  18  -5   0   0   0   0   0
   0   0   0   6   0   0   0   0   0
   0   0   0  -5  18  -9   2   0   0
   0   0   0   2  -9  18  -5   0   0
   0   0   0   0   0   0   6   0   0
   2   0   0   0   0   0  -5  18  -9
  -5   0   0   0   0   0   2  -9  18 ],

B_3 = [9/2], B_2 := -B_3 A_2 B_1 = (1/12) [4, -9, -9, 4, -9, -9, 4, -9, -9]. (31)
The solution of the interpolation problem can then be written

g = Σ_{i=1}^{10} λ^P_i(f) B_i, λ^P_i(f) := Σ_{j=1}^{10} α_{i,j} f(p*_j), (32)
where the matrix α := A -1 has elements α i,j in row i and column j, i, j = 1, . . . , 10. We have
λ^P_i(B_j) = Σ_{k=1}^{10} α_{i,k} B_j(p*_k) = Σ_{k=1}^{10} α_{i,k} a_{k,j} = δ_{i,j}, i, j = 1, . . . , 10.
It follows that QI P (g) = g for all g ∈ P 3 . Since B j = S j , j = 1, . . . , 9 and B 10 = B 3 111 = S 10 + S 11 + S 12 the quasi-interpolant
QI_P : C(T) → S^1_3(Δ), QI_P(f) := Σ_{i=1}^{12} λ^P_i(f) S_i, λ^P_11 = λ^P_12 = λ^P_10, (33)
where λ P i (f ) is given by (32), i = 1, . . . , 10, reproduces P 3 . Moreover, since for any f ∈ C(T ) and x ∈ T
|QI_P(f)(x)| ≤ ( max_{1≤i≤12} |λ^P_i(f)| ) Σ_{i=1}^{12} S_i(x) = max_{1≤i≤10} |λ^P_i(f)|,

we obtain

||QI_P(f)||_{L∞(T)} ≤ ||α||_∞ ||f||_{L∞(T)} = 10 ||f||_{L∞(T)},
independently of the geometry of T . Using the construction in [START_REF] Lyche | Stable Simplex Spline Bases for C 3 Quintics on the Powell-Sabin 12-Split[END_REF], we can derive another quasi-interpolant which also reproduces P 3 . It uses more points, but has a slightly smaller norm. Consider the map P : C(T ) → S 1 3 (T ) defined by P (f ) = 12 ℓ=1 M ℓ (f )S ℓ , where
M_ℓ(f) := (1/6) [ f(d_{ℓ,1}) + f(d_{ℓ,2}) + f(d_{ℓ,3}) ] + (9/2) f(p*_ℓ) - (4/3) [ f( (d_{ℓ,1} + d_{ℓ,2})/2 ) + f( (d_{ℓ,1} + d_{ℓ,3})/2 ) + f( (d_{ℓ,2} + d_{ℓ,3})/2 ) ].
Here the d ℓ,m are the dual points given by (26) and the p * ℓ are the domain points given by (27). Note that this is an affine combination of function values of f .
We have tested the convergence of the quasi-interpolant, sampling data from the function f(x, y) = e^{2x+y} + 5x + 7y on the triangle A = [0, 0], B = h·[1, 0], C = h·[0.2, 1.2] for h ∈ {0.05, 0.04, 0.03, 0.02, 0.01}. The following table indicates that the error ||f - P(f)||_{L∞(T)} is O(h^4):

h          0.05    0.04    0.03    0.02    0.01
error/h^4  0.0550  0.0547  0.0543  0.0540  0.0537

Using a standard argument, the following Proposition shows that the error is indeed O(h^4) for sufficiently smooth functions.
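As a sanity check of the cubic reproduction that drives this O(h^4) rate, the following self-contained Python sketch (an illustration added in this rewrite, not code from the paper) evaluates the boundary functionals M_ℓ, ℓ = 1, . . . , 9, on the cubic Bernstein polynomials and verifies the duality M_ℓ(B^3_ijk) = δ; the interior functional ℓ = 10 is omitted here:

import numpy as np
from math import factorial

P = {1: np.array([0.0, 0.0]), 2: np.array([1.0, 0.0]), 3: np.array([0.2, 1.2])}

def bernstein(ijk, x):
    # cubic Bernstein B^3_ijk at the Cartesian point x
    A = np.array([[P[1][0], P[2][0], P[3][0]],
                  [P[1][1], P[2][1], P[3][1]],
                  [1.0, 1.0, 1.0]])
    b = np.linalg.solve(A, np.array([x[0], x[1], 1.0]))
    i, j, k = ijk
    c = factorial(3) // (factorial(i) * factorial(j) * factorial(k))
    return c * b[0]**i * b[1]**j * b[2]**k

# multiplicities in the ordering (7); the dual points (26) repeat each
# vertex by its multiplicity, and the domain points are given by (8)
mults = [(3,0,0), (2,1,0), (1,2,0), (0,3,0), (0,2,1),
         (0,1,2), (0,0,3), (1,0,2), (2,0,1)]
duals = [sum(([P[m]] * n for m, n in zip((1, 2, 3), ijk)), [])
         for ijk in mults]
domain = [(i*P[1] + j*P[2] + k*P[3]) / 3.0 for (i, j, k) in mults]

def M(ell, f):
    d, p = duals[ell], domain[ell]
    mids = [(d[0]+d[1])/2, (d[0]+d[2])/2, (d[1]+d[2])/2]
    return sum(f(q) for q in d)/6 + 4.5*f(p) - (4/3)*sum(f(q) for q in mids)

all_ijk = [(i, j, 3-i-j) for i in range(4) for j in range(4-i)]
for ell in range(9):
    for ijk in all_ijk:
        expect = 1.0 if ijk == mults[ell] else 0.0
        assert abs(M(ell, lambda x: bernstein(ijk, x)) - expect) < 1e-12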
Proposition 8
The operator P is a quasi-interpolant that reproduces P 3 . For any f ∈ C(T )
||P(f)||_{L∞(T)} ≤ 9 ||f||_{L∞(T)}, (34)

independently of the geometry of T. Moreover,

||f - P(f)||_{L∞(T)} ≤ 10 inf_{g ∈ P_3} ||f - g||_{L∞(T)}. (35)
Proof: Since d_10 = d_11 = d_12 and B^3_111 = S_10 + S_11 + S_12, while B^3_ijk = S_ℓ for (i, j, k) ≠ (1, 1, 1) and some ℓ, we obtain

P(f) = Σ_{i+j+k=3} M̃_ijk(f) B^3_ijk,

where M̃_ijk = M_ℓ for (i, j, k) ≠ (1, 1, 1) and corresponding ℓ, and M̃_111 = 3 M_10.

To prove that P reproduces polynomials up to degree 3, i.e., P(B^3_ijk) = B^3_ijk whenever i + j + k = 3, it is sufficient to prove the result for B^3_300, B^3_210 and B^3_111; the remaining cases follow by symmetry. Evaluating at the points entering the functionals (the vertices, the domain points p*_ℓ, and midpoints such as (p_2 + p_3)/2), it is easy to compute that M̃_300(B^3_300) = 1, M̃_300(B^3_ijk) = 0 for (i, j, k) ≠ (3, 0, 0), M̃_210(B^3_210) = 1, M̃_210(B^3_ijk) = 0 for (i, j, k) ≠ (2, 1, 0), M̃_111(B^3_111) = 1, M̃_111(B^3_ijk) = 0 for (i, j, k) ≠ (1, 1, 1).

Therefore, by a standard argument, P is a quasi-interpolant that reproduces P_3. Since the sum of the absolute values of the coefficients defining M_ℓ(f) is equal to 9, another standard argument shows (34) and (35).
The operators QI_P and P do not reproduce the whole spline space S^1_3(Δ). Indeed, since λ^P_10(B_10) = M_10(B_10) = 1, we have λ^P_10(S_j) = M_10(S_j) = 1/3, j = 10, 11, 12.
To give un upper bound for the condition number of the CTS-basis we need a quasi-interpolant which reproduces the whole spline space. We again use the inverse of the coefficient matrix of an interpolation problem to construct such an operator. We need 12 interpolation points and a natural choice is to use the first 9 cubic Bernstein domain points p * j , j = 1, . . . , 9 and split the barycenter p * 10 = p T into three points. After some experimentation we redefine p * 10 and choose p * 10 := (3, 3, 1)/7, p * 11 := (3, 1, 3)/7 and p * 12 := (1, 3, 3)/7. The problem is to find s = 12 j=1 c j S j such that s(p * i ) = f i , i = 1, . . . , 12. The coefficient matrix for this problem has again the block tridiagonal form (28), where A 1 ∈ R 9×9 and B 1 := A -1 1 are given by ( 30) and (31) as before. Moreover, using the formulas in Theorem 4 we find
A_3 = [S_j(p*_i)]_{i,j = 10, 11, 12}, and

α^S := A^{-1} = [ B_1 0 ; B_2 B_3 ],

where B_2 = -B_3 A_2 B_1. It follows that the quasi-interpolant QI given by
QI : C(T) → S^1_3(Δ), QI(f) := Σ_{i=1}^{12} λ^S_i(f) S_i, λ^S_i(f) = Σ_{j=1}^{12} α^S_{i,j} f(p*_j), (37)

is a projector onto the spline space S^1_3(Δ). In particular

s := Σ_{i=1}^{12} c_i S_i ⟹ c_i = λ^S_i(s), i = 1, . . . , 12. (38)
The quasi-interpolant (37) can be used to show the L ∞ stability of the CTS-basis. For this we prove that the condition number is independent of the geometry of the triangle.
We define the ∞-norm condition number of the CTS-basis on T by
κ_∞(T) := ( max_{c ≠ 0} ||b^T c||_{L∞(T)} / ||c||_∞ ) · ( max_{c ≠ 0} ||c||_∞ / ||b^T c||_{L∞(T)} ),

where b^T c := Σ_{j=1}^{12} c_j S_j ∈ S^1_3(Δ). Then by (38),

|c_i| = |λ^S_i(b^T c)| ≤ ||α^S||_∞ ||b^T c||_{L∞(T)}. Therefore,

||c||_∞ / ||b^T c||_{L∞(T)} ≤ ||α^S||_∞ = 27 - 32/405,

and the upper bound κ_∞ < 27 follows.
5 C^0 and C^1 continuity
In the following, we derive conditions to ensure C 0 and C 1 continuity through an edge between two triangles. The conditions are very similar to the classical conditions for continuity of Bernstein polynomials.
Theorem 10
Let s_1 = Σ_{j=1}^{12} c_j S_j and s_2 = Σ_{j=1}^{12} d_j S̃_j be defined on the triangles T := ⟨p_1, p_2, p_3⟩ and T̃ := ⟨p_1, p_2, p̃_3⟩, respectively, see Figure 4. The function

s = { s_1 on T ; s_2 on T̃ }

is continuous on T ∪ T̃ if

d_1 = c_1, d_2 = c_2, d_3 = c_3, d_4 = c_4. (39)

Moreover, s ∈ C^1(T ∪ T̃) if in addition to (39) we have

d_5 = γ_1 c_3 + γ_2 c_4 + γ_3 c_5, d_9 = γ_1 c_1 + γ_2 c_2 + γ_3 c_9, d_10 = γ_1 c_2 + γ_2 c_3 + γ_3 c_10, (40)

where γ_1, γ_2, γ_3 are the barycentric coordinates of p̃_3 with respect to T.

Suppose next (39) holds and s ∈ C^1(T ∪ T̃). By the continuity property we see that S_j, j = 6, 7, 8, 11, 12 are zero and have zero cross boundary derivatives on ⟨p_1, p_2⟩ since they have at most 3 knots on that edge. We take derivatives in the direction u := p̃_3 - p_1 using the A-recurrence (defined at the end of Section 1) with a := (γ_1 - 1, γ_2, γ_3, 0) for s_1 and ã := (-1, 0, 1, 0)
The Hermite basis
The classical Hermite interpolation problem on the Clough-Tocher split is to interpolate values and gradients at vertices and normal derivatives at the midpoint of edges, see Figure 1. These interpolation conditions can be described by the linear functionals
ρ(f ) = [ρ 1 (f ), . . . , ρ 12 (f )] T := [f (p 1 ), ∂ 1,0 f (p 1 ), ∂ 0,1 f (p 1 ), f (p 2 ), ∂ 1,0 f (p 2 ), ∂ 0,1 f (p 2 ), f (p 3 ), ∂ 1,0 f (p 3 ), ∂ 0,1 f (p 3 ), ∂ n 1 f (p 5 ), ∂ n 2 f (p 6 ), ∂ n 3 f (p 4 )] T ,
where p 4 , p 5 , p 6 , are the midpoints on the edges p 1 , p 2 , p 2 , p 3 , p 3 , p 1 , respectively, and ∂ n j f is the derivative in the direction of the unit normal to that edge in the direction towards p j . We let p j = (x j , y j ) be the coordinates of each point. The coefficient vector c := [c 1 , . . . , c 12 ] T of the interpolant g := 12 j=1 c j S j is solution of the linear system Ac = ρ(f ), where A ∈ R 12×12 with a i,j := ρ i (S j ).
Let H 1 , . . . , H 12 be the Hermite basis for S 1 3 ( ) defined by ρ i (H j ) = δ i,j . The matrix A transforms the Hermite basis to the CTS-basis. Since a basis transformation matrix is always nonsingular, we have
[S 1 , . . . , S 12 ] = [H 1 , . . . , H 12 ]A, [H 1 , . . . , H 12 ] = [S 1 , . . . , S 12 ]A -1 . (44)
To find the elements ρ i (S j ) of A we define for i, j, k = 1, 2, 3
ν_ij := ||p_ij||_2, p_ij := p_i - p_j, x_ij := x_i - x_j, y_ij := y_i - y_j, ν_ijk := (p_{i,j}^T p_{j,k}) / ν_ij for i ≠ j, δ := det [ 1 1 1 ; x_1 x_2 x_3 ; y_1 y_2 y_3 ]. (45)
We note that ν ijk is the length of the projection of p j,k in the direction of p i,j and that δ is twice the signed area of T . By the definition of the unit normals and the chain rule for j = 1, . . . , 12 we find ∂ 1,0 S j = (y 23 ∂ β 1 S j + y 31 ∂ β 2 S j + y 12 ∂ β 3 S j )/δ, ∂ 0,1 S j = (x 32 ∂ β 1 S j + x 13 ∂ β 2 S j + x 21 ∂ β 3 S j )/δ, ∂ n 1 S j = (y 23 ∂ 1,0 S j + x 32 ∂ 0,1 S j )/ν 32 = (ν 32 ∂ β 1 S j + ν 231 ∂ β 2 S j + ν 321 ∂ β 3 S j )/δ, ∂ n 2 S j = (y 31 ∂ 1,0 S j + x 13 ∂ 0,1 S j )/ν 31 = (ν 132 ∂ β 1 S j + ν 31 ∂ β 2 S j + ν 312 ∂ β 3 S j )/δ, ∂ n 3 S j = (y 12 ∂ 1,0 S j + x 21 ∂ 0,1 S j )/ν 21 = (ν 123 ∂ β 1 S j + ν 213 ∂ β 2 S j + ν 21 ∂ β 3 S j )/δ. This leads to
A := A 1 0 A 2 A 3 , with A 1 ∈ R 9×9 , A 2 ∈ R 3×9 , A 3 ∈ R 3×3 ,
where We find
A 1 := 3 δ
A -1 := B 1 0 B 2 B 3 = [b i,j ] 12 i,j=1 ,
where
B 1 := A -1 1 = 1 3
3 0 0 0 0 0 0 0 0 3 x 21 y 21 0 0 0 0 0 0 0 0 0 3 x 12 y 12 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 3 x 32 y 32 0 0 0 0 0 0 0 0 0 3 x 23 y 23 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 3 x 13 y 13 3 x 31 y 31 0 0 0 0 0 0
∈ R 9×9 ,
• a differentiation formula
• a stable recurrence relation
• a knot insertion formula
• they constitute a nonnegative partition of unity
• simple explicit dual functionals
• L∞ stability
• simple conditions for C^1 and C^2 joins to neighboring triangles
• well conditioned collocation matrices for Lagrange and Hermite interpolation using certain sites.
Figure 1: The PS12-split (left) and the CT-split (right). The C^1 quadratics on PS-12 and C^1 cubics on CT have the same degrees of freedom as indicated.
Figure 2: The cubic Bernstein basis (left) and the CTS-basis (right), where B^3_111 is replaced by S_10, S_11, S_12.
Figure 3: The CTS-basis functions S_10, S_11, S_12 on the triangle ⟨(0, 0), (1, 0), (0, 1)⟩.
Theorem 9 For any triangle T we have κ_∞(T) < 27.

Proof: Since the S_j form a nonnegative partition of unity it follows that max_{c ≠ 0} ||b^T c||_{L∞(T)} / ||c||_∞ = 1. If s = Σ_{j=1}^{12} c_j S_j = b^T c, then by (38), |c_i| = |λ^S_i(s)| ≤ ||α^S||_∞ ||b^T c||_{L∞(T)}, so ||c||_∞ / ||b^T c||_{L∞(T)} ≤ ||α^S||_∞ = 27 - 32/405, and the upper bound κ_∞ < 27 follows.
Figure 4: C^1-continuity and components
Figure 5: C^1 smoothness
Figure 6: The Hermite basis functions H_1, H_2, H_3, H_10 on the unit triangle.
A_2(1) := (3/(4δ)) [0, 0, ν_32, ν_231, ν_231 - ν_32, ν_321 - ν_32, ν_321, ν_32, 0],
A_2(2) := (3/(4δ)) [ν_132, ν_31, 0, 0, 0, ν_31, ν_312, ν_312 - ν_31, ν_132 - ν_31],
A_2(3) := (3/(4δ)) [ν_123, ν_123 - ν_21, ν_213 - ν_21, ν_213, ν_21, 0, 0, 0, ν_21],
with B_3 ∈ R^{3×3}, and the rows of B_2 = -B_3 A_2 B_1 ∈ R^{3×9} are given by B_2(1), B_2(2), B_2(3) below.
Figure 7: The triangulation and the C^1 surface
Figure 8: A C^1 Hermite interpolating surface on the triangulation
B_2(1) := (1/(6ν_21)) [-6ν_123, x_12 ν_123 + ν_21 x_23, y_12 ν_123 + ν_21 y_23, -6ν_213, x_21 ν_213 + ν_21 x_13, y_21 ν_213 + ν_21 y_13, 0, 0, 0],
B_2(2) := (1/(6ν_32)) [0, 0, 0, -6ν_231, x_23 ν_231 + ν_32 x_31, y_23 ν_231 + ν_32 y_31, -6ν_321, x_23 ν_231 + ν_32 x_21 + ν_32 x_23, y_23 ν_231 + ν_32 y_21 + ν_32 y_23],
B_2(3) := (1/(6ν_31)) [6ν_132, x_13 ν_132 + ν_31 x_32, y_13 ν_132 + ν_31 y_32, 0, 0, 0, -6ν_312, x_31 ν_312 + ν_31 x_12, y_31 ν_312 + ν_31 y_12].
for s 2 . We find with x ∈ p 1 , p 2
The last equality follows from (13) since β 3 = 0 on p 1 , p 2 so that = 3B 2 110 (x). Consider next Sj . By the same argument as for S j , we see that Sj , j = 6, 7, 8, 11, 12 are zero and have zero cross boundary derivatives on p 1 , p 2 . We find for x ∈ p 1 , p 2
We note that on p 1 , p 2 , the polynomials B 2 101 , B2 101 , B 2 011 , B2 011 vanish and
As an example, on the unit triangle (p 1 , p 2 , p 3 ) = ((0, 0), (1, 0), (0, 1)) we find
Some of the Hermite basis functions are shown in Figure 6.
We have also tested the convergence of the Hermite interpolant, sampling again data from the function f (x, y) = e 2x+y + 5x + 7y on the triangle
Examples
Several examples have been considered for scattered data on the CT-split, see for example [START_REF] Farin | A modified Clough-Tocher interpolant[END_REF][START_REF] Mann | Cubic precision Clough-Tocher interpolation[END_REF]. Here, we consider a triangulation with vertices p 1 = (0, 0), p 2 = (1, 0), p 3 = (3/2, 1/2), p 4 = (-1/2, 1), p 5 = (1/4, 3/4), p 6 = (3/2, 3/2), p 7 = (1/2, 2) and triangles T 1 := p 1 , p 2 , p 5 , T 2 := p 2 , p 3 , p 5 , T 3 := p 4 , p 1 , p 5 , T 4 := p 3 , p 6 , p 5 , T 5 := p 6 , p 4 , p 5 , T 6 := p 4 , p 6 , p 7 ,. We divide each of the 6 triangles into 3 subtriangles using the Clough-Tocher split. We then obtain a space of C 1 piecewise polynomials of dimension 3V + E = 3 × 7 + 12 = 33, where V is the number of vertices and E the number of edges in the triangulation. We can represent a function s in this space by either using the Hermite basis or using CTS-splines on each of the triangles and enforcing the C 1 continuity conditions. The function s on T 1 depends on 12 components, while the C 1 -continuity through the edges gives only 5 free components for T 2 ,T 3 and T 4 . Closing the 1-cell at p 5 gives only one free component for T 5 and 5 free components for T 6 , Figure 7 left.
In the following graph, Figure 7, right, once the 12 first components on T_1 were chosen, the other free ones are set to zero. Then, in Figure 8, we have plotted the Hermite interpolant of the function f(x, y) = e^{2x+y} + 5x + 7y and gradients using the CTS-splines.
https://hal.science/hal-01767456/file/1610.09377.pdf

Joel C Roediger
Laura Ferrarese
Patrick Côté
Lauren A Macarthur
Rúben Sánchez-Janssen
John P Blakeslee
Eric W Peng
Chengze Liu
Roberto Muñoz
Jean-Charles Cuillandre
Stephen Gwyn
Simona Mei
Samuel Boissier
Alessandro Boselli
Michele Cantiello
Stéphane Courteau
Pierre-Alain Duc
Ariane Lançon
J Christopher Mihos
Thomas H Puzia
James E Taylor
Patrick R Durrell
Elisa Toloba
Puragra Guhathakurta
Hongxin Zhang
The Next Generation Virgo Cluster Survey
Keywords: galaxies: clusters: individual (Virgo), galaxies: dwarf, galaxies: evolution, galaxies: nuclei, galaxies: star clusters: general, galaxies: stellar content
INTRODUCTION
Despite the complexities of structure formation in a ΛCDM Universe, galaxies are well-regulated systems. Strong evidence supporting this statement are the many fundamental relations to which galaxies adhere: star formation rate versus stellar mass or gas density (Daddi et al. 2007; Elbaz et al. 2007; Noeske et al. 2007; Kennicutt & Evans 2012), rotational velocity versus luminosity or baryonic mass for disks (Courteau et al. 2007; McGaugh 2012; Lelli et al. 2016), the fundamental plane for spheroids (Bernardi et al. 2003; Zaritsky et al. 2012), and the mass of a central compact object versus galaxy mass (Ferrarese et al. 2006; Wehner & Harris 2006; Beifiori et al. 2012; Kormendy & Ho 2013), to name several. Moreover, many of these relations are preserved within galaxy groups and clusters, demonstrating that such regulation is maintained in all environments (e.g. Blanton & Moustakas 2009). This paper focusses on the relationship between color and luminosity for quiescent ["quenched"] galaxies: the so-called red sequence [RS].
First identified by de Vaucouleurs (1961) and Visvanathan & Sandage (1977), the RS represents one side of the broader phenomenon of galaxy color bimodality (Strateva et al. 2001;Blanton et al. 2003;Baldry et al. 2004;Balogh et al. 2004;Driver et al. 2006;Cassata et al. 2008;Taylor et al. 2015), the other half being the blue cloud, with the green valley separating them. Based on the idea of passively evolving stellar pop-ulations, color bimodality is widely interpreted as an evolutionary sequence where galaxies transform their cold gas into stars within the blue cloud and move to the RS after star formation ends (e.g. Faber et al. 2007). This evolution has been partly observed through the increase of mass density along the RS towards low redshift (Bell et al. 2004;Kriek et al. 2008;Pozzetti et al. 2010), although the underlying physics of quenching remains a matter of active research. The standard view of color bimodality is a bit simplistic though insofar as the evolution does not strictly proceed in one direction; a fraction of galaxies in the RS or green valley have their stellar populations temporarily rejuvenated by replenishment of their cold gas reservoirs (Schawinski et al. 2014).
Crucial to our understanding of the RS is knowing when and how it formed. The downsizing phenomenon uncovered by spectroscopic analyses of nearby early-type galaxies (ETGs; Nelan et al. 2005;Thomas et al. 2005;Choi et al. 2014) implies that the RS was built over an extended period of time [∼5 Gyr], beginning with the most massive systems (e.g. Tanaka et al. 2005). These results support the common interpretation that the slope of the RS is caused by a decline in the metallicity [foremost] and age of the constituent stellar populations towards lower galaxy masses (Kodama & Arimoto 1997;Ferreras et al. 1999;Terlevich et al. 1999;Poggianti et al. 2001;De Lucia et al. 2007). Efforts to directly detect the formation of the RS have observed color bimodality to z ∼ 2 (Bell et al. 2004;Willmer et al. 2006;Cassata et al. 2008). More recently, legacy surveys such as GOODS, COSMOS, NEWFIRM, and UltraVISTA have shown that massive quiescent galaxies [M * 3 × 10 10 M ] begin to appear as early as z = 4 (Fontana et al. 2009;Muzzin et al. 2013;Marchesini et al. 2014) and finish assembling by z = 1-2 (Ilbert et al. 2010;Brammer et al. 2011). Growth in the stellar mass density of quiescent galaxies since z = 1, on the other hand, has occured at mass scales of M * and lower (Faber et al. 2007), consistent with downsizing.
Owing to their richness, concentration, and uniform member distances, galaxy clusters are an advantageous environment for studying the RS. Moreover, their characteristically high densities likely promote quenching and therefore hasten the transition of galaxies to the RS. In terms of formation, the RS has been identified in [proto-]clusters up to z ∼ 2 (Muzzin et al. 2009;Wilson et al. 2009;Gobat et al. 2011;Spitler et al. 2012;Stanford et al. 2012;Strazzullo et al. 2013;Cerulo et al. 2016). Much of the interest in z > 0 clusters has focussed on the growth of the faint end of the RS. Whereas scant evidence has been found for evolution of either the slope or scatter of the RS (Ellis et al. 1997;Gladders et al. 1998;Stanford et al. 1998;Blakeslee et al. 2003;Holden et al. 2004;Lidman et al. 2008;Mei et al. 2009;Papovich et al. 2010, but see Hao et al. 2009 andHilton et al. 2009), several groups have claimed an elevated ratio of bright-to-faint RS galaxies in clusters up to z = 0.8, relative to local measurements (Smail et al. 1998;De Lucia et al. 2007;Stott et al. 2007;Gilbank et al. 2008;Hilton et al. 2009;Rudnick et al. 2009, see also Boselli & Gavazzi 2014 and references therein). The increase in this ratio with redshift indicates that low-mass galaxies populate the RS at later times than high-mass systems, meaning that the former, on average, take longer to quench and/or are depleted via mergers/stripping at early epochs. These results are not without controversy, however, with some arguing that the inferred evolution may be the result of selection bias, small samples, or not enough redshift baseline (Crawford et al. 2009;Lu et al. 2009;De Propris et al. 2013;Andreon et al. 2014;Romeo et al. 2015;Cerulo et al. 2016).
As a tracer of star formation activity and stellar populations, colors also are a key metric for testing galaxy formation models. Until recently, only semi-analytic models [SAMs] had sufficient statistics to enable meaningful comparisons to data from large surveys. Initial efforts indicated that the fraction of red galaxies was too high in models, and thus quenching too efficient, which led to suggestions that re-accretion of SNe ejecta was necessary to maintain star formation in massive galaxies (Bower et al. 2006). Since then, a persistent issue facing SAMs has been that their RSs are shallower than observed (Menci et al. 2008; González et al. 2009; Guo et al. 2011). The common explanation for this is that the stellar metallicity-luminosity relation in the models is likewise too shallow. Font et al. (2008) demonstrated that an added cause of the excessively red colors of dwarf satellites is their being too easily quenched by strangulation, referring to the stripping of halo gas. While Font et al. (2008) increased the binding energy of this gas as a remedy, Gonzalez-Perez et al. (2014) have shown that further improvements are still needed. Studies of other models have revealed similar mismatches with observations (Romeo et al. 2008; Weinmann et al. 2011), indicating that the problem is widespread.
In this paper, we use multi-band photometry from the Next Generation Virgo Cluster Survey (NGVS; Ferrarese et al. 2012) to study galaxy colors in the core of a z = 0 cluster, an environment naturally weighted to the RS. The main novelty of this work is that NGVS photometry probes mass scales from brightest cluster galaxies to Milky Way satellites (Ferrarese et al. 2016b, hereafter F16), allowing us to characterize the RS over an unprecedented factor of >10 5 in luminosity [∼10 6 M in stellar mass] and thus reach a largely unexplored part of the color-magnitude distribution [CMD]. Given the unique nature of our sample, we also take the opportunity to compare our data to galaxy formation models, which have received scant attention in the context of cluster cores.
Our work complements other NGVS studies of the galaxy population within Virgo's core. Zhu et al. (2014) jointly modelled the dynamics of field stars and globular clusters [GCs] to measure the total mass distribution of M87 to a projected radius of 180 kpc. Grossauer et al. (2015) combined dark matter simulations and the stellar mass function to extend the stellar-to-halo mass relation down to M h ∼ 10 10 M . Sánchez-Janssen et al. (2016) statistically inferred the intrinsic shapes of the faint dwarf population and compared the results to those for Local Group dwarfs and simulations of tidal stripping. Ferrarese et al. (2016a) present the deepest luminosity function to date for a rich, volume-limited sample of nearby galaxies. Lastly, Côté et al. (in preparation) and Sánchez-Janssen et al. (in preparation) study the galaxy and nuclear scaling relations, respectively, for the same sample.
In Section 2 we briefly discuss our dataset and preparations thereof. Our analysis of the RS is presented in Section 3, while Sections 4-6 focus on comparisons to previous work, compact stellar systems [CSS] and galaxy formation models. A discussion of our findings and conclusions are provided in Sections 7-8.
DATA
Our study of the RS in the core of Virgo is enabled by the NGVS (Ferrarese et al. 2012). Briefly, the NGVS is an optical imaging survey of the Virgo cluster performed with CFHT/MegaCam. Imaging was obtained in the u*giz bands over a 104 deg² footprint centered on sub-clusters A and B, reaching out to their respective virial radii (1.55 and 0.96 Mpc, respectively, for an assumed distance of 16.5 Mpc; Mei et al. 2007; Blakeslee et al. 2009). The NGVS also obtained r-band imaging for an area of 3.71 deg² [0.3 Mpc²], roughly centered on M87, the galaxy at the dynamical center of subcluster A; we refer to this as the core region. NGVS images have a uniform limiting surface brightness of ∼29 g mag arcsec⁻². Further details on the acquisition and reduction strategies for the NGVS are provided in Ferrarese et al. (2012).
This paper focuses on the core of the cluster, whose boundaries are defined as

12h26m20s ≤ RA (J2000) ≤ 12h34m10s, 11°30′22″ ≤ Dec (J2000) ≤ 13°26′45″,

and encompass four MegaCam pointings [see Figure 13 of F16]. A catalog of 404 galaxies for this area, of which 154 are new detections, is published in F16, spanning the range 8.9 ≤ g ≤ 23.7 and ≥50% complete to g ∼ 22. As demonstrated there, the galaxy photometry has been thoroughly analysed and cluster membership extensively vetted for this region; below we provide a basic summary of these endeavors. A study of the CMD covering the entire survey volume will be presented in a future contribution.
Faint [g > 16] extended sources in the core were identified using a dedicated pipeline based on ring-filtering of the MegaCam stacks. Ring-filtering replaces pixels contaminated by bright, point-like sources with the median of pixels located just beyond the seeing disk. This algorithm helps overcome situations of low surface brightness sources being segmented into several parts due to contamination. The list of candidates is then culled and assigned membership probabilities by analysing SExtractor and GALFIT (Peng et al. 2002) parameters in the context of a size versus surface brightness diagram, colors and structural scaling relations, and photometric redshifts. A final visual inspection of the candidates and the stacks themselves is made to address issues of false-positives, pipeline failures, and missed detections. After this, the remaining candidates are assigned a membership flag indicating their status as either certain, likely, or possible members.
As part of their photometric analysis, F16 measured surface brightness profiles and non-parametric structural quantities in the u*griz bands for the core galaxies with the IRAF task ELLIPSE. These data products are complemented with similar metrics from Sérsic fits to both the light profiles and image cutouts for each source [the latter achieved with GALFIT]. Our work is based on the growth curves deduced by applying their [non-parametric] g-band isophotal solutions to all other bands while using a common master mask. This allows us to investigate changes in the RS as a function of galactocentric radius, rather than rely on a single aperture. Driver et al. (2006) adopted a similar approach for their CMD analysis, finding that bimodality was more pronounced using core versus global colors; our results support this point [see Fig. 4]. We extract from the growth curves all ten colors covered by the NGVS, integrated within elliptical apertures having semi-major axes of a × R_e,g [R_e,g = g-band effective radius], where a = 0.5, 1.0, 2.0, 3.0; we also examine colors corresponding to the total light of these galaxies. Errors are estimated following Chen et al. (2010), using the magnitude differences between F16's growth curve and Sérsic analyses, and scaling values for each set of apertures by the fraction of light enclosed. These estimates should probably be regarded as lower limits, since they do not capture all sources of systematic uncertainty.
Absolute magnitudes are computed assuming a uniform distance of 16.5 Mpc (Mei et al. 2007; Blakeslee et al. 2009) for all galaxies and corrected for reddening using the York Extinction Solver2 (McCall 2004), adopting the Schlegel et al. (1998) dust maps, Fitzpatrick (1999) extinction law, and R_V = 3.07. To help gauge the intrinsic scatter along the RS, we use recovered magnitudes for ∼40k artificial galaxies injected into the image stacks [F16] to establish statistical errors in our total light measurements. A more focused discussion of uncertainties in the NGVS galaxy photometry may be found in Ferrarese et al. (2016a) and F16.
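For concreteness, this conversion reduces to a distance modulus and an extinction term. The sketch below assumes the extinction A_band has already been obtained from the adopted dust map and extinction law; the input values are invented.

```python
import numpy as np

D_MPC = 16.5  # common cluster distance (Mei et al. 2007; Blakeslee et al. 2009)

def absolute_magnitude(m_apparent, a_band, d_mpc=D_MPC):
    """De-reddened absolute magnitude at a fixed distance.
    a_band = R_band * E(B-V) for the galaxy's line of sight."""
    distance_modulus = 5.0 * np.log10(d_mpc * 1e6) - 5.0  # 5 log10(d / 10 pc)
    return m_apparent - a_band - distance_modulus

# e.g. a g'-band source with m = 18.3 mag and A_g' = 0.09 mag:
M_g = absolute_magnitude(18.3, 0.09)  # ~ -12.9 at 16.5 Mpc
```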
We note that, although the NGVS is well-suited for their detection, ultra-compact dwarfs [UCDs] are excluded from our galaxy sample for two reasons. First, they have largely been omitted from previous analyses of the RS. Second, the nature of these objects is unsettled. While many are likely the remnants of tidally-stripped galaxies (e.g. Bekki et al. 2003; Drinkwater et al. 2003; Pfeffer & Baumgardt 2013; Seth et al. 2014), the contribution of large GCs to this population remains unclear. Readers interested in the photometric properties of the UCD population uncovered by the NGVS are referred to Liu et al. (2015) for those found in the core region; still, we include UCDs in our comparisons of the colors of RS galaxies and CSS in Section 5.
3. THE RED SEQUENCE IN THE CORE OF VIRGO
Figure 1a plots the (u* − r′) colors, integrated within 1.0 R_e,g′, of all 404 galaxies in the core of Virgo as a function of their total g′-band magnitudes. One of the most striking features in this plot is the depth to which we probe galaxy colors: at its 50%-completeness limit [M_g′ ∼ −9], the NGVS luminosity function reaches levels that have only been previously achieved in the Local Group [i.e. comparable to the Carina dSph, and only slightly brighter than Draco; Ferrarese et al. 2016a]. This is significant as integrated colors for dwarf galaxies at these scales have, until now, been highly biased to the local volume [D ≤ 4 Mpc], incomplete, and noisy (e.g. Johnson et al. 2013). The NGVS CMD therefore represents the most extensive one to date based on homogeneous photometry, spanning a factor of 2 × 10^5 in luminosity.
Also interesting about Fig. 1a is the dearth of blue galaxies in the core of Virgo. This is more apparent in Figure 1b, where we plot histograms of (u* − r′) in four bins of luminosity. Three of the four samples are well described as unimodal populations rather than the bimodal color distributions typically found in large galaxy surveys (e.g. Baldry et al. 2004). The absence of a strong color bimodality in Virgo's core is not surprising though (Balogh et al. 2004; Boselli et al. 2014) and suggests that most of these galaxies have been cluster members long enough to be quenched by the environment3. The minority of blue galaxies we find may be members that are currently making their first pericentric passage or are non-core members projected along the line-of-sight. Since our interest lies in the RS, we have inspected three-color images for every galaxy and exclude from further analysis 24 that are clearly star-forming [blue points in Fig. 1a]. Also excluded are the 56 galaxies that fall below our completeness limit [grey points], 16 whose imaging suffers from significant contamination [e.g. scattered light from bright stars; green points], and 4 that are candidate remnants of tidal stripping [red points]. While we cannot rule out a contribution by reddening to the colors of the remaining galaxies, their three-color images do not indicate a significant frequency of dust lanes.
Figure 2 plots all ten CMDs for quiescent galaxy candidates in Virgo's core, where the colors again correspond to 1.0 R_e,g′. Having culled the star-forming galaxies, we can straightforwardly study the shape of the RS as a function of wavelength. In each panel of Fig. 2 we observe a clear trend, whereby for M_g′ ≲ −14, colors become bluer towards fainter magnitudes. To help trace this, we have run the Locally Weighted Scatterplot Smoothing algorithm (LOWESS; Cleveland 1979) on each CMD; these fits are represented by the red lines in the figure. The observed trends are notable given that optical colors are marginally sensitive to the metallicities of composite stellar populations with Z ≲ 0.1 Z_⊙. Simple comparisons of our LOWESS curves to stellar population models suggest that, for M_g′ ≲ −14, metallicity increases with luminosity
along the RS [see Fig. 9]; age trends are harder to discern with the colors available to us. A metallicity-luminosity relation for RS galaxies agrees with previous work on the stellar populations of ETGs throughout Virgo (Roediger et al. 2011b) and the quiescent galaxy population at large (e.g. Choi et al. 2014). Our suggestion though is based on fairly restrictive assumptions about the star formation histories of these galaxies [i.e. exponentially-declining, starting ≥8 Gyr ago]; more robust results on age and metallicity variations along the RS in Virgo's core from a joint UV-optical-NIR analysis will be the subject of future work.
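A minimal sketch of such a LOWESS trace, using a standard implementation: the smoothing span frac and the synthetic CMD are our own choices, since no span is quoted above.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def red_sequence_trace(M_g, color, frac=0.3):
    """Non-parametric (LOWESS) trace of the RS in a single CMD."""
    smoothed = lowess(color, M_g, frac=frac, return_sorted=True)
    return smoothed[:, 0], smoothed[:, 1]  # magnitude grid, smoothed color

# Synthetic CMD: a sloped bright end that flattens faintward of M_g' ~ -14.
rng = np.random.default_rng(1)
mags = rng.uniform(-19.0, -9.0, 400)
ridge = np.where(mags < -14.0, 1.0 - 0.05 * (mags + 14.0), 1.0)
x, y = red_sequence_trace(mags, ridge + rng.normal(0.0, 0.05, mags.size))
```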
A flattening at the bright end of the RS for Virgo ETGs was first identified by Ferrarese et al. (2006) and later confirmed in several colors by Janz & Lisker (2009, hereafter JL09). This seems to be a ubiquitous feature of the quiescent galaxy population, based on CMD analyses for nearby galaxies (Baldry et al. 2004; Driver et al. 2006). This flattening may also be present in our data, beginning at M_g′ ∼ −19, but the small number of bright galaxies in the core makes it difficult to tell. Also, this feature does not appear in colors involving the z′-band, but this could be explained by a plausible error in this measurement for M87 [e.g. 0.1 mag], the brightest galaxy in our sample.
The flattening seen at bright magnitudes implies that the RS is non-linear. A key insight revealed by the LOWESS fits in Fig. 2 is that the linearity of the RS also breaks down at faint magnitudes, in all colors. The sense of this non-linearity is that, for M_g′ ≳ −14, the local slope is shallower than at brighter magnitudes, even flat in some cases [e.g. u* − g′; see Appendix]. For several colors [e.g. r′ − i′], the LOWESS fits suggest that the behavior at the faint end of the RS may be even more complex, but the scale of such variations is well below the photometric errors [see Fig. 3]. JL09 found that the color-magnitude relation [CMR] of ETGs also changes slope in a similar manner, but at a brighter magnitude than we find [M_g′ ∼ −16.5]; we address this discrepancy in Section 4.
An implication of the faint-end flattening of the RS is that the low-mass dwarfs in Virgo's core tend to be more alike in color than galaxies of higher mass. This raises the question of whether the scatter at the faint end of the RS reflects intrinsic color variations or just observational errors. We address this issue in Figure 3 by comparing the observed scatter in the total colors to error estimates based on the artificial galaxies mentioned in Section 2. Shown there are LOWESS fits to the data and the rms scatter about them [solid and dashed lines, respectively], and the scatter expected from photometric errors [dotted lines]. Both types of scatter have been averaged within three bins of magnitude: −15 < M_g′ ≤ −13, −13 < M_g′ ≤ −11, and −11 < M_g′ ≤ −9; the comparison does not probe higher luminosities because our artificial galaxy catalog was limited to g′ > 16, by design. We generally find that the scatter and errors both increase towards faint magnitudes and that the two quantities match well, except in the brightest bin, where the scatter mildly exceeds the errors. For the other bins, however, the intrinsic scatter must be small, strengthening the assertion that the faintest galaxies possess uniform colors [to within ∼0.05 mag] and, possibly, stellar populations. Deeper imaging will be needed to improve the constraints on genuine color variations at these luminosities.
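The scatter measurement amounts to an rms statistic evaluated in the same three magnitude bins; a minimal sketch, assuming the color residuals about the LOWESS fit are already in hand [the helper name and interface are ours]:

```python
import numpy as np
from scipy.stats import binned_statistic

def binned_rms(M_g, residuals, edges=(-15.0, -13.0, -11.0, -9.0)):
    """rms of color residuals about the RS, in bins of magnitude, for
    comparison against the artificial-galaxy photometric errors."""
    rms, _, _ = binned_statistic(
        M_g, residuals,
        statistic=lambda r: np.sqrt(np.mean(np.square(r))),
        bins=list(edges))
    return rms  # one value per luminosity bin
```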
The last topic examined in this section is the effect of aperture size on galaxy color. Our most important result, the flattening of the RS at faint magnitudes, is based on galaxy colors integrated within their half-light radii. Aperture effects could be significant in the presence of radial color gradients, as suggested by Driver et al. (2006), and could therefore bias our inferences on the shape of the RS. In Figure 4 we show LOWESS fits to the u* − g′ and g′ − z′ RSs for colors measured within 0.5, 1.0, 2.0, and 3.0 R_e,g′. These particular colors are chosen because, in the absence of deep UV and NIR photometry4, they provide the only leverage on stellar populations for the full NGVS dataset. We also include measurements of the scatter about these fits for the 0.5 R_e,g′ and 3.0 R_e,g′ apertures, represented by the shaded envelopes.
The top panel of Fig. 4 shows that u* − g′ changes by at most 0.04-0.06 mag at M_g′ ≤ −17 between consecutive aperture pairs. Two-sample t-tests of linear fits to the data indicate that these differences are significant at the P = 0.01 level. Conversely, hardly any variation is seen between apertures for galaxies with M_g′ > −16. The bottom panel of Fig. 4 demonstrates that g′ − z′ changes little with radius in most of our galaxies. Slight exceptions are the 0.5 R_e,g′ colors for galaxies with M_g′ ≤ −16, which differ from the 2.0 and 3.0 R_e,g′ colors by 0.04 mag. The 1.0 R_e,g′ colors bridge this gap, following the 0.5 R_e,g′ sequence at M_g′ ≲ −17 and moving towards the other sequences for brighter magnitudes.
The changes in the RS with galactocentric radius imply the existence of negative color gradients within specific regions of select galaxies. The strongest gradients are found for u* − g′ within bright galaxies, inside 2.0 R_e,g′, while galaxies with M_g′ > −15 have little-to-none in either color. Mild negative gradients are seen in g′ − z′ between 0.5 and 1.0 R_e,g′ for galaxies with M_g′ < −17, consistent with previous work on the spatially-resolved colors of galaxies throughout Virgo (Roediger et al. 2011a). The most important insight from Fig. 4, though, is that the flattening of the RS at faint magnitudes is not tied to any one choice of aperture. The implications of those gradients we do detect in our galaxies, in terms of stellar populations and comparisons with galaxy formation models, will be addressed in Section 7.
4. COMPARISON TO PREVIOUS WORK
Before discussing the implications of our results, over the next two sections we compare our RS to earlier/ongoing work on the colors of Virgo galaxies and CSS, starting with the former. Of the several studies of the galaxy CMD in Virgo (Bower et al. 1992; Ferrarese et al. 2006; Chen et al. 2010; Kim et al. 2010; Roediger et al. 2011a), that of JL09 is the most appropriate for our purposes. JL09 measured colors for 468 ETGs from the Virgo Cluster Catalog (Binggeli et al. 1985), based on ugriz imaging from SDSS DR5 (Adelman-McCarthy et al. 2007). Their sample is spread throughout the cluster and has B < 18.0. Most interestingly, they showed that these galaxies trace a non-linear relation in all optical CMDs, not unlike what we find for faint members inhabiting the centralmost regions.
In Figure 5 we overlay the u* − g′ CMD from JL09 against our own, measured within 1.0 R_e,g′; the comparison is appropriate since JL09 measured colors within their galaxies' r-band half-light radii. We have transformed JL09's photometry to the CFHT/MegaCam system following Equation 4 in Ferrarese et al. (2012). The top panel shows all objects from both samples, along with respective LOWESS fits, while the bottom is restricted to the 62 galaxies common to both. We focus on the u* − g′ color because of its importance to stellar population analyses; indeed, this is a reason why accurate u*-band photometry was a high priority for the NGVS.
The most notable feature in the top panel of Fig. 5 is the superior depth of the NGVS relative to the SDSS, an extension of ∼5 mag. There is a clear difference in scatter between the two samples, with that for JL09 increasing rapidly for M_g′ > −18, whereas the increase for the NGVS occurs much more gradually5 [cf. Fig. 3; see Fig. 1 of Ferrarese et al. 2016a as well]. Furthermore, the JL09 CMR has a lower zeropoint [by ∼0.06 mag] and a shallower slope than the NGVS RS for M_g′ ≲ −19, which two-sample t-tests verify as significant [P = 0.01]. The JL09 data also exhibit a flattening of the CMR in the dwarf regime, but at a brighter magnitude than that seen in ours [M_g′ ∼ −16.5]. The shallower slopes found by JL09 at both ends of their CMR are seen for other colors and so cannot be explained by limitations/biases in the SDSS u-band imaging. The shallower slope at bright magnitudes substantiates what was hinted at in Fig. 2 and is more obvious in JL09 since their sample covers the full cluster6; the existence of this feature is also well known from SDSS studies of the wider galaxy population (e.g. Baldry et al. 2004). The lower zeropoint of the JL09 CMR is seen in other colors too, hinting that calibration differences between SDSS DR5 and DR7 are responsible, where the NGVS is anchored to the latter (Ferrarese et al. 2012).
Lastly, the LOWESS fits in Fig. 5 indicate that, between −19 ≲ M_g′ ≲ −16.5, the JL09 CMR has a steeper slope than the NGVS RS. This difference is significant [P = 0.01] and holds for other u*-band colors as well. This steeper slope forms part of JL09's claim that the ETG CMR flattens at M_g′ ≳ −16.5, a feature not seen in our data. Since JL09 selected their sample based on morphology, recent star formation in dwarf galaxies could help create their steeper slope. For one, the colors of many galaxies in the JL09 sample overlap with those flagged in our sample as star-forming. Also, Kim et al. (2010) find that dS0s in Virgo follow a steeper UV CMR than dEs and have bluer UV-optical colors at a given magnitude. We are therefore unsurprised to have not observed the flattening detected by JL09.
Recent star formation cannot solely explain why JL09 find a steeper slope at intermediate magnitudes, though. The bottom panel of Fig. 5 shows that, for the same galaxies, JL09 measure systematically bluer u* − g′ colors; moreover, this difference grows towards fainter magnitudes, creating a steeper CMR. Comparisons of other colors [e.g. g′ − r′], and the agreement found therein, prove that this issue only concerns JL09's u-band magnitudes. The stated trend in the color discrepancy appears inconsistent with possible errors in our SDSS-MegaCam transformations. Aperture effects can also be ruled out since the differences in size scatter about zero and never exceed 25% for any one object; besides, Fig. 4 demonstrates that color gradients in u* − g′ are minimal at faint magnitudes. A possible culprit may be under-subtracted backgrounds in JL09's u-band images, since they performed their own sky subtraction. Therefore, we suggest that the differences between the JL09 CMR and NGVS RS for M_g′ > −19 can be explained by: (i) a drop in the red fraction amongst Virgo ETGs between −19 ≲ M_g′ ≲ −16.5, and (ii) JL09's measurement of systematically brighter u-band magnitudes. Despite this disagreement, these comparisons highlight two exciting aspects of the NGVS RS [and the photometry overall]: (i) it extends several magnitudes deeper than the SDSS, and (ii) the photometric errors are well-controlled up to the survey limits.
5. COMPARISON TO COMPACT STELLAR SYSTEMS
The NGVS is unique in that it provides photometry for complete samples of stellar systems within a single global environment, including galaxies, GCs, galactic nuclei, and UCDs. These systems are often compared to one another through their relative luminosities and sizes (e.g. Burstein et al. 1997; Misgeld & Hilker 2011; Brodie et al. 2011), whereas their relative stellar contents, based on homogeneous datasets, are poorly known. Given the depth of the NGVS RS, we have a unique opportunity to begin filling this gap by examining the colors of faint dwarfs and CSS at fixed luminosity.
Our samples of GCs, nuclei, and UCDs are drawn from the catalogs of Peng et al. (in preparation), F16, and Zhang et al. (2015), respectively; complete details on the selection functions for these samples may be found in those papers. Briefly though, GCs and UCDs were both identified via magnitude cuts and the u*iK diagram (Muñoz et al. 2014), and separated from each other through size constraints [r_h ≥ 11 pc for UCDs]. The validity of candidate GCs is assessed probabilistically and we use only those having a probability > 50%. All UCDs in the Zhang et al. (2015) catalog are spectroscopically-confirmed cluster members. Lastly, galactic nuclei were identified by visual inspection of the image cutouts for each galaxy and modelled in the 1D surface brightness profiles with Sérsic functions. For our purposes, we only consider those objects classified as unambiguous or possible nuclei in the F16 catalog.
In Figure 6 we plot the CMDs of galaxies and CSS in Virgo's core [left-hand side] and the color distributions for objects with g′ > 18 [right-hand side]; u* − g′ colors are shown in the upper row and g′ − i′ in the lower. Note that we have truncated the CSS samples to 18 < g′ < 22 so that our comparisons focus on a common luminosity range.
An obvious difference between the distributions for galaxies and CSS at faint luminosities is the latter's extension to very red colors, whereas the former is consistent with a single color [Fig. 3]. This is interesting given that CSS have a higher surface density than the faint dwarfs in Virgo's core, suggesting that, at fixed luminosity, diffuse systems are forced to be blue while concentrated systems can have a wide range of colors. The nature of red CSS is likely explained by a higher metal content, since metallicity more strongly affects the colors of quiescent systems than age [see Fig. 9]. Also, the Spearman rank test suggests that nuclei follow CMRs in both u* − g′ [ρ = −0.57; p = 4 × 10^-5] and g′ − i′ [ρ ∼ −0.5; p = 6 × 10^-4], hinting at a possible mass-metallicity relation for this population. A contribution of density to the colors of CSS is not obvious though, given that many [if not most] of them were produced in the vicinity of higher-mass galaxies, and so may owe their enrichment to their local environments. The as-yet uncertain nature of UCDs as either the massive tail of the GC population or the bare nuclei of stripped galaxies also raises ambiguity on what governs their stellar contents, be it due to internal or external factors [i.e. self-enrichment versus enriched natal gas clouds].
While it is possible for CSS to be quite red for their luminosities, the majority of them have bluer colors, in both u* − g′ and g′ − i′, that agree better with those of faint RS galaxies. Closer inspection of the right half of Fig. 6 reveals some tensions between the populations though. KS tests indicate that the null hypothesis of a common parent distribution for galaxies and GCs is strongly disfavored for u* − g′ and g′ − i′ [p < 10^-10], whereas conclusions vary for UCDs and nuclei depending on the color under consideration [p_{u*−g′} ∼ 0.09 and p_{g′−i′} < 10^-4 for UCDs; p_{u*−g′} ∼ 0.007 and p_{g′−i′} ∼ 0.07 for nuclei]. The tails in the distributions for the CSS play an important role in these tests, but their removal only brings about consistency for the nuclei. For instance, clipping objects with u* − g′ ≥ 1.2 increases the associated p-values to 0.18, 0.17, and 0.04 for UCDs, nuclei, and GCs, respectively, while p changes to ∼10^-4, 0.65, and <10^-4 by removing objects with g′ − i′ ≥ 0.85. We have also fit skewed normal distributions to each dataset, finding consistent mean values between galaxies and CSS [except GCs, which have a larger value in g′ − i′], while the standard deviations for galaxies are typically larger than those for CSS. The evidence for common spectral shapes between the majority of CSS and faint galaxies in the core of Virgo is therefore conflicting. An initial assessment of the relative stellar contents within these systems, and potential trends with surface density and/or local environment, via a joint UV-optical-NIR analysis is desirable to pursue this subject further (e.g. Spengler et al., in preparation).
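These tests map onto standard SciPy routines; the sketch below [whose function and clipping interface are ours, not the exact analysis code] shows the KS comparison with an optional red-tail clip, plus the skew-normal fit used to compare means.

```python
import numpy as np
from scipy.stats import ks_2samp, skewnorm

def compare_color_distributions(gal_colors, css_colors, clip_above=None):
    """KS test between galaxy and CSS colors; optionally clip the red
    tail first (e.g. clip_above=1.2 for u* - g', 0.85 for g' - i')."""
    css = np.asarray(css_colors, dtype=float)
    if clip_above is not None:
        css = css[css < clip_above]
    _stat, p_value = ks_2samp(gal_colors, css)

    # Skew-normal fit, from which a mean color can be compared.
    a, loc, scale = skewnorm.fit(css)
    mean_color = skewnorm.mean(a, loc=loc, scale=scale)
    return p_value, mean_color
```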
6. COMPARISON TO GALAXY FORMATION MODELS
As stated earlier, colors allow us to test our understanding of the star formation histories and chemical evolution of galaxies; scaling relations therein; and ultimately the physics governing these processes. Here we explore whether current galaxy formation models plausibly explain these subjects by reproducing the RS in the core of Virgo. The main novelty of this comparison lies in its focus on the oldest and densest part of a z ∼ 0 cluster, where members have been exposed to extreme external forces, on average, for several Gyr (Oman et al. 2013). The nature of our sample dictates that this comparison is best suited for galaxies of intermediate-to-low masses, although we still include high-mass systems for completeness. Unless otherwise stated, when discussing the slope of the RS, we are referring to the interval −19 ≲ M_g′ ≲ −15, where its behavior is more or less linear.
We compare our results to three recent models of galaxy formation: one SAM (Henriques et al. 2015, hereafter H15) and two hydrodynamic (Illustris and EAGLE; Vogelsberger et al. 2014; Schaye et al. 2015). H15 significantly revised the L-Galaxies SAM, centered on: (i) increased efficiency of radio-mode AGN feedback; (ii) delayed reincorporation of galactic winds [scaling inversely with halo mass]; (iii) a reduced density threshold for star formation; (iv) AGN heating within satellites; and (v) no ram-pressure stripping of hot halo gas in low-mass groups. H15 built their model on the Millennium I and II cosmological N-body simulations (Springel et al. 2005; Boylan-Kolchin et al. 2009), enabling them to produce galaxies over a mass range of 10^7 < M_* < 10^12 M_⊙. Their revisions helped temper the persistent issues of SAMs having too large a blue and red fraction at high and low galaxy masses, respectively (Guo et al. 2011; Henriques et al. 2013). Illustris consists of eight cosmological N-body hydro simulations, each spanning a volume of ∼100^3 Mpc^3, using the moving-mesh code AREPO. This model includes prescriptions for gas cooling; stochastic star formation; stellar evolution; gas recycling; chemical enrichment; [kinetic] SNe feedback; supermassive black hole [SMBH] seeding, accretion and mergers; and AGN feedback. The simulations differ in terms of the resolution and/or particle types/interactions considered; we use the one having the highest resolution and a full physics treatment. EAGLE comprises six simulations with a similar nature to Illustris but run with a modified version of the SPH code GADGET-3 instead. The simulations differ in terms of resolution, sub-grid physics, or AGN parameterization, where the latter variations produce a better match to the z ∼ 0 stellar mass function and high-mass galaxy observables, respectively. The fiducial model [which we adopt] includes radiative cooling; star formation; stellar mass loss; feedback from star formation and AGN; and accretion onto and mergers of SMBHs. Modifications were made to the implementations of stellar feedback [formerly kinetic, now thermal], gas accretion by SMBHs [angular momentum now included], and the star formation law [metallicity dependence now included]. The galaxy populations from Illustris and EAGLE both span a range of M_* ≳ 10^8.5 M_⊙.
We selected galaxies from the z = 0.0 snapshot of H15
that inhabit massive halos [M_h > 10^14 M_⊙], have non-zero stellar masses, are quenched [sSFR < 10^-11 yr^-1], and are bulge-dominated [B/T > 0.5, by mass]; the last constraint aims to weed out highly-reddened spirals. We query the catalogs for both the Millennium I and II simulations, where the latter fills in the low-mass end of the galaxy mass function, making this sample of model galaxies the best match to the luminosity/mass range of our dataset. Similar selection criteria were used to obtain our samples of Illustris and EAGLE galaxies, except that involving B/T, since bulge parameters are not included with either simulation's catalogs. We also imposed a resolution cut on Illustris such that each galaxy is populated by ≥240 star particles [minimum particle mass = 1.5 × 10^4 M_⊙]. A similar cut is implicit in our EAGLE selection, as SEDs are only available for galaxies having M_* ≳ 10^8.5 M_⊙. Interestingly, most of the brightest cluster galaxies in EAGLE are not quenched, such that we make a second selection to incorporate them in our sample; no such issue is found with Illustris. Broadband magnitudes in the SDSS filters were obtained from all three models and transformed to the CFHT/MegaCam system [see Section 4]. We note that these magnitudes and the associated colors correspond to the total light of these galaxies.
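These cuts can be summarized in a short sketch; it assumes a structured catalog whose field names [ours, not the models'] hold host halo mass, stellar mass, sSFR, bulge fraction, and star-particle counts.

```python
import numpy as np

def quenched_cluster_members(cat):
    """Boolean mask implementing the selection described in the text."""
    sel = cat["M_halo"] > 1e14            # massive host halos [M_sun]
    sel &= cat["M_star"] > 0.0            # non-zero stellar mass
    sel &= cat["sSFR"] < 1e-11            # quenched [yr^-1]
    if "BT_ratio" in cat.dtype.names:     # H15 only: bulge-dominated
        sel &= cat["BT_ratio"] > 0.5
    if "n_star" in cat.dtype.names:       # Illustris only: resolution cut
        sel &= cat["n_star"] >= 240
    return sel
```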
A final note about this comparison is that we stack clusters from each model before analysing its RS. The high densities of cluster cores make them difficult to resolve within cosmological volumes, particularly for hydro simulations, leading to small samples for individual clusters. Stacking is therefore needed to enable a meaningful analysis of the model CMD for quenched cluster-core galaxies. H15, Illustris, and EAGLE respectively yield ∼15k, 144, and 157 galaxies lying within 300 kpc of their host halos' centers, which is roughly equivalent to the projected size of Virgo's core [as we define it]. Note that the much larger size of the H15 sample is explained by the greater spatial volume it models and the fainter luminosities reached by SAMs [M_g′ ≤ −12, compared to M_g′ ≲ −15 for hydro models].
In Figure 7 we compare the RS from Fig. 2 [black] to those from H15 [red], Illustris [green], and EAGLE [blue], where the curves for the latter were obtained in identical fashion to those for the NGVS. The shaded regions about each model RS convey the 1σ scatter within five bins of luminosity. The Illustris RS does not appear in the panels showing u*-band colors since their catalogs lack SDSS u-band magnitudes.
The clear impression from Fig. 7 is that no model reproduces the RS in Virgo's core, with model slopes being uniformly shallower than observed. Two-sample t-tests of linear fits to the data and models show that these differences are significant at the P = 0.01 level, except for the case of the EAGLE models and the g′ − r′ color [P = 0.09]. Further, the H15 RS exhibits no sign of the flattening we observe at faint magnitudes; the hydro models unfortunately lack the dynamic range needed to evaluate them in this regard.
The model RSs also differ from one another to varying degrees. First, H15 favors a shallower slope than the hydro models. Second, the color of the H15 RS monotonically reddens towards bright magnitudes, whereas the hydro RSs turn over sharply at M_g′ ≲ −19. EAGLE and Illustris agree well except for the ubiquitous upturn at faint magnitudes in the latter's RS [marked with dashed lines]. These upturns are created by the resolution criterion we impose on the Illustris catalog and should be disregarded. Underlying this behavior is the fact that lines of constant M_* trace out an approximate anti-correlation in color-magnitude space (Roediger & Courteau 2015), a pattern clearly seen when working with richer samples from this model [e.g. galaxies from all cluster-centric radii]. Third, the scatter in H15 is typically the smallest and approximately constant with magnitude, whereas those of the hydro models are larger and increase towards faint magnitudes, more so for Illustris. Given that we find little intrinsic scatter in the NGVS RS at M_g′ > −15 [Fig. 3], H15 appears to outperform the hydro models in this regard, although we can only trace the latter's scatter to M_g′ ∼ −15. Other differences between Illustris and EAGLE appear for the colors g′ − i′, r′ − i′, and i′ − z′, in terms of turnovers, slopes, and/or zeropoints, all of which are significant [P = 0.01]. It is worth noting that while Fig. 7 references colors measured within 1.0 R_e,g′ for NGVS galaxies [to maximize their numbers], the agreement is not much improved if we use colors from larger apertures.

The conflicting shapes of the RS from data and models could be viewed in one of two ways: (i) the core of Virgo is somehow special, or (ii) models fail to reproduce the evolution of cluster-core galaxies. To help demonstrate that the latter is more probable, we compare the same models against a separate dataset for nearby clusters. WINGS (Fasano et al. 2002, 2006) is an all-sky survey of a complete, X-ray selected sample of 77 galaxy clusters spread over a narrow redshift range [z = 0.04-0.07]. Valentinuzzi et al. (2011) measured the slope of the RS for 72 WINGS clusters using BV photometry for galaxies in the range −21.5 ≤ M_V ≤ −18. We have done likewise for each well-populated [N > 100] model cluster, using the Blanton & Roweis (2007) filter transformations to obtain BV photometry from SDSS gr-band magnitudes.
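For reference, the slope measurement reduces to a linear fit over the WINGS magnitude window; the sketch below assumes B and V magnitudes have already been obtained from the filter transformations.

```python
import numpy as np

def rs_slope(M_V, BV, lo=-21.5, hi=-18.0):
    """Least-squares RS slope d(B-V)/dM_V for one well-populated cluster,
    restricted to the WINGS window -21.5 <= M_V <= -18."""
    window = (M_V >= lo) & (M_V <= hi)
    slope, _zeropoint = np.polyfit(M_V[window], BV[window], 1)
    return slope
```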
Figure 8 compares the distribution of RS slopes from WINGS and galaxy formation models, with the dashed line in the top panel indicating the value in Virgo's core, which fits comfortably within the former. Each model distribution is shown for the two closest snapshots to the redshift limits of the WINGS sample. In the case of H15 and Illustris, these snapshots bracket the WINGS range quite well, whereas the redshift sampling of EAGLE is notably coarser. The latter fact may be important to explaining the difference between the two distributions for this model, since z = 0.1 corresponds to a look-back time of ∼1.3 Gyr. On the other hand, H15 and Illustris suggest that the RS slope does not evolve between z = 0.07/0.08 and 0.03. We have not tried to link model clusters across redshifts as parsing merger trees lies beyond the scope of this work. Observations though support the idea of a static slope in clusters over the range z = 0 -1 (Gladders et al. 1998;Stanford et al. 1998;Blakeslee et al. 2003;Ascaso et al. 2008).
Fig. 8 demonstrates that the distributions for the WINGS and model clusters are clearly incompatible, with the models, on average, preferring a shallower slope for the RS. The sense of this discrepancy is the same as that seen in Fig. 7 between the core of Virgo and the models. A caveat with the comparisons to WINGS though is that the model slopes have all been measured in the respective rest-frames of the clusters. In other words, the model slopes could be biased by differential redshifting of galaxy colors as a function of magnitude [e.g. fainter galaxies reddened more than brighter ones]. To address this, we have simulated the effect of k-corrections using the median of the EAGLE distribution at z = 0.1, finding it would steepen this cluster's RS by -0.01. While significant, we recall that the redshift range for the WINGS sample is z = 0.04 -0.07, such that the mean k-correction to the model slopes is likely smaller than this value and would therefore not bring about better agreement.
Given the value of the above comparisons for testing galaxy formation models, we provide in the Appendix parametric fits to the NGVS RS in every color [measured at 1.0 R_e,g′]. These fits reproduce our LOWESS curves well and enable the wider community to perform their own comparisons.
7. DISCUSSION
Figure 1 indicates that >90% of the galaxy population within the innermost ∼300 kpc of the Virgo cluster has likely been quenched of star formation. This makes the population ideal for studying the characteristics of the RS, such as its shape and intrinsic scatter. Our analysis demonstrates that, in all optical colors, the RS (a) is non-linear and (b) flattens strongly in the domain of faint dwarfs. The former behavior had already been uncovered in Virgo, albeit at the bright end (Ferrarese et al. 2006; JL09), while the latter, which is new, begins at −14 < M_g′ < −13 [see Appendix], well above the completeness limit of the NGVS. No correlation is observed between color and surface brightness, in bins of luminosity, for M_g′ > −15, implying that the faint-end flattening is not the result of a bias or selection effect.
The RS follows the same general shape at M_g′ < −14 in each color, which may have implications for trends in the stellar populations of these galaxies. Assuming that bluer [e.g. u* − g′] and redder [e.g. g′ − z′] colors preferentially trace mean age and metallicity, respectively (Roediger et al. 2011b), the decrease in color towards faint magnitudes over the range −19 ≲ M_g′ ≤ −14 hints that the populations become younger and less enriched (consistent with downsizing; Nelan et al. 2005), with two exceptions. The flattening at bright magnitudes, seen better in samples that span the full cluster (JL09) and the global galaxy population (Baldry et al. 2004), signals either a recent burst of star formation within these galaxies or an upper limit to galactic chemical enrichment. The latter seems more likely given that the stellar mass-metallicity relation for galaxies plateaus at M_* ≳ 10^11.5 M_⊙ (Gallazzi et al. 2005). The other exception concerns the flattening at the faint end of the RS.
7.1. What Causes the Faint-End Flattening of the RS?
If colors reasonably trace stellar population parameters [see next sub-section], then arguably the most exciting interpretation suggested by the data is that the faint dwarfs in Virgo's core have a near-uniform age and metallicity, over a range of ∼3-4 magnitudes. This would imply that the known stellar population scaling relations for quiescent galaxies of intermediate-to-high mass (e.g. Choi et al. 2014) break down at low masses [below ∼4 × 10^7 M_⊙; see Appendix] and, more fundamentally, that the physics governing the star formation histories and chemical enrichment of galaxies decouples from mass at these scales.
Given the nature of our sample, the above scenario raises the questions of whether the faint-end flattening of the RS is caused by the environment, and if so, when and where the quenching occurs. While Geha et al. (2012) make the case that dwarfs with M_* < 10^9 M_⊙ must essentially be satellites in order to quench (also see Slater & Bell 2014; Phillips et al. 2015; Davies et al. 2016), we know little of the efficiency and timescale of quenching at low satellite masses and as a function of host halo mass. Using Illustris, Mistani et al. (2016) showed that, on average, the time to quench in low-mass clusters decreases towards low satellite masses, from ∼5.5 Gyr to ∼3 Gyr, over the range 8.5 ≲ log(M_*/M_⊙) ≲ 10. Slater & Bell (2014) combine measurements of Local Group dwarfs with N-body simulations to suggest that, in such groups, galaxies of M_* ≲ 10^7 M_⊙ quench within 1-2 Gyr of their first pericenter passage. However, Weisz et al. (2015) compared HST/WFPC2 star formation histories to predicted infall times based on Via Lactea II (Diemand et al. 2008), finding that many dwarfs in the Local Group likely quenched prior to infall.
In addition to reionization, pre-processing within smaller host halos may play a key role in explaining why many Local Group dwarfs ceased forming stars before their accretion. Likewise, pre-processing must also be considered when trying to understand issues pertaining to quenching of cluster galaxies (e.g. McGee et al. 2009; De Lucia et al. 2012; Wetzel et al. 2013; Hou et al. 2014; Taranu et al. 2014), such as the cause of Virgo's flattened RS at faint magnitudes. Wetzel et al. (2013) deduced where satellites of z = 0 groups/clusters were when they quenched their star formation, by modelling SDSS observations of quiescent fractions with mock catalogs. They found that for host halo masses of 10^14-10^15 M_⊙ the fraction of satellites that quenched via pre-processing increases towards lower satellite masses, down to their completeness limit of M_* ∼ 7 × 10^9 M_⊙, largely at the expense of quenching in-situ. Extrapolating this trend to lower satellite masses suggests that the majority of the quiescent, low-mass dwarfs in Virgo were quenched elsewhere. This suggestion is consistent with abundance matching results for our sample (Grossauer et al. 2015), which indicate that only half of the core galaxies with M_* = 10^6-10^7 M_⊙ were accreted by z ∼ 1 (see also Oman et al. 2013).
Assuming that the flattening of the RS reflects an approximate homogeneity in stellar contents [i.e. constant mean age] and isolated low-mass dwarfs have constant star formation histories (e.g. Weisz et al. 2014), then the low-mass dwarfs in Virgo's core must have quenched their star formation coevally. Moreover, when coupled with a significant contribution by pre-processing, it is implied that these galaxies are highly susceptible to environmental forces, over a range of host masses. This seems plausible given the very high quiescent fractions [>80%] for satellites between 10^6 < M_*/M_⊙ < 10^8 within the Local Volume (Phillips et al. 2015), which has led to the idea of a threshold satellite mass for effective environmental quenching (Geha et al. 2012; Slater & Bell 2014).
If synchronized quenching of low-mass dwarfs in groups [at least to ∼10^12 M_⊙] leads to a flattened faint-end slope of the core RS, we should expect to find the same feature for dwarfs drawn from larger cluster-centric radii. This follows from the fact that a satellite's cluster-centric radius correlates with its infall time (De Lucia et al. 2012) and that the fraction of satellites accreted via groups increases towards low redshift (McGee et al. 2009). Studying the properties of the RS as a function of cluster-centric position (e.g. see Sánchez-Janssen et al. 2008) will be the focus of a future paper in the NGVS series.
7.2. Caveats
A major caveat with the above interpretations is that optical colors are not unambiguous tracers of population parameters, especially at low metallicities (Conroy & Gunn 2010). To this point, Kirby et al. (2013) have shown that stellar metallicity increases monotonically for galaxies, from [Fe/H] ∼ −2.3 at M_* = 10^4 M_⊙ to slightly super-solar at M_* = 10^12 M_⊙. Assuming this trend holds in all environments, we can check for any conditions under which the RS would flatten at faint magnitudes. In the middle and bottom panels of Figure 9 we compare the u* − g′ and g′ − z′ color-mass relations in Virgo's core [black lines] to those predicted by the Flexible Stellar Population Synthesis [FSPS] model (Conroy et al. 2009), where the Kirby et al. relation [top panel] is used to assign masses to each model metallicity track and lines of constant age are colored from purple [∼2 Gyr] to red [∼15 Gyr]. Other models (e.g. Bruzual & Charlot 2003) prove inadequate for our needs due to their coarse sampling of metallicity space over the range Z ∼ 4 × 10^-4 to 4 × 10^-3. Error bars on the NGVS relations reflect standard errors in the mean, measured within seven bins of luminosity [having sizes of 0.5-2.0 dex]. Although we assume single-burst star formation histories for this test, qualitatively similar trends are expected for more complex treatments (e.g. constant star formation with variable quenching epochs; Roediger et al. 2011b).
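The constraint in the top panel of Fig. 9 can be encoded directly. The sketch below uses the Kirby et al. (2013) relation with the coefficients quoted in that paper [whose endpoints reproduce the values cited above] to tag model metallicity tracks with a stellar mass.

```python
import numpy as np

def kirby_feh(m_star):
    """Mean stellar metallicity versus stellar mass (Kirby et al. 2013):
    [Fe/H] = -1.69 + 0.30 log10(M*/1e6 Msun)."""
    return -1.69 + 0.30 * np.log10(np.asarray(m_star, dtype=float) / 1e6)

masses = np.logspace(4, 12, 9)  # 1e4 ... 1e12 Msun
feh = kirby_feh(masses)         # ~ -2.3 at 1e4 Msun, ~ +0.1 at 1e12 Msun
```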
Since the intent of Fig. 9 is to explore an alternative interpretation of the faint-end flattening of the RS, we limit our discussion to the range M_* < 10^8 M_⊙, but show the full relations for completeness. Within that range, we find that the data are indeed consistent with Kirby et al.'s mass-metallicity relation, provided that age does not vary greatly therein. Moreover, the color-mass relation for select ages transitions to a flatter slope at lower masses. This confirms our previous statement that it is difficult to meaningfully constrain metallicities below a certain level with optical colors [Z ≲ 10^-3 in the case of FSPS], even when ages are independently known. The inconsistent ages we would infer from the u* − g′ and g′ − z′ colors could likely be ameliorated by lowering the zeropoint of the Kirby et al. relation, since the former color responds more strongly to metallicity for log(Z/Z_⊙) ≳ −1. The comparisons shown in Fig. 9 therefore cast doubt on whether the flattening of the RS at faint magnitudes implies both a constant age and metallicity for cluster galaxies at low masses. Distinguishing between these scenarios will be more rigorously addressed in forthcoming work on the stellar populations of NGVS galaxies that incorporates UV and NIR photometry as well.
7.3. Shortcomings of Galaxy Formation Models
Regardless of the uncertainties inherent to the interpretation of optical colors, we should expect galaxy formation models to reproduce our observations if their physical recipes are correct. Our test of such models is special in that it focuses on the core of a z = 0 galaxy cluster, where the time-integrated effect of environment on galaxy evolution should be maximal. However, Fig. 7 shows that current models produce a shallower RS than observed, in all colors. This issue is not limited to Virgo's core, as Fig. 8 demonstrates that the distributions of RS slopes for entire model clusters populate shallower values than those measured for other nearby clusters. On a related note, Licitra et al. (2016) have shown that clusters at z < 1 in SAMs suffer from ETG populations with too low an abundance and too blue colors, while ∼10% of model clusters have positive RS slopes. On the other hand, Merson et al. (2016) found broad consistency between observations and SAMs in the zeropoint and slope of the RS in z > 1 clusters. This suggests that errors creep into the evolution of cluster galaxies in SAMs at z < 1.
The discrepancies indicated here follow upon similar issues highlighted by the modellers themselves. H15 showed that their model produces an RS having bluer colors than observed in the SDSS for galaxies with M_* ≥ 10^9.5 M_⊙. Vogelsberger et al. (2014) found the Illustris RS suffers the same problem, albeit at higher masses [M_* > 10^10.5 M_⊙], while also producing too low a red fraction at M_* < 10^11 M_⊙. Trayford et al. (2015) analyzed the colors of EAGLE galaxies, finding that its RS matches that from the GAMA survey (Taylor et al. 2015) for M_r < −20.5, but is too red at fainter magnitudes. Our comparisons build on this work by drawing attention to model treatments of dense environments over cosmic time and [hopefully] incentivize modellers to employ our dataset in future work, especially as they extend their focus towards lower galaxy masses. To this end, the reader is reminded of the parametric fits to the NGVS RS provided in the Appendix.
Naturally, the root of the above discrepancies is tied to errors in the stellar populations of model galaxies. The supplementary material of H15 shows that the model exceeds the mean stellar metallicity of galaxies over the range 10^9.5 < M_* ≲ 10^10 M_⊙ by several tenths of a dex, while undershooting measurements at 10^10.5 < M_* ≲ 10^11 M_⊙ by ∼0.1-0.2 dex. The issues with the H15 RS then seem to reflect shortcomings in both the star formation and chemical enrichment histories of their model galaxies. Part of the disagreement facing Illustris likely stems from the fact that their galaxies have older stellar populations than observed, by as much as 4 Gyr, for M_* ≲ 10^10.5 M_⊙ (Vogelsberger et al. 2014). Schaye et al. (2015) showed that EAGLE produces a flatter stellar mass-metallicity relation than measured from local galaxies due to too much enrichment at M_* ≲ 10^10 M_⊙. Our inspection of the stellar populations in H15 and EAGLE reveals that their cluster-core galaxies, on average, have roughly a constant mass-weighted age [∼10-11 Gyr] and follow a shallow mass-metallicity relation, with EAGLE metallicities exceeding H15 values by ∼0.3 dex7. The discrepant colors produced by models thus reflect errors in both the star formation histories and chemical enrichment of cluster galaxies; for instance, ram-pressure stripping may be too effective in quenching cluster dwarfs of star formation (e.g. Steinhauser et al. 2016).
Two critical aspects of the RS that modellers must aim to reproduce are the flattenings at both bright and faint magnitudes. The former is already a contentious point between the models themselves, with hydro varieties producing a turnover while SAMs increase continuously [Fig. 7]. We remind the reader that our LOWESS curves are too steep for M_g′ ≲ −19, since they essentially represent an extrapolation from intermediate magnitudes; the bright-end flattening is clearly visible in other datasets that span the full cluster and contain more of such galaxies [Fig. 5]. Hydro models appear to supersede SAMs in this regard, although it may be argued that their turnovers are too sharp. In the case of EAGLE, however, it is unclear what causes this turnover, as several of their brightest cluster galaxies are star-forming at z = 0 while their luminosity-metallicity relation inverts for M_g′ ≤ −20.
At present, only SAMs have the requisite depth to check for the flattening seen at the faint end of the RS; the effective resolution of cosmological hydro models is too low to probe the luminosity function to M_g′ ∼ −13. Fig. 7 shows that the H15 RS exhibits no obvious change in slope at faint magnitudes, let alone the pronounced flattening seen in Virgo. The faint-end flattening is a tantalizing feature of the RS that may hold new physical insights into the evolution of low-mass cluster galaxies. Addressing the absence of these features should be a focal point for future refinements of galaxy formation models.
8. CONCLUSIONS
We have used homogeneous isophotal photometry in the u*g′r′i′z′ bands for 404 galaxies belonging to the innermost ∼300 kpc of the Virgo cluster to study the CMD in a dense environment at z = 0, down to stellar masses of ∼10^6 M_⊙. Our main results are:
• The majority of galaxies in Virgo's core populate the RS [red fraction ∼ 0.9];
• The RS has a non-zero slope at intermediate magnitudes [−19 < M_g′ < −14] in all colors, suggesting that stellar age and metallicity both decrease towards lower galaxy masses, and has minimal intrinsic scatter at the faint end;
• The RS flattens at both the brightest and faintest magnitudes [M_g′ < −19 and M_g′ > −14, respectively], where the latter has not been seen before;
• Galaxy formation models produce a shallower RS than observed at intermediate magnitudes, for both Virgo and other nearby clusters. Also, the RS in hydrodynamic models flattens for bright galaxies while that in SAMs varies monotonically over the full range of our dataset.
The flattening of the RS at faint magnitudes raises intriguing possibilities regarding galaxy evolution and/or cluster formation. However, these hinge on whether the flattening genuinely reflects a homogeneity of stellar populations in low-mass galaxies or colors becoming a poor tracer of age/metallicity at low metallicities [e.g. log(Z/Z_⊙) ≲ −1.3]. This issue will be addressed in a forthcoming paper on the stellar populations of NGVS galaxies.
Figure 1. (a) (u* − r′) color versus absolute g′-band magnitude for the 404 galaxies in the core of Virgo. Colored points are purged from our sample of RS candidates due to obvious star formation activity [blue], our completeness limits [grey], significant image contamination [green], or suspected tidal stripping [red]. The vertical lines indicate bins of magnitude referenced in the right-hand panel, with representative errors plotted in each. (b) Color distributions within the four magnitude bins marked at left. The NGVS photometry enables a deep study of the galaxy CMD and we verify that the core of Virgo is highly deficient in star-forming galaxies.
Figure 2. CMDs for quiescent galaxies in Virgo's core, in all ten colors measured by the NGVS. Fluxes have been measured consistently in all five bands within apertures corresponding to the 1.0 R_e,g′ isophote of each galaxy. Black points represent individual galaxies while red lines show non-parametric fits to the data. The RS defines a color-magnitude relation in all colors that flattens at faint magnitudes, which could be explained by a constant mean age and metallicity for the lowest-mass galaxies in this region [albeit with significant scatter; but see Fig. 3]. Representative errors for the same magnitude bins as in Fig. 1 are shown in each panel.
Figure 3. Comparison of the observed scatter [dashed lines] about the RS [solid lines] to photometric errors [dotted lines] established from artificial galaxy tests. The comparison is limited to M_g′ ≳ −15 since our tests did not probe brighter magnitudes. The scatter and errors, averaged within three bins of luminosity, match quite well, especially at the faintest luminosities, suggesting minimal intrinsic scatter in the colors and stellar populations of these galaxies.
Figure 4. RS in (u* − g′) and (g′ − z′), for different sizes of aperture used to measure galaxy colors. All four curves consider the same sample of galaxies. The choice of aperture has an impact on the slope of the RS at M_g′ ≲ −16 mag for (u* − g′), with smaller apertures yielding steeper slopes, while the RS is more stable in (g′ − z′). The shaded envelopes represent the scatter about the RS for the 0.5 R_e,g′ and 3.0 R_e,g′ apertures.
Figure 5. (top) Comparison of the u* − g′ CMD from JL09 for Virgo ETGs [black circles] to that measured here [red dots]. The full sample is plotted for each dataset and LOWESS fits for both are overlaid [solid lines]. Representative errors for the NGVS are included along the bottom. (bottom) As above but restricted to the galaxies common to both samples; measurement pairs are joined with lines. The NGVS extends the CMD for this cluster faintward by ∼5 mag, with much improved photometric errors. We also find that JL09's CMR is steeper than our own at intermediate magnitudes, likely due to their inclusion of systems having recent star formation and possible errors in their sky subtraction.
Figure 6. (top row) u* − g′ CMD and color distributions for galaxies [circles], GCs [dots], UCDs [squares], and galactic nuclei [diamonds] within the core of Virgo. Since our intent is to compare these stellar systems within a common magnitude range, only those CSS having 18 < g′ < 22 are plotted. Representative errors for each population at faint magnitudes are included at bottom-left. (bottom row) As above but for the g′ − i′ color. At faint magnitudes, comparatively red objects are only found amongst the CSS populations; their colors are likely caused by a higher metal content than those for galaxies of the same luminosity.
Figure 7. Comparison of the NGVS RS to those from galaxy formation models, with gray circles marking the positions of the observed galaxies. The shaded region surrounding each model curve indicates the 1σ scatter, measured in five bins of luminosity. Curves for Illustris do not appear in panels showing u*-band colors since their subhalo catalogs lack those magnitudes. In every color, models uniformly predict a shallower slope for the RS than is observed in cluster cores.
Figure 8. Comparison of RS slopes in real (top panel) and model clusters (other panels). The model slopes are measured from those snapshots which most closely bracket the redshift range of the WINGS clusters [0.03 ≤ z ≤ 0.07]. In all cases the typical slope within model clusters is shallower than observed. The dashed line indicates the slope of the RS in Virgo's core.
Figure 9. u* − g′ and g′ − z′ color-mass relations [middle and bottom panels; black lines] versus those predicted by the FSPS stellar population model [colored lines], constrained by the Kirby et al. (2013) mass-metallicity relation [top panel]. Each model relation corresponds to a certain fixed age, ranging between ∼2 Gyr [purple] and ∼15 Gyr [red] in steps of 0.025 dex. Error bars on the NGVS relations represent standard errors in the mean within bins of luminosity.
Figure 10. Parametric fits [green lines] to the RS in Virgo's core, corresponding to the 1.0 R_e,g′ colors of NGVS galaxies. These fits are compared to the data themselves [black points] as well as non-parametric [LOWESS] fits. Points clipped from each fit are shown in blue.
Figure 11. (top) LOWESS fits from Fig. 10, scaled to a common zeropoint at M_g′ ∼ −14. (bottom) Local gradient measured along each RS shown in the top panel using a rolling bin of either 9 [thin line] or 51 [thick line] data points; the former bin size allows us to extend our measurements up to bright galaxies. In all cases, the local gradient begins to flatten in the vicinity of M_g′ ∼ −15.
of Ferrarese et al. 2016a as well]. Furthermore, the JL09 CMR has a lower zeropoint [by ∼0.06] and a shallower slope than the NGVS RS for M g -19, which two-sample t-tests verify as significant [P = 0.01]. The JL09 data also exhibit a flattening of the CMR in the dwarf regime, but at a brighter magnitude than that seen in ours [M g ∼
Table 1. Parameters of the double power-law fit to the NGVS RS.

Color   M_g,0 (mag)   C_0 (mag)   β_1     β_2     α       rms (mag)
u*-g    -13.62        1.552       3.871   0.000   11.51   0.091
u*-r    -13.52        1.040       2.624   0.000   15.98   0.078
u*-i    -13.45        1.787       4.577   0.000   11.81   0.116
u*-z    -13.95        1.927       5.494   0.773   20.73   0.157
g-r     -14.57        0.522       1.036   0.392   57.14   0.047
g-i     -13.81        0.751       1.685   0.578   1333.   0.058
g-z     -13.74        0.852       2.808   0.413   23.39   0.124
r-i     -13.07        0.230       0.735   0.130   96.97   0.050
r-z     -13.40        0.342       1.851   0.000   11.86   0.108
i-z     -14.15        0.107       1.133   0.000   15.92   0.102
Note that the filters used in the NGVS are not identical to those of the Sloan Digital Sky Survey (SDSS; York et al.
2000), with the u * -band being the most different. Unless otherwise stated, magnitudes are expressed in the MegaCam system throughout this paper.
http://www4.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/community/YorkExtinctionSolver/
The timescale associated with environmental quenching appears contentious, with some groups favoring shorter values (<2 Gyr; Boselli & Gavazzi 2014, and references therein; Haines et al. 2015) and others longer (several Gyr; e.g. Balogh et al. 2000; De Lucia et al. 2012; Taranu et al. 2014).
While the scatter in the JL09 data is likely dominated by the shallower depth of the SDSS imaging, a contribution by distance uncertainties cannot be ruled out, since the Virgo Cluster Catalog spans several sub-groups whose relative distances can exceed 10 Mpc (Mei et al. 2007).
Virgo comprises two large sub-clusters and several massive groups, such that its bright galaxies are spread throughout the cluster.
APPENDIX
Here we present parametric fits for the RS in Virgo's core based on the colors of our galaxies within 1.0 R e,g . Our purpose is to enable the wider community, particularly modellers, to compare our results to their own through simple [continuous] fitting functions. Motivated by the non-parametric fits in Fig. 2, we choose a double power-law to describe the shape of the RS; we acknowledge that this choice is made strictly on a phenomenological basis and lacks physical motivation. This function is parameterized as,
where β 1 and β 2 represent the asymptotic slopes towards bright and faint magnitudes, respectively, while M g ,0 and C 0 correspond to the magnitude and color of the transition point between the two power-laws, and α reflects the sharpness of the transition.
We fit Equation 1 to our data through an iterative non-linear optimization of χ² following the L-BFGS-B algorithm (Byrd et al. 1995; Zhu et al. 1997), restricting α, β_1, and β_2 to positive values, and M_g,0 and C_0 to lie in the respective ranges [-20, -8] and [0, 20]. At each iteration, >3σ outliers are clipped from each CMD; doing so allows the fits to better reproduce our LOWESS curves. We generally achieve convergence after 5-6 iterations while the fraction of clipped points is <10% in all cases.
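A minimal Python sketch of this fitting procedure is given below. Equation 1 itself was lost in extraction, so the functional form used here (a softplus-smoothed broken-linear relation between color and magnitude, whose parameters play the roles of M_g,0, C_0, β_1, β_2 and α described above) is an assumption rather than the published form; the sigma-clipping loop and the L-BFGS-B bounds follow the text.

import numpy as np
from scipy.optimize import minimize

def double_power_law(M, Mg0, C0, b1, b2, alpha):
    # slope tends to b1 for M << Mg0 and to b2 for M >> Mg0;
    # alpha controls the sharpness of the transition at (Mg0, C0)
    x = M - Mg0
    return C0 + b1 * x + (b2 - b1) * np.logaddexp(0.0, alpha * x) / alpha

def fit_rs(M, color, n_iter=6):
    theta = np.array([-14.0, np.median(color), 1.0, 0.1, 10.0])  # rough initial guess
    bounds = [(-20.0, -8.0), (0.0, 20.0), (0.0, None), (0.0, None), (0.0, None)]
    mask = np.ones(M.size, dtype=bool)
    for _ in range(n_iter):
        chi2 = lambda t: np.sum((color[mask] - double_power_law(M[mask], *t)) ** 2)
        theta = minimize(chi2, theta, method="L-BFGS-B", bounds=bounds).x
        resid = color - double_power_law(M, *theta)
        mask = np.abs(resid) < 3.0 * np.std(resid[mask])  # clip > 3-sigma outliers
    return theta, mask

Note that for large best-fit α, as found for most colors in Table 1, this smooth form reduces to a nearly sharp break between the two asymptotic slopes.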
Our power-law fits [green curves] are compared to the data [black points] and LOWESS fits [red curves] in Figure 10, while clipped data are represented by the blue points. The best-fit parameters are summarized in Table 1, where the final column lists the rms of each fit. Inspection of the rms values and the curves themselves indicates that our parametric fits do well in tracing the shape of the RS.
01567309 | en | ["phys.cond.cm-sm"] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01567309/file/1707.06632.pdf

Michele Starnini
email: [email protected]
Bruno Lepri
email: [email protected]
Andrea Baronchelli
email: [email protected]
Alain Barrat
email: [email protected]
Ciro Cattuto
email: [email protected]
Romualdo Pastor-Satorras
email: [email protected]
Robust modeling of human contact networks across different scales and proximity-sensing techniques
Keywords: Social Computing, Computational Social Science, Social Network Analysis, Mobile Sensing, Mathematical Modeling, Wearable Sensors
The problem of mapping human close-range proximity networks has been tackled using a variety of technical approaches. Wearable electronic devices, in particular, have proven to be particularly successful in a variety of settings relevant for research in social science, complex networks and infectious diseases dynamics. Each device and technology used for proximity sensing (e.g., RFIDs, Bluetooth, low-power radio or infrared communication, etc.) comes with specific biases on the close-range relations it records. Hence it is important to assess which statistical features of the empirical proximity networks are robust across different measurement techniques, and which modeling frameworks generalize well across empirical data. Here we compare time-resolved proximity networks recorded in different experimental settings and show that some important statistical features are robust across all settings considered. The observed universality calls for a simplified modeling approach. We show that one such simple model is indeed able to reproduce the main statistical distributions characterizing the empirical temporal networks.
Introduction
Being social animals by nature, most of our daily activities involve face-to-face and proximity interactions with others. Although technological advances have enabled remote forms of communication such as calls, video-conferences, e-mails, etc., several studies [START_REF] Whittaker | Informal workplace communication: What is it like and how might we support it?[END_REF][START_REF] Nardi | The place of face to face communication in distributed work[END_REF] and the constant increase in business traveling provide evidence that co-presence and face-to-face interactions still represent the richest communication channel for informal coordination [START_REF] Kraut | Informal communication in organizations: Form, function, and technology[END_REF], socialization and creation of social bonds [START_REF] Kendon | Organization of Behavior in Face-to-Face Interaction[END_REF][START_REF] Storper | Buzz: Face-to-face contact and the urban economy[END_REF], and the exchange of ideas and information [START_REF] Doherty-Sneddon | Face-to-face and video-mediated communication: A comparison of dialogue structure and task performance[END_REF][START_REF] Nohria | Face-to-face: Making network organizations work[END_REF][START_REF] Wright | The associations between young adults' face-to-face prosocial behaviorsand their online prosocial behaviors[END_REF]. At the same time, close-range physical proximity and face-to-face interactions are known determinants for the transmission of some pathogens such as airborne ones [START_REF] Liljeros | The web of human sexual contacts[END_REF][START_REF] Salathé | A high-resolution human contact network for infectious disease transmission[END_REF]. A quantitative understanding of human dynamics in social gatherings is therefore important not only to understand human behavior, creation of social bonds and flow of ideas, but also to design effective containment strategies and contrast epidemic spreading [START_REF] Starnini | Immunization strategies for epidemic processes in time-varying contact networks[END_REF][START_REF] Smieszek | A low-cost method to assess the epidemiological importance of individuals in controlling infectious disease outbreaks[END_REF][START_REF] Gemmetto | Mitigation of infectious disease at school: targeted class closure vs school closure[END_REF].
Hence, face-to-face and proximity interactions have long been the focus of major attention in social sciences and epidemiology [START_REF] Bales | Interaction process analysis: A method for the study of small groups[END_REF][START_REF] Arrow | Small groups as complex systems: Formation, coordination, development, and adaptation[END_REF][START_REF] Bion | Experiences in groups and other papers[END_REF][START_REF] Eames | Six challenges in measuring contact networks for use in modelling[END_REF] and recently various research groups have developed sensing devices and approaches to automatically measure these interaction networks [START_REF] Eagle | Reality mining: sensing complex social systems[END_REF][START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF][START_REF] Salathé | A high-resolution human contact network for infectious disease transmission[END_REF][START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF][START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF][START_REF] Stopczynski | Measuring large-scale social networks with high resolution[END_REF][START_REF] Toth | The role of heterogeneity in contact timing and duration in network models of influenza spread in schools[END_REF]. Reality Mining (RM) [START_REF] Eagle | Reality mining: sensing complex social systems[END_REF], a study conducted in 2004 by the MIT Media Lab, was the first one to collect data from mobile phones to track the dynamics of a community of 100 business school students over a nine-month period. Following this seminal project, the Social Evolution study [START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Madan | Sensing the "health state" of a community[END_REF] tracked the everyday life of a whole undergraduate dormitory for almost 8 months using mobile phones (i.e. call logs, location data, and proximity interactions). This study was specifically designed to model the adoption of political opinions, the spreading of epidemics, the effect of social interactions on depression and stress, and the eating and physical exercise habits. More recently, in the Friends and Family study 130 graduate students and their partners, sharing the same dormitory, carried smartphones running a mobile sensing platform for 15 months [START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF]. Additional data were also collected from Facebook, credit card statements, surveys including questions about personality traits, group affiliations, daily mood states and sleep quality, etc.
Along similar lines, the SocioPatterns (SP) initiative [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF][START_REF] Isella | Close encounters in a pediatric ward: Measuring face-to-face proximity and mixing patterns with wearable sensors[END_REF] and the Sociometric Badges projects [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF][START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF][START_REF] Onnela | Using sociometers to quantify social interaction patterns[END_REF] have been studying for several years the proximity patterns of human gatherings, in different social contexts, such as scientific conferences [START_REF] Stehlé | Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees[END_REF], museums [START_REF] Van Den Broeck | The making of sixty-nine days of close encounters at the science gallery[END_REF], schools [START_REF] Stehlé | High-resolution measurements of face-to-face contact patterns in a primary school[END_REF][START_REF] Fournet | Contact patterns among high school students[END_REF], hospitals [START_REF] Isella | Close encounters in a pediatric ward: Measuring face-to-face proximity and mixing patterns with wearable sensors[END_REF] and research institutions [START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF] by endowing participants with active RFID badges (SocioPatterns initiative) or with devices equipped with accelerometers, microphones, Bluetooth and Infrared sensors (Sociometric Badges projects) which capture body movements, prosodic speech features, proximity, and face-to-face interactions respectively.
However, the different technologies (e.g., RFID, Bluetooth, Infrared sensors) employed in these studies might imply potentially relevant differences in measuring contact networks. Interaction range and the angular width for detecting contacts, for instance, vary in a significant way, from less than 1 meter using Infrared sensors to more than 10 meters using Bluetooth sensors, and from 15 degrees using Infrared sensors to 360 degrees using Bluetooth sensors. In many cases, data cleaning and post-processing are based on calibrated power thresholds, temporal smoothing, and other assumptions that introduce their own biases. Finally, experiments themselves are diverse in terms of venue (from conferences to offices), size (from N ∼ 50 to N ∼ 500 individuals), duration (from a single day to several months) and temporal resolution. The full extent to which the measured proximity networks depend on experimental and data-processing techniques is challenging to assess, as no studies, to the best of our knowledge, have tackled a systematic comparison of different proximity-sensing techniques based on wearable devices.
Here we tackle this task, showing that empirical proximity networks measured in a variety of social gatherings by means of different measurement systems yield consistent statistical patterns of human dynamics, so we can assume that such regularities capture intrinsic properties of human contact networks. The presence of such apparently universal behavior, independent of the measurement framework and details, calls, within a statistical physics perspective, for an explanatory model, based on simple assumptions on human behavior. Indeed, we show that a simple multi-agent model [START_REF] Starnini | Modeling human dynamics of face-to-face interaction networks[END_REF][START_REF] Starnini | Model reproduces individual, group and collective dynamics of human contact networks[END_REF] accurately reproduces the statistical regularities observed across different social contexts.
Related Work
The present study takes inspiration from the emerging body of work investigating the possibilities of analyzing proximity and face-to-face interactions using different kinds of wearable sensors. At present, mobile phones allow the collection of data on specific structural and temporal aspects of social interactions, offering ways to approximate social interactions as spatial proximity or as the co-location of mobile devices, e.g., by means of Bluetooth hits [START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Dong | Modeling the co-evolution of behaviors and social relationships using mobile phone data[END_REF][START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF][START_REF] Madan | Sensing the "health state" of a community[END_REF][START_REF] Stopczynski | Measuring large-scale social networks with high resolution[END_REF]. For example, Do and Gatica Perez have proposed several topic models for capturing group interaction patterns from Bluetooth proximity networks [START_REF] Do | Human interaction discovery in smartphone proximity networks[END_REF][START_REF] Do | Inferring social activities with mobile sensor networks[END_REF]. However, this approach does not always yield good proxies to the social interactions occurring between the individuals carrying the devices.
Mobile phone traces suffer a similar problem: They can be used to model human mobility [START_REF] Gonzaléz | Understanding individual human mobility patterns[END_REF][START_REF] Blondel | A survey of results on mobile phone datasets analysis[END_REF] with the great advantage of easily scaling up to millions of individuals; they too, however, offer only coarse localization and therefore provide only rough co-location information, yielding thus only very limited insights into the social interactions of individuals.
An alternative strategy for collecting data on social interactions is to resort to image and video processing based on cameras placed in the environment [START_REF] Cristani | Social interaction discovery by statistical analysis of F-formations[END_REF][START_REF] Staiano | Salsa: A novel dataset for multimodal group behavior analysis[END_REF]. This approach provides very rich data sets that are, in turn, computationally very complex: They require line-of-sight access to the monitored spaces and people, specific effort for equipping the relevant physical spaces, and can hardly cope with large scale data.
Since 2010, Cattuto et al. [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF] have used a technique for monitoring social interactions that reconciles scalability and resolution by means of proximitysensing systems based on active RFID devices. These devices are capable of sensing spatial proximity over different length scales and even close face-to-face interactions of individuals (1 to 2m), with tunable temporal resolution. The So-cioPatterns initiative has collected and analyzed face-to-face interaction data in many different contexts. These analyses have shown strong heterogeneities in the contact duration of individuals, the robustness of these statistics across contexts, and have revealed highly non-trivial mixing patterns of individuals in schools, hospitals or offices as well as their robustness across various timescales [START_REF] Stehlé | High-resolution measurements of face-to-face contact patterns in a primary school[END_REF][START_REF] Isella | Close encounters in a pediatric ward: Measuring face-to-face proximity and mixing patterns with wearable sensors[END_REF][START_REF] Isella | What's in a crowd? Analysis of face-to-face behavioral networks[END_REF][START_REF] Fournet | Contact patterns among high school students[END_REF][START_REF] Gnois | Data on face-to-face contacts in an office building suggest a low-cost vaccination strategy based on community linkers[END_REF]. These data have been used in data-driven simulations of epidemic spreading phenomena, including the design and validation of containment measures [START_REF] Gemmetto | Mitigation of infectious disease at school: targeted class closure vs school closure[END_REF].
Along a similar line, Olguin Olguin et al. [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF] have designed and employed Sociometric Badges, platforms equipped with accelerometers, microphones, Bluetooth and Infrared sensors which capture body movements, prosodic speech features, proximity and face-to-face interactions respectively. Some previous studies based on Sociometric Badges revealed important insights into human dynamics and organizational processes, such as the impact of electronic communications on the business performance of teams [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF], the relationship between several behavioral features captured by Sociometric Badges, employee' self-perceptions (from surveys) and productivity [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF], the spreading of personality and emotional states [START_REF] Alshamsi | Beyond contagion: Reality mining reveals complex patterns of social influence[END_REF].
Empirical data
In this section, we describe datasets gathered by five different studies: The "Lyon hospital" and "SFHH" conference datasets from the SocioPatterns (SP) initiative [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF], the Trento Sociometric Badges (SB) dataset [START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF], the Social Evolution (SE) dataset [START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Madan | Sensing the "health state" of a community[END_REF], the Friends and Family (FF) [START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF] dataset, and two datasets (Elem and Mid) collected using wireless ranging enabled nodes (WRENs) [START_REF] Toth | The role of heterogeneity in contact timing and duration in network models of influenza spread in schools[END_REF]. The main statistical properties of datasets under consideration are summarized in Table 1, while the settings of the studies are described in detail in the following subsections.
SocioPatterns (SP)
The measurement infrastructure set up by the SP initiative is based on wireless devices embedded in badges, worn by the participants on their chests. Devices exchange radio packets and use them to monitor for proximity of individuals (RFID). Information is sent to receivers installed in the environment, logging contact data. They are tuned so that the face-to-face proximity of two individuals wearing the badges is sensed only when they are facing each other at close range (about 1 to 1.5 m). The time resolution is set to 20 seconds, meaning that a contact between two individuals is considered established if their badges exchange at least one packet during such an interval, and lasts as long as there is at least one packet exchanged over subsequent 20-second time windows. More details on the experimental setup can be found in Ref. [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF]. Here we consider the dataset "Hospital", gathered by the SP initiative at a Lyon hospital during 4 workdays, and the dataset "SFHH", gathered by the SP initiative at the congress of the Société Française d'Hygiène Hospitalière, where the experiment was conducted during the first day of a two-day conference. See Ref. [START_REF] Stehlé | Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees[END_REF] for a detailed description.
Sociometric Badges (SB)
The Sociometric Badges data [START_REF] Lepri | The SocioMetric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations[END_REF] have been collected in a research institute over six consecutive weeks, involving a population of 54 subjects during their working hours. The Sociometric Badges employed for this study are equipped with accelerometers, microphones, Bluetooth and Infrared sensors which capture body movements, prosodic speech features, co-location and face-to-face interactions respectively [START_REF] Olguín Olguín | Sensible organizations: Technology and methodology for automatically measuring organizational behavior[END_REF]. For the purposes of our study we have exploited the data provided by the Bluetooth and Infrared sensors.
Infrared Data

Infrared (IR) transmissions are used to detect face-to-face interactions between people. In order for a badge to be detected by an IR sensor, two individuals must have a direct line of sight and the receiving badge's sensor must be within the transmitting badge's IR signal cone of height h ≤ 1 meter and a radius of r ≤ h tan θ, where θ = ±15 degrees. The infrared transmission rate (TR_ir) was set to 1 Hz.
Bluetooth Data

Bluetooth (BT) detections can be used as a coarse indicator of proximity between devices. Radio signal strength indicator (RSSI) is a measure of the signal strength between transmitting and receiving devices. The range of RSSI values for the radio transceiver in the badge is (-128 dBm, 127 dBm). The Sociometric Badges broadcast their ID every five seconds using a 2.4 GHz transceiver (TR_radio = 12 transmissions per minute).
Social Evolution (SE)
The Social Evolution dataset was collected as part of a longitudinal study with 74 undergraduate students uniformly distributed among all four academic years (freshmen, sophomores, juniors, seniors). Participants in the study represent 80% of the residents of a dormitory at the campus of a major university in North America. The study participants were equipped with a smartphone (i.e. a Windows Mobile device) incorporating a sensing platform designed for collecting call logs, location and proximity data. Specifically, the software scanned for Bluetooth wireless devices in proximity every six minutes, a compromise between short-term social interactions and battery life [START_REF] Eagle | Inferring friendship network structure by using mobile phone data[END_REF]. With this approach, the BT log of a given smartphone would contain the list of devices in its proximity, sampled every six minutes.
Participants used the Windows Mobile smartphones as their primary phones, with their existing voice plans. Students also had online data access with these phones due to pervasive Wi-Fi on the university campus and in the metropolitan area. As compensation for their participation, students were allowed to keep the smartphones at the end of the experiment. Although relevant academic and extra-curricular activities might not have been covered, either because the mobile phones may not be permanently on (e.g., during classes) or because of contacts with people not taking part in the study, the dormitory may still represent the preferential place where students live, cook, and sleep. Additional information on the SE study is available in Madan et al. [START_REF] Madan | Social sensing for epidemiological behavior change[END_REF][START_REF] Madan | Sensing the "health state" of a community[END_REF].
Friends and Family (FF)
The Friends and Family dataset was collected during a longitudinal study capturing the lives of 117 subjects living in a married graduate student residency of a major US university [START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF]. The sample of subjects shows a large variety in terms of provenance and cultural background. During the study period, each participant was equipped with an Android-based mobile phone incorporating a sensing software explicitly designed for collecting mobile data. Such software runs in a passive manner and does not interfere with the everyday usage of the phone.
Proximity interactions were derived from Bluetooth data in a manner similar to previous studies such as [START_REF] Eagle | Reality mining: sensing complex social systems[END_REF][START_REF] Madan | Social sensing for epidemiological behavior change[END_REF]. Specifically, the Funf phone sensing platform was used to detect Bluetooth devices in the participant's proximity. The Bluetooth scan was performed periodically, every five minutes in order to keep from draining the battery while achieving a high enough resolution for social interactions. With this approach, the BT log of a given smartphone would contain the list of devices in its proximity, sampled every 5 minutes. See Ref. [START_REF] Aharony | Social fmri: Investigating and shaping social mechanisms in the real world[END_REF] for a detailed description of the study.
Toth et al. datasets (Toth et al.)
The datasets, publicly available, were collected by Toth et al. [START_REF] Toth | The role of heterogeneity in contact timing and duration in network models of influenza spread in schools[END_REF] deploying wireless ranging enabled nodes (WRENs) [START_REF] Forys | Wrenmining: large-scale data collection for human contact network research[END_REF] to students in Utah schools. Each WREN was worn by a student and collected time-stamped data from other WRENs in proximity at intervals of approximately 20 seconds. Each recording included a measure of signal strength, which depends on the distance between and relative orientation of the pair of individuals wearing each WREN. More specifically, Toth et al. [START_REF] Toth | The role of heterogeneity in contact timing and duration in network models of influenza spread in schools[END_REF] have applied signal strength criteria such that each retained data point was most likely to represent a pair of students, with face-to-face orientation, located 1 meter from each other.
In the current paper, we resort to the data collected from two schools in Utah: one middle school (Mid), an urban public school with 679 students (age range 12-14); and one elementary school (Elem), a suburban public school with 476 students (age range 5-12). The contact data were captured during school hours of two consecutive school days in autumn 2012 from 591 students (87% coverage) at Mid and in winter 2013 from 339 students (71% coverage) at Elem.

Table 1. Some average properties of the datasets under consideration. SP-hosp = "SocioPatterns Lyon hospital", SP-sfhh = "SocioPatterns SFHH conference", SB = "Sociometric Badges", SE = "Social Evolution", FF = "Friends and Family", Elem = "Toth's Elementary school", Mid = "Toth's Middle school".
Temporal network formalism
Proximity patterns can be naturally analyzed in terms of temporally evolving graphs [START_REF] Holme | Temporal networks[END_REF][START_REF] Holme | Modern temporal network theory: a colloquium[END_REF], whose nodes are defined by the individuals, and whose links represent interactions between pairs of individuals. Interactions need to be aggregated over an elementary time interval ∆t 0 in order to build a temporal network [START_REF] Ribeiro | Quantifying the effect of temporal resolution on time-varying networks[END_REF]. This elementary time step represents the temporal resolution of data, and all the interactions established within this time interval are considered as simultaneous. Taken together, these interactions constitute an "instantaneous" network, formed by isolated nodes and small groups of interacting individuals (not necessarily forming cliques). The sequence of such instantaneous networks forms a temporal, or time-varying, network. The elementary time step ∆t 0 is set to ∆t 0 = 20 seconds in the case of SP data, ∆t 0 = 60 seconds for SMBC data, ∆t 0 = 300 seconds for SE and FF data, and ∆t 0 = 20 seconds for Toth et al. datasets. Note that temporal networks are built by including only non-empty instantaneous graphs, i.e. graphs in which at least a pair of nodes are connected.
Each data set is thus represented by a temporal network with a number N of different interacting individuals, and a total duration of T elementary time steps. Temporal networks can be described in terms of a characteristic function χ(i, j, t) taking the value 1 when individuals i and j are connected at time t, and zero otherwise [START_REF] Starnini | Random walks on temporal networks[END_REF]. Integrating the information of the time-varying network over a given time window T produces an aggregated weighted network, where the weight w_ij between nodes i and j represents the total temporal duration of the contacts between agents i and j, $w_{ij} = \sum_t \chi(i, j, t)$, and the strength s_i of a node i, $s_i = \sum_j w_{ij}$, represents the cumulated time spent in interactions by individual i. In Table 1 we summarize a number of significant statistical properties, such as the size N, the total duration T in units of elementary time steps ∆t_0, and the average fraction of individuals interacting at each time step, p. We also report the average degree, $\langle k \rangle$, defined as the average number of interactions per individual, and the average strength, $\langle s \rangle = N^{-1} \sum_i s_i$, of the aggregated networks, integrated over the whole sequence. One can note that the data sets under consideration are highly heterogeneous in terms of the reported statistical properties. Aggregated network representations preserve such heterogeneity, even though it is important to remark that aggregated properties are sensitive to the time-aggregating interval [START_REF] Ribeiro | Quantifying the effect of temporal resolution on time-varying networks[END_REF] and therefore to the specificity of data collection and preprocessing.
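As a concrete illustration of this bookkeeping, the following Python sketch computes the aggregated weights and strengths from a temporal contact list; the (t, i, j) tuple format is an assumption about how the raw contact data are stored.

from collections import defaultdict

def aggregate(contacts):
    """contacts: iterable of (t, i, j) tuples, one per elementary time
    step in which chi(i, j, t) = 1."""
    w = defaultdict(int)   # w[(i, j)]: total contact duration of the pair (i, j)
    s = defaultdict(int)   # s[i]: strength, cumulated interaction time of node i
    for t, i, j in contacts:
        pair = (min(i, j), max(i, j))
        w[pair] += 1
        s[i] += 1
        s[j] += 1
    return w, s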
Comparison among the different datasets
In this section we perform a comparison of several statistical properties of the temporal networks, as defined above, representing the different datasets under consideration.
The temporal pattern of the agents' contacts is probably the most distinctive feature of proximity interaction networks. We therefore start by considering the distribution of the durations ∆t of the contacts between pairs of agents, P(∆t), and the distribution of gap times τ between two consecutive proximity events involving a given individual, P(τ). The bursty dynamics of human interactions [START_REF] Barabasi | The origin of bursts and heavy tails in human dynamics[END_REF] is revealed by the long-tailed form of these two distributions, which can be described in terms of a power-law function. Figures 1 and 2 show the distribution of the contact durations P(∆t) and gap times P(τ) for the various sets of empirical data. In both cases, all datasets show a broad-tailed behavior that can be loosely described by a power law distribution. In Figures 1 and 2 we plot, as a guide for the eye, power-law forms $P(\Delta t) \sim \Delta t^{-\gamma_{\Delta t}}$, with exponent $\gamma_{\Delta t} \simeq 2.5$, and $P(\tau) \sim \tau^{-\gamma_{\tau}}$, with exponent $\gamma_{\tau} \simeq 2.1$, respectively.

Fig. 2. Probability distribution of the gap times τ between consecutive contacts of pairs of agents, P(τ), for the different datasets under consideration, compared with numerical simulations of the attractiveness model. A power law form, $P(\tau) \sim \tau^{-\gamma_{\tau}}$, with $\gamma_{\tau} = 2.1$, is plotted as a reference in dashed line.
The probability distributions of strength, P (s), and weight, P (w), are a signature of the topological structure of the corresponding aggregated, weighted networks. Since the duration T of the datasets under consideration is quite heterogeneous, see Table 1, we do not reconstruct the aggregated networks by integrating over the whole duration T , but we integrate each temporal network over a time window of fixed length, ∆T = 1000 elementary time steps. That is, we consider a random starting time T 0 (provided that T 0 < T -∆T ), and reconstruct an aggregated network by integrating the temporal network from T 0 to T 0 + ∆T . We average our results by sampling 100 different starting times. Note that, since the elementary time step ∆t 0 is different across different experiments, the real duration of the time window considered is different across different datasets.
Figs. 3 and 4 show the weight and strength distributions, P(w) and P(s), of the aggregated networks over ∆T, for the considered datasets. Again, all datasets display a similar heavy-tailed weight distribution, roughly compatible with a power-law form, meaning that the heterogeneity shown in the broad-tailed form of the contact duration distribution, P(∆t), persists also over longer time scales. Data sets SB-BT, SE and FF present deviations with respect to the other data sets. The strength distribution P(s) is also broad-tailed and quite similar for all data sets considered, but in this case it is not compatible with a power law.
Finally, Fig. 5 shows the average strength as a function of the degree, s(k), in the aggregated networks integrated over an interval ∆T. One can see that if the strength is rescaled by the total strength of the network in the considered time window, $\langle s \rangle = N^{-1} \sum_{t=T_0}^{T_0+\Delta T} \sum_{ij} \chi(i, j, t)$, the different data sets show a similar correlation between strength and degree. In particular, Fig. 5 shows that all data sets considered present a slightly superlinear correlation between strength and degree, $s(k) \sim k^{\gamma}$ with γ > 1, as highlighted by the linear correlation plotted as a dashed line.
Modeling human contact networks
In the previous Section, we have shown that the temporal networks representing different datasets, highly heterogeneous in terms of size, duration, proximity-sensing techniques, and social contexts, are characterized by very similar statistical properties. Here we show that a simple model, in which individuals are endowed with different social attractiveness, is able to reproduce the empirical distributions.
Model definition
The social contexts in which the data were collected can be modeled by a set of N mobile agents free to move in a closed environment, who interact when they are close enough (within the exchange range of the devices) [START_REF] Starnini | Modeling human dynamics of face-to-face interaction networks[END_REF]. The simplifying assumption of the model proposed in [START_REF] Starnini | Modeling human dynamics of face-to-face interaction networks[END_REF] is that the agents perform a random walk in a box of linear size L with periodic boundary conditions (the average density is ρ = N/L²). Whenever two agents are within distance d (with d ≪ L), they start to interact. The key ingredient of the model is that each agent is characterized by an "attractiveness", a_i, a quenched random number, extracted from a distribution η(a), representing her power to raise interest in the others, which can be thought of as a proxy for social status or the role played in the considered social gathering. Attractiveness rules the interactions between agents in a natural way: whenever an individual is involved in an interaction with other peers, she will continue to interact with them with a probability proportional to the attractiveness of her most interesting neighbor, or move away otherwise. Finally, the model incorporates the empirical evidence that not all agents are simultaneously present in the system: individuals can be either in an active state, where they can move and establish interactions, or in an inactive one representing absence from the premises. Thus, at each time step, every active individual becomes inactive with a constant probability r, while inactive individuals can go back to the active state with the complementary probability 1 - r. See Refs. [START_REF] Starnini | Modeling human dynamics of face-to-face interaction networks[END_REF][START_REF] Starnini | Model reproduces individual, group and collective dynamics of human contact networks[END_REF] for a detailed description of the model.
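A compact simulation sketch of this model is given below; the uniform η(a), the unit step length, and the parameter values are illustrative assumptions, while the retention rule and the activation dynamics follow the description above.

import numpy as np

rng = np.random.default_rng(1)
N, L, d, v, r, T = 100, 50.0, 1.0, 1.0, 0.1, 5000
a = rng.uniform(0.0, 1.0, N)          # quenched attractiveness, eta(a) uniform here
pos = rng.uniform(0.0, L, (N, 2))
active = np.ones(N, dtype=bool)

for t in range(T):
    # activation dynamics: active -> inactive with prob. r, inactive -> active with 1 - r
    u = rng.random(N)
    active = np.where(active, u > r, u < 1.0 - r)
    # interacting pairs: active agents closer than d (periodic boundary conditions)
    delta = np.abs(pos[:, None, :] - pos[None, :, :])
    delta = np.minimum(delta, L - delta)
    close = np.hypot(delta[..., 0], delta[..., 1]) < d
    close &= active[:, None] & active[None, :]
    np.fill_diagonal(close, False)
    # an agent stays with probability set by its most attractive neighbor
    a_max = np.where(close, a[None, :], 0.0).max(axis=1)
    moves = active & (rng.random(N) >= a_max)        # isolated agents always move
    ang = rng.uniform(0.0, 2.0 * np.pi, N)
    step = v * np.column_stack((np.cos(ang), np.sin(ang)))
    pos[moves] = (pos[moves] + step[moves]) % L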
Model validation
Here we contrast the results obtained by the numerical simulation of the model against empirical data sets. We average our results over 100 runs with parameters N = 100, L = 50, T = 5000. The results of numerical experiments are reported in Figs. 1 to 5, for the corresponding quantities considered, represented by a continuous, blue line.
In the case of the contact duration distribution P(∆t), Fig. 1, numerical and experimental data show a remarkable match, with some deviations for the SB-BT and FF datasets. Numerical data also follow closely the mentioned power-law distribution with exponent γ_∆t = 2.5. Also in the case of the gap times distribution P(τ), Fig. 2, the distribution obtained by numerical simulations of the model is very close to the experimental ones, spanning the same orders of magnitude. The weight distribution P(w) of the model presents a very good fit to the empirical data, see Fig. 3, with the exception of data sets SB-BT, SE and FF, as mentioned above. The strength distribution P(s), Fig. 4, is, as we have commented above, quite noisy, especially for the datasets of smallest size. It follows however a similar trend across the different datasets that is well matched by numerical simulations of the model. Finally, in the case of the average strength of individuals of degree k, s(k), Fig. 5, the most striking feature, namely the superlinear behavior as a function of k, is correctly captured by the numerical simulations of the model.
Discussion
All datasets under consideration show similar statistical properties of the individuals' contacts. The distribution of the contact durations, P(∆t), and the inter-event time distribution, P(τ), are heavy tailed and compatible with power-law forms, and the attractiveness model is able to quantitatively reproduce such behavior. The weight distribution of the aggregated networks, P(w), is also heavy tailed for all datasets and for the attractiveness model, even though some datasets show deviations. The strength distribution P(s) and the correlation between strength and degree, s(k), present a quite noisy behavior, especially for smaller datasets. However, all datasets show a long-tailed form of P(s) and a superlinear correlation of the s(k), correctly reproduced by the attractiveness model.
Previous works [START_REF] Cattuto | Dynamics of person-to-person interactions from distributed RFID sensor networks[END_REF][START_REF] Isella | What's in a crowd? Analysis of face-to-face behavioral networks[END_REF][START_REF] Fournet | Contact patterns among high school students[END_REF] have shown that the functional shapes of the distributions of contact and inter-contact durations were very robust across contexts, for data collected by the SocioPatterns infrastructure as well as by similar RFID sensors. Our results show that this robustness extends in fact to proximity data collected through different types of sensors (e.g., Bluetooth, Infrared, WREN, RFID). This is of particular relevance in the context of modeling human behavior and building data-driven models depending on human interaction data, such as models for the spread of infectious diseases, from two points of view. On the one hand, the robust broadness of these distributions implies that different contacts might play very different roles in a transmission process: under the common assumption that the transmission probability between two individuals depends on their time in contact, the longer contacts, which are orders of magnitude longer than average, could play a crucial role in disease dynamics. The heterogeneity of contact patterns is also relevant at the individual level, as revealed by broad distributions of strengths and the superlinear behavior of s(k), and is known to have a strong impact on spreading dynamics. In particular, it highlights the existence of "super-contactors", i.e. individuals who account for an important proportion of the overall contact durations and may therefore become super-spreaders in the case of an outbreak. On the other hand, the robustness of the distributions found in different contexts represents an important piece of information and an asset for modelers: it means that these distributions can be assumed to depend negligibly on the specifics of the situation being modeled and can thus be directly plugged into the models to create for instance synthetic populations of interacting agents. From another modeling point of view, they also represent a validation benchmark for microscopic models of interactions, which should correctly reproduce such robust features. In fact, as we have shown, a simple model based on mobile agents, and on the concept of social appeal or attractiveness, is able to reproduce most of the main statistical properties of human contact temporal networks. The good fit of this model hints towards the fact that the temporal patterns of human contacts at different time scales can be explained in terms of simple physical processes, without assuming any cognitive processes at work.
It would be of interest to measure and compare several other properties of the contact networks, such as the evolution of the integrated degree distribution P_T(k) and of the aggregated average degree $\langle k(T) \rangle$, or the rate at which the contact neighborhoods of individuals change. Unfortunately, these quantities are difficult to measure in some cases due to the small sizes of the datasets.
Fig. 1. Probability distribution of the duration ∆t of the contacts between pairs of agents, P(∆t), for the different datasets under consideration, compared with numerical simulations of the attractiveness model. A power law form, $P(\Delta t) \sim \Delta t^{-\gamma_{\Delta t}}$, with $\gamma_{\Delta t} = 2.5$, is plotted as a reference in dashed line.
Fig. 3. Weight distribution P(w), for the different datasets under consideration, compared with numerical simulations of the attractiveness model.
Fig. 4. Strength distribution P(s), for the different datasets under consideration, compared with numerical simulations of the attractiveness model.
Fig. 5. Strength as a function of the degree, s(k), for the different datasets under consideration, compared with numerical simulations of the attractiveness model. A linear correlation s(k) ∼ k is plotted in dashed line, to highlight the superlinear correlation observed in data and model.
Acknowledgments
M.S. acknowledges financial support from the James S. McDonnell Foundation. R.P.-S. acknowledges financial support from the Spanish MINECO, under projects FIS2013-47282-C2-2 and FIS2016-76830-C2-1-P, and additional financial support from ICREA Academia, funded by the Generalitat de Catalunya. C.C. acknowledges support from the Lagrange Laboratory of the ISI Foundation funded by the CRT Foundation.
01698252 | en | ["phys.cond.cm-sm"] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01698252/file/1710.05589.pdf

Antoine Moinet
Romualdo Pastor-Satorras
Alain Barrat
Effect of risk perception on epidemic spreading in temporal networks
I. INTRODUCTION
The propagation patterns of an infectious disease depend on many factors, including the number and properties of the different stages of the disease, the transmission and recovery mechanisms and rates, and the hosts' behavior (e.g., their contacts and mobility) [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF][START_REF] Anderson | Infectious diseases of humans: dynamics and control[END_REF]. Given the inherent complexity of a microscopic description taking into account all details, simple models are typically used as basic mathematical frameworks aiming at capturing the main characteristics of the epidemic spreading process and in particular at understanding if and how strategies such as quarantine or immunization can help contain it. Such models have been developed with increasing levels of sophistication and detail in the description of both the disease evolution and the behaviour of the host population [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF][START_REF] Anderson | Infectious diseases of humans: dynamics and control[END_REF].
The most widely used assumption concerning the disease evolution within each host consists in discretizing the possible health status of individuals [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF][START_REF] Anderson | Infectious diseases of humans: dynamics and control[END_REF]. For instance, in the Susceptible-Infectious-Susceptible (SIS) model, each individual is considered either healthy and susceptible (S) or infectious (I). Susceptible individuals can become infectious through contact with an infectious individual, and recover spontaneously afterwards, becoming susceptible again. In the Susceptible-Infectious-Recovered (SIR) case, recovered individuals are considered as immunized and cannot become infectious again. The rate of infection during a contact is assumed to be the same for all individuals, as well as the rate of recovery.
Obviously, the diffusion of the disease in the host population depends crucially on the patterns of contacts between hosts. The simplest homogeneous mixing assumption, which makes many analytical results achievable, considers that individuals are identical and that each has a uniform probability of being in contact with any other individual [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF][START_REF] Anderson | Infectious diseases of humans: dynamics and control[END_REF]. Even within this crude approximation, it is possible to highlight fundamental aspects of epidemic spreading, such as the epidemic threshold, signaling a non-equilibrium phase transition that separates an epidemic-free phase from a phase in which a finite fraction of the population is affected [START_REF] Keeling | Modeling Infectious Diseases in Humans and Animals[END_REF]. However, this approach neglects any non-trivial structure of the contacts effectively occurring within a population, while advances in network science [START_REF] Newman | Networks: An Introduction[END_REF] have shown that a large number of networks of interest have in common important features such as a strong heterogeneity in the number of connections, a large number of triads, a community structure, and a low average shortest path length between two individuals [START_REF] Newman | Networks: An Introduction[END_REF][START_REF] Caldarelli | Scale-Free Networks: Complex Webs in Nature and Technology[END_REF]. Spreading models have thus been adapted to complex networks, and studies have unveiled the important role of each of these properties [START_REF] Pastor-Satorras | [END_REF][START_REF] Barrat | Dynamical processes on complex networks[END_REF][START_REF] Pastor-Satorras | [END_REF]. More recently, a number of studies have also considered spreading processes on time-varying networks [8][9][10][11][12][13], to take into account the fact that contact networks evolve on various timescales and present non-trivial temporal properties such as broad distributions of contact durations [14,15] and burstiness [8,16] (i.e., the timeline of social interactions of a given individual exhibits periods of time with intense activity separated by long quiescent periods with no interactions).
All these modeling approaches consider that the propagation of the disease takes place on a substrate (the contacts between individuals) that does not depend on the disease itself. In this framework, standard containment measures consist in the immunization of individuals, in order to effectively remove them from the population and thus break propagation paths. Immunization can also (in models) be performed in a targeted way, trying to identify the most important (class of) spreaders and to suppress propagation in the most efficient possible way [17,18]. An important point to consider however is that the structure and properties of contacts themselves can in fact be affected by the presence of the disease in the population, as individuals aware of the disease can modify their behaviour in spontaneous reaction in order to adopt self-protecting measures such as vaccination or mask-wearing. A number of studies have considered this issue along several directions (see Ref. [19] for a review). For instance, some works consider an adaptive evolution of the network [20] with probabilistic redirection of links between susceptible and infectious individuals, to mimic the fact that a susceptible individual might be aware of the infectious state of some of his/her neighbors, and therefore try to avoid contact with them.
Other works introduce behavioral classes in the population, depending on the awareness to the disease [21], possibly consider that the awareness of the disease propagates on a different (static) network than the disease itself, and that being aware of the disease implies a certain level of immunity to it [22,23]. Finally, the fact that an individual takes self-protecting measures that decrease his/her probability to be infected (such as wearing a mask or washing hands more frequently) can depend on the fraction of infectious individuals present in the whole population or among the neighbors of an individual. These measures are then modeled by the fact that the probability of a susceptible catching the disease from an infectious neighbor depends on such fractions [24][25][26][27]. Yet these studies mostly consider contacts occurring on a static underlying contact network (see however [25,26] for the case of a temporal network in which awareness has the very strong effect of reducing the activity of individuals and their number of contacts, either because they are infectious or because of a global knowledge of the overall incidence of the disease).
Here, we consider instead the following scenario: First, individuals are connected by a time-varying network of contacts, which is more realistic than a static one; second, we use the scenario of a relatively mild disease, which does not disrupt the patterns of contacts but which leads susceptible individuals who witness the disease in other individuals to take precautionary measures. We do not assume any knowledge of the overall incidence, which is usually very difficult to know in a real epidemic, especially in real time. We consider SIS and SIR models and both empirical and synthetic temporal networks of contacts. We extend the concept of awareness with respect to the state of neighbors from static to temporal networks and perform extensive numerical simulations to uncover the change in the phase diagram (epidemic threshold and fraction of individuals affected by the disease) as the parameters describing the reaction of the individuals are varied.
II. TEMPORAL NETWORKS
We will consider as substrate for epidemic propagation both synthetic and empirical temporal networks of interactions. We describe them succinctly in the following Subsections.
A. Synthetic networks
Activity-driven network model
The activity-driven (AD) temporal network model proposed in Ref. [28] considers a population of N individuals (agents), each agent i characterized by an activity potential a_i, defined as the probability that he/she engages in a social act/connection with other agents per unit time. The activity of the agents is a (quenched) random variable, extracted from the activity potential distribution F(a), which can take a priori any form. The temporal network is built as follows: at each time step t, we start with N disconnected individuals. Each individual i becomes active with probability a_i. Each active agent generates m links (starts m social interactions) that are connected to m other agents selected uniformly at random (among all agents, not only active ones). The resulting set of N individuals and links defines the instantaneous network G_t. At the next time step, all links are deleted and the procedure is iterated. For simplicity, we will here consider m = 1.
In Ref. [28] it was shown that several empirical networks display broad distributions of node activities, with functional shapes close to power-laws for F(a), with exponents between 2 and 3. The aggregation of the activity-driven temporal network over a time window of length T yields moreover a static network with a long-tailed degree distribution of the form P_T(k) ∼ F(k/T) [28,29]. Indeed, the individuals with the highest activity potential tend to form a lot more connections than the others and behave as hubs, which are known to play a crucial role in spreading processes [START_REF] Pastor-Satorras | [END_REF].
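A minimal sketch of one realization of the AD model with m = 1 follows; the power-law exponent and the lower activity cutoff are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, T, gamma, eps = 1000, 5000, 2.5, 1e-3

# quenched activities a_i in [eps, 1] with F(a) ~ a^(-gamma), via inverse-CDF sampling
u = rng.random(N)
a = (eps ** (1.0 - gamma) + u * (1.0 - eps ** (1.0 - gamma))) ** (1.0 / (1.0 - gamma))

snapshots = []
for t in range(T):
    links = set()
    for i in np.flatnonzero(rng.random(N) < a):   # node i activates with probability a_i
        j = rng.integers(N - 1)
        j += j >= i                               # uniform partner with j != i
        links.add((min(i, j), max(i, j)))
    snapshots.append(links)                       # instantaneous network G_t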
Activity-driven network model with memory
A major shortcoming of the activity-driven model lies in the total absence of correlations between the connections built in successive time steps. It is therefore unable to reproduce a number of features observed in empirical data. An extension of the model tackles this issue by introducing a memory effect into the mechanism of link creation [30]. In the resulting activity-driven model with memory (ADM), each individual keeps track of the set of other individuals with whom there has been an interaction in the past. At each time step t we start as in the AD model with N disconnected individuals, and each individual i becomes active with probability a_i. For each link created by an active individual i, the link goes with probability p = q_i(t)/[q_i(t) + 1] to one of the q_i(t) individuals previously encountered by i, and with probability 1 - p towards a never encountered one. In this way, contacts with already encountered individuals have a larger probability to be repeated and are reinforced. As a result, for a power-law distributed activity F(a), the degree distribution of the temporal network aggregated on a time window T becomes narrow, while the distribution of weights (defined as the number of interactions between two individuals) becomes broad [30].
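The memory mechanism only changes how an active node picks its partner; a sketch of that choice rule, assuming each node's past partners are stored in a list of sets `met`, could read as follows.

import random

def choose_partner(i, met, N):
    q = len(met[i])
    if q > 0 and random.random() < q / (q + 1):   # p = q_i / (q_i + 1)
        j = random.choice(tuple(met[i]))          # repeat (reinforce) a past contact
    else:
        # brand-new contact (assumes i has not yet met everyone)
        j = random.choice([k for k in range(N) if k != i and k not in met[i]])
    met[i].add(j)
    met[j].add(i)
    return j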
B. Empirical social networks
In addition to the simple models described above, which do not exhibit all the complexity of empirical data, we also consider two datasets gathered by the SocioPatterns collaboration, which describe close face-to-face contacts between individuals with a temporal resolution of 20 seconds in specific contexts (for further details, see Ref. [14]). We consider first a dataset describing the contacts between students of nine classes of a high school (Lycée Thiers, Marseilles, France), collected during 5 days in Dec. 2012 ("Thiers" dataset) [Fournet et al.]. We also use another dataset consisting in the temporal network of contacts between the participants of a conference (2009 Annual French Conference on Nosocomial Infections, Nice, France) during one day ("SFHH" dataset) [10]. The SFHH (conference) data correspond to a rather homogeneous contact network, while the Thiers (high school) population is structured in classes of similar sizes and presents contact patterns that are constrained by strict and repetitive school schedules. In Table I we provide a brief summary of the main properties of these two datasets.

III. MODELLING EPIDEMIC SPREAD IN TEMPORAL NETWORKS

A. Epidemic models and epidemic threshold

We consider the paradigmatic Susceptible-Infectious-Susceptible (SIS) and Susceptible-Infectious-Recovered (SIR) models to describe the spread of a disease in a fixed population of N individuals. In the SIS model, each individual belongs to one of the following compartments: healthy and susceptible (S) or diseased and infectious (I). A susceptible individual in contact with an infectious one becomes infectious at a given constant rate, while each infectious individual recovers from infection at another constant rate. In the SIR case, infectious individuals enter the recovered (R) compartment and cannot become infectious anymore. We consider a discrete time modeling approach, in which the contacts between individuals are given by a temporal network encoded in a time-dependent adjacency matrix A_ij(t) taking value 1 if individuals i and j are in contact at time t, and 0 otherwise. At each time step, the probability that a susceptible individual i becomes infectious is thus given by
$p_i = 1 - \prod_j \left[ 1 - \lambda A_{ij}(t)\, \sigma_j \right],$
where λ is the infection probability, and σ j is the state of node j (σ j = 1 if node j is infectious and 0 otherwise). We define µ as the probability that an infectious individual recovers during a time step. The competition between the transmission and recovery mechanisms determines the epidemic threshold. Indeed, if λ is not large enough to compensate the recovery process (λ/µ smaller than a critical value), the epidemic outbreak will not affect a finite portion of the population, dying out rapidly. On the other hand, if λ/µ is large enough, the spread can lead in the SIS model to a non-equilibrium stationary state in which a finite fraction of the population is in the infectious state. For the SIR model, on the other hand, the epidemic threshold is determined by the fact that the fraction r ∞ = R ∞ /N of individuals in the recovered state at the end of the spread becomes finite for λ/µ larger than the threshold.
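In a simulation, this update rule can be implemented directly; the sketch below (our Python illustration) exploits the fact that A_ij(t) and σ_j are binary, so the product over j reduces to a power of (1 - λ).

```python
import numpy as np

def sis_step(A_t, sigma, lam, mu, rng):
    """One discrete time step of the SIS dynamics on the snapshot A_t (0/1
    adjacency matrix); sigma is the 0/1 state vector (1 = infectious)."""
    n_inf = A_t @ sigma                      # infectious contacts of each node
    p_inf = 1.0 - (1.0 - lam) ** n_inf       # infection probability p_i
    draw = rng.random(sigma.shape[0])
    new_sigma = sigma.copy()
    new_sigma[(sigma == 0) & (draw < p_inf)] = 1   # S -> I
    new_sigma[(sigma == 1) & (draw < mu)] = 0      # I -> S with probability mu
    return new_sigma
```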
In order to numerically determine the epidemic threshold of the SIS model, we adapt the method proposed in Refs. [34,35], which consists in measuring the lifetime and the coverage of realizations of spreading events, where the coverage is defined as the fraction of distinct nodes ever infected during the realization. Below the epidemic threshold, realizations have a finite lifetime and the coverage goes to 0 in the thermodynamic limit. Above threshold, the system in the thermodynamic limit has a finite probability to reach an endemic stationary state, with infinite lifetime and coverage going to 1, while realizations that do not reach the stationary state have a finite lifetime. The threshold is therefore found as the value of λ/µ where the average lifetime of non-endemic realizations diverges. For finite systems, one can operationally define an arbitrary maximum coverage C > 0 (for instance C = 0.5) above which a realization is considered endemic, and look for the peak in the average lifetime of non-endemic realizations as a function of λ/µ.
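Operationally, the procedure can be summarized by the following Python sketch (ours; `run_sis` is a placeholder for a full simulation returning the lifetime and coverage of one realization).

```python
import numpy as np

def effective_threshold(run_sis, ratios, n_runs=1000, C=0.5):
    """Effective SIS threshold: the value of beta/mu maximizing the mean
    lifetime of non-endemic realizations (coverage < C)."""
    mean_life = np.zeros(len(ratios))
    for k, ratio in enumerate(ratios):
        lives = [life for life, cov in (run_sis(ratio) for _ in range(n_runs))
                 if cov < C]                 # endemic runs are discarded
        mean_life[k] = np.mean(lives) if lives else 0.0
    return ratios[int(np.argmax(mean_life))]
```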
In the SIR model the lifetime of any realization is finite. We thus evaluate the threshold as the location of the peak of the relative variance of the fraction r ∞ of recovered individuals at the end of the process [36], i.e.,
$\sigma_r = \frac{\langle r_\infty^2 \rangle - \langle r_\infty \rangle^2}{\langle r_\infty \rangle}. \qquad (1)$
B. Modeling risk perception
To model risk perception, we consider the approach proposed in Ref. [24] for static interaction networks. In this framework, each individual i is assumed to be aware of the fraction of his/her neighbors who are infectious at each time step. This awareness leads the individual to take precautionary measures that decrease his/her probability to become infectious upon contact. This decrease is modeled by a reduction of the transmission probability by an exponential factor: at each time step, the probability of a susceptible node i in contact with an infectious to become infectious depends on the neighborhood of i and is given by $\lambda_i(t) = \lambda_0 \exp(-J n_i(t)/k_i)$, where k_i is the number of neighbors of i, n_i(t) the number of these neighbors that are in the infectious state at time t, and J is a parameter tuning the degree of awareness or amount of precautionary measures taken by individuals.
Static networks of interactions are however only a first approximation, and real networks of contacts between individuals evolve on multiple timescales [15]. We therefore consider in the present work, more realistically, that the set of neighbors of each individual i changes over time. We thus need to extend the previous concept of neighborhood awareness to take into account the history of the contacts of each individual and his/her previous encounters with infectious individuals. We consider that longer contacts with infectious individuals should have a stronger influence on a susceptible individual's awareness, and that the overall effect on any individual depends on the ratio of the time spent in contact with infectious individuals to the total time spent in contact with other individuals. Indeed, two individuals spending a given amount of time in contact with infectious individuals may react differently depending on whether these contacts represent a large fraction of their total number of contacts or not. We moreover argue that the awareness is influenced only by recent contacts, as having encountered ill individuals in a distant past is less likely to lead to a change of behaviour. To model this point in a simple way, we consider that each individual has a finite memory of length ∆T and that only contacts taking place in the time window [t - ∆T, t[, in which the present time t is excluded, are relevant.
We thus propose the following risk awareness change of behaviour: The probability for a susceptible individual i, in contact at time t with an infectious one, to become infectious, is given by
$\lambda_i(t) = \lambda_0 \exp\left( -\alpha\, n_I(i)_{\Delta T} \right) \qquad (2)$
where $n_I(i)_{\Delta T}$ is the number of contacts with infectious individuals seen by the susceptible during the interval [t - ∆T, t[, divided by the total number of contacts counted by the individual during the same time window (repeated contacts between the same individuals are also counted). α is a parameter gauging the strength of the awareness, and the case α = 0 corresponds to the pure SIS process, in which λ_i(t) = λ_0 for all individuals and at all times.
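The sliding-window bookkeeping needed to evaluate Eq. (2) can be kept per node, as in the following Python sketch (ours; class and method names are illustrative).

```python
import math
from collections import deque

class AwarenessWindow:
    """Record of the last DT time steps of one node's contacts, used to
    compute n_I(i)_DT: repeated contacts are counted every time they occur.
    The caller is assumed to append step t only after evaluating lambda_i(t),
    so the stored window covers [t - DT, t[ as required.
    """
    def __init__(self, DT):
        self.window = deque(maxlen=DT)  # entries: (infectious contacts, all contacts)

    def record(self, n_infectious, n_total):
        self.window.append((n_infectious, n_total))

    def infection_prob(self, lam0, alpha):
        total = sum(n for _, n in self.window)
        if total == 0:
            return lam0                 # no recorded contact: no awareness yet
        ratio = sum(k for k, _ in self.window) / total
        return lam0 * math.exp(-alpha * ratio)
```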
IV. EPIDEMIC SPREADING ON SYNTHETIC NETWORKS
A. SIS dynamics
Analytical approach
On a synthetic temporal network, an infectious individual can propagate the disease only when he/she is in contact with a susceptible. As a result, the spreading results from an interplay between the recovery time scale 1/µ, the propagation probability λ conditioned on the existence of a contact and the multiple time scales of the network as emerging from the distribution of nodes' activity F(a). Analogously to what is done for heterogeneous static networks [Barrat et al.; Pastor-Satorras et al.], it is possible to describe the spread at a mean-field level by grouping nodes in activity classes: all nodes with the same activity a are in this approximation considered equivalent [28]. The resulting equation for the evolution of the number of infectious nodes in the class of nodes with activity a in the original AD model has been derived in Ref. [28] and reads
$I^{t+1}_a = I^t_a - \mu I^t_a + \lambda\, a\, S^t_a \int \frac{I^t_{a'}}{N}\, da' + \lambda\, S^t_a \int \frac{a'\, I^t_{a'}}{N}\, da', \qquad (3)$
where I_a and S_a are the number of infectious and susceptible nodes with activity a, verifying N_a = S_a + I_a.
From this equation one can show, by means of a linear stability analysis, that there is an endemic non-zero steady state if and only if $(\langle a \rangle + \sqrt{\langle a^2 \rangle})\, \lambda/\mu > 1$ [28]. Noticing that $\langle a \rangle + \sqrt{\langle a^2 \rangle}$ may be regarded as the highest statistically significant activity rate, the interpretation of this equation becomes clear: the epidemic can propagate to the whole network when the smallest time scale of relevance for the infection process is smaller than the time scale of recovery.
Let us now consider the introduction of risk awareness in the SIS dynamics on AD networks. In general, we can write for a susceptible with activity a

$n_I(a)_{\Delta T} = \frac{\sum_{i=1}^{\Delta T} \left[ a \int \frac{I^{t-i}_{a'}}{N}\, da' + \int \frac{a'\, I^{t-i}_{a'}}{N}\, da' \right]}{(a + \langle a \rangle)\, \Delta T}, \qquad (4)$
where the denominator accounts for the average number of contacts of an individual with activity a in ∆T time steps. In the steady state, where the quantities I_a become independent of t, the dependence on ∆T in Eq. (4) vanishes, since both the average time in contact with infectious individuals and the average total time in contact are proportional to the time window width. Introducing this expression into Eq. (2), we obtain
$\lambda_a = \lambda_0 \exp\left( -\alpha\, \frac{a \int \frac{I_{a'}}{N}\, da' + \int \frac{a'\, I_{a'}}{N}\, da'}{a + \langle a \rangle} \right), \qquad (5)$
which can be inserted into Eq. (3). Setting µ = 1 without loss of generality, we obtain the steady state solution
$\rho_a = \frac{\lambda_a (a \rho + \theta)}{1 + \lambda_a (a \rho + \theta)}, \qquad (6)$
where ρ_a = I_a/N_a and we have defined
$\rho = \int F(a)\, \rho_a\, da, \qquad (7)$
$\theta = \int a\, F(a)\, \rho_a\, da. \qquad (8)$
Introducing Eqs. (5) and (6) into Eqs. (7) and (8), and expanding at second order in ρ and θ, we obtain after some computations the epidemic threshold
$\lambda_c = \frac{1}{\langle a \rangle + \sqrt{\langle a^2 \rangle}}. \qquad (9)$
Moreover, setting λ_0 = λ_c(1 + ε) and expanding at order 1 in ε we obtain
$\rho = \frac{2 \varepsilon}{A \alpha + B}, \qquad (10)$
where
$A = \lambda_c\, \frac{\langle a^3 \rangle / \sqrt{\langle a^2 \rangle} + 3 \langle a \rangle \sqrt{\langle a^2 \rangle} + \langle a^2 \rangle + 3 \langle a \rangle^2}{\langle a \rangle + \sqrt{\langle a^2 \rangle}} \qquad (11)$

$B = \lambda_c^2 \left( \langle a^3 \rangle / \sqrt{\langle a^2 \rangle} + 3 \langle a \rangle \sqrt{\langle a^2 \rangle} + 4 \langle a^2 \rangle \right).$
This indicates that, at the mean-field level, the epidemic threshold is not affected by the awareness. Nevertheless, the density of infectious individuals in the vicinity of the threshold is reduced as the awareness strength α grows.
In the case of activity driven networks with memory (ADM), no analytical approach is available for the SIS dynamics, even in the absence of awareness. The numerical investigation carried out in Ref. [37] has shown that the memory mechanism, which leads to the repetition of some contacts, reinforcing some links and yielding a broad distribution of weights, has a strong effect in the SIS model. Indeed, the repeating links help the reinfection of nodes that have already spread the disease and make the system more vulnerable to epidemics. As a result, the epidemic threshold is reduced with respect to the memory-less (AD) case. For the SIS dynamics with awareness on ADM networks, we will now resort to numerical simulations.
Numerical simulations
In order to inspect in detail the effect of risk awareness on the SIS epidemic process, we perform extensive numerical simulations. Following Refs. [28,37], we consider a distribution of nodes' activities of the form F(a) ∝ a^{-γ} for a ∈ [ε, 1], where ε is a lower activity cut-off introduced to avoid divergences at small activity values. In all simulations we set ε = 10^{-3} and γ = 2. We consider networks up to a size N = 10^5 and a SIS process starting with a fraction I_0/N = 0.01 of infectious nodes chosen at random in the population. In order to take into account the connectivity of the instantaneous networks, we use as a control parameter the quantity β/µ, where β = 2⟨a⟩λ_0 is the per capita rate of infection [28]. Notice that the average degree of an instantaneous network is ⟨k_t⟩ = 2⟨a⟩ [29]. With this definition, the critical endemic phase corresponds to
$\frac{\beta}{\mu} \ge \frac{2 \langle a \rangle}{\langle a \rangle + \sqrt{\langle a^2 \rangle}}. \qquad (12)$
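As a quick sanity check, Eq. (12) can be evaluated numerically for the parameters used in our simulations; the short Python sketch below (ours) computes the moments of F(a) and returns a critical ratio close to the theoretical value of (β/µ)_c ≈ 0.366 quoted below, the residual gap plausibly coming from how the distribution is discretized or sampled.

```python
import numpy as np

eps, gamma = 1e-3, 2.0
a = np.linspace(eps, 1.0, 400_000)
F = a ** (-gamma)
F /= np.trapz(F, a)                  # normalize the activity distribution
a1 = np.trapz(a * F, a)              # <a>
a2 = np.trapz(a ** 2 * F, a)         # <a^2>
print(2 * a1 / (a1 + np.sqrt(a2)))   # (beta/mu)_c from Eq. (12): ~0.36
```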
In Fig. 1 we first explore the effect of the strength of risk awareness, as measured by the parameter α, in the case ∆T = ∞, i.e., when each agent is influenced by the whole history of his/her past contacts, a situation in which awareness effects should be maximal. We plot the steady state average fraction of infectious nodes $\rho = \int \rho_a F(a)\, da$ as a function of β/µ for three different values of α, and evaluate the position of the effective epidemic threshold, as measured by the peak of the average lifetime of non-endemic realizations, see Sec. III A. Figures 1c) and d) indicate that the effect of awareness in the model (α > 0), with respect to the pure SIS model (α = 0), is to reduce the fraction ρ of infectious individuals for all values of β/µ, and Figures 1a) and b) seem to indicate in addition a shift of the effective epidemic threshold to larger values. This effect is more pronounced for the ADM than for the AD networks. As this shift of the epidemic threshold is in contradiction, at least for the AD case, with the mean-field analysis of the previous paragraphs, we investigate this issue in more detail in Fig. 2, where we show, both for the pure SIS model (α = 0) and for a positive value of α, the average lifetime of non-endemic realizations for various system sizes. Strong finite-size effects are observed, especially for the model with awareness (α > 0). Fitting the values of the effective threshold (the position of the lifetime peak) with a law of the form $(\beta/\mu)_N = (\beta/\mu)_\infty + A N^{-\nu}$, typical of finite-size scaling analysis [Cardy], leads to a threshold in the thermodynamic limit of (β/µ)_∞ = 0.37(1) for the pure SIS model on AD networks, (β/µ)_∞ = 0.34(2) for AD with α = 10 (SIS model with awareness), (β/µ)_∞ = 0.29(3) for ADM with α = 0 (pure SIS model) and (β/µ)_∞ = 0.29(2) for ADM with α = 10. We notice here that the extrapolations for α = 0 are less accurate and thus with larger associated errors. Nevertheless, with the evidence at hand, we can conclude that, within error bars, the risk perception has no effect on the epidemic threshold in the thermodynamic limit, in agreement with the result from Eq. (12), which gives a theoretical threshold (β/µ)_c = 0.366 for the AD case. It is however noteworthy that the effective epidemic threshold measured in finite systems can be quite strongly affected by the awareness mechanism, even for quite large systems, and in a particularly dramatic way for ADM networks.
We finally explore in Fig. 3 the effect of a varying memory length ∆T, at fixed risk awareness strength α. In both AD and ADM networks, an increasing awareness temporal window shifts the effective epidemic threshold towards larger values, up to a maximum given by ∆T = ∞, when the whole system history is available. For the ADM networks, this effect is less clear because of the changing height of the maximum of the lifespan when increasing ∆T. For AD networks, this result is apparently at odds with the mean-field analysis in which ∆T is irrelevant in the stationary state. We should notice, however, that for ∆T → ∞, the critical point is unchanged in the thermodynamic limit with respect to the pure SIS dynamics. Given that for ∆T → ∞ the effects of awareness are the strongest, we expect that a finite ∆T will not be able to change the threshold in the infinite network limit. We can thus attribute the observed shifts to pure finite size effects. Note that this effect is also seen in homogeneous AD networks with uniform activity a (data not shown), an observation that we can explain as follows: when ∆T is small, the ratio of contacts with infectious $n_I(i)_{\Delta T}$ recorded by an individual i can differ significantly from the overall ratio recorded in the whole network in the same time window, which is equal to $\langle n_I(i)_{\Delta T} \rangle = \rho$ (for a uniform activity). Mathematically, we have
$\langle \lambda_i \rangle = \lambda_0 \left\langle \exp(-\alpha\, n_I(i)_{\Delta T}) \right\rangle \ge \lambda_0 \exp(-\alpha\, \rho) \qquad (13)$
by convexity of the exponential function (Jensen's inequality). Thus, even if locally and temporarily some individuals perceive an overestimated prevalence of the epidemics and reduce their probability of being infected accordingly, on average the reduction in the transmission rate would be larger if the ensemble average were used instead of the temporal one, and thus the epidemics is better contained in the former case. As ∆T increases, the temporal average $n_I(i)_{\Delta T}$ becomes closer to the ensemble one ρ and the effect of awareness increases. When ∆T is large enough compared to the time scale of variation of the network 1/a, the local time recording becomes equivalent to an ensemble average, and we recover the mean-field situation.
B. SIR dynamics
Analytical approach
Following an approach similar to the case of the SIS model, the SIR model has been studied at the heterogeneous mean field level in AD networks, in terms of a set of equations for the state of nodes with activity a, which takes the form [39]
$I^{t+1}_a = I^t_a - \mu I^t_a + \lambda\, a\, (N_a - I^t_a - R^t_a) \int \frac{I^t_{a'}}{N}\, da' + \lambda\, (N_a - I^t_a - R^t_a) \int \frac{a'\, I^t_{a'}}{N}\, da', \qquad (14)$
where N_a is the total number of nodes with activity a, and I_a and R_a are the number of nodes with activity a in the infectious and recovered states, respectively. Again, a linear stability analysis shows the presence of a threshold, which takes the same form as in the SIS case:
$\frac{\beta}{\mu} \ge \frac{2 \langle a \rangle}{\langle a \rangle + \sqrt{\langle a^2 \rangle}}. \qquad (15)$
The same expression can be obtained by a different approach, based on the mapping of the SIR processes to bond percolation [40].
Since the SIR model lacks a steady state, we cannot apply in the general case the approach followed in the previous section. The effects of risk perception can however be treated theoretically for a homogeneous network (uniform activity) in the limit ∆T → ∞, which is defined by the effective infection probability
$\lambda(t) = \lambda_0 \exp\left( -\frac{\alpha}{t} \int_{t_0}^{t} \rho(\tau)\, d\tau \right). \qquad (16)$
Even this case is hard to tackle analytically, so that we consider instead a modified model defined by the infection probability
$\lambda(t) = \lambda_0 \exp\left( -\alpha \int_0^t \rho(\tau)\, d\tau \right). \qquad (17)$
In this definition the fraction of infectious seen by an individual is no longer averaged over the memory length but rather accumulated over the memory timespan, so that we expect stronger effects of the risk perception with respect to Eq. (16), if any. The fractions of susceptible s = S/N and of recovered r = R/N individuals in the system obey the equations
$\frac{ds}{dt} = -\lambda_0\, \rho(t)\, s(t)\, e^{-\alpha r(t)/\mu} \qquad (18)$

$\frac{dr}{dt} = \mu\, \rho(t) \qquad (19)$
where in the first equation we have used the second equation to replace $\int_0^t \rho(\tau)\, d\tau$ in λ(t) by (r(t) - r(0))/µ (with the initial condition r(0) = 0).
Setting µ = 1 without loss of generality, the final average fraction of recovered individuals after the end of an outbreak is given by
$r_\infty = 1 - s(0) \exp\left( -\frac{\lambda_0}{\alpha} \left( 1 - e^{-\alpha r_\infty} \right) \right). \qquad (20)$
Close to the threshold, i.e., for r ∞ ∼ 0, performing an expansion up to second order and imposing the initial condition ρ(0) = 1 -s(0) = 0, we obtain the asymptotic solution
$r_\infty \simeq \frac{2}{\lambda_0 (\alpha + \lambda_0)} (\lambda_0 - 1), \qquad (21)$
which leads to the critical infection rate λ_0 = 1. This means that, as for the SIS case, the risk perception does not affect the epidemic threshold at the mean field level, at least for a homogeneous network. The only effect of awareness is a depression of the order parameter r_∞ with α, as observed also in the SIS case. The same conclusion is expected to hold for the original model of awareness, with an infection rate of the form Eq. (16), as in this case the dynamics is affected to a lower extent. In analogy, for the general case of a heterogeneous AD network, with infection rate given by Eq. (2), we expect the effects of awareness on the epidemic threshold to be negligible at the mean-field level. On ADM networks, the numerical analysis of the SIR model carried out in Ref. [37] has revealed a picture opposite to the SIS case. In an SIR process indeed, reinfection is not possible; as a result, repeating contacts are not useful for the diffusion of the infection. The spread is thus favoured by the more random patterns occurring in the memory-less (AD) case, which allows infectious nodes to contact a broader range of different individuals and find new susceptible ones. The epidemic threshold for SIR processes is hence higher in the ADM case than in the AD one [37].
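Eq. (20) is transcendental but easy to solve numerically; the Python sketch below (ours, using SciPy, with µ = 1 and α > 0 assumed) returns the final outbreak size for a small initial seed and illustrates that r_∞ decreases with α above threshold.

```python
import numpy as np
from scipy.optimize import brentq

def sir_final_size(lam0, alpha, s0=0.999):
    """Numerical root of Eq. (20): r = 1 - s0 * exp(-(lam0/alpha)(1 - e^{-alpha r}))."""
    g = lambda r: r - 1 + s0 * np.exp(-(lam0 / alpha) * (1 - np.exp(-alpha * r)))
    return brentq(g, 1e-12, 1.0)

# Above threshold (lam0 > 1) the outbreak shrinks as awareness alpha grows:
print(sir_final_size(1.5, 1.0), sir_final_size(1.5, 10.0))
```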
Numerical simulations
To study the effects of risk perception on the dynamics of a SIR spreading process in temporal networks we resort again to numerical simulations. In Fig. 4 we compare the effects of the risk perception mechanism given by Eq. (2) for AD and ADM networks. The spread starts with a fraction ρ_0 = I_0/N = 0.01 of infectious nodes chosen at random in the population and the activity distribution is the same as in the SIS case. In the present simulations the memory span ∆T is infinite and we compare the results obtained for two different values of the awareness strength α. We see that the effective epidemic threshold is increased for the ADM network, whereas it seems unchanged for the AD network and around a value of β/µ = 0.35, in agreement with the theoretical prediction quoted in the previous section.
The SIR phase transition is rigorously defined for a vanishing initial density of infectious, i.e., in the limit ρ(0) → 0 and s(0) → 1, as can be seen at the mean-field level in the derivation of Eq. (21). In Fig. 5 we explore the effects of the initial density ρ_0 = I_0/N of infectious individuals on the effect of awareness on AD networks. For large values of ρ_0 = I_0/N, the awareness (α > 0) can significantly decrease the final epidemic size, as already observed in Fig. 4. This effect can be understood by the fact that, for large ρ_0, more individuals are aware already from the start of the spread and have therefore lower probabilities to be infected. At very small initial densities, on the other hand, r_∞ becomes independent of α. This is at odds with the result in Eq. (21), which however was obtained within an approximation that increases the effects of awareness. The milder form considered in Eq. (2) leads instead to an approximately unaltered threshold, and to a prevalence independent of α.
For ADM networks, Fig. 6 shows the variance of the order parameter for two different values of α. As in the SIS case, we see that an apparent shift of the effective epidemic threshold is obtained, but very strong finite size effects are present even at large size, especially for α > 0. The difference between the effective thresholds at α > 0 and α = 0 decreases as the system size increases, but remains quite large, making it difficult to reach a clear conclusion on the infinite size limit.
V. EPIDEMIC SPREADING ON EMPIRICAL SOCIAL NETWORKS
As neither AD nor ADM networks display all the complex multi-scale features of real contact networks, we now turn to numerical simulations of spreading processes with and without awareness on empirical temporal contact networks, using the datasets described in Sec. II B.
A. SIS dynamics
As we saw in Sec. IV A, the susceptibility defined to evaluate the epidemic threshold of the SIS process is subject to strong finite size effects. Since the empirical networks used in the present section are quite small, we choose to focus only on the main observable of physical interest, i.e., the average prevalence ρ in the steady state of the epidemics.
As we are interested in the influence of the structural properties of the network, we choose to skip the nights in the datasets describing the contacts between individuals, as obviously no social activity was recorded then, to avoid undesired extinction of the epidemic during those periods. In order to run simulations of the SIS spreading, we construct from the data arbitrarily long lasting periodic networks, with the period being the recording duration (once the nights have been removed). For both networks we define the average instantaneous degree $\langle k \rangle = \frac{1}{T_{data}} \sum_t \langle k_t \rangle$, where the sum runs over all the time steps of the data, and ⟨k_t⟩ is the average degree of the snapshot network at time t. We then define β/µ = λ⟨k⟩/µ as the parameter of the epidemic. For each run, a random starting time step is chosen, and a single agent in the same time step, if there is any, is defined as the seed of the infection (otherwise a new starting time is chosen).
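The following small Python sketch (ours; names are illustrative) summarizes the construction used in these runs: the recorded contact sequence is repeated periodically and the seed is a single agent active at a randomly drawn starting step.

```python
import numpy as np

def snapshot(contact_seq, t):
    """Contact list at step t of the periodically repeated dataset
    (nights already removed)."""
    return contact_seq[t % len(contact_seq)]

def random_seed(contact_seq, rng):
    """Random starting step and a single seed agent active at that step;
    a new step is drawn until a non-empty snapshot is found."""
    while True:
        t0 = int(rng.integers(len(contact_seq)))
        agents = sorted({i for edge in contact_seq[t0] for i in edge})
        if agents:
            return t0, int(rng.choice(agents))
```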
In Fig. 7, we compare the curves of the prevalence ρ of the epidemics in the stationary state on both empirical networks, and for increasing values of the memory length ∆T. We can see that an important reduction of the prevalence occurs even for ∆T = 1. This is due to the presence of many contacts of duration longer than ∆T (contrarily to the AD case): the awareness mechanism decreases the probability of contagion of all these contacts (and in particular of the contacts with very long duration, which have an important role in the propagation) as soon as ∆T ≥ 1, leading to a strong effect even in this case. At large values of the control parameter β/µ, the effect of the awareness is stronger for increasing values of the memory length ∆T, as was observed in Sec. IV A. At small values of β/µ on the contrary, the awareness is optimum for a finite value of ∆T, and the knowledge of the whole contact history is not the best way to contain the epidemics. While a detailed investigation of this effect lies beyond the scope of our work, preliminary work (not shown) seems to indicate that it is linked to the periodicity introduced in the data through the repetition of the dataset.
B. SIR
In this section we study the impact of the awareness on the SIR spreading process running on the empirical networks. In particular, we study the effect of self protection on the fraction of recovered individuals r_∞ in the final state, and on the effective threshold evaluated as the peak of the relative variance of r_∞ defined in Eq. (1). In Figs. 8 and 9 we plot σ_r and r_∞ for different memory lengths ∆T, for the SFHH conference and the Thiers high school data respectively. We first notice that a notable effect appears already for ∆T = 1, similarly to the SIS process. However, we see that r_∞ is monotonously reduced as ∆T grows and that the effective threshold is shifted to higher values of β/µ, also monotonously. It is worth noticing that the timescale of the SIR process is much smaller than the one studied in the SIS process because the final state is an absorbing state free of infectious agents. The lifetime of the epidemic in this case is of the order of magnitude of the data duration, so that the periodicity introduced by the repetition of the dataset is not relevant anymore. Overall, we observe for both networks an important reduction of the outbreak size when people adopt a self protecting behaviour, as well as a significant shift of the effective epidemic threshold.
VI. CONCLUSION
The implementation of immunization strategies to contain the propagation of epidemic outbreaks in social networks is a task of paramount importance. In this work, we have considered the effects of taking protective measures to avoid infection in the context of social temporal networks, a more faithful representation of the patterns of social contacts than the often considered static structures. In this context, we have implemented a model including awareness to the propagating disease in a temporal network, extending previous approaches defined for static frameworks. In our model, susceptible individuals have a local perception of the overall disease prevalence, measured as the ratio of the number of previous contacts with infectious individuals to the total number of contacts, over a time window of width ∆T. An increased level of awareness induces a reduction in the probability that a susceptible individual contracts the disease via a contact with an infectious individual.
To explore the effects of disease awareness we have considered the paradigmatic SIS and SIR spreading models on both synthetic temporal networks, based on the activity-driven (AD) model paradigm, and empirical face-to-face contact networks collected by the SocioPatterns collaboration. In the case of network models, we consider the original AD model, and a variation, the AD model with memory (ADM), in which a memory kernel mimics some of the non-Markovian effects observed in real social networks.
In the case of synthetic networks, analytical and numerical results hint that in AD networks without memory, the epidemic threshold on both SIS and SIR models is not changed by the presence of awareness, while the epidemic prevalence is diminished for increasing values of the parameter α gauging the strength of awareness. In the case of the ADM model (temporal network with memory effects) on the other hand, awareness seems to be able to shift the threshold to an increased value, but very strong finite size effects are present: our results are compatible with an absence of change of the epidemic threshold in the infinite size limit, while, as for the AD case, the epidemic prevalence is decreased.
In the case of empirical contact networks, we observe in all cases a strong reduction of the prevalence for different values of α and ∆T, and an apparent shift of the effective epidemic threshold. These empirical networks differ from the network models from two crucial points of view. On the one hand, they have a relatively small size. Given that important finite size effects are observed in the models, especially in the one with memory effects, one might also expect stronger effective shifts in such populations of limited size. On the other hand, AD and ADM networks lack numerous realistic features observed in real social systems. On AD and ADM networks, contacts are established with random nodes (even in the ADM case) so that the perception of the density of infectious by any node is quite homogeneous, at least in the hypothesis of a sufficiently large number of contacts recorded (i.e., at large enough times, for a∆T ≫ 1). This is not the case for the empirical networks, which exhibit complex patterns such as community structures, as well as broad distributions of contact and inter-contact durations, specific time-scales (e.g., lunch breaks), correlated activity patterns, etc. [41]. This rich topological and temporal structure can lead to strong heterogeneities in the local perception of the disease. In this respect, it would be interesting to investigate the effect of awareness in more realistic temporal network models.
Notably, the awareness mechanism, even if only local and not assuming any global knowledge of the unfolding of the epidemics, leads to a strong decrease of the prevalence and to shifts in the effective epidemic threshold even at quite large size, in systems as diverse as simple models and empirical data. Moreover, some features of empirical contact networks, such as the broad distribution of contact durations, seem to enhance this effect even for short-term memory awareness. Overall, our results indicate that it would be important to take into account awareness effects as much as possible in data-driven simulations of epidemic spread, to study the relative role of the complex properties of contact networks on these effects, and we hope this will stimulate more research into this crucial topic.
FIG. 1. Effect of the strength of risk awareness on the SIS spreading on AD and ADM networks with ∆T = ∞. (a): average lifetime of non-endemic runs for AD networks; (b): average lifetime of non-endemic runs for ADM networks; (c): steady state fraction of infectious for AD; (d): steady state fraction of infectious for ADM. Vertical lines in subplots (a) and (b) indicate the position of the maximum of the average lifetime. Model parameters: µ = 0.015, γ = 2, ε = 10^{-3}, ∆T = ∞ and network size N = 10^5. Results are averaged over 1000 realizations.
FIG. 2. Analysis of finite-size effects. We plot the average lifetime of non-endemic realizations of the SIS process, for different system sizes and 2 different values of α. (a): ADM networks and α = 0. (b): ADM networks with α = 10. (c): AD networks. Vertical lines indicate the position of the maximum of the average lifetime. Model parameters: µ = 0.015, γ = 2, ε = 10^{-3} and ∆T = ∞. Results are averaged over 1000 realizations.
FIG. 3. Effect of the local risk perception with increasing memory span ∆T for the SIS spreading on AD and ADM networks. (top): AD network. (bottom): ADM network. Vertical lines indicate the position of the maximum of the average lifetime. Model parameters: α = 10, µ = 0.015, γ = 2, ε = 10^{-3} and network size N = 10^4. Results are averaged over 1000 realizations.
FIG. 4. Effect of the local risk perception on the SIR spreading on AD networks and ADM networks. We plot r_∞ and σ_r/σ_r^max.
FIG. 5. Effect of the initial density of infectious on the SIR model on AD networks for different values of the awareness strength α and the initial density of infectious individuals ρ_0. Model parameters: ∆T = ∞, µ = 0.015, γ = 2, ε = 10^{-3} and network size N = 10^5. Results are averaged over 1000 realizations.

FIG. 6. Relative variance of the order parameter for the SIR process on ADM networks, for two different values of α (see Sec. IV B).
FIG. 7. Steady state fraction of infectious for the SIS process on both empirical networks, for 2 values of α and different values of ∆T. Model parameters: µ = 0.001 for Thiers and µ = 0.005 for SFHH. Results are averaged over 1000 realizations.
FIG. 8. Effect of the risk perception for different values of ∆T on the SIR spreading on the SFHH network. (top): normalized standard deviation σ_r/σ_r^max. (bottom): order parameter r_∞. Model parameters: µ = 0.005, α = 200. Results are averaged over 10^4 realizations.
FIG. 9. Effect of the risk perception for different values of ∆T on the SIR spreading on the Thiers network. (top): normalized standard deviation σ_r/σ_r^max. (bottom): order parameter r_∞. Model parameters: µ = 0.001, α = 200. Results are averaged over 10^4 realizations.
TABLE I. Some properties of the SocioPatterns datasets under consideration: N, number of different individuals engaged in interactions; T, total duration of the contact sequence, in units of the elementary time interval t_0 = 20 seconds; p, average number of individuals interacting at each time step; ⟨∆t⟩, average duration of a contact; ⟨k⟩ and ⟨s⟩, average degree and average strength of the nodes in the network aggregated over the whole time sequence.

Dataset   N     T      p      ⟨∆t⟩   ⟨k⟩     ⟨s⟩
Thiers    180   14026  5.67   2.28   24.66   500.5
SFHH      403   3801   26.14  2.69   47.47   348.7
¹ Note that with such a definition, an agent may both receive and emit a link to the same other agent. However, we consider here an unweighted and undirected graph; thus, in such a case, a single link is considered. Moreover, in the limit of large N, the probability of such an event goes to 0.
ACKNOWLEDGMENTS

R.P.-S. acknowledges financial support from the Spanish Government's MINECO, under projects FIS2013-47282-C2-2 and FIS2016-76830-C2-1-P, and from ICREA Academia, funded by the Generalitat de Catalunya regional authorities.
Xavier Corbillon
email: [email protected]
Alisa Devlic
email: [email protected]
Gwendal Simon
email: [email protected]
Jacob Chakareski
Optimal Set of 360-Degree Videos for Viewport-Adaptive Streaming
Keywords: Omnidirectional Video, Quality Emphasized Region, Viewport Adaptive Streaming
With the decreasing price of Head-Mounted Displays (HMDs), 360-degree videos are becoming popular. Streaming such videos through the Internet with state-of-the-art streaming architectures requires much more bandwidth than the median user's access bandwidth to provide a high feeling of immersion. To decrease bandwidth consumption while providing high immersion to users, scientists and specialists proposed to prepare and encode 360-degree videos into quality-variable video versions and to implement viewport-adaptive streaming. Quality-variable versions are different versions of the same video with non-uniformly spread quality: there exist some so-called Quality Emphasized Regions (QERs). With viewport-adaptive streaming the client, based on head movement prediction, downloads the video version whose high-quality region is closest to where the user will watch. In this paper we propose a generic theoretical model to find the optimal set of quality-variable video versions based on traces of head positions of users watching a 360-degree video. We propose extensions to adapt the model to popular quality-variable version implementations such as tiling and offset projection. We then solve a simplified version of the model with two quality levels and restricted shapes for the QERs. With this simplified model, we show that an optimal set of four quality-variable video versions prepared by a streaming server, together with a perfect head movement prediction, allows either 45% bandwidth savings while displaying videos with the same average quality as state-of-the-art solutions, or a 102% increase of the displayed quality for the same bandwidth budget.
1 INTRODUCTION
Offering high-quality virtual reality immersion by streaming 360degree videos on the Internet is a challenge. The main problem is that most of the video signal information that is delivered is not displayed. Indeed, the Head-Mounted Displays (HMDs) that are used for immersion show a viewport, which represents a small fraction of the whole 360-degree video. Typically, to extract a 4K (3840 × 2160 pixels) video viewport from the whole 360-degree video, the stream should be at least a 12K (11520 × 6480 pixels) video, from which most information is ignored by the video player.
A solution researchers are exploring to limit the waste of bandwidth is to prepare and stream 360-degree videos such that their quality is not homogeneous spatially [Corbillon et al.; Hosseini and Swaminathan; Niamut et al.; Sreedhar et al.]. Instead the quality is better at the expected viewport positions than in the rest of the video frame. Two main concepts that support this solution are (i) encoding of quality-variable videos, which can be based on tiling [Zare et al.], scalable coding [Boyce et al.; Youvalari et al.], and offset projections [Zhou et al.]; and (ii) implementation of viewport-adaptive streaming, which is to signal the different quality-variable versions of the video, to predict viewport movements, and to make sure that a given user downloads the quality-variable video such that the quality is maximum at her viewport position.
The design of efficient viewport-adaptive streaming systems requires the understanding of the complex interplay between the most probable viewport positions, the coding efficiency, and the resulting Quality of Experience (QoE) with respect to the traditional constraints of delivery systems such as bandwidth and latency. MPEG experts have proposed the concept of quality region, which is a rectangular region defined on a sphere, characterized by a quality level ranging from 1 to 100. The main idea is that the content provider determines some quality regions based on offline external information (e.g., content analysis and statistics about viewport positions), and then prepares multiple quality-variable versions of the same 360-degree video based on these quality regions.
We provide in this paper a theoretical analysis of this concept of quality regions for 360-degree videos. We present optimization models to determine the optimal quality regions, subject to a population of clients, the number of quality-variable video versions, and the bandwidth. We aim at maximizing the video quality displayed in the client viewports by identifying (i) the location of the quality region, (ii) their dimensions (or area size), and (iii) the quality inside and outside the regions. Our model enables content providers to prepare 360-degree videos based on the analytics of the head movements collected from the first content consumers. Using a dataset of real head movements captured on an HMD, we study an optimal set of video versions that are generated by our algorithms and evaluate the performance of such optimal viewport-adaptive streaming. We demonstrate that, for a given overall bit-rate video, the video quality as perceived by the user improves by 102% on average.
2 RELATED WORK

2.1 Quality-Variable Videos Implementation
In the literature, we distinguish two approaches to implement quality-variable 360-degree videos. We give a brief introduction to these approaches in the following, while providing more details in Sections 3.2 and 3.3 on how our model applies to these approaches.
Tile-based Approach. The motion-constrained tiles are contiguous fractions of the whole frame, which can be encoded/decoded independently and can thus be seen as separate sub-videos. The concept of tiling is part of the High Efficiency Video Coding (HEVC) standardized decoder [Misra et al.] and is considered as a key supporting technology for the encoding of quality-variable video versions. The tile-based approach has been developed for other multimedia scenarios where end-users consume only a fraction of the video, especially in navigable panoramas [Gaddam et al.; Sánchez et al.; Wang et al.]. This approach has been recently extended to meet the demand of virtual reality and 360-degree video systems. In a short paper, Ochi et al. have sketched a solution where the spherical video is mapped onto an equirectangular video, which is cut into 8×8 tiles. Zare et al. provide more details on the encoding performance of tiling when applied on projected frames. This study demonstrates the theoretical gains that can be expected from a quality-variable implementation of 360-degree video. More recently, Hosseini and Swaminathan proposed a hexaface sphere-based tiling of a 360-degree video to take into account projection distortion. They also present an approach to describe the tiles with MPEG Dynamic Adaptive Streaming over HTTP (DASH) Spatial Relationship Description (SRD) formatting principles. Quan et al. also propose the delivery of tiles based on a prediction of the head movements. Their main contribution is to show that the head movements can be accurately predicted for short segment sizes by using standard statistical approaches. Le Feuvre and Concolato have demonstrated the combination of HEVC tiling with 360-degree video delivery. Their main contribution is to demonstrate that current technologies enable efficient implementation of the principles of the tile-based approach. Finally, Zare et al. show that, by using the extractor design for HEVC files and using constrained inter-view prediction in combination with motion-constrained tiles, it is possible to efficiently compress stereoscopic 360-degree videos while allowing clients to decode the videos simultaneously with multiple decoding instances.
Projection-Based Approach. This approach, which has been proposed by Kuzyakov [14] and is currently implemented in practical systems [Zhou et al.], takes profit from the geometrical projection. Indeed, since state-of-the-art video encoders are based on two-dimensional rectangles, any 360-degree video (captured on the surface of a sphere) needs to be projected onto a two-dimensional video before encoding. Scientists have been studying spherical projection onto maps for centuries. The most common projections are equirectangular, cube map, and pyramid [Corbillon et al.; Yu et al.]. The main idea introduced by Kuzyakov is to leverage a feature of the pyramid projection: the sampling of pixels from the spherical surface to the two-dimensional surface is irregular, which means that some parts of the spherical surface get more distortion after the projection than others. Depending on the position of the base face of the pyramid, the projection, and consequently the video encoding, is better for some parts of the spherical surface. A refined approach based on geometrical projections is the offset projection [Zhou et al.], where a constant directed vector is applied during the projection to change the pixel sampling in the sphere domain while keeping the same pixel sampling (resolution) in the projected domain. It results in a better quality encoding near the "offset direction" and continuously decreasing quality for viewports far from this direction.
2.2 Viewport-Adaptive Streaming
Several researchers have concomitantly studied solutions to stream 360-degree videos based on the same principle as in rate-adaptive streaming [Concolato et al.; Corbillon et al.; Kuzyakov; Le Feuvre and Concolato; Quan et al.]. A server splits the video into different segments whose duration typically varies between 1 s and 10 s. Each segment is then encoded into different representations, each representation having a different size (in bytes) and a different quality distribution. A client decides, thanks to an adaptation algorithm using local information and predictions, which video representation (or set of representations) to download, to match the available bandwidth budget and the future position of the user viewport.
Zhou et al. studied a practical implementation of viewport-adaptive streaming made for the Oculus HMD, and showed that the Oculus implementation is not efficient: 20% of the bandwidth is wasted to download video segments that are never used. Le Feuvre and Concolato and Concolato et al. studied practical implementations of tile-based quality-variable 360-degree video viewport-adaptive streaming. Corbillon et al. studied an optimal viewport-adaptive streaming selection algorithm based on different heuristically defined quality-variable versions of 360-degree videos. In this paper, we focus on an optimization model to generate quality-variable video versions for viewport-adaptive streaming that maximize the quality inside users' viewports when the number of video versions available to the user is limited. To the best of our knowledge, nobody studied before us the optimal parameters to generate a limited number of quality-variable versions of 360-degree videos.
2.3 Regions of Interest
Our work also has some common roots with the literature on Region of Interest (RoI) in video delivery. The human vision system can only extract information at high resolution near the fovea, where the gaze focuses its attention; the vision resolution decreases with eccentricity. Within the same video picture, it is common that most users focus their gaze on some specific regions of the picture, named RoIs. Researchers have studied saliency maps, which measure the gaze locations of multiple users watching the same video. The goal is to extract RoIs and, if possible, to corroborate RoIs with picture structures to enable automatic RoI prediction [Borji et al.; Dodge et al.]. However, the concept of saliency map should be revisited with 360-degree videos, because the head movement is the prevailing factor to determine the attention of users. To the best of our knowledge, the relation between gaze-based saliency maps and head movements in HMDs has not been demonstrated.
Attention-based video coding [Boccignone et al.; Itti; Lee et al.] is a coding strategy which takes advantage of gaze saliency prediction. The quantization parameters of the encoder are adjusted to allocate more bits near the different RoIs and fewer bits farther away. A live encoder can perform attention-based video coding by using either feedback from a set of specific users or predicted RoIs.
We revisit this approach to 360-degree videos in this paper. Our work is both to study per-segment RoI localization based on head movement information and to generate RoI-based encoded video representations. The creation of spherical quality-variable video versions based on head movement analysis enables viewport-adaptive streaming in the same manner that saliency maps and attention-based video coding enable efficient video delivery for regular planar videos [Dodge et al.].
3 QUALITY-VARIABLE VIDEOS
We first introduce a model for quality-variable 360-degree videos and then provide some illustrations of this model on some implementation proposals.
3.1 Generic Model
Spherical videos. The unit sphere that underlies the 360-degree video is split into N non-overlapping areas that cover the full sphere. The set of areas is denoted by A. In essence, each area corresponds to the video signal projected on a given direction of the sphere. Let us denote by s_a the surface of an area a on the sphere and observe that the smallest possible surface s_a is the pixel (in which case the set A is the full signal decomposition and N is the video resolution). However, video preparation processes are generally based on a video decomposition A with larger surfaces s_a, such as the concept of tiles in HEVC [Misra et al.]. For the preparation of 360-degree videos, any decomposition of the video into A can be considered provided that it covers the whole sphere, formally $\sum_{a \in A} s_a = 4\pi$.
Area Quality. The goal of a video encoder is to compress the information of the video signal corresponding to a given area a into a decodable byte-stream (lossy compression generating distortion when the video is eventually played). An encoder uses a compression algorithm with various parameter settings to encode the video. For a given encoder, the more compression due to the encoding settings, the more distortion in the decoded and played video. Using MPEG terminology, we use the generic term quality to express the settings of the encoding scheme on a given area, regardless of the used area encoding process. The number of different ways to encode areas is finite, which results in a set of available qualities Q for this encoder (typically the quality ranges from 1 to 100 in MPEG). The set Q is totally ordered with a transitive comparison function, noted with >.
We provide some natural notations: q_min (respectively q_max) is the lowest (respectively highest) possible quality for areas. The encoder processes an area a ∈ A with a quality q to generate a byte-stream of size b_{a,q}. Given the usual strictly increasing feature of the rate-distortion performance of video encoders, we get that if a quality q_1 ∈ Q is better than a quality q_2 ∈ Q (formally q_1 > q_2), then we have b_{a,q_1} > b_{a,q_2}, ∀a ∈ A.
Video Version. We use the term version to represent the transportable full video signal byte-stream. It is the video as it can be delivered to clients. Based on the definitions of areas and qualities, a version is a function that associates with every area a ∈ A a unique quality q ∈ Q, which corresponds to the encoding quality of a. Let us denote by R the set of all possible versions. Please note that the number of possible versions is finite since both the set of areas A and the set of qualities Q are finite. However, the number of different versions is |Q|^N. We use the notation r(a) to denote the quality q at which the area a ∈ A is encoded in the version r ∈ R.
Let B be a positive real number. We denote by R_B the subset of versions in R such that r ∈ R_B satisfies that the sum of the byte-stream sizes over all areas a ∈ A is equal to B. Formally, we have:

$\forall r \in R_B, \quad \sum_{a \in A} b_{a, r(a)} = B$
Viewport. One of the peculiarities of 360-degree videos is that at a given time t a user u watches only a fraction of the whole video, which is generally called the viewport. The viewport displays only a subset of all the areas of the sphere. Let v_{u,t,a} be a real number equal to the ratio of the surface of area a that is inside the viewport of user u at time t, and let v_{u,a} be the average value of v_{u,t,a} over all times t in a video segment: $v_{u,a} = \sum_t v_{u,t,a} / T$, with T the duration of the segment. With respect to the same definition of quality, we have that the average viewport quality during a video segment can be defined as the sum of the qualities of all the areas that are visible in the viewports, formally $\sum_a v_{u,a} \cdot r(a)$. In practice, the satisfaction of the user watching a viewport is more complex since it depends not only on the visible distortion of the different areas in the viewport but also on the possible effects that different levels of distortion on contiguous areas can produce. Nevertheless, for the sake of simplicity, and with regard to the lack of formal studies dealing with subjective satisfaction evaluation of multi-encoded videos, we consider here that the satisfaction grows with the sum of qualities of the visible areas.
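As an illustration (our Python sketch, not code from the original system), the average viewport quality of a segment can be computed directly from these definitions.

```python
def viewport_quality(v_u, version):
    """Average quality in user u's viewport over a segment: sum_a v_{u,a} * r(a).
    `v_u` maps each area to its time-averaged visible fraction and `version`
    maps each area to the quality at which this version encodes it."""
    return sum(v_u[a] * version[a] for a in version)
```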
3.2 Illustration: Offset Projections
To apply the implementation of offset projection as presented by Zhou et al. to our model, we need to introduce some additional notations. Let 0 ⩽ β ⩽ 1 be a real number, which is the magnitude of the vector used by the "offset" projection. We denote by θ the angular distance between the "offset direction" and a given point on the sphere. The variation of the sampling frequency compared to the frequency of the same projection without offset at angular distance θ is:
f(θ) = (1 + 2β + β²) / (1 + β) · (β cos(θ) + 1) / (1 + 2β cos(θ) + β²)
If we denote by D(a_1, a_2) the angular distance between the centers of two areas a_1 and a_2, offset projections can be modeled by the set of versions r ∈ R such that there exists a_offset ∈ A such that ∀a ∈ A, r(a) = f(D(a_offset, a)) · r(a_offset).
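The following Python sketch evaluates this ratio and builds an offset version according to the model above. The exact grouping of f(θ) is reproduced from the expression given here and should be checked against Zhou et al.; the areas, angular distances and base quality are hypothetical toy data.

```python
import math

def sampling_ratio(theta, beta):
    """Sampling-frequency ratio f(theta) of the offset projection, with
    theta the angular distance to the offset direction and 0 <= beta <= 1
    the offset magnitude (form taken from the expression above)."""
    return ((1 + 2 * beta + beta ** 2) / (1 + beta)
            * (beta * math.cos(theta) + 1)
            / (1 + 2 * beta * math.cos(theta) + beta ** 2))

def offset_version(distances, q_offset, beta):
    """Version r(a) = f(D(a_offset, a)) * r(a_offset); distances maps each
    area id to its angular distance to a_offset (toy data)."""
    return {a: sampling_ratio(d, beta) * q_offset for a, d in distances.items()}

distances = {"front": 0.0, "side": math.pi / 2, "back": math.pi}
print(offset_version(distances, q_offset=30, beta=0.5))
# {'front': 30.0, 'side': 36.0, 'back': 90.0}
```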
Illustration: Tiling
We define the concepts of tile and tiled partition to extend our model to tiled versions. A tile is a set of contiguous areas of A. A tiled partition T of A is a set of non-overlapping tiles that cover A. A tiled version using the tiled partition T is a version r ∈ R such that the quality is uniform on each tile of T. Formally, we have ∀τ ∈ T, |r(τ)| = 1.
Note that in the tiled scenario, the service provider can generate a version for each tile individually without offering a version for the whole video. In this case, the client has to select separately a version for each tile to generate what we denote by a tiled version in our model. This differs from the other scenarios where the service provider is the one that decides which video version to generate.
VIEWPORT-ADAPTIVE STREAMING
An adaptive streaming system is modeled as being one client and one server, where the server offers J different versions of the video, and the client periodically selects one of these versions based on a version selection algorithm.
Server. The main question is to prepare J versions in R among all the possible combinations of qualities and areas. In the practical 360-degree video streaming system described by Zhou et al. [START_REF] Zhou | A measurement study of oculus 360 degree video streaming[END_REF], the number of versions J is equal to 30, while the solution that is promoted by Niamut et al. [START_REF] Niamut | MPEG DASH SRD: spatial relationship description[END_REF] is to offer all the combinations of tiles (typically 8 × 4) and qualities (typically 3). In practice, a low number of versions J is suitable since it means fewer files to manage at the server side (96 files in the latter case) and less complexity in the choice of the version at the client side (more than 32 thousand combinations in the aforementioned case). The main variable of our problem is the boolean x_r, which indicates whether the server decides to offer the version r ∈ R. Formally, we have:
x_r = 1 if the server offers r ∈ R, and 0 otherwise.
Since the server offers only J different versions, we have Σ_{r∈R} x_r = J. In the following, we restrict our focus on the case of a given overall bit-rate budget B, which is a real number. The main idea is to offer several versions of the video meeting the same bandwidth requirement but with different quality distributions. All the versions thus have the same overall bit-rate "budget" but they differ by the quality of the video, which is better in some directions of the sphere than in others.
Client. The version selection algorithm first determines the most suitable bit-rate, here B, and then selects one and only one version among the J offered versions for every segment of the video, ideally the version that best matches the user viewport. To simplify notations, we omit in the following the subscripts related to temporal segments, and we thus denote by y_{u,r} the binary variable that indicates that user u selects r ∈ R for the video. Formally:
y_{u,r} = 1 if the client u selects r ∈ R, and 0 otherwise.
Since the user selects only one offered version, we have Σ_{r∈R} y_{u,r} · x_r = 1. We consider an ideal version selection algorithm and we thus assume that the client always selects the version that maximizes the viewport quality as previously defined, which is the r such that Σ_a v_{u,a} · r(a) is maximum.
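This ideal selection rule is easy to state in code; below is a minimal Python sketch with hypothetical offered versions and visibility ratios.

```python
# Ideal client-side selection: pick the offered version that maximizes the
# visibility-weighted viewport quality. Versions and ratios are toy data.

def select_version(offered, v_u_a):
    return max(offered, key=lambda r: sum(v_u_a[a] * r[a] for a in v_u_a))

v_u_a = {"a0": 0.9, "a1": 0.4, "a2": 0.1}
offered = [
    {"a0": 80, "a1": 20, "a2": 20},   # QER centered on a0
    {"a0": 20, "a1": 80, "a2": 20},   # QER centered on a1
]
print(select_version(offered, v_u_a))
# the first version wins (score 82 against 52 for this viewport)
```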
Model Formulation
Our objective is to determine, for a given set of users who request the video at bit-rate B, the J versions that should be prepared at the server side so that the quality of the viewports is maximum. In its most generic form, the problem can thus be formulated as follows.
max_{y_{u,r}}   Σ_u Σ_{r∈R} y_{u,r} · Σ_{a∈A} v_{u,a} · r(a)
Such that:
Σ_{a∈A} b_{a,r(a)} = B   ∀r ∈ R   (1a)
Σ_{r∈R} x_r ⩽ J   (1b)
Σ_{r∈R} y_{u,r} = 1   ∀u   (1c)
y_{u,r} ⩽ x_r   ∀r, u   (1d)
Note that, in this generic form, the problem is not tractable: the set R of candidate versions grows as |Q|^N with the number of areas.
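Small instances can nevertheless be solved directly with an off-the-shelf MILP solver; the sketch below uses the open-source PuLP library on hypothetical toy data (the paper's own implementation relies on IBM CPLEX). The enumeration of the candidate set and the budget constraint (1a) are assumed to be handled when generating the candidate list.

```python
# Toy instance of the version-selection problem solved with PuLP
# (pip install pulp). Candidates are assumed to already satisfy (1a).
import pulp

areas = ["a0", "a1", "a2"]
candidates = [                     # hypothetical equal-budget versions
    {"a0": 80, "a1": 20, "a2": 20},
    {"a0": 20, "a1": 80, "a2": 20},
    {"a0": 20, "a1": 20, "a2": 80},
]
users = {                          # v_{u,a}: average visibility ratios
    "u0": {"a0": 0.9, "a1": 0.3, "a2": 0.0},
    "u1": {"a0": 0.1, "a1": 0.2, "a2": 0.9},
}
J = 2                              # number of versions the server may offer

prob = pulp.LpProblem("viewport_adaptive", pulp.LpMaximize)
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(candidates))]
y = {(u, i): pulp.LpVariable(f"y_{u}_{i}", cat="Binary")
     for u in users for i in range(len(candidates))}

# Objective: sum over users of the viewport quality of the chosen version.
prob += pulp.lpSum(y[u, i] * sum(users[u][a] * candidates[i][a] for a in areas)
                   for u in users for i in range(len(candidates)))
prob += pulp.lpSum(x) <= J                                        # (1b)
for u in users:
    prob += pulp.lpSum(y[u, i] for i in range(len(candidates))) == 1  # (1c)
    for i in range(len(candidates)):
        prob += y[u, i] <= x[i]                                   # (1d)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in range(len(candidates)) if x[i].value() == 1])
```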
PRACTICAL OPTIMIZATION MODEL
We take into account some practical additional constraints and some further hypotheses to formulate a tractable optimization problem, which meets key questions from content providers.
Practical Hypothesis
We first suppose that each area a ∈ A in the whole spherical video has the same coding complexity. This means we suppose that, for a given quality, the byte-stream size of an area is proportional to its size. We derive the concept of surface bit-rate, expressed in Bps/m², which is the amount of data that is required to encode an area at a given quality. We obtain that b_max (respectively b_min) corresponds to the surface bit-rate for the maximum (resp. minimum) quality.
Second, we restrict our study to only two qualities per version. We follow in that spirit the MPEG experts in the Omnidirectional Media Application Format (OMAF) group [12], and notably we follow their recommendation to implement scalable tiled video coding such as HEVC Scalable Extension (SHVC) [START_REF] Boyce | Overview of SHVC: scalable extensions of the high efficiency video coding standard[END_REF] for the implementation of quality-variable 360-degree video versions. It means that for each version we distinguish a Quality Emphasized Region (QER), which is the set of areas that are at the high quality noted b qer , and the remaining areas, which are at the low quality b out . In the SHVC encoding, b qer corresponds to the video signal with the enhancement layer, while b out contains only the base layer. Let s r be the overall surface of the areas that are in QER for a given version r ∈ R. The bit-rate constraints (1a) can thus be expressed as follow:
s r • b qer + (4π -s r ) • b out = B (2)
Third, we introduce a maximum gap between both qualities. The motivation is to prevent the video from exhibiting too visible quality changes between areas. This quality gap ratio, denoted by r_b, can be defined as the maximum ratio relating the qualities b_qer and b_out:
b_qer / b_out < r_b   (3)
Figure 1: A rectangular region of the sphere: in blue the two small circles that delimit the region and in red the two great circles that delimit the region.
Finally, we define the QER as a rectangular region defined on the sphere as shown in Figure 1. We thus adopt the restriction that has been introduced in the MPEG OMAF [12] to delimit a so-called rectangular region on the sphere. We also adopt the same way to define the region by delimiting two small circles (angular distance vDim), two great circles (angular distance hDim) and the spherical coordinates of the region center is (1, θ, φ).
In the following, we consider only video versions r ∈ R such that there exist −π ⩽ θ ⩽ π, 0 ⩽ φ ⩽ π, −π ⩽ hDim ⩽ π, and 0 ⩽ vDim ⩽ π such that for every area a ∈ A, if a is inside the rectangle characterized by (θ, φ, hDim, vDim), the bit-rate of a is b_qer, otherwise it is b_out. We denote such a version by r_{θ,φ,hDim,vDim}.
Bit-Rate Computation
The objective function (1) implies that if two versions have QERs containing the same areas, the optimal set of offered video versions can only contain the one that maximizes b_qer subject to the bit-rate constraint (2) and the ratio constraint (3).
To reduce the complexity of the model, we pre-computed the values of b_qer and b_out depending on the size s_r of the QER, and we identify four different cases. For simplicity, we provide in the following the main ideas of the algorithm and put the details of the mathematical model in the Appendix of the paper.
We first combine the constraints given by the overall bit-rate budget with Equation ( 2) and the knowledge that b min ⩽ b out < b qer ⩽ b max . There are two cases, depending on whether the QER is small or not:
• When the surface of the QER is small, i.e., s_r ⩽ (B − 4π·b_min) / (b_max − b_min) (see Appendix), the constraint on the maximum surface bit-rate prevails for b_qer. The surface bit-rate inside the QER can be maximum. The bit-rate budget that remains after deducing the bit-rate in the QER is B − (s_r · b_max). This remaining bit-rate budget is large enough to ensure that the surface bit-rate for the areas outside the QER is greater than b_min. We obtain that b_qer is equal to b_max and b_out is derived as:
b_out = (B − b_max · s_r) / (4π − s_r)   (4)
• When the surface of the QER is large, i.e., s_r ⩾ (B − 4π·b_min) / (b_max − b_min), the constraint on the minimum surface bit-rate prevails. The surface bit-rate inside the QER cannot be b_max, otherwise the remaining bit-rate that can be assigned to the video area outside the QER would not be large enough to ensure that b_out is greater than b_min. Here, we first have to set b_out to b_min and then assign the remaining budget B − (b_min · (4π − s_r)) to the QER area.
b_qer = (B − b_min · (4π − s_r)) / s_r   (5)
Next, we consider the quality gap ratio, which applies to both previously discussed cases:
• When the QER is small, setting b_qer = b_max and computing b_out with Equation (4) can violate Equation (3). It occurs for any QER such that (see Appendix):
s_r ⩾ (4π · b_max − r_b · B) / ((1 − r_b) · b_max)
The surface bit-rate b_qer should instead be reset to b_qer = r_b · b_out. This reset leaves some extra bit-rate unassigned: s_r · (b_max − r_b · b_out). This extra bit-rate can then be re-assigned to both b_qer and b_out (see Appendix).
• When the QER is large, setting b_out = b_min and computing b_qer with Equation (5) can also violate Equation (3). It occurs for any QER such that:
s_r ⩽ (4π · b_min − B) / ((1 − r_b) · b_min)
Similarly to the previous case, resetting b_qer with respect to the quality gap ratio releases some extra bit-rate, which can be re-assigned to both b_out and b_qer.
We represent in Figure 2 the algorithm with the four cases when it applies to standard settings¹ of the overall bit-rate B, the maximum surface bit-rate b_max, the minimum surface bit-rate b_min, and the quality gap ratio r_b. Finally, we show in Figure 3 how the surface bit-rates are assigned depending on the surface s_r for a given parameter configuration (see the caption and Section 6). In Figure 3, the thin gray vertical lines correspond to the thresholds at which the algorithm switches to a different case.
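The four cases reduce to a short closed-form procedure. A minimal Python sketch is given below, assuming a unit sphere (total surface 4π), the bit-rate values of Section 6, and a hypothetical quality gap ratio r_b = 4; the ratio-binding branch solves b_qer = r_b · b_out under the budget constraint (2), which is exactly what the extra bit-rate redistribution of the Appendix achieves.

```python
import math

SPHERE = 4 * math.pi  # total surface of the unit sphere

def surface_bitrates(s_r, B, b_min, b_max, r_b):
    """Surface bit-rates (b_qer, b_out) for a QER of surface s_r, budget B,
    bounds [b_min, b_max] and quality gap ratio r_b (Eqs. (2)-(5));
    assumes a consistent configuration as in footnote 1."""
    if s_r <= (B - SPHERE * b_min) / (b_max - b_min):
        # small QER: the maximum surface bit-rate prevails inside the QER
        b_qer = b_max
        b_out = (B - b_max * s_r) / (SPHERE - s_r)            # Eq. (4)
    else:
        # large QER: the minimum surface bit-rate prevails outside the QER
        b_out = b_min
        b_qer = (B - b_min * (SPHERE - s_r)) / s_r            # Eq. (5)
    if b_qer > r_b * b_out:
        # quality gap ratio binding, Eq. (3): redistribute the extra
        # bit-rate by solving b_qer = r_b * b_out under the budget (2)
        b_out = B / (SPHERE + s_r * (r_b - 1))
        b_qer = r_b * b_out
    return b_qer, b_out

# B in Mbps, surface bit-rates in Mbps/m^2 (values from Section 6).
for s in (0.5, 2.0, 4.5, 11.0):
    print(s, surface_bitrates(s, B=12.56, b_min=0.45, b_max=2.1, r_b=4.0))
```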
EVALUATION - CASE STUDY
Settings
We used custom-made C++ software, publicly available at https://github.com/xmar/optimal-set-representation-viewport-adaptive-streaming, which uses the IBM CPLEX library to solve our optimization problem.
Dataset of Head Movements. We used the public head movement dataset that we recently extracted and shared with the community [START_REF] Corbillon | 360-degree video head movement dataset[END_REF] (http://dash.ipv6.enstb.fr/headMovements/). This dataset contains the head orientation of 59 persons watching, with a HMD, five 70-second-long 360-degree videos. In this paper we used the results from only two out of the five videos available: roller-coaster and diving. We selected those videos because users exhibit different behaviors while watching them: most users focus on a single RoI in the roller-coaster video, while they alternate between several moving RoIs in the diving video.
Content Provider Case Study. The default parameters are summarized in Table 1. The content provider generates up to K = 4 video versions and solves the optimization problem for every video segment (i.e., each video segment has its own set of versions). The parameters related to the bit-rates are similar as in Figure 3: a total bit-rate budget B of 12.56 Mbps, a maximal surface bit-rate b_max of 2.1 Mbps/m² and a minimal surface bit-rate b_min of 0.45 Mbps/m². We restricted the positions of the center of the QER on the sphere to 17 possible latitudes and 17 possible longitudes. Moreover the angular distances hDim and vDim can take 12 different values. We split the sphere into a total of N = 400 areas. We cut the videos of the dataset into 2 s long segments. We solved the optimization model independently for each video segment.
Theoretical Gains of Viewport-Adaptive Streaming
Our first goal is to evaluate the possible (theoretical) gains that the implementation of viewport-adaptive streaming can offer to the content providers. The gains can be evaluated from two perspectives: either the opportunity to save bandwidth while offering the video at the same level of quality as if the video was sent with uniform quality, or the opportunity to improve the quality of the video that is displayed at the client side for the same bit-rate as for a standard delivery. We computed the average surface bit-rate inside the viewport of the users (named visible surface bit-rate in the following) for different bit-rate budgets. The average visible surface bit-rate b_vqer in the viewport during a segment can be formally written as follows, with N_u the number of users:
b_vqer = ( Σ_{r,u} y_{u,r} · Σ_a v_{u,a} · b_r(a) · s_a ) / ( N_u · Σ_a v_{u,a} · s_a )    (6)
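Equation (6) translates directly into code; below is a minimal Python sketch with hypothetical areas, visibility ratios and chosen versions, where the normalization is taken over the total visible surface of all users.

```python
# Sketch of the visible surface bit-rate metric of Eq. (6) for one segment.
# chosen[u] maps each area to its surface bit-rate in the version selected
# by user u, v[u][a] are visibility ratios, s[a] area surfaces in m^2.

def visible_surface_bitrate(chosen, v, s):
    num = sum(v[u][a] * chosen[u][a] * s[a] for u in chosen for a in s)
    den = sum(v[u][a] * s[a] for u in chosen for a in s)
    return num / den

s = {"a0": 1.0, "a1": 1.0, "a2": 2.0}
v = {"u0": {"a0": 1.0, "a1": 0.5, "a2": 0.0}}
chosen = {"u0": {"a0": 2.1, "a1": 2.1, "a2": 0.45}}   # Mbps/m^2
print(visible_surface_bitrate(chosen, v, s))          # 2.1
```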
Figure 4 represents the mean average visible surface bit-rate for all segments of the two selected videos. The horizontal dashed line shows the average visible surface bit-rate for the bit-rate budget of 12.56 Mbps that is uniformly spread on the sphere, while the vertical dashed line indicates the quality for a constant bit-rate of 12.56 Mbps. We also represent the gains from the two aforementioned perspectives (either bit-rate savings or quality).
For a constant average quality inside the user viewports, the delivery of optimally generated QER versions enables 45% bandwidth savings. For a constant bit-rate budget, the optimal viewport-adaptive delivery enables an average increase of visible surface bit-rate of 102%.
Video Content vs. Delivery Settings
We now study the settings of the viewport-adaptive streaming systems, especially the parameters related to the number of different versions (J) and the segment size (T). We compare the set of versions that are generated by the optimal solver for both selected videos. We are interested in studying whether there exists a common best-practice setting to generate versions, regardless of the video content, or whether each video should be prepared with respect to the content by a dedicated process with its own setting. We show the results computed separately for the roller-coaster and the diving video. Recall that the roller-coaster video has a single static RoI and most of the 59 users focus on it. On the contrary, the diving video has multiple moving RoIs, which most users alternatively watch. Figure 5 represents the average visible surface bit-rate b_vqer of the optimal QER versions for each user and each video segment for both videos: the roller-coaster video is in plain-green lines while the diving video is in dashed-blue lines. The results are shown with a box plot, with the 10th, 25th, 50th, 75th and 90th percentiles for the 30 segments watched by the 59 users of each video in an optimal viewport-adaptive delivery system.
With viewport-adaptive streaming, the higher the number of QER versions offered by the content provider, the better the average quality watched by the users, because the set of versions covers more user behaviors. However, we notice that there exists a threshold value after which increasing the number of versions does not significantly improve the quality of the viewport of the users. This threshold depends on the video content. For the roller-coaster video, the limit is four QER versions while this limit is eight for the diving video. Please note that both thresholds are significantly lower than the thirty versions that are generated by state-of-the-art viewport-adaptive delivery systems [START_REF] Kuzyakov | End-to-end optimizations for dynamic streaming[END_REF].
In Figure 6 we fix the number of QER versions to four and we evaluate the impact of the segment size on the generated QER versions. As for Figure 5, the results are displayed with a box plot, which follows the same color code. The median quality decreases as the size of the segments increases. Indeed, the longer the segments, the wider the head movements of the users. But, similarly to the number of video versions, we notice that the median average displayed quality for the diving video is more sensitive to the segment size than for the roller-coaster video. For the latter, the quality decreases for segments longer than 2 s while for the diving video, the quality decreases for segments longer than 1 s.
QER Dimensions vs. Overall Bit-rate
We study the main characteristics of the generated QER versions with a focus on the impact of the global bit-rate budget on the dimensions. We evaluate both the size of the QER inside each video version and the shape of the QERs.
Figure 7 represents the cumulative distribution function (CDF) of the surface of the QER inside each generated optimal version, for different global bit-rate budgets, for both videos. The dashed vertical black line represents the surface of the viewports of the users as it is seen in the HMD.
The size of the QERs increases with the overall bit-rate budget. If the bit-rate budget is small, the size of each QER is smaller than the surface of the viewports. It means that no user has a viewport with full quality everywhere. The optimal solver prefers here to keep a high quality on an area that is common to the viewports of many users. If we increase the available bit-rate budget, the surface of the optimal QERs increases and becomes wider than the viewport, so a user who moves the head can nevertheless still have a viewport within the QER. Figure 8 represents the probability density function (PDF) of the difference between the horizontal and vertical dimensions of the generated QERs. For instance, Figure 8a indicates that 21 % of the QERs have a horizontal size hDim that is within the range [−1 + vDim, −0.5 + vDim). The more occurrences of QER on the right, the more horizontal QERs are generated by the optimal solver.
QERs often have a square shape (the horizontal dimension is close to the vertical dimension), and are mostly more horizontal than vertical. The horizontal shape can be explained by the fact that users move more often horizontally than vertically (they often stay close to the horizon). Moreover, when the bit-rate budget is limited, shapes are less often square. Our interpretation is that, since the generated QERs are narrower, the optimal solver generates QERs that cover various positions, corresponding to more users whose attention is on various positions around the horizon.
CONCLUSION
This paper investigates some theoretical models for the preparation of 360-degree videos for viewport-adaptive streaming systems. Viewport-adaptive streaming has recently received a growing attention from both academic [START_REF] Corbillon | Viewport-adaptive navigable 360-degree video delivery[END_REF][START_REF] Ochi | Live streaming system for omnidirectional video[END_REF][START_REF] Quan | Optimizing 360 video delivery over cellular networks[END_REF] and industrial [START_REF] Aminlou | Testing methodology for viewport-dependent encoding and streaming[END_REF][START_REF] Di | Adaptive streaming for fov switching[END_REF][START_REF] Thomas | Draft for ve on region and point description in omnidirectional content[END_REF] communities. Despite some promising proposals, no previous work has explored the interplay between the parameters that characterize the video area in which the quality should be better. We denote this special video area a QER. In this paper, we address, on a simplified version of our theoretical model, the fundamental trade-off between the spatial size of the QERs and the aggregate video bit-rate. We show that some new concepts, such as the surface bit-rate, can be introduced to let the content provider efficiently prepare the content to be delivered. Finally, we demonstrate the potential benefits of viewport-adaptive streaming: compared to streaming a video version with uniform quality, the displayed quality increases by more than 102 % for a constant bit-rate budget, and the bit-rate budget can be reduced by more than 45 % for the same displayed video quality.
In this paper, we assumed that the content provider already has some user head movement statistics. In future work we will study the generic QER parameters that the provider can use to generate initial video versions of a 360-degree video, without video-specific statistics. When the provider receives enough analytics, it will be able to generate versions adapted to real user behavior on each video segment. Such functionality would be required in both the processed and the live video viewport-adaptive streaming. Additionally, in this paper we studied only a simplified version of the theoretical model with only two different levels of quality per version. We plan to study smoother decreases of the quality inside video versions.
APPENDIX
Extra bit-rate assignment. In some cases, the algorithm obtains (at step 3 in Figure 2) some so-called extra bit-rate, which comes from the quality gap ratio. This extra bit-rate must be assigned to both the QER and non-QER areas while still maintaining the constraints. Let E be the extra bit-rate. Let y be the ratio of the extra bit-rate that is assigned to the non-QER areas. Let b_int be an intermediate surface bit-rate computed as in step 1 in Figure 2.
We have:
b_out = b_int + y · E / (4π − s_r),    b_qer = r_b · b_int + (1 − y) · E / s_r
Given that the quality gap ratio is the prevailing constraint in the considered cases, b qer = r b • b out . We thus obtain:
r_b · b_int + (1 − y) · E / s_r = r_b · (b_int + y · E / (4π − s_r))  ⟹  y = (4π − s_r) / (4π + s_r · (r_b − 1))
Figure 2: Algorithm for surface bit-rates in and out of the QER. The algorithm depends on the surface of the QER s_r. We show here the four different cases, for various surfaces (smallest to largest from left to right).
Figure 3: Assignment of the surface bit-rates b_qer and b_out as a function of the QER surface s_r, for a given parameter configuration.
Figure 4: Visible surface bit-rate depending on the global bit-rate B. The horizontal red arrow shows the difference in total bit-rate to deliver viewports with the same average quality as a user would observe with a video encoded with a uniform quality. The vertical red arrow indicates the gain in quality (measured in surface bit-rate) compared to viewports extracted at the same position on a video with uniform quality with the same total bit-rate.
Figure 5: Visible surface bit-rate depending on the number of offered QER versions. The dark red line represents the visible surface bit-rate of a video encoded with the same overall bit-rate but with uniform quality.
Figure 6: Visible surface bit-rate depending on the size of the segment. The dark red line represents the visible surface bit-rate of a video encoded with the same overall bit-rate but with uniform quality.
Figure 7: Cumulative distribution function of the surface of the QER inside each generated optimal version, for different global bit-rate budgets.
Figure 8: Difference between the horizontal and vertical dimensions of the QERs.
Limits in the Optimal Bit-Rate Algorithm
Constraint on maximum and minimum bit-rate. Let us set b_qer = b_max, so that s_r · b_max bit-rate is used for the QER. The remaining bit-rate can be used for the non-QER areas: b_out = (B − s_r · b_max) / (4π − s_r). We know that b_min ⩽ b_out, so:
b_min ⩽ (B − s_r · b_max) / (4π − s_r)  ⟺  s_r ⩽ (B − 4π · b_min) / (b_max − b_min)
Constraint on the quality gap ratio. Let us set b_qer = b_max and let b_out be computed from Equation (4). However, for some s_r, it can happen that r_b · b_out is lower than b_max:
r_b · (B − b_max · s_r) / (4π − s_r) ⩽ b_max  ⟺  s_r ⩾ (4π · b_max − r_b · B) / ((1 − r_b) · b_max)
Table 1: Default evaluation settings.
¹ In some configurations, it is possible that some of the presented cases do not hold, since the threshold for the cases can be negative, greater than 4π, or interfering with a prevailing constraint. This however does not occur for the most common configuration parameters, such as a quality gap ratio that is not too large and consistent values for both b_min and b_max.
|
00176758 | en | [
"info.info-au"
] | 2024/03/05 22:32:15 | 2008 | https://inria.hal.science/inria-00176758/file/IEEE-FTobserversRevised.pdf | Wilfrid Perruquetti
Thierry Floquet
Emmanuel Moulay
Finite time observers: application to secure communication
Keywords: Finite time observers, finite time synchronization, two-channel transmission, secure communication
come
I. INTRODUCTION
A lot of encryption methods involving chaotic dynamics have been proposed in the literature since the 90's. Most of them consists of transmitting informations through an insecure channel, with a chaotic system. The synchronization mechanism of the two chaotic signals is known as chaos synchronization and has been developed for instance in [START_REF] Pecora | Synchronization in chaotic systems[END_REF]. The idea is to use the output of the drive system to control the response system so that they oscillate in a synchronized manner. [email protected], [email protected]) E. Moulay is with IRCCyN (UMR-CNRS 6597), 1 rue de la Noë, B.P. 92 101, 44321 Nantes CEDEX 03, France (e-mail: [email protected]) October 4, 2007 DRAFT Since the work [START_REF] Nijmeijer | An observer looks at synchronization[END_REF], the synchronization can be viewed as a special case of observer design problem, i.e the state reconstruction from measurements of an output variable under the assumption that the system structure and parameters are known. This approach leads to a systematic tool which guarantees chaos synchronization of a class of observable systems. Different observer based methods were developed: adaptive observers [START_REF] Fradkov | Adaptive observer-based synchronization for communication[END_REF], backstepping design [START_REF] Mascolo | Controlling chaos via backstepping design[END_REF], Hamiltonian forms [5] or sub-Lyapunov exponents [START_REF] Pecora | Synchronization in chaotic systems[END_REF]. Nevertheless, during the chaos synchronization of continuous systems, the convergence of the error is always asymptotic as in [START_REF] Grassi | Nonlinear observer design to synchronize hyperchaotic systems via a scalar signal[END_REF]. Instead of attempting the construction of an asymptotic nonlinear observer for the transmitter or coding system, a finite time chaos synchronization for continuous systems (in the sense that the error reaches the origin in finite time) can be developed. Finite time observers for nonlinear systems that are linearizable up to output injection have been proposed in [START_REF] Engel | A continuous-time observer which converges in finite time[END_REF] and [8] using delays or in [START_REF] Drakunov | Sliding mode observers. tutorial[END_REF] and [START_REF] Perruquetti | A note on sliding observer and controller for generalized canonical forms[END_REF] using discontinuous injection terms. Recently, an algebraic method (using module theory and non-commutative algebra) leading to the non asymptotic estimation of the system states has been developed in [START_REF] Fliess | Reconstructeurs dŠetat[END_REF] and applied to chaotic synchronization in [START_REF] Sira-Ramírez | An algebraic state estimation approach for the recovery of chaotically encrypted messages[END_REF]. In this work, an homogeneous finite time observer is introduced. This observer yields the finite time convergence of the error variables without using delayed or discontinuous terms. Then, it is applied to the finite time synchronization of chaotic systems and combined with the conventional cryptographic method called two-channel transmission in order to design a cryptosystem. The technique of two channel transmission has been proposed in [START_REF] Jiang | A note on chaotic secure communication systems[END_REF]. Other cryptography techniques for secure communications exist such as the parameter modulation developed in [START_REF] Yang | Secure communication via chaotic parameter modulation[END_REF].
The paper is organized as follows. The problem statement and some definitions are given in Section II. A homogeneous finite time observer is developed in Section III. On the basis of this observer, a two-channel transmission cryptosystem is built and is applied in Section IV to the Chua's circuit, which is relevant to secure communications (see e.g. [START_REF] Lozi | Secure communications via chaotic synchronization in chua's circuit and bonhoeffer-van der pol equation: numerical analysis of the errors of the recovered signal[END_REF] and [START_REF] Itoh | Performance of yamakawa's chaotic chips and chua's circuits for secure communications[END_REF]).
II. PROBLEM STATEMENT AND DEFINITIONS
Let us consider a nonlinear system of the form:
ẋ = η (x, u) (1)
y = h(x) (2)
where x ∈ R^d is the state, u ∈ R^m is a known and sufficiently smooth control input, and y(t) ∈ R is the output. η : R^d × R^m → R^d is a known continuous vector field. It is assumed that the system (1)-(2) is locally observable [17] and that there exist a local state coordinate transformation and an output coordinate transformation which transform the nonlinear system (1)-(2) into the following canonical observable form:
ż = Az + f(y, u, u̇, ..., u^(r)) (3)
y = Cz (4)
where z ∈ R n is the state, r ∈ N >0 and
A =
[ a_1      1  0  ⋯  0 ]
[ a_2      0  1  ⋯  0 ]
[  ⋮       ⋮  ⋮  ⋱  ⋮ ]
[ a_{n-1}  0  0  ⋯  1 ]
[ a_n      0  0  ⋯  0 ] ,    C = [ 1  0  ⋯  0 ]. (5)
The transformations involved in such a linearization method for different classes of systems with n = d can be found in [18], [START_REF] Krener | Linearization by output injection and nonlinear observers[END_REF], [START_REF] Krener | Nonlinear observers with linearizable error dynamics[END_REF], [START_REF] Xia | Nonlinear observer design by observer error linearization[END_REF]. One can have n > d in the case of system immersion [START_REF] Back | Dynamic observer error linearization[END_REF], [START_REF] Jouan | Immersion of nonlinear systems into linear systems modulo output injection[END_REF].
Then, the observer design is quite simple since all nonlinearities are functions of the output and known inputs. Asymptotic stability can be obtained using a straightforward generalization of a linear Luenberger observer. Finite time sliding mode observers have already been designed for system (3)-(4) (see e.g. [START_REF] Drakunov | Sliding mode observers. tutorial[END_REF], [START_REF] Perruquetti | A note on sliding observer and controller for generalized canonical forms[END_REF]). However, they rely on discontinuous output injections and on a step-by-step procedure that can be harmful for high order systems. In this paper, a finite time observer based on continuous output injections is introduced.
Notions about finite time stability and homogeneity are recalled hereafter.
Finite time stability
Consider the following ordinary differential equation:
ẋ = g (x) , x ∈ R n . (6)
Denote by φ_{x_0}(t) a solution of the system (6) starting from x_0 at time zero.
Definition 1: The system (6) is said to have a unique solution in forward time on a neighbourhood U ⊂ R^n if for any x_0 ∈ U and two right maximally defined solutions of (6),
φ x 0 : [0, T φ [ → R n and ψ x 0 : [0, T ψ [ → R n , there exists 0 < T x 0 ≤ min {T φ , T ψ } such that φ x 0 (t) = ψ x 0 (t) for all t ∈ [0, T x 0 [.
Let us consider the system (6) where g ∈ C 0 (R n ), g(0) = 0 and where g has a unique solution in forward time. Let us recall the notion of finite time stability involving the settling-time function
given in [24, Definition 2.2] and [START_REF] Bacciotti | Liapunov Functions and Stability in Control Theory[END_REF].
Definition 2: The origin of the system (6) is Finite Time Stable (FTS) if:
1) there exists a function T : V \ {0} → R_+ (V is a neighbourhood of the origin) such that for all x_0 ∈ V \ {0}, φ_{x_0}(t) is defined (and unique) on [0, T(x_0)), φ_{x_0}(t) ∈ V \ {0} for all t ∈ [0, T(x_0)) and lim_{t→T(x_0)} φ_{x_0}(t) = 0. T is called the settling-time function of the system (6).
2) for all ǫ > 0, there exists δ(ǫ) > 0 such that for every x_0 ∈ (δ(ǫ) B^n \ {0}) ∩ V, φ_{x_0}(t) ∈ ǫ B^n for all t ∈ [0, T(x_0)).
The following result gives a sufficient condition for system (6) to be FTS (see [START_REF] Moulay | Finite time stability of non linear systems[END_REF], [START_REF] Perruquetti | Finite time stability and stabilisation[END_REF] for ODE, and [START_REF] Moulay | Finite time stability of differential inclusions[END_REF] for differential inclusions):
Theorem 3: Let the origin be an equilibrium point for the system (6). If there exist a Lyapunov function V : V → R_+ (V an open neighborhood of the origin) and a continuous function r : R_+ → R_+ such that
V̇(x) ≤ −r(V(x)), (7)
along the solutions of (6), and ε > 0 such that
∫_0^ε dz / r(z) < +∞, (8)
then the origin is FTS.
The interested reader can find more details on finite time stability in [START_REF] Bhat | Geometric homogeneity with applications to finite-time stability[END_REF], [START_REF] Haimo | Finite time controllers[END_REF], [START_REF] Hong | On an output feedback finite-time stabilization problem[END_REF], [START_REF] Moulay | Finite-time stability and stabilization: state of the art[END_REF], [START_REF]Finite time stability and stabilization of a class of continuous systems[END_REF], [START_REF] Orlov | Finite time stability of homogeneous switched systems[END_REF].
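As a worked example, suppose r(z) = c·z^a with c > 0 and 0 < a < 1 (a choice made here only for illustration): condition (8) holds since ∫_0^ε dz/(c z^a) = ε^{1−a}/(c(1−a)) < +∞, and integrating the differential inequality (7) gives the explicit settling-time estimate T(x_0) ⩽ V(x_0)^{1−a} / (c(1−a)).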
Homogeneity
Definition 4: A function V : R n → R is homogeneous of degree d with respect to the weights
(r_1, . . . , r_n) ∈ R^n_{>0} if V(λ^{r_1} x_1, . . . , λ^{r_n} x_n) = λ^d V(x_1, . . . , x_n)
for all λ > 0.
Definition 5: A vector field g is homogeneous of degree d with respect to the weights (r 1 , . . . , r n ) ∈ R n >0 if for all 1 ≤ i ≤ n, the i-th component g i is a homogeneous function of degree r i + d, that is
g i (λ r 1 x 1 , . . . , λ rn x n ) = λ r i +d g i (x 1 , . . . , x n )
for all λ > 0. The system ( 6) is homogeneous of degree d if the vector field g is homogeneous of degree d.
Theorem 6: [25, Theorem 5.8 and Corollary 5.4] Let g be defined on R n and be a continuous vector field homogeneous of degree d < 0 (with respect to the weights (r 1 , . . . , r n )). If the origin of ( 6) is locally asymptotically stable, it is globally FTS.
III. A CONTINUOUS FINITE TIME OBSERVER
Assume that the system (1)-( 2) can be put into the observable canonical form (3)-( 4). An observer for this system is designed as follows
d/dt [ẑ_1, ..., ẑ_n]^T = A [z_1, ẑ_2, ..., ẑ_n]^T + f(y, u, u̇, ..., u^(r)) − [χ_1(z_1 − ẑ_1), χ_2(z_1 − ẑ_1), ..., χ_n(z_1 − ẑ_1)]^T (9)
where the functions χ_i will be defined in such a way that the observation error e = z − ẑ tends to zero in finite time. Set e = (e_1, e_2, ..., e_n)^T; the error dynamics are:
ė_1 = e_2 + χ_1(e_1)
ė_2 = e_3 + χ_2(e_1)
⋮
ė_{n−1} = e_n + χ_{n−1}(e_1)
ė_n = χ_n(e_1) (10)
Denote ⌈x⌋ α = |x| α sgn (x) for all x ∈ R and for α > 0. The following result holds:
Lemma 7: Let d ∈ R and (k_1, ..., k_n) ∈ R^n_{>0}. Define (r_1, ..., r_n) ∈ R^n_{>0} and (α_1, ..., α_n) ∈ R^n_{>0} such that
r_{i+1} = r_i + d,  1 ≤ i ≤ n − 1, (11)
α_i = r_{i+1} / r_1,  1 ≤ i ≤ n − 1, (12)
α_n = (r_n + d) / r_1, (13)
and set
χ_i(e_1) = −k_i ⌈e_1⌋^{α_i},  1 ≤ i ≤ n.
Then, the system ( 10) is homogeneous of degree d with respect to the weights (r 1 , . . . , r n ) ∈ R n >0 . Proof of Lemma 7 is obvious.
Denote α_1 = α.
Lemma 8: If α > 1 − 1/(n−1), the system (10) is homogeneous of degree α − 1 with respect to the weights {(i−1)α − (i−2)}_{1≤i≤n}, with α_i = iα − (i−1), 1 < i ≤ n.
Proof: Let us normalize the weights by setting r 1 = 1. Then r 2 = α and
d = r 2 -r 1 = α -1.
From ( 11) and ( 12)-( 13), one obtains recursively that:
r i = (i -1) α -(i -2) , 1 < i ≤ n, α i = iα -(i -1) , 1 < i ≤ n.
Since r_1 > . . . > r_n > 0, one has:
α > (n − 2)/(n − 1) = 1 − 1/(n − 1).
The result follows from Lemma 7.
The system (10) is then given by:
ė_1 = e_2 − k_1 ⌈e_1⌋^α
ė_2 = e_3 − k_2 ⌈e_1⌋^{2α−1}
⋮
ė_{n−1} = e_n − k_{n−1} ⌈e_1⌋^{(n−1)α−(n−2)}
ė_n = −k_n ⌈e_1⌋^{nα−(n−1)} (14)
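The signed power ⌈·⌋^α driving (14) is what produces convergence in finite time rather than merely exponentially. A minimal Python sketch on a scalar example (the numerical values are hypothetical):

```python
import numpy as np

def spow(x, a):
    """Signed power |x|^a * sign(x), i.e. the bracket notation above."""
    return np.sign(x) * np.abs(x) ** a

# Scalar illustration: for xdot = -spow(x, a) with 0 < a < 1, the origin
# is reached in the finite time |x0|^(1-a)/(1-a); a = 1 would only give
# exponential decay.
a, dt = 0.5, 1e-5
x, t = 1.0, 0.0
while abs(x) > 1e-6:
    x -= dt * spow(x, a)
    t += dt
print(t)   # approximately |x0|^(1-a)/(1-a) = 2 for x0 = 1
```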
Lemma 9 (Tube Lemma): Consider the product space X × Y, where Y is compact. If N is an open set of X × Y containing the slice {x_0} × Y of X × Y, then N contains some tube W × Y about {x_0} × Y, where W is a neighborhood of x_0 in X.
Theorem 10: Set the gains (k_1, ..., k_n) such that the matrix
A_o =
[ −k_1      1  0  ⋯  0 ]
[ −k_2      0  1  ⋯  0 ]
[   ⋮       ⋮  ⋮  ⋱  ⋮ ]
[ −k_{n−1}  0  0  ⋯  1 ]
[ −k_n      0  0  ⋯  0 ]
is Hurwitz. Then, there exists ǫ ∈ (1 − 1/(n−1), 1) such that for all α ∈ (ǫ, 1), the system (15) is globally finite time stable.
Proof: Set 1 − 1/(n−1) < α < 1.
Homogeneity: From Lemma 8, the system ( 15) is homogeneous of degree α -1 < 0 with respect to the weight {(i -1) α -(i -2)} 1≤i≤n .
Asymptotic stability: Consider the following differentiable positive definite function
V(α, e) = y^T P y (16)
where
y = (⌈e_1⌋^{1/q}, ⌈e_2⌋^{1/(αq)}, . . . , ⌈e_i⌋^{1/([(i−1)α−(i−2)]q)}, . . . , ⌈e_n⌋^{1/([(n−1)α−(n−2)]q)})^T,
q = ∏_{i=1}^{n−1} ((i−1)α − (i−2)) is the product of the weights and P is the solution of the following Lyapunov equation
A_o^T P + P A_o = −I.
As V is proper, S = {e ∈ R^n : V(1, e) = 1} is a compact set of R^n. Define the function
ϕ : R_{>0} × S → R,  (α, e) ↦ ⟨∇V(α, e), ψ(α, e)⟩
Since A_o is Hurwitz, the system ė = A_o e is globally asymptotically stable and corresponds to the system (15) with α = 1. Since ϕ is continuous, ϕ^{−1}(R_{<0}) is an open subset of R_{>0} × S containing the slice {1} × S. Since S is compact, it follows from the Tube Lemma 9 that ϕ^{−1}(R_{<0}) contains some tube (1 − ǫ_1, 1 + ǫ_2) × S about {1} × S. For all (α, e) ∈ (1 − ǫ_1, 1 + ǫ_2) × S,
⟨∇V(α, e), ψ(α, e)⟩ < 0.
Thus, the system (15) is locally asymptotically stable. It can also be shown to be globally asymptotically stable as follows. Note that
V(α, λ^{r_1} e_1, . . . , λ^{r_n} e_n) = λ^{2/q} V(α, e_1, . . . , e_n)
with r_i = (i−1)α − (i−2) for 1 ≤ i ≤ n. Thus e ↦ V(α, e) is homogeneous of degree 2/q with respect to the weights {(i−1)α − (i−2)}_{1≤i≤n}. From [START_REF] Rosier | Homogeneous Lyapunov function for homogeneous continuous vector field[END_REF], it can be deduced that e ↦ ⟨∇V(α, e), ψ(α, e)⟩ is homogeneous of degree 2/q + α − 1 with respect to the weights {(i−1)α − (i−2)}_{1≤i≤n} and thus is negative definite. This implies that, for α ∈ (1 − ǫ_1, 1 + ǫ_2), e ↦ V(α, e) is a Lyapunov function for the system (15).
From Theorem 6, it follows that the system is globally finite time stable.
IV. CRYPTOSYSTEM AND ITS APPLICATION TO THE CHUA'S CIRCUIT
Several chaotic systems, such as the three-dimensional Genesio-Tesi system [START_REF] Chen | General synchronization of genesio-tesi systems[END_REF], the Lur'e-like system or the Duffing equation [START_REF] Feki | Observer-based exact synchronization of ideal and mismatched chaotic systems[END_REF], belong to the class of systems (3)-(4). Let us show that the proposed observer can be useful to perform finite time synchronization of this class of chaotic systems and secure data transmission. For a two-channel transmission, the system governing the transmitter is given by:
ż = A z + f (y) (17)
y = z_1 (18)
s(t) = ν_e(z(t), m(t)). (19)
The first channel is used to convey the output y = z 1 of the chaotic system (17). The function ν e encrypts the message m (t) and delivers the signal s (t) which is transmitted via the second channel. The receiver gets z 1 (t) on the first channel. An observer is designed as follows:
d/dt [ẑ_1, ..., ẑ_n]^T = A [z_1, ẑ_2, ..., ẑ_n]^T + f(y) + O_n(y − ẑ_1) (20)
where
O_n(y − ẑ_1) = (k_1 ⌈z_1 − ẑ_1⌋^α, k_2 ⌈z_1 − ẑ_1⌋^{2α−1}, ..., k_n ⌈z_1 − ẑ_1⌋^{nα−(n−1)})^T.
The error dynamics e = z − ẑ are given by the system (15). With a good choice of α and {k_i}_{1≤i≤n}, Theorem 10 implies that the error e(t) converges to the origin in finite time. As a consequence, the message m(t) can be completely recovered after the finite time synchronization by the system
System (20),  ŷ = ẑ_1,  m̂ = ν_d(ẑ, s),
where the decoding function ν_d is defined by ν_d(z(t), s(t)) = m(t).
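The pair (ν_e, ν_d) is left generic above. As an illustration only, a simple (hypothetical) choice is additive masking of the message by a transmitter state, sketched below in Python; any invertible pair satisfying ν_d(z, ν_e(z, m)) = m would work.

```python
# Hypothetical encryption/decryption pair: additive masking by a state.
def nu_e(z, m):
    return m + z[2]          # mask the message with the third state

def nu_d(z_hat, s):
    return s - z_hat[2]      # exact recovery once z_hat has converged to z

z = [0.1, -0.3, 2.5]         # transmitter state at some time t
m = 0.7                      # message sample
print(nu_d(z, nu_e(z, m)))   # 0.7 (receiver with perfect synchronization)
```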
The Chua's circuit belongs to the class of chaotic systems which can be put into the observable canonical form (3)-(4). The equations of a Chua's oscillator are given by:
C_1 ẋ_1 = (1/R)(x_2 − x_1) + h(x_1)
C_2 ẋ_2 = (1/R)(x_1 − x_2) + x_3
L ẋ_3 = −x_2 − r·x_3 (21)
where L is a linear inductor, R and r two linear resistors, C_1 and C_2 two linear capacitors, and
h(x_1) = G_2 x_1 + (1/2)(G_1 − G_2)(|x_1 + B| − |x_1 − B|)
is the piecewise linear Chua's function. The chosen output is y = x 1 .
Using the transformation z = T x with
T =
[ 1                       0             0             ]
[ 1/(C_2 R) + r/L         1/(C_1 R)     0             ]
[ (1/(C_2 L))(1 + r/R)    r/(C_1 L R)   1/(C_1 C_2 R) ]
the system (21) is transformed into the observable canonical form (17)-(18) with
A =
[ −1/(C_1 R) − 1/(C_2 R) − r/L              1  0 ]
[ −(1/L)(r/(C_1 R) + r/(C_2 R) + 1/C_2)     0  1 ]
[ −1/(C_1 C_2 R L)                          0  0 ] ,
f(y) =
[ 1/C_1                     ]
[ (1/C_1)(1/(C_2 R) + r/L)  ] · h(y).
[ (1/(C_1 C_2 L))(1 + r/R)  ]
In the simulations, the numerical values of the Chua's circuit are C_1 = 10.04 nF, C_2 = 102.2 nF, R = 1747 Ω, r = 20 Ω, L = 18.8 mH, G_1 = −0.756 mS, G_2 = −0.409 mS, B = 1 V.
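As a sanity check, the matrices above can be evaluated with these component values. The following Python sketch builds A and the column vector multiplying h(y); the coefficients follow the reconstruction given here and should be checked against the original derivation.

```python
import numpy as np

# Component values from the simulation section (SI units).
C1, C2, R, r, L = 10.04e-9, 102.2e-9, 1747.0, 20.0, 18.8e-3

# First column of the companion-form matrix A (structure of Eq. (5)).
a1 = -1/(C1*R) - 1/(C2*R) - r/L
a2 = -(1/L) * (r/(C1*R) + r/(C2*R) + 1/C2)
a3 = -1/(C1*C2*R*L)
A = np.array([[a1, 1.0, 0.0],
              [a2, 0.0, 1.0],
              [a3, 0.0, 0.0]])

# Column vector multiplying h(y) in f(y).
f_gain = np.array([1/C1,
                   (1/C1)*(1/(C2*R) + r/L),
                   (1/(C1*C2*L))*(1 + r/R)])
print(A[:, 0])
print(f_gain)
```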
The gains of the observer have been set as follows: α = 0.7, k_1 = 1000, k_2 = 240, k_3 = 24. The observation error dynamics e = z − ẑ are then given by
ė_1 = e_2 − 1000 ⌈e_1⌋^{0.7}
ė_2 = e_3 − 240 ⌈e_1⌋^{0.4}
ė_3 = −24 ⌈e_1⌋^{0.1} (22)
and e(t) converges to the origin in finite time (see Fig. 1 and 2). A message m(t) can be sent and recovered after the delay due to the finite time synchronization by using the previous algorithm (see Fig. 3).
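This convergence can be reproduced numerically. The Python sketch below first checks the Hurwitz condition of Theorem 10 for these gains, then integrates (22) with a simple explicit Euler scheme; the step size and initial error are hypothetical, and the residual is limited by the discretization.

```python
import numpy as np

def spow(x, a):
    return np.sign(x) * np.abs(x) ** a

# Gains (k1, k2, k3) = (1000, 240, 24): A_o of Theorem 10 is Hurwitz since
# s^3 + 1000 s^2 + 240 s + 24 satisfies the Routh criterion (1000*240 > 24).
A_o = np.array([[-1000.0, 1.0, 0.0],
                [ -240.0, 0.0, 1.0],
                [  -24.0, 0.0, 0.0]])
assert np.all(np.linalg.eigvals(A_o).real < 0)

# Forward-Euler integration of the error dynamics (22).
e = np.array([0.5, -1.0, 2.0])    # hypothetical initial observation error
dt = 1e-5
for _ in range(int(1.0 / dt)):
    de = np.array([e[1] - 1000.0 * spow(e[0], 0.7),
                   e[2] -  240.0 * spow(e[0], 0.4),
                          -24.0 * spow(e[0], 0.1)])
    e = e + dt * de
print(np.abs(e))   # small residual: the error has settled near the origin
```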
Remark 11: It is possible to increase the security of the transmission by introducing some observation singularities in the system (17). In this case, finite time convergence is a useful property (see [START_REF] Barbot | Observability Normal Forms in[END_REF]).
V. CONCLUSION
In this paper, a continuous finite time observer based on homogeneity properties has been designed for the observation problem of nonlinear systems that are linearizable up to output injection. It does not involve any discontinuous output injections and step-by-step procedure, as it is the case, for instance, for sliding mode observers. It has been applied to finite time chaos synchronization and to secure data transmission using the two-channel transmission method.
Fig. 1. State of the system (21) and its estimate.
Fig. 2. Observation error.
|
01767609 | en | [
"info.info-rb"
] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01767609/file/2018_RCIM_Kermorgant_Climbing.pdf | Olivier Kermorgant
A magnetic climbing robot to perform autonomous welding in the shipbuilding industry $
Keywords: Continuous track, welding robot, climbing robot
In this paper we present the mechanical and control design of a magnetic tracked mobile robot. The robot is designed to move on vertical steel ship hulls and to be able to carry 100 kg payload, including its own weight. The mechanical components are presented and the sizing of the magnetic tracks is detailed. All computation is embedded in order to reduce time delays between processes and to keep the robot functional even in case of signal loss with the ground station. The main sensor of the robot is a 2D laser scanner, that gives information on the hull surface and is used for several tasks. We focus on the welding task and expose the control algorithm that allows the robot to follow a straight line for the welding process.
Introduction
In the advanced manufacturing area, one challenge is to extend the use of autonomous or collaborative robots in several industries such as car companies, plane and ship building or renewable energies. Whereas most of the innovations deal with traditional or mobile robot arms in classical manufacturing plants, another source of productivity can be found in more exotic locations: large structures such as an aircraft or a ship indeed have their own constraints when it comes to mobile robotics. In this work we design a mobile robot able to navigate on vertical steel surfaces such as ship hulls. The robot can perform various tasks, either autonomously or in tele-operation depending on the complexity. Currently, hull welding is performed manually by a welder mounted on a boom lift or scaffoldings. This is both dangerous and costly, as the typical size of a ship is some hundred meters length and a few tens of meters high. Besides, due to the thickness of the hull, several passes have to $ c 2018. This manuscript version is the accepted preprint and is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/ Email address: [email protected] (Olivier Kermorgant) be performed (up to 10). A typical ship thus requires several kilometers of straight line welding. Human welder expertise is far from being fully exploited in this task, and it can benefit from an autonomous process. The stakes for the ship builder are to reduce lifts or scaffoldings use, and to have their welders available for more complex tasks such as corners or welding in small places where adaptability is crucial. This task requires the accurate positioning of an embedded welding torch. The torch is carried by a 2-degrees of freedom arm located at the rear of the robot and benefits from laser scanner feedback for the analysis of the welding joint. The motion law of the torch is not detailed and we focus on the capabilities of the vehicle in terms of alignment and positioning with regards to the welding joint, which drives the position of the welding torch. Absolute localization on the hull, or longitudinal positioning along the welding joint, are not considered in this work. Welding results are presented to show the reliability of the process.
Most of the works on climbing robots focus on mechanical design and adhesion principle. A survey [START_REF] Silva | A survey of technologies for climbing robots adhesion to surfaces[END_REF] indicates that the two most popular approaches for climbing robots are the use of suction force and, when the surface allows it, magnetic force.
A suction-based climbing robot was presented in [START_REF] Savall | Two compact robots for remote inspection of hazardous areas in nuclear power plants[END_REF] for the inspection of radioactive cylindrical tanks. The payload is only 3 kg. Alternatively, in [START_REF] Kang | Design of quadruped walking and climbing robot[END_REF] a quadruped walking robot is presented with suction pads at each of the 4 contact links. The control of such a robot is of course quite tedious.
In [START_REF] Sekhar | A Novel Design Technique to Develop a Low Cost and Highly Stable Wall Climbing Robot[END_REF] a low cost wall climbing robot is proposed. Classical tracks are used with a vacuum pump that increases the grip on the surface. The payload of this robot is only 500 g. Magnetic force-based robot may use electromagnets [START_REF] Shores | Design, kinematic analysis, and quasi-steady control of a morphic rolling disk biped climbing robot[END_REF][START_REF] Armada | Design of mobile robots[END_REF], which are interesting as they can be activated at will and increase the control possibilities of the platform. On the other hand, permanent magnets can be use with magnetic wheels [START_REF] Berns | Climbing robots for commercial applications-a survey[END_REF][START_REF] Park | Design of a mobile robot system for automatic integrity evaluation of large size reservoirs and pipelines in industrial fields[END_REF]. The main advantage is that no energy is spent on the adhesion, but it makes the control more difficult and requires more power in order to cope with the friction between the wheels and the surface. A magnetic tracked robot is presented in [START_REF] Shen | Proposed wall climbing robot with permanent magnetic tracks for inspecting oil tanks[END_REF] with an emphasis on the sizing of the magnetic pads. The goal is to perform oil tanks inspection with manual control of the robot.
Alternative technologies to suction and magnetic forces have been investigated. In [START_REF] Wang | Large-payload climbing in complex vertical environments using thermoplastic adhesive bonds[END_REF],thermoplastic adhesive bonds are used instead of magnets or suction pads. Even if it leads to a very high payload, the number of cycles is limited and this would induce too many maintenance operations for the ship industry. Another technology has been proposed for non-magnetic wall climbing [START_REF] Murphy | Waalbot II: adhesion recovery and improved performance of a climbing robot using fibrillar adhesives[END_REF], but again the payload is only 100 g and the control is not automated.
High-payload wall climbing robot designs are proposed in [START_REF] Wu | The mechanism design of a wheeled climbing welding robot with passing obstacles capability[END_REF][START_REF] Wu | Design and optimal research of a non-contact adjustable magnetic adhesion mechanism for a wallclimbing welding robot[END_REF]. Here the vehicle is equipped with wheels and an adjustable magnet allows the control of the adhesion force, allowing up to 50 kg payload. Although the robot is designed for welding, no performance analysis is done for the arm positioning and the control also seems to be manual. Another manually-driven robot is proposed in [START_REF] Bibuli | MARC: Magnetic autonomous robotic crawler development and exploitation in the MINOAS Project[END_REF], using magnetic tracks for hull inspection.
In our case, safety imposes that the robot should not fall even in the case of power failure. In addition, the robot is designed to work on ship hulls. That is why we chose magnetic tracks, even if as we will see it leads to constraints on the navigation part.
Most of the mentioned robots assume manual control. By contrast, in [START_REF] Sánchez | Machine Vision Guidance System for a Modular Climbing Robot used in Shipbuilding[END_REF] a magnetic robot is proposed with laser feedback for autonomous inspection. The design is similar to [START_REF] Wu | Design and optimal research of a non-contact adjustable magnetic adhesion mechanism for a wallclimbing welding robot[END_REF] with wheels and adjustable permanent magnets. The laser feedback is similar to our case, though we need a higher accuracy due to the welding task. Navigation is usually not studied for tracked vehicles, as their autonomous modes are often used in large areas where GPS feedback is available and where rough accuracy is enough [START_REF] Schempf | Pandora: autonomous urban robotic reconnaissance system[END_REF]. Improving open-loop odometry relies on the study of the track-soil interaction [START_REF] Le | Estimation of track-soil interactions for autonomous tracked vehicles[END_REF], which may not be enough for welding.
In this work, we focus on the laser feedback from [START_REF] Hascoet | Shiphull Welding: Trajectory Generation Strategies Using a Retrofit Welding Robot[END_REF], that proposes several trajectories for the welding torch that can be computed from the welding joint 2D profile. This approach is classically used for non-mobile welding robots [START_REF] Landry | Collision-Free Path Planning of Welding Robots[END_REF][START_REF] Shah | A Review Paper on Vision Based Identification, Detection and Tracking of Weld Seams Path in Welding Robot Environment[END_REF] or robot that switch between motion and welding [START_REF] Ku | Design of controller for mobile robot in welding process of shipbuilding engineering[END_REF].
Compared to previous works, we propose a whole mechanical and control design for autonomous hull welding. The prototype has been tested on real ship hulls, which has revealed that the key feature is the autonomous line following in order to ensure accurate positioning of the welding torch.
We exploit the laser feedback needed for the welding, to perform joint tracking. For cost reasons only one laser scanner is considered on the robot. The contribution lies in the design, control law and state estimation of the mobile base. In Section 2 the general design and components of the robot are detailed. We also focus on the main mechanical challenge, that is the magnetic caterpillar. The control architecture is then presented in Section 3, with the design of an estimator and control law to perform welding joint tracking. Experiments on joint tracking and welding results are then presented in Section 4.
Overall design
In this section we detail the mechanical design and the components of the robot. The starting point for the sizing of the platform was the weight and payload that directly impact the number of magnets, hence the general size of the robot. The current form factor is 80 cm length and 50 cm width, with 100 kg total weight including the weight of the external cables (power supply and welding torch cable). The robot structure is composed of aluminum. A 600 W external power supply delivers 3x48 V. We first list the robot components before detailing the sizing of the magnetic track.
Components
The prototype is shown in Fig. 1. The laser scanner is hidden behind a protective layer. On the rear panel (bottom), the embedded controller is visible with its modules wired to the other pieces of hardware. At the center of the robot is the 2-degrees of freedom arm equipped with a functioning welding torch.
On-board computer
The controller is a National Instruments CompactRio. It features a dual-core 1.33 GHz Intel Atom processor running a Linux real-time kernel. The CompactRio also features a Xilinx FPGA. All of the code is developed in LabVIEW. The real-time kernel hosts most of the programs, except for the arm low-level controllers that run on the FPGA and the user interface that runs on a remote computer.
Continuous tracks and motors
The continuous tracks and their magnetic pads have been sized to carry the robot and its payload. The sizing is detailed in Section 2.2. The track has 24 pads in contact with the surface with 25 kg grip per pad. This induces a smooth motion without discontinuities, hence no disturbances on the welding process. The pads are composed with neodymium iron boron magnets protected by aluminum boxes, and are linked with a stainless-steel chain.
Each track is associated with a Motor Power Company brushless servomotor with an input voltage of 48 V. A planetary gear with 1:195 ratio then delivers up to 42 Nm torque at velocities up to 6 m/min. The motor speed drives have been tuned with the final payload in working condition (robot on a vertical hull) and allow a fast response to the velocity setpoint of the two tracks.
Although some works address the issue of servomotor cooling in a harsh welding environment [START_REF] Lee | Development of modularized airtight controller for mobile welding robot working in harsh environments[END_REF], this was not considered in the current prototype.
2-degrees of freedom arm
At the center of the robot a 2-degrees of freedom arm is mounted in order to control the position of the welding torch. Each axis is driven with a 24 V DC motor. The motors are driven through CompactRio modules allowing real-time position control and feedback. A first calibration step was needed to register the axes with regards to the laser measurement.
Laser scanner
The embedded laser scanner is a MEL scanner, with a measuring range of 84 to 204 mm and a maximum width of 80 mm, which corresponds to an angle opening of ±15°. The resolution is 0.06 mm in range and 0.14 mm in the X-direction. The laser connects to the CompactRio with Ethernet and sends the X-Z positions of 290 points at 100 Hz. During the welding process, some outliers appear in the laser scan due to the welding light. They can easily be filtered out.
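The outlier rejection can be as simple as a sliding median over the 290 range samples. The following Python sketch is a hypothetical illustration only (the robot software itself is written in LabVIEW), with toy data and an arbitrary 2 mm threshold.

```python
import numpy as np

def reject_outliers(z, window=9, thresh=2.0):
    """Replace range samples that deviate from the local median by more
    than thresh (mm); welding-arc flashes appear as isolated spikes."""
    z = np.asarray(z, dtype=float)
    out = z.copy()
    half = window // 2
    for i in range(len(z)):
        lo, hi = max(0, i - half), min(len(z), i + half + 1)
        med = np.median(z[lo:hi])
        if abs(z[i] - med) > thresh:
            out[i] = med
    return out

scan = np.full(290, 120.0)         # flat profile at 120 mm (toy data)
scan[100] = 300.0                  # arc-light spike
print(reject_outliers(scan)[100])  # back to 120.0
```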
Proximity sensors
As the robot may be placed on unfinished hulls, it could fall by simply moving out of its own contact surface. Four Sharp infrared proximity sensors have thus been mounted, one at each corner. The voltage output is acquired by an analog IO module on the CompactRio. Again, during the welding some irregular measurements occur, but they can be eliminated through a hysteresis filter.
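A hysteresis filter keeps the binary edge-detection state from toggling on isolated noisy samples. A minimal Python sketch with hypothetical thresholds:

```python
class Hysteresis:
    """Two-threshold cliff detector (hypothetical values in cm): trigger
    'edge' when the range exceeds hi, release only below lo, so isolated
    noisy samples during welding do not toggle the state."""
    def __init__(self, lo=6.0, hi=10.0):
        self.lo, self.hi = lo, hi
        self.edge = False

    def update(self, range_cm):
        if range_cm > self.hi:
            self.edge = True
        elif range_cm < self.lo:
            self.edge = False
        return self.edge

h = Hysteresis()
print([h.update(v) for v in (4, 11, 8, 12, 5)])
# [False, True, True, True, False]
```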
Welding process
In the current state of the robot, the welding torch is directly connected to the welding ground station through a cable that provides shielding gas and wire. Over longer distances, it will be necessary to mount a wire subfeeder on the robot.
Magnetic track sizing
The main challenge of the mechanical design is the sizing of the magnetic tread. Indeed, safety protocols in the considered industry require a magnetic device to have a grip equal to at least five times its weight. This is due to the possibility of slippery surfaces in case of humidity or even rain. The robot also has to be able not to fall even if one track is broken. The considered robot and payload weighing about 100 kg, this means that each track has to provide an attractive force of at least 500 kg. This is where the magnetic track strongly differs from a classical tracked vehicle. Indeed, the attractive force caused by the magnets is far greater than gravity alone in the case of a vehicle moving on a horizontal plane. Rotational motions, which are induced by having the two tracks running at different velocities, imply that some of the pads are actually slipping on the contact surface. In our case, first experiments have shown that rotational motions can be perfectly performed on the ground, but as soon as the robot is on a magnetic surface the attractive force is such that the chain is subject to very high efforts. This induces deformations in the chain and can also make it jump off the gear. In order to cope with this phenomenon, a minimal radius of curvature is imposed during the motion. As we will see in Section 3.2, the use of magnetic tracks also induces uncertainties during autonomous control. This aspect will be detailed in Section 3.3.
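As a sanity check, the figures quoted above can be put together in a few lines (a sketch using only the numbers from the text; the resulting margin is our own reading of them):

```python
# Back-of-envelope check of the magnetic sizing (not a design tool).
robot_mass = 100.0      # kg: robot + payload + external cables
safety_factor = 5       # industry rule: grip >= 5x weight
pads_in_contact = 24    # pads touching the surface, per track
grip_per_pad = 25.0     # kg-equivalent attractive force per pad

required_per_track = robot_mass * safety_factor   # robot must hold on one track
available_per_track = pads_in_contact * grip_per_pad

print(required_per_track)   # 500.0 kg
print(available_per_track)  # 600.0 kg -> 20% margin over the requirement
assert available_per_track >= required_per_track
```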
Finally, the presented sizing prevents slipping even on wet hulls. In practice, the welding torch is protected from the rain, and the welding process heats the hull and dries its surface around the robot position. We now present the software and control architecture of the robot.
Control design
In this section the general control architecture is presented. We then focus on laser-guided line tracking and present an Extended Kalman Filter used to estimate the angle between the robot and the welding joint.
General control architecture
As in multi-process software, the Labview design paradigm allows the definition of several parallel loops that interact through shared variables. The designed software is composed of the following loops:
1. The state machine is the core of the control architecture. Depending on the state of the system (manual control, welding process, ...) it sends different setpoints to the track motors and the arm. The state machine also includes the security checks based on the proximity sensors, in order to cancel any motion that would make the robot fall off the hull.
2. Two user-interface loops at 50 Hz: one for the inputs and one for the display.
The state machine switches between the following states (a minimal code sketch is given below, after the description of the sequence):
• Manual: both the mobile base and the arm are manually controlled. This is used to perform the approach of the welding joint and to calibrate the arm.
• Alignment: here the vehicle automatically aligns with the joint at a given forward velocity. No welding is performed, the goal is to be sure the vehicle is aligned.
• Welding: in this state the vehicle has a forward velocity coming from the welding control and the angular velocity is controlled to stay parallel to the joint. The arm follows a YZ trajectory that is defined for the welding process.
• Return: when a welding pass is over the robot goes backward to perform another pass. The arm is motionless while the vehicle follows the welding joint in reverse.
The sequence between the main steps is described in Fig. 2. A manual control is typically required to approach the joint and to move to the next one. Before actual welding, an automated alignment is performed.
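A minimal sketch of this logic is given below (Python pseudocode of the description above and of Fig. 2; the actual implementation is a LabVIEW state machine, and the exact transition conditions are assumptions):

```python
# Simplified state machine: Manual -> Alignment -> Welding <-> Return,
# with the proximity alarm overriding any autonomous motion.
from enum import Enum, auto

class State(Enum):
    MANUAL = auto()
    ALIGNMENT = auto()
    WELDING = auto()
    RETURN = auto()

def next_state(state, aligned, pass_done, more_passes, edge_alarm):
    if edge_alarm:                    # safety check from the proximity sensors
        return State.MANUAL
    if state is State.ALIGNMENT and aligned:
        return State.WELDING
    if state is State.WELDING and pass_done:
        return State.RETURN if more_passes else State.MANUAL
    if state is State.RETURN and pass_done:
        return State.WELDING          # start the next pass
    return state
```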
We now detail the laser-guided joint tracking that is used during the Alignment, Welding and Return states.
Laser-guided navigation
Laser-guided navigation is used during the welding process in order to ensure the robot stays parallel to the welding joint. Only the angular velocity ω is controlled, as the linear velocity v is imposed by the welding process.
Other strategies for line tracking include vision [START_REF] Ismail | Visionbased system for line following mobile robot[END_REF] and trajectory tracking [START_REF] Gu | Receding horizon tracking control of wheeled mobile robots[END_REF]. In the considered environment, vision is however unreliable due to the light conditions and the poor contrast between the hull and the joint. Trajectory tracking requires a global frame in which the robot and the trajectory are defined. Here the only exteroceptive sensor is a laser scanner, which cannot reconstruct the absolute pose.
Assuming the robot lies on the welding joint, the embedded laser provides the joint profile, which can be processed to extract feature points. A typical profile is represented in Fig. 3, with a seam of approximately 3 cm width × 2 cm depth. The raw profile is first smoothed with a Savitzky-Golay filter [START_REF] Savitzky | Smoothing and Differentiation of Data by Simplified Least Squares Procedures[END_REF]. For joint tracking, we are interested in extracting the two edges of the hull parts that are to be welded (red diamonds on the figure). These edge points are found by an iterative robust line fitting algorithm [START_REF] Thiel | A rank-invariant method of linear and polynomial regression analysis, Part 3[END_REF] on the hull. Line fitting is performed on an increasing number of (leftmost or rightmost) points. When the last considered points are systematically outliers for several iterations, the edge point is assumed to be the last inlier. Other methods have been proposed to perform weld joint detection and tracking [START_REF] Dinham | Detection of fillet weld joints using an adaptive line growing algorithm for robotic arc welding[END_REF][START_REF] Shah | Butt welding joints recognition and location identification by using local thresholding[END_REF]. Once the edges have been found, the barycenter of the joint is computed according to previous works [START_REF] Hascoet | Shiphull Welding: Trajectory Generation Strategies Using a Retrofit Welding Robot[END_REF]: Green's theorem is used to compute the position of the barycenter (green in Fig. 3). The barycenter is robust to small variations of the edge detection and is assumed to draw a straight line along the joint. Other feature points, used to control the position of the welding torch, are also extracted but not displayed.
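This processing chain can be summarized by the following sketch (Python, using SciPy's Savitzky-Golay filter; the edge search is a simplified stand-in for the paper's iterative robust line fitting, and all thresholds are illustrative):

```python
import numpy as np
from scipy.signal import savgol_filter

def joint_barycenter(x, z, win=11, order=3, tol=3.0):
    """Return the (x, z) barycenter of the joint in one laser profile."""
    z = savgol_filter(z, win, order)       # smooth the raw profile
    edges = []
    for idx in (np.arange(len(x)), np.arange(len(x))[::-1]):
        n = 5                              # grow a line fit from one side
        while n < len(idx):
            xi, zi = x[idx[:n]], z[idx[:n]]
            a, b = np.polyfit(xi, zi, 1)   # fit the flat hull part
            sigma = np.std(zi - (a * xi + b)) + 1e-9
            if abs(z[idx[n]] - (a * x[idx[n]] + b)) > tol * sigma:
                break                      # next point is an outlier:
            n += 1                         # the edge is the last inlier
        edges.append(idx[n - 1])
    i0, i1 = sorted(edges)
    # Green's theorem (shoelace) centroid of the joint cross-section,
    # closed by the segment joining the two edge points.
    px = np.r_[x[i0:i1 + 1], x[i0]]
    pz = np.r_[z[i0:i1 + 1], z[i0]]
    cross = px[:-1] * pz[1:] - px[1:] * pz[:-1]
    area = cross.sum() / 2.0
    cx = ((px[:-1] + px[1:]) * cross).sum() / (6.0 * area)
    cz = ((pz[:-1] + pz[1:]) * cross).sum() / (6.0 * area)
    return cx, cz
```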
The general configuration of the robot with regards to the welding joint is shown in Fig. 4. The origin O denotes the instantaneous center of rotation. Unlike classical unicycle robots [START_REF] Consolini | Stabilization of a hierarchical formation of unicycle robots with velocity and curvature constraints[END_REF], the position of O is not known for tracked vehicles as angular motions imply that some pads are slipping while others are not. y is the distance between O and the joint, l is the distance between O and the laser beam and θ is the orientation. These values are linked through the equation:
d = y/cos θ + l tan θ    (1)
As ẏ = v sin θ, the derivative of (1) is:
ḋ = -yω sin θ / cos²θ + v tan θ + l̇ tan θ + lω (1 + tan²θ)    (2)
We now detail the continuous transfer function between the angular velocity ω and the distance d.
In the desired work configuration, the robot is centered and aligned with the joint, and we can assume that for a particular configuration the force repartition on the tracks is stable and l does not vary. (2) is thus linearized around the position (y = 0, l̇ = 0, θ ≪ 1):
ḋ = vθ + lω    (3)
The Laplace transform of (3) yields:
sD(s) = (v/s) Ω(s) + l Ω(s)    (4)
where s is the Laplace variable. The corresponding transfer function H(s) = D(s)/Ω(s) yields:
H(s) = D(s)/Ω(s) = (v + sl)/s²    (5)
The system thus has two very different behaviors, depending on the moving direction v.
Going forward
If vl ≥ 0 then the system is minimum-phase. This is the case during the Welding state, as the laser scanner is located at the front of the robot (l > 0). A simple PID from d to ω is used to control the angular velocity, inducing high stability and precision, as represented in Fig. 5. Even though the system already contains a double integrator, an additional integral term is needed in the controller. Indeed, the radii of the tracks may be subject to small changes because of surface defects or iron filings accumulated on the magnets.
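This behavior can be reproduced on the linearized model (3) with a toy simulation (illustrative gains, not the ones tuned on the robot):

```python
# Toy simulation of the forward line tracking on the linearized model (3):
# theta_dot = omega, d_dot = v*theta + l*omega. Euler integration.
v, l, dt = 0.02, 0.50, 0.01          # m/s, m, s (final design, going forward)
kp, ki, kd = 8.0, 2.0, 1.0           # illustrative PID gains on d

theta, d, integ, prev = 0.05, 0.01, 0.0, None
for _ in range(int(120 / dt)):
    err = -d                          # setpoint: joint centered, d = 0
    integ += err * dt
    deriv = 0.0 if prev is None else (err - prev) / dt
    prev = err
    omega = kp * err + ki * integ + kd * deriv
    d += (v * theta + l * omega) * dt
    theta += omega * dt
print(round(d, 4), round(theta, 4))   # both slowly decay towards 0
```

The slow convergence is inherent to the small forward velocity v: the distance d only responds to the heading through the term vθ.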
Going backward
If vl < 0 then the system is non minimum-phase. This is the case when the robot has finished a welding pass and is going back to the starting point. The previous controller cannot be used, since regulating d to 0 does not induce staying parallel to the joint, as represented in Fig. 6. However, in this state we only aim at going back to the starting position while tracking the joint coarsely. Small variations of d are thus acceptable and we regulate an error composed of the distance d and an estimation of the angle θ:
e = d + αθ    (6)
where α is a tuning parameter. The corresponding transfer function yields:
H_e(s) = E(s)/Ω(s) = (v + (l + α)s)/s²    (7)
We see that even if v < 0, the zero of H_e is at -v/(l + α) and can thus be made negative by choosing α < -l. Actually, if we simulate a virtual laser located at a distance l′ in the rear direction, then the output d′ of this laser would be:
d′ = d - (l + l′) tan θ    (8)
This is equivalent to (6) for small values of θ. Ensuring α < -l in (6) amounts to having l′ > 0 in (8), which means the virtual laser is indeed at the rear of the robot. The forward control law can thus be used from this virtual laser, using a PID from e to ω.
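The same toy model illustrates the backward regulation of e (values are again illustrative; note that with the sign conventions of this toy model the controller sign is opposite to the forward case, since the low-frequency gain of H_e is v < 0):

```python
# Toy simulation of the backward motion: regulate e = d + alpha*theta,
# with alpha < -l, i.e. a virtual laser at l' = -alpha - l > 0 at the rear.
v, l, alpha, dt = -0.04, 0.50, -0.8, 0.01
kp, ki, kd = 8.0, 2.0, 1.0

theta, d, integ, prev = 0.05, 0.01, 0.0, None
for _ in range(int(60 / dt)):
    e = d + alpha * theta             # theta is taken from the EKF below
    integ += e * dt
    deriv = 0.0 if prev is None else (e - prev) / dt
    prev = e
    omega = kp * e + ki * integ + kd * deriv   # sign flipped vs. forward
    d += (v * theta + l * omega) * dt          # linearized model (3)
    theta += omega * dt
print(round(e, 4), round(theta, 4))   # e and theta are driven to ~0; in this
                                      # ideal model d then settles too, while
                                      # the real robot keeps a small d offset
```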
In order to use this controller an estimation of θ is required. We now present the Extended Kalman Filter that is used to do so.
Kalman filter for angle estimation
While the laser scanner provides a very accurate measurement of the joint profile, it does not provide the orientation of the robot, which is needed during backward motion. An Extended Kalman Filter (EKF) [START_REF] Julier | A new extension of the Kalman filter to nonlinear systems[END_REF] is thus used to estimate the missing value. The state of the robot is chosen as (v, ω, y, θ, l), and we assume no time variation of (v, ω, l) in the filter definition. The continuous formulation of the state transition model yields:
v̇ = 0,   ω̇ = 0,   ẏ = v sin θ,   θ̇ = ω,   l̇ = 0    (9)
The available measurements are the position d of the joint barycenter in the laser beam and the encoders of the tracks which are considered in a differential drive model [START_REF] Consolini | Stabilization of a hierarchical formation of unicycle robots with velocity and curvature constraints[END_REF]. The observation model thus yields:
ω_l = (1/r) v + (b/2r) ω
ω_r = -(1/r) v + (b/2r) ω
d = y/cos θ + l tan θ    (10)
where (ω_l, ω_r) are the angular velocities of the left and right motors, r is the track radius and b is the distance between the tracks. The last equation is the same as (1). A classical Extended Kalman Filter is then used with the models (9) and (10) in order to estimate the orientation that is needed during the backward motion.
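A compact sketch of such a filter is given below (Python; the geometry constants, the noise covariances and the use of numerical Jacobians are illustrative choices, not the values or implementation used on the robot):

```python
import numpy as np

r, b, dt = 0.05, 0.40, 0.01           # assumed track radius / spacing [m]
Q = np.diag([1e-4, 1e-4, 1e-6, 1e-6, 1e-8])   # process noise
R = np.diag([1e-2, 1e-2, 1e-6])               # measurement noise

def f(x):                             # Euler-discretized transition model (9)
    v, w, y, th, l = x
    return np.array([v, w, y + v * np.sin(th) * dt, th + w * dt, l])

def h(x):                             # observation model (10)
    v, w, y, th, l = x
    return np.array([v / r + b * w / (2 * r),
                     -v / r + b * w / (2 * r),
                     y / np.cos(th) + l * np.tan(th)])

def jacobian(func, x, eps=1e-6):      # numerical Jacobian keeps the sketch short
    J = np.zeros((len(func(x)), len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x)); dx[i] = eps
        J[:, i] = (func(x + dx) - func(x - dx)) / (2 * eps)
    return J

def ekf_step(x, P, z):                # z = (omega_l, omega_r, d)
    F = jacobian(f, x)
    x, P = f(x), F @ P @ F.T + Q      # prediction
    H = jacobian(h, x)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - h(x))            # measurement update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```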
Experiments
In this section we present some results on autonomous line tracking. We first describe the experimental setup before analyzing the results. Finally, the actual welding is briefly exposed.
Experimental setup
In order to have ground truth for the angle θ and the robot state, a second laser scanner is located at the rear of the robot. It is not used in the control law or in the state estimation. As described in Fig. 2, the robot is first manually driven above the welding joint. The robot then goes through the following steps:
1. Alignment for 30 s
2. Return for 30 s
3. Welding for 60 s
4. Return for 60 s
In the graphs, the state is indicated as FWD (Alignment / Welding) or BWD (Return). During the Alignment and Welding steps, the linear velocity is set to +0.02 m/s and the steering is regulated through the PID described in Section 3.2.1. During the Return steps, the linear velocity is set to -0.04 m/s and the steering is regulated through the PID described in Section 3.2.2. We are interested in the regulation of d and in the torch position error with regard to the joint. In practice, additional motion of the Y-Z axes is used to perform the welding, but its performance mainly depends on the torch position error caused by an undesired angle between the robot and the joint.
Two robot designs are compared in order to detail the compromise that arises between the robot control and the torch positioning. The first one is shown in Fig. 7 and corresponds to the initial design of the prototype. The laser is at the center while the welding torch is at the rear of the mobile base. The idea was to reduce interactions between the welding process and the components of the robot (especially the laser and the computer). We will see that in this case the joint tracking is less satisfactory, as a small angle induces a large lateral error on the welding torch. Besides, the welding process itself is not protected from external disturbances such as wind or light rain. The final design hence corresponds to Fig. 1, with the laser at the front and a centered welding torch. It is the current design of the prototype and we will show that it leads to a smaller position error for the torch.
Initial design: centered laser, torch at the rear
In the initial design, the laser scanner is located near the center (l = 0.15m) while the welding torch is at the rear of the robot.
Fig. 8a shows the estimation error of the distance to the welding joint y (blue), the robot orientation θ (green) and the distance l (red). The EKF presented in Section 3.3 allows a fine estimation of y and θ, but due to the large initialization error on l (0.05 m instead of 0.15 m) this value is not correctly estimated. As the value of l is quite small, it has little impact on the measurement model (10), which makes it difficult to estimate.
This does not prevent the robot from following the joint: Fig. 8b shows that d (joint center in the laser scan) quickly reaches 0 in Forward motion. Indeed, we have shown in Section 3.2.1 that the steering is stable when going forward. The welding torch being at the rear of the robot, its distance to the joint is highly dependent on the robot angle. Hence, large oscillations can be observed during the first Forward motion.
After 30 s, the robot is in the Return state. It is visible in Fig. 8b that the distance d is not regulated to 0, in accordance with (6). The vehicle still roughly follows the line, despite the error on the estimation of l. We can see that the error on d increases while the torch position error decreases. The vehicle thus roughly follows the joint and is able to get back to the starting position. After 60 s, the robot is in the Welding state. As for the initial Alignment, because of the steering required to center the joint in the laser scan, the torch position error oscillates between -4 and +2 mm at the beginning of the welding. This would typically lead to a bad welding quality over the first centimeters. The position error then converges to 0 and the robot stays aligned during the end of the Welding and the tracking Return state. We now expose the results of the final design, which leads to a better torch position.
4.3. Final design: centered welding torch, laser at the front
In the final design, the laser scanner is located at the front of the robot (l = 0.50 m) while the welding torch is near the robot center.
Fig. 9a shows the estimation of the distance to the welding joint y (blue), the robot orientation θ (green) and the distance l (red). The larger value of l makes it easier to estimate, and the estimation error reduces over time. As with the previous design, the estimation of y is not perfect, but this does not prevent the robot from tracking the joint. As shown in Section 3.2, having the laser at the front improves the gain of the system when going forward. Hence both d and the torch quickly align during the first Alignment step, as seen in Fig. 9b. Again, during the Return step between 30 and 60 s, the regulated parameter is a compromise between d and the angle, hence a small offset appears while the torch stays at the same distance to the joint. After 60 s, the Welding step begins and d is very quickly regulated to 0. The torch starts with a position error of 0.6 mm that converges to 0 without oscillations. This is within the acceptable error of 1 mm for the first pass on ship hulls. Once the robot is aligned, the torch position error is almost null, which results in a very accurate welding process. This continues when going backward after 120 s: both the laser and the torch stay centered above the joint. In this analysis only 60 s forward and backward are shown, as the robot always stays aligned with the joint after this typical time period.
Comparing the performances of the two designs reveals that, from an automation point of view and as shown in Section 3.2, the initial design yields lower performances both in terms of state estimation (larger estimation error on l) and of process quality (undesired oscillations of the torch before stabilization). The necessity to stay aligned during the Return phases is caused by the small torch offset that may happen at the beginning of a new welding pass. From a process point of view, having the torch at the center of the robot actually protects the weld from external disturbances. The main drawback of the final design is that the welding is performed near several robot parts (magnetic pads, tread wheels). As far as experiments revealed, no negative impact on the robot structure was observed after welding several tens of meters. We now present some welding results.
Welding results
Welding is performed by defining a simultaneous control of the 2-degrees-of-freedom arm and of the robot forward motion, in order to obtain a 3D trajectory of the welding torch. The trajectory generation depends on the welding configuration (vertical or horizontal) and uses feedback from the joint profile measured by the laser scanner. The actual trajectories of the welding torch are adapted from [START_REF] Hascoet | Shiphull Welding: Trajectory Generation Strategies Using a Retrofit Welding Robot[END_REF]; they are not detailed for confidentiality reasons. Depending on the configuration being vertical or horizontal, the trajectories may correspond to a constant forward velocity of the mobile base, or to some motionless time periods. Similarly, the arm motion may constantly oscillate between two values that are extracted from the joint profile, or stay motionless for some duration at particular positions. In order to ensure an optimal welding process, the vertical position of the torch is regulated from the welding current feedback, which varies positively with the distance between the torch and the weld point. In practice the vertical position update is computed through a PID provided by the torch manufacturer.
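For illustration only (the actual trajectories are confidential), a generic weaving law of this kind could look as follows; all names, shapes and gains are assumptions:

```python
# Generic torch trajectory: the Y position weaves between two bounds
# extracted from the joint profile, while Z is corrected from the
# welding-current feedback.

def weave_y(t, y_left, y_right, period=2.0):
    # Triangular oscillation between the two edge-derived bounds.
    phase = (t % period) / period
    frac = 2 * phase if phase < 0.5 else 2 * (1 - phase)
    return y_left + frac * (y_right - y_left)

def z_correction(i_measured, i_reference, kp=0.01):
    # Current varies positively with the torch-to-weld distance (see text):
    # a current above its reference means the torch is too far, so bring it
    # closer. The real update uses a manufacturer-provided PID.
    return -kp * (i_measured - i_reference)

for t in (0.0, 0.5, 1.0, 1.5):                # one weaving period
    print(round(weave_y(t, -10.0, 10.0), 1))  # -10.0, 0.0, 10.0, 0.0
```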
The resulting process for the vertical configuration is shown in Fig. 10. An interesting feature of the robot is that the joint profile is entirely scanned at each pass. Therefore, the complete evolution of the profile can be analyzed for process quality and optimization. Fig. 10a shows the complete 3D raw profiles for 10 welding passes of 1 m length. As discussed in Section 2.1.4, laser scan aberrations are visible and are due to the welding light. There are only a few peaks and they can be filtered out. The high-level welding strategy is visible in Fig. 10b. Here 10 consecutive passes are represented using the mean profile along the joint. We can see that, depending on the current profile, the welding strategy is to fill only a part of the joint. The final pass is wider and covers the whole joint, even if more material is deposited on the right part of the joint. This strategy is based on numerous exchanges and discussions with welders and on previous work in [START_REF] Hascoet | Shiphull Welding: Trajectory Generation Strategies Using a Retrofit Welding Robot[END_REF]. The visual aspect of the final pass can be seen in Fig. 11. As expected, the weld is very regular along the joint, which helps the quality assessment of the process.
Conclusion
We have presented a new robot designed to perform various tasks on ship hulls. The mechanical design was exposed and is still being adapted to the latest experimental results. Compared to previous works on wall-climbing robots, one objective of this project is also to prove that autonomous welding can be performed over long distances. To do so, a control law based on the laser feedback was designed and makes the robot weld several passes autonomously. As shown in Section 3.2, the performances in forward motion are accurate enough to track a reference trajectory. Another solution would be to use additional sensors, but cameras are difficult to use during welding, and another laser scanner is not possible within the targeted price of the system. Future works on this project now focus on improving the welding quality through better laser-based welding torch trajectories. To reduce the twisting effort on the chain, the use of two chains per track is also being investigated. Inspection of the welding quality typically has to be done several hours after the welding [START_REF] Weman | Welding processes handbook[END_REF], hence another mobile base may be equipped with inspection sensors while the welding robot works on another part of the hull. Finally, a strong limitation of the prototype is of course that it is suited only for straight joints. However, ship builders currently spend most of their welding time on these joints. The proposed approach hence allows to automate this task while human welders cope with much more complex configurations.
Figure 1: Overview of the mobile robot in vertical position on a steel surface. Front supports carry the proximity sensors. The laser scanner is protected from external light. Embedded electronics are on the rear panel, dispatched around the CompactRIO. The two external cables are the power supply (grey) and the Ethernet connection (blue). Welding is being performed at the center of the robot.
Figure 2: Graph of a typical process. Manual control is required before and after the welding.
Figure 3: Typical laser profile with the welding joint (Y and Z scales have been changed). Edges (red) are extracted, then the barycenter (green) is computed. Units and angles are intentionally not displayed.
Figure 4: Schematic of the robot above the welding joint. The laser (red beam) measures the distance d. The angle θ and distance y between the origin O and the joint are unknown. The distance l between the laser and the origin is also unknown.
Figure 5: Going forward. If d is regulated at 0 (joint in the middle of the laser beam) then the robot naturally aligns with the welding joint.
Figure 6: Going backward. If d is regulated at 0 (joint in the middle of the laser beam) then the robot naturally moves away from the joint.
Figure 7: Initial robot design. The laser is at the center, the welding torch is at the rear. Electronics integration was not done yet on this design.
Figure 8: Initial design. (a) EKF estimation error on y (blue), θ (green) and l (red). (b) Position (in mm) of the welding joint in the laser beam (blue) and position error of the welding torch (green).
Figure 9: Final design. (a) EKF estimation error on y (blue), θ (green) and l (red). (b) Position (in mm) of the welding joint in the laser beam (blue) and position error of the welding torch (green).
Figure 10: History of the joint profiles. Units and angles are intentionally not displayed due to confidentiality reasons. (a) 3D raw profiles of a 1 m welding; laser scan outliers are visible as peaks, and the consecutive passes are shown in different colors. (b) Cross-section of average joint profiles for 10 consecutive passes of a 2 m welding; the mean value of the profile is used, hence some registration aberrations, and the last pass is about 3 cm wide.
Figure 11: Visual result after the final pass.
Acknowledgements
This work was supported by the CHARMAN project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing Technologies for Composite, Metallic and Hybrid Structures). The authors wish to associate the industrial and academic partners of this project, respectively IRT Jules Verne, STX France, DCNS, Servisoud, LS2N and Bureau Veritas.
Razan A Elkhatib, Marine Paci, Guy Longepied, Jacqueline Saias-Magnan, Blandine Courbiere, Marie-Roberte Guichaoua, Nicolas Levy, Catherine Metzler-Guillemain, Michael J Mitchell (email: [email protected])
Homozygous deletion of SUN5 in three men with decapitated spermatozoa
Introduction
It has been estimated that 14% of couples fail to conceive a child naturally after one year, and compromised male fertility is identified in 60% of these couples [START_REF] Thonneau | Incidence and main causes of infertility in a resident population (1,850,000) of three French regions (1988-1989)[END_REF]. Assisted reproduction technology (ART) will provide a therapeutic solution for half of these couples, and today accounts for 1-3% of live births in developed countries [START_REF]European Society of Human Reproduction and Embryology[END_REF]. Despite this, the genetic causes of male infertility and their pathophysiological consequences for gamete quality are known in only a few cases. The identification of genetic causes will be imperative for the development of personalised treatments, the extension of treatment to more infertile couples, and informed risk assessment for individuals conceived by ART and their descendants. Presently, the most solid progress has been made in uncovering the genetic basis of teratozoospermia.
Monomorphic teratozoospermia is a group of rare conditions in which almost all the spermatozoa share a specific malformation that renders them non-functional. Over the last decade, homozygous loss-of-function (LOF) mutations have been identified in four genes for three teratozoospermia phenotypes: AURKC (MIM 603495) in large-headed multiflagellar polyploid spermatozoa (MIM 243060) (3), DPY19L2 (MIM 613893) [START_REF] Koscinski | DPY19L2 deletion as a major cause of globozoospermia[END_REF][START_REF] Harbuz | A recurrent deletion of DPY19L2 causes infertility in man by blocking sperm head elongation and acrosome formation[END_REF] in globozoospermia (MIM 613958), SPATA16 (MIM 609856) [START_REF] Dam | Homozygous mutation in SPATA16 is associated with male infertility in human globozoospermia[END_REF] in globozoospermia (MIM 102530), and DNAH1 (MIM 603332) in MMAF (multiple morphological abnormalities of the flagellum) [START_REF] Ben Khelifa | Mutations in DNAH1, which encodes an inner arm heavy chain dynein, lead to male infertility from multiple morphological abnormalities of the sperm flagella[END_REF].
In a recent study of seventeen Chinese men with acephalic spermatozoa syndrome (MIM 617187), rare coding variants in the spermatid-specific SUN5 gene (MIM 613942) were found homozygous or compound heterozygous in eight cases [START_REF] Zhu | Biallelic SUN5 Mutations Cause Autosomal-Recessive Acephalic Spermatozoa Syndrome[END_REF]. In this syndrome the sperm head frequently separates from the flagellum, because the sperm head-tail junction is absent [START_REF] Perotti | Ultrastructural study of the decapitated sperm defect in an infertile man[END_REF][START_REF] Chemes | Acephalic spermatozoa and abnormal development of the head-neck attachment: a human syndrome of genetic origin[END_REF]. SUN5 is an attractive candidate for a gene involved in the attachment of the flagellum to the sperm head because the protein localises to the vicinity of the head-tail junction [START_REF] Zhu | Biallelic SUN5 Mutations Cause Autosomal-Recessive Acephalic Spermatozoa Syndrome[END_REF][START_REF] Yassine | Dynamics of Sun5 localization during spermatogenesis in wild type and Dpy19l2 knock-out mice indicates that Sun5 is not involved in acrosome attachment to the nuclear envelope[END_REF], and SUN family proteins are known to be part of links between the inner nuclear membrane and the cytoskeleton [START_REF] Tapley | Connecting the nucleus to the cytoskeleton by SUN-KASH bridges across the nuclear envelope[END_REF].
Six of the ten SUN5 variants reported are missense with an unknown effect on the protein. Although one variant, p.Gly114Arg (c.340G>A), was predicted to affect splicing, this was not confirmed experimentally. The other four variants were certainly LOF variants: intronic splice site (c.425+1G>A), frameshift (p.Val128Serfs*7) and two nonsense mutations (p.Trp72* and p.Ser284*). However, only one man was homozygous for a certain LOF variant, p.Ser284*, and the variants identified in SUN5 were designated as causal based on the high frequency of men carrying two rare SUN5 alleles in the study group. In three cases with a missense variant, the SUN5 protein was not detected at the head-tail junction, although it was present in spermatozoa from two control patients with acephalic spermatozoa syndrome but without a variant in SUN5 [START_REF] Zhu | Biallelic SUN5 Mutations Cause Autosomal-Recessive Acephalic Spermatozoa Syndrome[END_REF]. This is consistent with these missense variants being LOF, but the possibility that SUN5 absence is secondary to junction loss cannot be completely excluded, because the pathological mechanism in the acephalic controls is unknown, and could differ from that in the SUN5 variant cases. Furthermore six of the SUN5 variants are present at a low frequency in the gnomAD database [START_REF] Lek | Analysis of protein-coding genetic variation in 60,706 humans[END_REF], and four predominate in, or are exclusive to, the East Asian group, including the most frequent, the p.Val128Serfs*7 LOF variant, which is present in 1:140 individuals. Indeed overall in gnomAD approximately 1:100 individuals from the East Asian group are heterozygous for one of the rare variants designated as causal of acephalic spermatozoa syndrome. If all these variants were LOF, this sperm phenotype would be expected to have a high incidence among infertile men in East Asia, but this has never been reported, raising a doubt about the pathogenicity of some SUN5 missense variants.
Thus although the case is persuasive that the loss of SUN5 function causes a failure of the sperm head-tail junction, definitive proof requires a second case where two certain LOF SUN5 alleles are found associated with acephalic spermatozoa syndrome. We therefore searched for pathogenic variants in the SUN5 gene in three men with decapitated spermatozoa.
Results
Homozygous deletion of exon 8 of SUN5
Two brothers (13-1662 and 13-2016) and their cousin (13-4335) were diagnosed with acephalic spermatozoa syndrome following consultation for primary couple infertility at our clinic. They are members of a consanguineous family from Algeria (Fig. 1). We have previously described the phenotype of the two brothers [START_REF] Porcu | Pregnancies after ICSI using sperm with abnormal head-tail junction from two brothers: case report[END_REF], and clinical details for all three men are summarized in Table 1. Based on the recent report of the association of SUN5 variants with acephalic spermatozoa (8), we sequenced the coding region of SUN5 in two of the affected individuals (brother 13-1662 and cousin 13-4335) to determine if mutation in SUN5 was responsible for their infertility. We found no single nucleotide variants, but we failed to amplify exon 8. An additional marker tested in the middle of intron 8 was also negative in the three men, confirming the existence of a homozygous deletion in the three affected men (Fig. 2A). To further define the deletion, we amplified from upstream of exon 7 to downstream of exon 9 (7 kb) and upstream of exon 8 to downstream of exon 9 (6.2 kb) and obtained shorter than expected bands of, respectively, 2 kb (not shown) and 1.2 kb (Fig 2A). The 1.2 kb fragment was sequenced across the deletion junction revealing the deleted segment to cover 5090 bp that include the entirety of exon 8 (Fig. 2 and Fig. S1). There is an insertion of four bases, TGGT, at the junction of the breakpoint. There are no blocks of homology at the breakpoints, showing that the deletion was not mobilised by unequal crossing-over and is unlikely to be a recurrent event. The deletion of exon 8 is predicted to inactivate SUN5 as it will induce a frameshift, p.(Leu143Serfs*30), making the transcript lacking exon 8 a substrate for NMD (nonsense mediated decay). Using a PCR test that amplifies the deletion allele specifically, we did not detect the SUN5 deletion in 150 infertile men without the acephalic sperm phenotype recruited from our clinic, showing it to be a rare variant.
SUN5 exonic variant c.340G>A (p.Gly114Arg) inhibits splicing of exon 5
To further strengthen the evidence that loss of SUN5 function is a cause of acephalic spermatozoa syndrome, we investigated the effect of the c.340G>A (p.Gly114Arg) variant on splicing. This rare variant affects the last base of exon 5 and was previously reported in the homozygous state in a man with decapitated spermatozoa [START_REF] Zhu | Biallelic SUN5 Mutations Cause Autosomal-Recessive Acephalic Spermatozoa Syndrome[END_REF]. We made two expression constructs, one with the variant (SUN5-G114R) and the other without (SUN5-WT), in which the genomic region from exon 4 to exon 6 of SUN5 is under the transcriptional control of the CMV promoter. We transfected SUN5-G114R and SUN5-WT into HeLa cells and assessed the splicing of exons 4, 5 and 6 by RT-PCR with primers in exon 4 and in exon 6. Our results show that both constructs produce transcripts lacking exon 5, but that only SUN5-WT produces transcripts that include exon 5 (Fig 3). The transcripts without exon 5 from the SUN5-WT construct are certainly artefacts of our cellular assay system, since the same transcript was not amplified from human testis cDNA (Fig. 3). The absence of transcripts that include exon 5 from the SUN5-G114R construct shows that p.Gly114Arg greatly diminishes splicing efficiency, and will lead to exclusion of exon 5 (62 bases) from the mature SUN5 transcript. This will cause a frameshift preventing expression of the SUN5 protein.
Discussion
Here we present the second homozygous LOF mutation in SUN5, in three related men of North African origin who produce decapitated spermatozoa. We also provide functional evidence that the previously published variant p.Gly114Arg strongly affects splicing and will result in a reduction of SUN5 protein levels. Taken together with the recent report of biallelic SUN5 variants in Chinese men with acephalic sperm syndrome, our findings establish that loss of SUN5 function causes a failure of head-tail junction formation during spermiogenesis, and show that LOF mutation of SUN5 is also a cause of the acephalic sperm phenotype in non-Asian populations.
Several knockout mouse models share key features with acephalic spermatozoa syndrome in human: isolated male infertility, the presence of epididymal spermatozoa and an extreme fragility of the head-tail junction. The mouse genes inactivated in these models are Hook1 (15), Oaz3 (16), Odf1 (17) and Spata6 (18). Like SUN5, all are expressed specifically in the testis, and represent good candidates for genes that could carry causal mutations in unresolved cases of human male infertility with decapitated spermatozoa.
In conclusion, our results establish that SUN5 plays a key role in the attachment of the flagellum to the sperm head and demonstrate the prognostic value of testing for SUN5 mutations in men with decapitated spermatozoa seeking to father a child.
Materials and methods
Patients
Patient 13-4335 and his wife were 34 and 31 years old, respectively, and had been trying to have a child for three years without success. Both had a normal karyotype: the patient's was 46,XY,21ps+ and his wife's was 46,XX. Both were healthy with no history of significant illness. His wife had regular menses, and her hysterosalpingography and hormone assessment were normal. Two semen samples from patient 13-4335, collected after three days of abstinence, were analysed and revealed severe oligozoospermia and complete teratozoospermia (Table 1). While there were only 2.3 x 10^6 intact spermatozoa/ml in each sample, there were, respectively, 41 x 10^6 and 64 x 10^6 headless flagella/ml. These characteristics are similar to those of his cousins (13-1662 and 13-2016) and are typical of acephalic spermatozoa syndrome.
The clinical details of the two brothers, 13-1662 and 13-2016, have been described elsewhere [START_REF] Porcu | Pregnancies after ICSI using sperm with abnormal head-tail junction from two brothers: case report[END_REF], but are summarised for comparison with those of 13-4335 in Table 1. The three men gave their informed consent for their samples to be used in the search for the genetic cause of their infertility.
SUN5 sequencing and deletion interval characterisation
The SUN5 gene was sequenced with the BigDye v1.1 terminator cycle sequencing kit (Applied Biosystems) following amplification of the coding exons with specific flanking primers (Table S1). PCR products for sequencing SUN5 and deletion mapping were amplified with Q5 Taq polymerase (New England Biolabs). The allele-specific PCR assay used to screen controls was performed as a duplex PCR with primers o5226 and o5340 that flank the deleted segment around exon 8 (deletion allele = 490 base pairs) and o5216 and o5217 that flank exons 2 and 3 of SUN5 (positive control = 802 bp). The segment from exon 4 to exon 6 of the SUN5 gene was amplified from genomic DNA from a fertile man using primers o5336 (exon 4) and o5337 (exon 6) with, respectively, NheI and XhoI sites added at their 5′ ends. To obtain the SUN5-WT construct, the genomic fragment was cloned into the NheI and XhoI sites of pcDNA3.1-cGFP (pcDNA3.1+, Life Technologies, with a C-terminal GFP cassette cloned between the XhoI and ApaI sites). For the SUN5-G114R construct, a SUN5-G114R insert was cloned into pcDNA3.1-cGFP, following its amplification from SUN5-WT using two-step PCR fusion with flanking primers o5336 and o5337 and central primers o5338 and o5339 specifying the p.Gly114Arg variant. The sequence of both constructs was determined and showed that neither contained any errors. All PCR steps used the Q5 DNA polymerase (New England Biolabs).
Transfection of SUN5-WT and SUN5-G114R constructs
Constructs were transfected into HeLa cells using JetPrime and the RNAs were harvested after 24 h. RNA was reverse-transcribed to cDNA using the GFP-specific primer o5374 and the PCR was performed using primers o5336 and o5337.

References

15. Mendoza-Lujambio, I., Burfeind, P., Dixkens, C., Meinhardt, A., Hoyer-Fender, S. and Engel, W. (2002) The Hook1 gene is non-functional in the abnormal spermatozoon head shape (azh) mutant mouse. Hum. Mol. Genet., 11, 1647-1658.
16. Tokuhiro, K., Isotani, A., Yokota, S., Yano, Y., Oshio, S., Hirose, M., Wada, M., Fujita, K., Ogawa, Y., Okabe, M., et al. (2009) OAZ-t/OAZ3 is essential for rigid connection of sperm tails to heads in mouse. PLoS Genet., 5, e1000712.
17. Yang, K., Meinhardt, A., Zhang, B., Grzmil, P., Adham, I.M. and Hoyer-Fender, S. (2012) The small heat shock protein ODF1/HSPB10 is essential for tight linkage of sperm head to tail and male fertility in mice. Mol. Cell. Biol., 32, 216-225.
18. Yuan, S., Stratton, C.J., Bao, J., Zheng, H., Bhetwal, B.P., Yanagimachi, R. and Yan, W. (2015) Spata6 is required for normal assembly of the sperm connecting piece and tight head-tail conjunction. Proc. Natl. Acad. Sci. U. S. A., 112, E430-439.

Figure Titles and Legends
Figure 1: Family pedigree of the three infertile men presenting with acephalic spermatozoa syndrome (MIM: 617187). Infertile men: 1) 13-1662; 2) 13-2016; 3) 13-4335.
Figure 2: Homozygous deletion of exon 8 from SUN5 in two brothers and a cousin with acephalic spermatozoa syndrome.
Figure 3: The SUN5 p.Gly114Arg variant reduces splicing efficiency in HeLa cells.
Figure S1: Schematic representation of the SUN5 deletion with the electropherogram of the deletion junction.
Table 1: Sperm parameters of the three related infertile men carrying a homozygous deletion within the SUN5 gene.
Primer number   Target   Primer sequence
SUN5 gene sequencing
o5214 Exon 1f CACCAGCTCCCAGAGTTCC
o5215 Exon 1r GTGCCCAGAGATAGGCATCA
o5216 Exon 2-3f GGTAGGAGCAGACAAAGGAAC
o5217 Exon 2-3r CCTCTCCAGCCACCAAAGA
o5218 Exon 4f GTGCCCACCTGTAGTGACCT
o5219 Exon 4r GTGCAAATGTGTGGAGGTGG
o5220 Exon 5f CTGCCACTCATGAGCTCCCA
o5221 Exon 5r GGAGGGAGGAGATTCCTTTG
o5222 Exon 6f CAGCCAGCATGGAGATCATA
o5223 Exon 6r TCTGGGTCAAGTCAGGAACTG
o5224 Exon 7f GAGATAGGAGGCTAGGTGAA
o5225 Exon 7r TGACTTGCCTAAGGTCAGCC
o5226 Exon 8f CTGATGAATGGGTCCAGGGATG
o5227 Exon 8r CCCAGCCAAACTATCAACCA
o5228 Exon 9f GGTCAGGATGGCTTTGTGTC
o5229 Exon 9r CCAGCTTGGAACAGAGGATG
o5230 Exon 10f ACATGGCTGTAATCACATTTCC
o5231 Exon 10r GCCCATGCCCAGTGTAGTC
o5232 Exon 11f GAACTGAAGTAAGGCTTACC
o5233 Exon 11r GCCTTAACCAAGCTGCATTC
o5234 Exon 12f CTCTGACTCCCAGGCCAGTG
o5235 Exon 12r GCCACCCACTACGGAGACTCT
o5236 Exon 13f GTCTGTCCTTCTGGCCTCAG
o5237 Exon 13r ACCCGGAATGGGATCAACC
SUN5 deletion mapping
o5326 Intron 8f GGAGTCCCCAGGGAATATGCAG
o5327 Intorn 8r GTGGCCCTGGAGACAGCAG
o5340 Intron 8r TCTGCTGCCAGATGCGGATG
SUN5-WT and SUN5-G114R expression construct and reverse transcription (RT) primers
o5336 Exon 4f_NheI AGCCTGGCTAGCATGGCCTGGTTCACCTGTTTTGCCTG
o5337 Exon 6r_XhoI AGGTTCCTCGAGCTGCCAGACTTTCATTTTCGATGGTAAG
o5338 Exon 5f_G>A to add p.G114R variant CTGTGCTTTCAGTGTGTCCCAG
o5339 Exon 5r_C>T to add p.G114R variant CTGGGACACACTGAAAGCACAG
o5374 eGFPr_RT TGGTGCAGATGAACTTCA
Table S1: Primer sequences.
Acknowledgements
We are grateful to the patients who gave their informed consent to the use of their samples for research. We thank C. Metton and M.J. Fays-Bernardin for technical assistance, and the Germetheque for its support. This work was supported by grants from the Agence de la biomédecine (AOR "AMP, diagnostic prénatal et diagnostic génétique" 2013), Inserm and Aix-Marseille Université. The Germetheque biobank was supported by grants from the ANR (Agence Nationale de la Recherche), the Agence de la biomédecine, the Centre Hospitalier Universitaire of Toulouse and the APHM (Assistance Publique - Hôpitaux de Marseille).
Conflict of interest |
Particle acceleration with beam driven plasma wakefield
Keywords:
Remerciements - Acknowledgements

Before anything else, it must be stated that several actors made possible the experimental campaigns on which this work rests and the writing of this manuscript. They alone deserve all the honors that follow from the scientific accomplishments presented in this text, and for their time, their help and their trust I wish to thank them individually.

I wish to thank first of all my thesis director, Victor Malka, for welcoming me to the Laboratoire d'Optique Appliquée from February 2014. It is thanks to him that this manuscript could be written, thanks to his support in the face of difficulties and to his advice on the direction to take at each important moment. I therefore express much gratitude for his scientific and human teachings, and for all the opportunities he made possible, notably that of going to study under other horizons.

I now warmly thank Sébastien Corde, my thesis co-supervisor since August 2015. The difficult scientific journey that this thesis has been was only possible thanks to his involvement. I wish to express my thanks for the infinite patience he showed, for his ceaseless efforts of pedagogy, his enthusiasm for scientific discussions and his innate sense of sharing. Beyond the opportunity he offered me to go and discover scientific research on the other side of the Atlantic, he will remain a model of scientific precision and, more broadly, of intellectual rigor at work. He without any doubt gave new enthusiasm to this thesis, and for his action towards students and colleagues he deserves, more broadly, all the recognition and esteem of the members of the laboratory in which this thesis took place.

The experiment on the acceleration of a distinct positron bunch in a plasma wakefield accelerator is above all the reward of the work of the E200 collaboration at the FACET facility, at SLAC. For this I wish to thank the various people who made this experiment possible, who passed on to me countless skills along with many new scientific concepts. Thanks first of all to Michael "Mike" Litos for his good humor, his communicative passion for physics and his teaching dispensed with patience. Thanks to Spencer Gessner, who will remain a model through his efficiency at work and his passion for science. Thanks to Mark Hogan for his benevolent support during the experiments, so important in moments of doubt. Thanks also to Professor Chan Joshi for his advice, remarks and teachings; he steers the E200 collaboration brilliantly, and his benevolence and influence are a blessing for all the students. Thanks to all the people who shared their knowledge without counting and spontaneously helped me face various unforeseen events: Carl Lindstrøm, who like me is fighting to defend a thesis, Brendan O'Shea for his help in the face of FACET's inextricable computer system, Kenneth "Ken" Marsh for sharing his experimental know-how without counting, Christine Clarke and Selina Green for their availability and their logistical support whatever the hour of the day or night. I would also like to thank Christopher "Chris" Clayton for his communicative motivation during the runs and his most interesting discussions during the breaks, and Navid Vafaei for his moral support when the results were slow to come. Thanks to Chris Beekman for the good moments spent in California as in France during this intensive work. Thanks to Erik Adli for his teachings and his invitation to Oslo as well, and thanks to James Allen and Rafal Zgadzaj for their friendship and for the time spent working in good spirits. Thanks to all the other SLAC staff who made this work possible.
Introduction
Plasmas, as ionized media, are not limited by electrical breakdown, which explains why this medium has been investigated as an alternative to metallic cavities. By controlling the collective motion of electrons with lasers, accelerating fields of the order of hundreds of GeV/m have been demonstrated [Malka 02]. Such high gradients are more than three orders of magnitude higher than the best accelerating gradient of conventional facilities. The difference in size between conventional and plasma-based accelerators can be seen in Fig. 0.1: although the specific experiment (b) does not yet provide particles of GeV energy as the massive SLAC facility (a) does, conventional and plasma-based accelerators typically have this size ratio. Among the four plasma-based schemes [Joshi 03], my thesis deals with two of them: Laser Wakefield Acceleration (LWFA) [Tajima 79] and Plasma Wakefield Acceleration (PWFA) [Fainberg 56, Chen 85]. In these two concepts, an electron density wave is excited in a plasma by the wave driver, either a laser pulse (LWFA) or a charged particle bunch (PWFA).
[Figure caption] A spectrometer displays the energy of the particle bunch emerging from the plasma. This experiment can be carried out with electron or positron bunches.
In the second experiment presented in this thesis, a plasma wakefield is driven in a gas jet, using a LWFA-produced electron beam. Obtaining clear evidence of the excitation of an accelerating cavity in this hybrid setup is the second objective of the work presented here. In addition, the manipulation of LWFA-produced electron beams in optics laboratories has been driven by the interest in x-ray Betatron sources. The hybrid scheme introduced above opens prospects regarding the optimization of such x-ray sources. That is why we also study x-ray light emission in this second experiment.
Outline of the manuscript
Part I provides first a brief presentation of the history of particle accelerators and of their applications in Chapter 1. Second, useful concepts in laser and beam physics are introduced in both Chapter 1 and Chapter 2. In Chapter 3, theoretical details are given about LWFA and PWFA. The driving of plasma waves by a particle beam or a laser pulse is derived in detail in the "linear" regime case and some qualitative details about the "nonlinear" or "blow-out" regime are provided. In addition, a comprehensive description of the propagation of a laser pulse and of an electron bunch in a plasma is made.
Part II is dedicated to the main experimental result, the demonstration of the acceleration of a trailing positron bunch in a plasma wakefield accelerator. Chapter 4 and Chapter 5 introduce the experimental setup of the experiment that took place at the FACET (Facility for Advanced aCcelerator Experimental Tests) facility at SLAC National Accelerator Laboratory. The different experimental diagnostics and methods are then described in further detail. Chapter 5 reports the results of the experiment and the simulations performed to obtain further insight into the acceleration process underlying this experimental achievement. Part II concludes with a study of the wakefield regime driven in the plasma during the experiment.
Part III presents the hybrid LWFA-PWFA experiments accomplished at Laboratoire d'Optique Appliquée. Chapter 6 is dedicated to the experimental campaign of 2017 in which a LWFA-produced electron beam was used to drive plasma waves in a gas jet. In this experimental study, an electron beam created by laser-plasma interaction is refocused by particle bunch-plasma interaction in a second gas jet. A study of some physical phenomena associated to this hybrid LWFA-PWFA platform is then accomplished. Chapter 7 reports the work accomplished in 2016 to exploit the hybrid LWFA-PWFA scheme in order to enhance the X-ray emission produced by the LWFA electron beam. The first experimental realization of this last scheme is reported, and its promising results are discussed.
Part I Particle accelerators, laser and beam physics
This introductory chapter begins with a short history of particle accelerators. First comes a chronological presentation of particle accelerator facilities and their corresponding applications. The second and third sections of this chapter provide a brief introduction to laser and beam physics, introducing the main concepts of these fields that will be used throughout the manuscript. These sections also introduce the formalism and the conventions used in the rest of the text.

A century-long history
Accelerators and particle colliders have a long history made of many successive innovations; new facilities emerged all along the 20th century. Several particle acceleration schemes have been used; however, only two of them are still in use in world-class facilities.
The very first accelerators were electrostatic ones, such as the Van de Graaff accelerator invented in the late 1920s [Van de Graaf 33]. These machines were using a static electric field set up between two electrodes to accelerate electrons. However, electrostatic accelerators were strongly limited by electrical breakdown that could easily destroy the setup; the maximal energy that the particles could reach was limited to 20-30 MeV [Humphries 02].
Starting from the 1930s, accelerators became radio-frequency (RF) electromagnetic machines. Although they are called radio-frequency accelerators, most of these devices work with microwave electromagnetic fields. Guiding cavities sustain the oscillations of an electromagnetic wave with a wavelength ranging from a few centimeters to a few tens of centimeters and a phase velocity close to the velocity of the accelerated particles.

Particle energies were then pushed higher thanks to the invention of a circular acceleration scheme: cyclotrons, which produced electron or proton beams from the 1930s on [Lawrence 32]. These circular facilities force accelerated particles to circle several times in an electromagnetic field before exiting the device. Cyclotrons used magnetic fields to force the particles to move on two half-circle orbits and accelerated them in between, where the particles would see an electric field. Increasing their energy slowly each time they circled through the machine, the particles would rotate at a radius that becomes larger and larger until they escape from the ring. Particle energies in these machines were limited to a few tens of MeV at first, until a new generation of circular accelerators was invented: synchrocyclotrons. Facilities of this generation, some of which are still being built nowadays, accelerate individual bunches of particles only, and match the frequency of the radio-frequency electromagnetic field with the bunch energy. This strategy requires the magnetic field amplitude to compensate for the energy gain by bending the particle trajectories accordingly more, leading to more turns inside the accelerator ring and therefore to particles with a higher final energy. This technology is still common for medical proton accelerators, as it is an efficient and repeatable way to produce proton bunches of 70 to 250 MeV. An example of one of the first synchrocyclotrons ever built can be seen in Fig. 1.1 (a). A modern facility used for proton therapy is displayed in Fig. 1.3 (a): this last picture shows the treatment room of the Orsay proton therapy center, a facility which relies on a synchrocyclotron to produce the ionizing radiation.

The main technology for fundamental physics accelerators is the synchrotron [Veksler 44], used for instance at the Large Electron-Positron Collider from 1989 to 2000 [Myers 90] and at the Large Hadron Collider (LHC, Fig. 1.1 (b)) until now. Synchrotrons are modified cyclotrons in which the magnetic field amplitude increases with the increasing particle energy, so that the particles can maintain their orbit inside the circular accelerator and reach very high energies. The LHC accelerates protons up to 7 TeV, the highest particle energy ever obtained in an experiment; it led to the discovery of the Higgs boson [LHC 12].
Although circular facilities flourished around the world, driven by their multiple applications, linear accelerators were not abandoned. Indeed, the energy lost by synchrotron radiation in circular accelerators makes them unsuitable for high-energy (greater than hundreds of GeV) electron-positron colliders. Only linear acceleration prevents such losses. With accelerating fields limited by electrical breakdown, these facilities have to become larger and larger to reach high energies.

Two different linear RF-based accelerator technologies exist. Linear induction accelerators were invented in the early 1960s; they used a phenomenon called induction isolation to keep the potential differences in the facility low, while the net electric potential on the axis of the beam line would efficiently accelerate particles [Christophilos 64]. Linear induction accelerators were not the first devices to use magnetic induction to transmit energy to particles: the Betatron, a circular accelerator invented in 1935, was already inductively accelerating particles in a torus-shaped vacuum chamber. The most widespread kind of linear accelerator is the electromagnetic radio-frequency accelerator, based on a scheme that was suggested as early as 1924 [Ising 24]. The Stanford Linear Accelerator (SLAC National Accelerator Laboratory) for instance is such an accelerator; it opened in 1966 [Neal 68] and led three of its users to obtain a Nobel Prize: in 1976 for the discovery of the charm quark, in 1990 for the quark structure and in 1995 for the discovery of the tau lepton. Successive upgrades led the SLAC accelerator energy to increase twice in its history.

Two major linear accelerator facilities have been proposed for the next decades: first the International Linear Collider (ILC), whose particles will reach the energy of 250 GeV and which may be built in Japan, and second the Compact Linear Collider (CLIC).

In Fig. 1.2 are summarized the different accelerators discussed in the previous paragraphs along with their energy. The first three describe general kinds of devices, in contrast with the last five, which are unique world-class facilities or proposed colliders. As can be seen, linear induction accelerators did not increase the maximal energy reached by particle bunches; however, they were able to produce the highest-current bunches at the time they were invented. Large-scale facilities, involving many countries and built in the late 20th and 21st century (LEP, LHC, ILC and CLIC), reach much higher energies than their predecessors.
b. Particle beams and applications
Accelerator facilities are built for different purposes: they are used either for fundamental research, medical or industrial uses.
With an annual market of a few billion euros, growing by about 10% per year, the accelerator industry is a flourishing business. These applications concern cancer therapy, ion implantation, electron beam cutting and welding, electron beam and X-ray irradiators, radio-isotope production, ion beam analysis and neutron generators, to cite the most important.
In medicine, the effects of radiation on the human body were discovered at the end of the 19th and the beginning of the 20th century, along with the interest cancer research had in ionizing radiation [Pusey 00]. The first era of radiation medicine considered only gamma-rays, emitted by natural radioactive elements, or x-ray tubes. The discovery of the nuclear reactor made the production of artificial radioactive isotopes easier and boosted x-ray therapy development. However, from the early 1900s to the 1940s, therapists were still fumbling with the use of radiation.

Hadrontherapy appears to be more interesting than classical x-ray therapy because the latter has side effects such as provoking burns, and revealed itself to be less effective in several situations. The high costs and large footprints of particle therapy facilities fuel the need for more compact acceleration technologies.

Particle colliders are also the primary experimental apparatus of fundamental physics research. To study the fine structure of matter, researchers need to collide particles with higher and higher energies in order to study smaller and smaller subdivisions of their constituents. The LHC for instance is a torus-shaped facility of 27 km in circumference that accelerates protons to 7 TeV and led to the discovery of the Higgs boson [LHC 12]. The colossal footprint of the LHC can be seen in Fig. 1.1 (b). Its total cost is estimated at about 10 billion euros, which makes it one of the most expensive scientific machines ever built. Future colliders listed in Fig. 1.2 are also expected to operate with particle energies exceeding the TeV scale, and therefore their sizes will grow accordingly: the ILC is expected to measure 30 km [ILC 07]. The cost and footprint of modern facilities also suggest that a new kind of technology should be considered to accelerate particle beams more efficiently. CERN director Fabiola Gianotti emphasized the challenge for the scientific community: "High-energy accelerators have been our most powerful tools of exploration in particle physics, so we cannot abandon them. What we have to do is push the research and development in accelerator technology, so that we will be able to reach higher energy with compact accelerators" [Gibney 15]. Plasma-based accelerators appear to be very promising from that point of view: the size of the future conventional ILC facility is correlated with the maximal accelerating field achievable due to the electrical breakdown limit, 100 MV/m. Plasma-based acceleration offers accelerating fields three orders of magnitude higher: 100 GV/m can be achieved.

Laser-plasma accelerators should contribute in the near future to medical and security particle beam applications. For security purposes, LWFA bremsstrahlung gamma-ray beams, which allow non-destructive inspection with a high spatial resolution, could become prime tools of nuclear facilities or astronautic companies. Indeed, actors from these fields need non-destructive testing techniques to study material fatigue, to ensure safety and to perform quality control [Cartz 95]. SourceLab, a start-up spin-off of LOA, is for instance developing such sources to identify cracks spreading in materials. For the application of non-destructive testing, plasma-based acceleration could provide a cheaper and more efficient solution for all users [Malka 08]. The Betatron X-ray beam delivered by LWFA is another pertinent source for imaging applications.
The spatial coherence and the small dimension of the source make it possible to perform phase-contrast X-ray imaging of biological objects with a resolution of tens of micrometers [Corde 13]. This opens the possibility to detect breast cancer tumors at an earlier stage with a moderate dose deposition. Direct use of very high energy electrons (in the 100 to 300 MeV range) is envisaged for cancer treatments. It was shown that in the case of prostate cancer, this approach should reduce by 20% the dose delivered to healthy tissues and sensitive organs compared to the dose deposited by X-MRT (modulated photon radiotherapy). These near-term applications are illustrated in Fig. 1.4.
Laser physics concepts and formalism
LWFA and PWFA both rely on concepts of plasma and laser physics. This section and the next introduce the formalism and the conventions of these two fields of physics that will be used throughout the manuscript.
a. Laser fields and Gaussian pulses
Gaussian beams are preeminent in physics: they are the laser field solutions of the Helmholtz equation under the paraxial approximation. Gaussian beams therefore describe the behaviour of a laser field propagating through an isotropic medium under the paraxial approximation. This is the realistic solution (as opposed, for instance, to the simpler but unrealistic plane wave description) that accurately describes the beams scientists use in their experiments. In the rest of the manuscript, when an explicit form is needed for the laser field used in the experiments, we will consider a Gaussian beam.
For a Gaussian electromagnetic pulse the complex vector potential reads:
$$\mathbf{A}(r,z,t) = A_0\,\frac{w_0}{w(z)}\, e^{-\frac{r^2}{w(z)^2}}\, e^{-2\ln 2\,\frac{(ct-z)^2}{c^2\tau_0^2}}\, e^{i\omega t}\, e^{-i\left(kz + \frac{k r^2}{2R(z)} - \psi(z)\right)}\,\mathbf{u} \quad (1.1)$$
Where the parameters are the following:
• $\omega$ is the angular frequency of the laser pulse.
• $k = \frac{2\pi}{\lambda}$ is the wave vector of the laser pulse.
• $w_0$ is the waist dimension of the laser pulse.
• $\tau_0$ is the pulse duration (full width at half maximum of the intensity envelope).
• $z$ is the algebraic distance to the beam focal spot.
• $R(z)$ is the radius of curvature of the wavefronts.
• $\psi(z) = \arctan\left(\frac{z}{z_R}\right)$ is the Gouy phase, an additional phase term that contributes to shift the phase near the focal spot, but that is constant far from it. This term is responsible for the $\pi$-phase shift at focus.

Formula (1.1) describes a laser pulse that has a Gaussian shape in the $z$ direction, whose envelope is given by the term $e^{-2\ln 2\,(ct-z)^2/c^2\tau_0^2}$. The $z$ direction is also the direction of propagation in the formula written above. The pulse has a Gaussian shape in the transverse direction, given by the term $e^{-r^2/w(z)^2}$.
In the LWFA experiments described in this manuscript, the laser final focus is often accomplished with a parabolic mirror that focuses an initially well-collimated beam of diameter $D = 6$ cm, and that has a focal length of typically $f = 1$ m. The convergence angle $\theta \sim \frac{D}{2f}$ of the beam makes it possible to relate the waist $w_0$ to the parameters of the parabola. We have $\frac{w(z)}{z} \sim \frac{\lambda}{\pi w_0}$ far from the focal spot. For small angles, one also has $\frac{\lambda}{\pi w_0} \sim \frac{D}{2f}$. Therefore, the waist is typically $w_0 \sim \frac{2\lambda f}{\pi D}$. The laser spot size is therefore, for a perfect beam, of the order of ten micrometres.
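As a quick numerical check, here is a minimal sketch evaluating this waist estimate together with the Rayleigh length $z_R = \pi w_0^2/\lambda$ defined in the next paragraph. The wavelength value (0.8 µm, typical of Ti:Sapphire systems) is an assumption of this sketch, not a parameter quoted above.

```python
# Minimal sketch: waist and Rayleigh length for the focusing geometry above
# (D = 6 cm collimated beam, f = 1 m parabola; lambda = 0.8 um is assumed).
import math

lam, D, f = 0.8e-6, 0.06, 1.0          # wavelength [m], beam diameter [m], focal length [m]
w0 = 2 * lam * f / (math.pi * D)       # waist estimate, ~8.5 um
z_R = math.pi * w0**2 / lam            # Rayleigh length, ~280 um
print(w0 * 1e6, z_R * 1e6)             # both printed in micrometres
```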
The transverse size of the laser beam is given by $w(z) = w_0\sqrt{1 + \frac{z^2}{z_R^2}}$, where $w_0$ is the waist dimension, the minimal transverse size of the beam. The Rayleigh length $z_R$ is the distance over which the laser intensity is reduced by a factor of 2; it also corresponds to the distance after which the transverse size is increased by a factor $\sqrt{2}$, starting from the waist: $z_R = \frac{\pi w_0^2}{\lambda}$. By definition, the E and B fields can be deduced from the relations:
$$\mathbf{E} = -\frac{\partial\mathbf{A}}{\partial t} \quad (1.2)$$
$$\mathbf{B} = \nabla\times\mathbf{A} \quad (1.3)$$
The intensity of the Gaussian laser pulse reads:
$$I(r,z,t) = I_0\,\frac{w_0^2}{w^2(z)}\, e^{-\frac{2r^2}{w(z)^2}}\, e^{-4\ln 2\,\frac{(ct-z)^2}{c^2\tau_0^2}} \quad (1.4)$$

b. Relativistic regime
When the quiver velocity of an electron in the electric field of an electromagnetic wave reaches a value close to 𝑐, we say that the laser field is relativistic. Studying the motion of particles in such fields will be important in the rest of the manuscript. A parameter is often used to discuss whether a laser beam is relativistic: the normalized vector potential 𝑎 0 . We will introduce it in a simpler case that illustrates clearly how particles behave in extreme electromagnetic fields.
To simplify the derivation, we consider the simpler case of a particle in a plane electromagnetic wave. In the non-relativistic limit, in which the magnetic force can be neglected, the equation of motion of the particle writes:
$$\frac{d\mathbf{p}}{dt} = q\mathbf{E}_0\, e^{i(\omega_0 t - kz)}$$
As by definition $\mathbf{E} = i\omega\mathbf{A}$, one can write $\mathbf{p} = \gamma m\mathbf{v}_{quiver} = q\mathbf{A}$. When the momentum is of the order of $mc$, it is necessary to consider the relativistic correction to the motion of the electron: the non-relativistic approximation is not correct anymore. Therefore, it is common to define the normalized vector potential of the laser pulse, to distinguish non-relativistic and relativistic regimes directly from this dimensionless parameter:
$$a_0 = \frac{eA_0}{mc}$$
When $a_0 \ll 1$, the regime is non-relativistic, and when $a_0 > 1$, the regime is said to be relativistic.
One last formula may be of interest in the following, it is the expression of 𝑎 0 in terms of the wavelength of the electromagnetic wave and of its intensity:
$$a_0 = \frac{eA_0}{mc} = \frac{eE_0}{mc\omega_0} = \left[\frac{e^2}{2\pi^2\epsilon_0 m^2c^5}\,\lambda^2 I_0\right]^{1/2} = 0.86\,\lambda[\mu m]\,\sqrt{I\,[10^{18}\,W/cm^2]} \quad (1.5)$$
We have $a_0 \propto \sqrt{I_0}\,\lambda$. The square root of $I$ in equation (1.5) was expected, as the intensity of a laser field grows as the square of the amplitude of the electric field, which is directly related to $a_0$.
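The practical formula (1.5) is easy to evaluate numerically; the following minimal sketch does so for an assumed Ti:Sapphire wavelength of 0.8 µm and two illustrative intensities.

```python
# Minimal sketch: normalized vector potential from equation (1.5),
# a0 = 0.86 * lambda[um] * sqrt(I [1e18 W/cm^2]).
import math

def a0(wavelength_um, intensity_W_cm2):
    return 0.86 * wavelength_um * math.sqrt(intensity_W_cm2 / 1e18)

print(a0(0.8, 1e18))   # ~0.69: below the relativistic threshold
print(a0(0.8, 1e19))   # ~2.2: relativistic regime (a0 > 1)
```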
c. Maxwell equations
The propagation of an electromagnetic wave in a medium is described by Maxwell equations:
$$\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t} \quad (1.6)$$
$$\nabla\cdot(\epsilon_r\mathbf{E}) = \rho/\epsilon_0 \quad (1.7)$$
$$\nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\epsilon_r\epsilon_0\frac{\partial\mathbf{E}}{\partial t} \quad (1.8)$$
$$\nabla\cdot\mathbf{B} = 0 \quad (1.9)$$
One can define for the rest of the manuscript the relative permittivity by $\epsilon_r(\omega) = 1 + \chi(\omega)$.
The relation between the current density in the medium and the electric susceptibility 𝜒 is given by:
$$\mathbf{j}(\mathbf{r}) = i\omega\epsilon_0\chi(\omega)\mathbf{E}(\mathbf{r}) \quad (1.10)$$
By combining Maxwell equations, one can easily obtain the equation of propagation for the field 𝑬 for the case of a monochromatic wave with a time dependence 𝑒 𝑖𝜔𝑡 :
$$\Delta\mathbf{E} - \nabla(\nabla\cdot\mathbf{E}) + \frac{\omega^2}{c^2}\epsilon_r(\omega)\mathbf{E} = \mathbf{0} \quad (1.11)$$
Note that throughout the manuscript, 𝜖 will describe the emittance of a particle beam and only in the context of Maxwell equations 𝜖 is the electric permittivity.
d. Chirped pulse amplification
The field of laser interaction with matter opened many prospects to physicists, for example in creating new states of matter such as warm dense matter, in triggering nuclear fusion reactions, in reproducing in laboratories the state of matter of stars, or in offering the possibility to produce energy thanks to inertial confinement fusion. These examples were performed with long and energetic laser pulses at intensities lower than $10^{15}\,W/cm^2$. Since the invention of the Chirped Pulse Amplification (CPA) technique by Strickland and Mourou [Strickland 85], laser intensities greater than $10^{18}\,W/cm^2$ have been reached, with a record today of a few $10^{21}\,W/cm^2$. Such high intensities made laboratory laser-produced plasmas achievable, while keeping the dimensions of the experimental devices limited. CPA laser systems have provided researchers with enough physical phenomena to study for several decades, along with many potential applications.
Before "CPA" was invented, physicists were facing a limitation in further increasing the power of their laser systems: during laser light production, the beam passes through an amplifying media and is reflected by several optical components. However, when the intensity inside the amplification media or on the optics becomes too high, the beam faces nonlinear effects that distort the spatial and spectral profile of the pulses [ Maine 88]. This ruins any hope to reach a higher power. CPA made possible to overcome this technological limitation. A schematic of a Chirped Pulse Amplification laser system is displayed in Fig. 1.6.
An initial low-energy ultra-short pulse of a few femtoseconds is stretched by a first grating pair to hundreds of picoseconds, by adding a linear component to the pulse group delay. The stretched pulse is then amplified by many orders of magnitude over the whole spectrum, before being recompressed by suppressing the linear part of the group delay with a second grating pair. Amplification occurs while the beam is stretched; therefore the intensity in the amplifying medium stays moderate.
The initial technology used optical fibers as a stretcher and a grating pair to recompress the pulses; the reason for this is that CPA was developed in the context of radar research. The technology in use nowadays for high-power laser facilities relies on grating pairs both for the stretcher and the compressor.
In particle accelerators, beams have to be transported over hundreds of meters, and reshaped to have the dimension and divergence required by the experiments. Several concepts and conventions in beam physics are necessary to discuss how LWFA and PWFA experiments set requirements on beam parameters. These concepts are introduced in the following section.
Beam physics concepts and formalism
Electron particle beams in conventional accelerators are usually produced by a diode. Quantifying the beam quality is a matter of first importance for the different applications: a plasma-based collider requires high luminosity, and therefore beams with large numbers of particles and very small bunch sizes. As can be understood from this example, a low divergence and a low transverse beam size are the requirements for a good transverse beam quality. A figure of merit for this quality, the emittance, will be introduced in this section; then the Twiss parameters that describe the beam and its propagation along the beamline of a particle accelerator will be introduced, as well as the matrix formalism of beam physics. From these concepts, the equation of the beam envelope in a focusing field will be derived. The trace-space shape of the beam and its evolution will be commented on in further detail in the third paragraph, with a discussion regarding the sources of emittance growth that degrade the beam quality.
a. Emittance
We consider in the following a beam propagating in the $z$ direction, whose transverse dimensions are labeled $x$ and $y$. In the $x$ dimension, the angle of a particle is $x' = \frac{dx}{dz} \approx \frac{P_x}{P}$. From the particle distribution, we define the geometrical emittance:
$$\epsilon_x = \sqrt{\overline{x^2}\,\overline{x'^2} - \overline{xx'}^2} \quad (1.12)$$
where $\overline{x}$ indicates the mean of the quantity $x$ over all the particles in the beam. This quantity is defined for each axis of the transverse plane, $\epsilon_x$ and $\epsilon_y$. It is often called the Root Mean Square (RMS) emittance [Reiser 08].
The plane 𝑥 -𝑥′ is sometimes called the trace-space and is usually used in the beam physics community. The 𝑥 -𝑝 𝑥 plane is more common in classical and quantum mechanics and is called the phase-space.
Ideal beams are made of particles moving exactly in the same direction. No trajectory crossing can occur in these beams, which is why they are also called "laminar beams". In the $x - x'$ trace-space, for a given $x$, all particles have the same angle $x'$. Therefore, the RMS emittance is null, and the $x - x'$ profile is a line. Beams can be either converging (for example when they are being focused by a lens) or diverging (for example after they have passed their waist during a free drift).
The mean product $\overline{xx'}$ over all the particles in a bunch describes the correlation in trace-space between the parameters $x$ and $x'$. If all the transverse forces on the beamline are linear, there should not be any nonlinearity in the bunch representation in trace-space, and the emittance should be approximately equal to the area of the beam in trace-space $x - x'$. In the rest of the manuscript, the linear transverse force assumption will be made.
In the particular case of a beam focused by a quadrupole lens, the term $\overline{xx'}$ represents the inward or outward flow. This term is null at the waist of the beam. At the waist, the emittance can be written $\epsilon_x = \sigma_0\Theta$, where $\sigma_0 = \sqrt{\overline{x^2}}$ is the transverse RMS size and $\Theta = \sqrt{\overline{x'^2}}$ the RMS divergence.
However, from definition (1.12), $x'$ and $\epsilon_x$ will decrease when the beam energy increases in the accelerator. To compare the emittances of beams with different energies, we need to take into account the energy dependence and therefore define the normalized emittance:
$$\epsilon_{n,x} = \beta\gamma\epsilon_x$$
This definition will be implicitly used in the rest of the manuscript. While the geometrical emittance measures the area in trace-space, the normalized emittance measures the area in the normalized phase-space.
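As an illustration of definition (1.12) and of the normalized emittance, the following minimal sketch computes both quantities from a randomly sampled particle distribution; all beam parameters are assumed values chosen for the example.

```python
# Minimal sketch: RMS emittance (1.12) and normalized emittance from a
# sampled particle distribution (illustrative, uncorrelated Gaussian beam).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0.0, 30e-6, n)            # positions, RMS size 30 um
xp = rng.normal(0.0, 1e-3, n)            # angles x' = dx/dz, RMS 1 mrad

def rms_emittance(x, xp):
    """Geometrical emittance: sqrt(<x^2><x'^2> - <xx'>^2)."""
    x, xp = x - x.mean(), xp - xp.mean()
    return np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)

eps = rms_emittance(x, xp)               # ~3e-8 m.rad: sigma0 * Theta here
gamma = 1000.0                           # ~511 MeV electrons, beta ~ 1
print(eps, gamma * eps)                  # geometrical and normalized emittance
```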
Liouville's theorem ensures that the emittance is invariant under ideal accelerating conditions [Lejeune 80]. To improve the beam emittance, one can consider improving the generation process of the particle beam, in order to produce beams of lower initial emittance; one can also focus on mitigating all possible sources of emittance growth during the acceleration and transport of the beam; lastly, one can rely on cooling mechanisms such as damping rings.
The transport of a beam is a process of fundamental importance in particle accelerators and can be easily described using the formalism presented now.
b. Transfer matrices, transport
For a single particle, when the particle drifts in free space along the beamline of an accelerator facility or moves through a quadrupole magnet, its position in trace-space evolves.
The displacement between two 𝑧 positions labeled 1 and 2 in trace-space can be conveniently described by a transfer matrix 𝑀:
$$\begin{pmatrix} x \\ x' \end{pmatrix}_2 = M\begin{pmatrix} x \\ x' \end{pmatrix}_1$$
For a drift in free space, $M = \begin{pmatrix} 1 & L \\ 0 & 1 \end{pmatrix}$, where $L$ is the distance between the two positions.
For a quadrupole in the thin-lens approximation, i.e. if the quadrupole length is much smaller than the focal lengths:
$$M = \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_2} & \frac{f_1}{f_2} \end{pmatrix}$$
In general, $f_1$ and $f_2$, the distances to the principal planes of the quadrupole, are chosen identical.
The drift space transfer matrix and the thin-lens matrix both apply to describe the evolution of the beam envelope in phase-space. The general form of the transfer matrix without the thin-lens approximation is more complex, and it is not indispensable for the work reported in this thesis. Furthermore, thick lens matrices cannot be applied to the beam envelope coordinates, but rather to individual particle motion.
The matrix description is a convenient way to calculate the effect of quadrupoles on the beam. Quadrupoles are the equivalent of lenses in optics, that is why in beam physics it is common to call them lenses. They are often used to image the beam from one point of the beam line onto a screen. An example of such a system is given for the FACET energy spectrometer diagnostic in Part II.
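The following minimal sketch composes the two matrices given above into a simple drift-lens-drift line; the lengths and focal length are assumed values, chosen so that the lens images point-to-point with unit magnification.

```python
# Minimal sketch: transporting trace-space coordinates (x, x') through a
# drift - thin lens - drift line using the transfer matrices above.
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):
    # Thin-lens matrix with identical principal-plane distances (f1 = f2).
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f = 0.5                                        # focal length [m], assumed
M = drift(2 * f) @ thin_lens(f) @ drift(2 * f) # matrices compose right-to-left
x1 = np.array([1e-3, 0.5e-3])                  # initial (x [m], x' [rad])
print(M)                                       # M[0,1] = 0: imaging condition
print(M @ x1)                                  # position maps to -x: inverted image
```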
c. Twiss parameters and beam envelope equation
Along with the emittance, three parameters conveniently describe the propagation of the beam in the beamline, noted 𝛼, 𝛽 and 𝛾 and called the Twiss parameters. Their definitions provide a direct insight into their meanings:
$$\hat{\beta} = \frac{\langle x^2\rangle}{\epsilon} \quad (1.13)$$
$$\hat{\alpha} = -\frac{\langle xx'\rangle}{\epsilon} \quad (1.14)$$
$$\hat{\gamma} = \frac{\langle x'^2\rangle}{\epsilon} \quad (1.15)$$
There is the relation $\hat{\gamma}\hat{\beta} - \hat{\alpha}^2 = 1$ between them, by definition of the RMS emittance $\epsilon$. $\hat{\beta}$ expresses the RMS spatial width in the $x$ direction $R = \sqrt{\langle x^2\rangle}$, while $\hat{\gamma}$ expresses the RMS angle of the distribution of particles in the $x$ dimension $\theta = \sqrt{\langle x'^2\rangle}$. $\hat{\alpha}$ contains the correlations between the first two parameters. A Gaussian distribution in trace-space would write [Frederico 16]:
$$\rho(x,x') = \frac{1}{2\pi\epsilon}\, e^{-\frac{\hat{\gamma}x^2 + 2\hat{\alpha}xx' + \hat{\beta}x'^2}{2\epsilon}} \quad (1.16)$$
Such a distribution is plotted as an example in Fig. 1.7, where the $R$ and $\theta$ parameters appear. In this trace-space, the distribution of the particles in the beam will be an ellipse whose equation is $\hat{\gamma}x^2 + 2\hat{\alpha}xx' + \hat{\beta}x'^2 = \epsilon$. It is easy to understand that the RMS emittance is, by definition, the area of the ellipse drawn in the trace-space plane.
Twiss parameters are very convenient for beam physicists, as their evolution when the beam propagates along the beam line is described by differential equations simpler than the equations of the couple $(R, \theta)$. However, we will also derive the equation of evolution of the beam envelope $R$ from the equations over $\hat{\alpha}$ and $\hat{\beta}$. This equation will be useful to study the evolution of the beam in a plasma wakefield accelerator. The following calculation is accurate whenever individual particles in the beam face a linear focusing force: $x'' = -\kappa x$.
Starting from the definitions (1.13)-(1.15) and keeping in mind the relation $x'' = -\kappa x$ for individual particles, we have:
$$\hat{\alpha}' = -\frac{\langle x'^2 + xx''\rangle}{\epsilon} = -\hat{\gamma} + \kappa\hat{\beta}$$
$$\hat{\beta}' = 2\frac{\langle xx'\rangle}{\epsilon} = -2\hat{\alpha}$$
These first-order equations describe the evolution of $\hat{\alpha}$ and $\hat{\beta}$ along the beam line. One can reach the independent second-order equations over $\hat{\alpha}$ and $\hat{\beta}$:
$$\hat{\alpha}'' = 2\kappa\frac{\langle xx'\rangle}{\epsilon} - 2\hat{\alpha}\kappa = -4\kappa\hat{\alpha}$$
$$\hat{\beta}'' = -2\hat{\alpha}' = 2\hat{\gamma} - 2\kappa\hat{\beta} = 2\,\frac{1 + \hat{\beta}'^2/4}{\hat{\beta}} - 2\kappa\hat{\beta}$$
We now want to derive the envelope equation for $R$; we have to recall the definition of $\hat{\beta}$ in terms of the RMS extent of the beam in the $x$ dimension. As said earlier, $R^2 = \hat{\beta}\epsilon$ (as shown in Fig. 1.7 (b)), and $\epsilon$ is a constant during the evolution of the beam, therefore:
$$\hat{\beta} = \frac{R^2}{\epsilon}, \quad \hat{\beta}' = \frac{2RR'}{\epsilon}, \quad \hat{\beta}'' = \frac{2}{\epsilon}\left(RR'' + R'^2\right)$$
The second-order equation over $\hat{\beta}$ writes for $R$:
$$R'' + \kappa R - \frac{\epsilon^2}{R^3} = 0 \quad (1.17)$$
Equation (1.17) describes the evolution of the envelope of the beam, the RMS value of the beam size in $x$. In this equation, $R''$ is the evolution term of the envelope, $\kappa R$ is the linear focusing force, while $\frac{\epsilon^2}{R^3}$ is often called the "emittance force" [Humphries 02]. The "emittance force" acts as if the beam were being forced to spread transversally. This spreading is due to the beam emittance, and it opposes the external focusing force.
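Equation (1.17) has no general closed-form solution, but it is straightforward to integrate numerically. The minimal sketch below does so for assumed values of $\kappa$ and $\epsilon$, and illustrates the balance between the two terms: started away from the matched radius $(\epsilon^2/\kappa)^{1/4}$, the envelope oscillates around it.

```python
# Minimal sketch: numerical integration of the envelope equation (1.17),
# R'' + kappa*R - eps^2/R^3 = 0. kappa and eps are illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

kappa = 1.0e4                            # focusing strength [m^-2]
eps = 1.0e-6                             # geometrical emittance [m.rad]
R_matched = (eps**2 / kappa) ** 0.25     # focusing exactly balances emittance

def envelope(z, y):
    R, Rp = y
    return [Rp, -kappa * R + eps**2 / R**3]

sol = solve_ivp(envelope, (0.0, 0.5), [2 * R_matched, 0.0], max_step=1e-4)
print(R_matched, sol.y[0].min(), sol.y[0].max())  # R(z) oscillates around R_matched
```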
d. Evolution of the trace-space ellipse in free space
The evolution of the ellipse in trace-space illustrates the role of the Twiss parameters as well.
For a drift of length $L$ in free space, we have from the previous paragraphs the transfer matrix $M = \begin{pmatrix} 1 & L \\ 0 & 1 \end{pmatrix}$. Starting from a waist where $\hat{\beta} = \beta_0$ and $\hat{\alpha} = 0$, the beta function then evolves as $\hat{\beta}(z) = \beta_0 + z^2/\beta_0$: $\beta_0$ is the analogue of the Rayleigh length $z_R$ for a Gaussian laser beam.
The evolution of parameters (1.13) -(1.15) for a beam propagating in free space is as follows: for a converging beam, initially we have 𝛼 > 0, at focus 𝛼 = 0 and after focus when the beam diverges 𝛼 < 0. The beam emittance (area of the ellipse in trace-space) is constant. Before focus, a particle that has 𝑥 > 0 must also have 𝑥 ′ < 0 (the beam converges, Fig. 1.8 (a)). After focus, it is the opposite (Fig. 1.8 (c)). At focus (𝛼 = 0, Fig. 1.8 (b)) the beam reaches its minimal possible size in 𝑥: the ellipse is "upright".
e. Periodic focusing systems
In conventional accelerators, particle beams are transported along the beam lines over kilometers. A periodic set of lenses maintains the beam close to the axis by refocusing it regularly. In real space, each particle undergoes pseudo-harmonic oscillations with a wavelength $\lambda = \frac{2\pi}{\sqrt{\kappa}}$. In trace-space, the beam ellipse accomplishes complete rotations around the origin [Humphries 02]. Scientists define the phase advance per cell $\sigma$ as the fraction of the complete rotation of the beam ellipse in trace-space between two consecutive lenses. In Fig. 1.9 is plotted a particle trajectory in a periodic focusing system, along with the envelope evolution. We can say that the beam is correctly matched if the envelope oscillations are stable, such as in Fig. 1.9. A stability threshold can be found from the derivation of the parameters (1.13)-(1.15) [Humphries 02]. This matching condition is the requirement to be able to maintain the beam close to the axis of the beam line, while keeping a constant emittance over the full length of the accelerator.
f. Sources of emittance growth
Preserving the normalized emittance during acceleration and transport along the beam line is a major and well-known issue. Focusing components provide linear forces that do preserve the emittance. SLAC National Accelerator Laboratory provides for instance a beam with typical 𝑥 and 𝑦 normalized emittances of 100 𝑚𝑚. 𝑚𝑟𝑎𝑑 × 10 𝑚𝑚. 𝑚𝑟𝑎𝑑 at the experimental area, for a minimal beam size of 30 𝜇𝑚.
Several phenomena are sources of emittance growth, some of them, occurring in Plasma Wakefield Acceleration experiments are listed below with further details.
Nonlinear focusing forces:
Beams facing nonlinear focusing forces such as in the wakefield seen by large transverse size bunches in a plasma wakefield accelerator do not conserve their emittance. Such focusing fields distort the ellipsoidal shape of the beam in trace-space.
For perfectly harmonic forces, the trace-space beam ellipse rotates without any distortion.
In plasma or laser wakefield accelerator schemes, all accelerated particles can undergo Betatron oscillations [Rousse 04]. In the case of a highly nonlinear blowout regime, the focusing force due to the ion cavity is perfectly linear in $r$ (the distance of the electron undergoing Betatron motion to the axis). However, far from the axis the focusing force can become higher than the linear force close to the axis. Therefore, the trace-space phase of outer particles can evolve faster. This is directly responsible for a distortion of the ellipse in trace-space. Particles will spread in phase-space and form a uniform circular shape with a larger area than the initial ellipse.
This phenomenon also occurs in conventional accelerator beam lines, in which periodic focusing quadrupole doublets are used to transport the beam (when matching conditions are met, nonlinearity has the smallest effect, while it can be very strong when mismatched). The ellipse in trace-space rotates as the beam propagates in the line. If the focusing forces are nonlinear, the phase advance of outer particles will evolve faster as well. In that case, the trace-space beam ellipse will be distorted.
Linear forces dependent on the beam longitudinal coordinate
If the force depends on the longitudinal coordinate, emittance growth will occur. This phenomenon can happen for instance in the linear regime of PWFA or LWFA. If the plasma wave is sampled by a beam whose longitudinal size is of the same order as the variation length of the wakefield in the longitudinal direction, then the particles will rotate with different speeds in trace-space [Mehrling 12].
Chromaticity spread
Beam energy spread implies a variation of the emittance.

The first chapter was an introduction to particle accelerators and to the technologies underlying particle energy gain in these facilities. Examples of applications were given, and the necessary increase in acceleration gradient to push fundamental research further was illustrated. A solution to this issue could be plasma-based acceleration. Two sections of the first chapter were dedicated to some of the physics concepts useful in the manuscript: laser and particle beam physics. The next chapter is dedicated to the presentation of plasma physics results, results necessary to derive the theory behind plasma acceleration of particles.
Chapter 2
Introduction to plasma physics

Plasma-based acceleration techniques rely on several important concepts of plasma physics that will be presented here. After the introduction of the basic concepts and parameters, a simple model of electromagnetic wave propagation in plasmas is derived, and the fluid description of a plasma and its formalism are introduced.

1. Plasmas
Plasma is the most common state of matter in the visible universe. A gaseous medium is called a "plasma" if free charged particles are present in a proportion high enough for collective phenomena to take place in it. Charged particles in the gas interact with all others through the Coulomb force. Globally, any change in the charge distribution leads to an increase of the electrostatic energy. Such a state is therefore unstable, and particles are quickly redistributed to reach a lower energy state. As a consequence, matter tends to be globally quasineutral, with as many positive as negative charges everywhere. Disequilibrium only takes place on a very short time scale, whose duration will be discussed in this chapter.
In most phenomena studied in physics laboratories, almost all atoms are partially or fully ionized, each of them having lost at least one electron. This is the case in all the experiments described in this manuscript.
a. Electronic plasma frequency
The most important parameter that characterizes the displacements of electrons in a plasma is called the electronic plasma frequency. It appears naturally when one considers an electrostatic wave in a plasma. In this model, as in all of this manuscript, ions are assumed to be immobile on the time scale of electron motion.
To study collective oscillations of plasma electrons in 1-D, the common model relies on an initially homogeneous and neutral gas plasma with an electron density $n_0$. We now consider a planar sheath of electrons, initially at $x = x_0$, whose position is perturbed by a quantity $\xi(x_0, t)$. Such a perturbation is displayed in Fig. 2.1, where a sheath of electrons is displaced by the quantity $\xi$ in the dimension $x$. To maintain a fluid description for this model, we have to assume that during the oscillations, slices of electrons at different initial positions do not cross each other during their motion.
𝑛 0 is the plasma ion/electron density, before the perturbation occurs.
Poisson equation writes:
$$\Delta\phi = -\frac{\rho}{\epsilon_0} \quad (2.1)$$
In one dimension, $\Delta\phi = \frac{\partial^2\phi}{\partial x^2} = -\frac{\partial E}{\partial x}$, and integrating equation (2.1) between $-\infty$ and $x_0 + \xi$ writes:
$$\int_{-\infty}^{x_0+\xi} -\frac{\partial E}{\partial x}\,dx = \int_{-\infty}^{x_0+\xi} \frac{\left(n_e(x) - n_0(x)\right)e}{\epsilon_0}\,dx$$
At $-\infty$ the $E$ field is considered null. All the electrons between $x_0$ and $x_0 + \xi$ are displaced to positions $x > x_0 + \xi$ (to satisfy the hypothesis that slices of electrons at different initial positions do not cross). The previous integral becomes:
$$E(x_0 + \xi) = \frac{n_0 e}{\epsilon_0}\,\xi$$
The equation above, injected into Newton's second law of motion, becomes:
$$\ddot{\xi} = -\frac{e}{m}E = -\frac{e^2 n_0}{m\epsilon_0}\,\xi$$
This is the equation of a harmonic oscillator, whose frequency writes:
$$\omega_p = \left(\frac{e^2 n_0}{m\epsilon_0}\right)^{1/2} \quad (2.2)$$
Parameter (2.2) describes the collective oscillations of electrons in a plasma after they have faced a small perturbation around their equilibrium positions. It is the equation associated to an electrostatic wave in a cold plasma, whose dispersion relation writes therefore:
$$\omega = \omega_p \quad (2.3)$$
This parameter sets the time scale of relaxation of internal electrostatic perturbations: $\omega_p^{-1}$.
Parameter (2.2) has a major role in plasma physics [Rax 05]. The simple 1-D model and the definition given of 𝜔 𝑝 provide enough details to introduce the models of LWFA and PWFA. We call parameter (2.2) the plasma frequency.
As can be seen in the definition of the plasma frequency, $\omega_p^{-1} \sim n_0^{-1/2}$. This indicates that the denser a plasma is, the faster the electrostatic interaction pulls the electrons back to their initial equilibrium.
b. Debye Length
Another important parameter to describe collective phenomena in plasma physics is the Debye length. This parameter is defined as the product of the thermal speed of plasma electrons and the typical time scale of electron oscillations in a neutral plasma, $\omega_p^{-1}$:
$$\lambda_D = \frac{v_{Te}}{\omega_p} = \left(\frac{k_B T_e}{m}\right)^{1/2}\frac{1}{\omega_p} = \left(\frac{\epsilon_0 k_B T_e}{n_0 e^2}\right)^{1/2}$$
This parameter is also the typical damping length of electrostatic phenomena in plasmas [Debye 23]. When the electrostatic equilibrium is broken, after a typical distance of $\lambda_D$ the effects of the electrostatic perturbation are strongly damped. As can be seen in the definition of the Debye length, $\lambda_D \sim n_0^{-1/2}$: the denser a plasma is, the more strongly an electrostatic perturbation is screened.
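The two parameters introduced in this section are easy to evaluate; the sketch below does so for an assumed plasma of density $10^{18}\,cm^{-3}$ and electron temperature 10 eV (illustrative values, not taken from the experiments).

```python
# Minimal sketch: plasma frequency (2.2) and Debye length for assumed
# values n0 = 1e18 cm^-3 and Te = 10 eV.
import math

e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12   # SI constants
n0 = 1e18 * 1e6                                  # electron density [m^-3]
kB_Te = 10.0 * e                                 # 10 eV in joules

omega_p = math.sqrt(e**2 * n0 / (m_e * eps0))    # ~5.6e13 rad/s
v_Te = math.sqrt(kB_Te / m_e)                    # electron thermal speed
lambda_D = v_Te / omega_p                        # Debye length, ~24 nm
print(omega_p, 1.0 / omega_p, lambda_D)          # relaxation time ~18 fs
```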
After this brief introduction, it is necessary to describe the physical phenomenon underlying the production of laboratory plasmas used in LWFA and PWFA experiments.
Ionization
To set up both LWFA and PWFA experiments, it is necessary to produce plasmas in a repeatable way. For PWFA it is crucial to produce a long, stable and homogeneous plasma. The use of intense and very short laser pulses is a reliable solution. Plasma production from a laser pulse occurs after a total ionization of the gas on the laser path. Several phenomena explain the ionization of a gas atom when it is exposed to a laser field; they are detailed below.
a. Low-Field Ionization: the photoelectric effect
When the intensity of the laser field is small, $E_L < 10^{11}\,V/m$, the regime of ionization is called Low-Field Ionization. Examining the Bohr model gives an insight into the ionization process. In a low-intensity laser field, if the laser photons have an energy high enough to hit gas atoms and pull their valence electrons out, ionization occurs. To be specific, this process occurs when the photon energy is greater than the ionization potential $U_I$ of the atom, as depicted in Fig. 2.2 (b). The final energy of the pulled-out electron is given by the well-known Einstein formula for the photoelectric effect:
$$E_f = \hbar\omega - U_I$$
The ionization rate depends on the cross section of the atoms and on the flux of photons. This technique is not used to create laboratory plasmas: it is hard to produce photons of energy high enough (typically a few eV) to reach the Low-Field Ionization regime. On the contrary, very high power pulses can be achieved with photons of more moderate energy.
For very high fields, and photons of lower energy, several more complex physical mechanisms lead to ionization of a gas when it is hit by a femtosecond laser pulse: Multi-Photon Ionization, Barrier Suppression Ionization and Field Ionization.
b. Multi-Photon Ionization
This phenomenon is similar to the simple photoelectric effect described earlier in the low-field regime, where a photon directly pulls out an electron from the atom. In the case of Multi-Photon Ionization, several photons simultaneously contribute to ionize an atom, as seen in Fig. 2.2 (c). In a gas, Multi-Photon Ionization is dominant at intensities of $10^{11}$ to $10^{13}\,W/cm^2$ [Agostini 68]. One of the assumptions made in the Multi-Photon Ionization model is that the laser field does not modify the atomic potential the electron sees, which is not the case in some of the processes described below. Einstein formula can be adapted to the case of Multi-Photon Ionization, and becomes:
$$E_f = n\hbar\omega - U_I$$
c. Tunnel Ionization and Barrier Suppression Ionization
These two processes of ionization are two regimes of the same phenomenon. The model comes from the intuitive idea that when the electric field of the laser is high enough, it suppresses the electric field generated by the nucleus of the atom that valence electrons see.
[Gibbon 05] provides an intuitive derivation of the laser field needed to totally ionize a gas by suppressing the barrier potential.
The model for Barrier Suppression Ionization is obtained from a Coulomb potential by adding the laser field to the potential [Bethe 57], where $Z$ is the atomic number of the gas molecules:
$$V(r) = -\frac{1}{4\pi\epsilon_0}\frac{Ze^2}{r} - eEr$$
The barrier potential is lowered on the right side of the atom, as shown in Fig. 2.3 (b). When the total energy of the photons is $U_I$, the process is called Multi-Photon Ionization. In the specific case where the total energy is higher than $U_I$, the process is called Above-Threshold Ionization, case (c).
If the maximum value of the potential on the right side in Fig. 2.3 (b) is lowered below 𝑈 𝐼 , the regime is called Barrier Suppression Ionization and ionization occurs directly. From [Faure 07], the laser intensity required for Barrier Suppression Ionization to occur is:
$$I_{sb}\,[W\,cm^{-2}] = 4\times 10^9\,\frac{E_i^4}{Z^2}$$
where $E_i$ is expressed in eV. In the case of hydrogen ionization, it is $I_{sb} = 1.4\times 10^{14}\,W\,cm^{-2}$. However, Tunnel Ionization can lower this threshold and ensure that ionization occurs below this limit. In fact, the theory of quantum mechanics shows that there is a probability for the electron to tunnel through the barrier potential although the minimum of $V(r)$ is still larger than $U_I$.
The transition from Multi-Photon Ionization to Tunnel Ionization - Barrier Suppression Ionization is smooth. Tunnel and Multi-Photon Ionization were first studied by Keldysh, who defined the so-called Keldysh parameter [Keldysh 65]:
$$\gamma = \sqrt{\frac{U_I}{2U_p}}$$
where $U_I$ is the ionization potential, as seen above, and $U_p$ is the ponderomotive potential, the energy associated with the quiver motion of an electron in a laser field. The Keldysh parameter distinguishes the Tunnel and Multi-Photon regimes: when $\gamma \ll 1$, Tunnel Ionization is dominant, while Multi-Photon Ionization is significant for $\gamma \gg 1$.
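As a numerical illustration of these two regimes, the sketch below evaluates the Barrier Suppression Ionization threshold quoted above for hydrogen, and the Keldysh parameter using the standard expression of the ponderomotive potential, $U_p[eV] \simeq 9.33\times 10^{-14}\, I[W/cm^2]\,\lambda[\mu m]^2$; this expression and the 0.8 µm wavelength are assumptions of the sketch.

```python
# Minimal sketch: BSI threshold for hydrogen and Keldysh parameter at an
# assumed wavelength of 0.8 um.
import math

U_I, Z = 13.6, 1                        # hydrogen ionization potential [eV]
I_sb = 4e9 * U_I**4 / Z**2              # ~1.4e14 W/cm^2, as quoted above

def keldysh(I_W_cm2, lam_um=0.8):
    # Standard ponderomotive potential expression (assumption of this sketch).
    U_p = 9.33e-14 * I_W_cm2 * lam_um**2
    return math.sqrt(U_I / (2.0 * U_p))

print(I_sb)                             # ~1.4e14 W/cm^2
print(keldysh(1e13))                    # ~3.4: multi-photon regime dominates
print(keldysh(1e15))                    # ~0.34: tunnel ionization dominates
```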
This paragraph introduced the different phenomena that lead to laser-produced plasmas. It is now important to present the formalism in use to describe plasmas in most PWFA and LWFA experiments.
Fluid description of a plasma
A full description of a plasma has to take into account the position and speed of each of the N particles composing the plasma. Such a description is called a kinetic model and involves an immense number of parameters, as the systems studied in laboratories are macroscopic. A set of equations describes the behaviour of particles in the plasma: each particle evolution follows Newton's second law of motion, and Maxwell equations describe the evolution of the E and B fields in the system.
Solving a problem using a kinetic model implies using the tools of statistical mechanics, as too many parameters are necessary to describe the particles individually. A kinetic model for a plasma physics problem relies on the distribution function $f_j(\mathbf{r}, \mathbf{v}, t)$, which represents the mean number of particles $j$ at time $t$ in a unit volume of the phase-space at position $(\mathbf{r}, \mathbf{v})$.
The mean values of $\mathbf{r}$ and $\mathbf{v}$ are obtained by averaging over many evolutions of the system. The particles $j$ can be electrons or ions; in this section we keep the label $j$, but only electrons will be considered in the following chapters.
In the context of laser and plasma wakefield experiments, we will neglect the collisions between particles [Mora 13]. Under that hypothesis, a study of the mean number of particles in an arbitrary volume leads to the continuity equation [Mora 13]:
$$\frac{\partial f_j}{\partial t} + \nabla\cdot\left(f_j\mathbf{V}\right) = 0 \quad (2.4)$$
where $\mathbf{V}$ is a 6-element vector of the phase-space, whose first 3 elements correspond to the usual velocity and whose last 3 elements correspond to the acceleration. When one separates the position and speed components of equation (2.4), one reaches the Vlasov equation:
$$\frac{\partial f_j}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{r}} f_j + \frac{q_j}{m_j}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f_j = 0 \quad (2.5)$$
Along with Maxwell equations and the definitions $\rho = \sum_j q_j \int f_j(\mathbf{v})\,d\mathbf{v}$ and $\mathbf{j} = \sum_j q_j \int \mathbf{v}\,f_j(\mathbf{v})\,d\mathbf{v}$ (where $j$ distinguishes the classes of particles, electrons or ions), this system of equations forms the Vlasov-Maxwell system. To reach this system, one has to neglect the differences between the mean fields (the $\mathbf{E}$ and $\mathbf{B}$ fields in the equations) and the fields in the plasma for each realisation of the evolution of the system.

The Maxwell-Vlasov system of equations is written using the distribution function. To go further in the study of the behaviour of a plasma, physicists often have to consider mean values of the parameters of the system. We will therefore integrate the previous system to make the mean values and moments of the distribution function appear. Such a model is called a fluid description of a plasma. Each fluid quantity is the integral over velocity of the distribution function multiplied by the corresponding microscopic quantity. Examples are given below.
The particle density is defined as:
$$n_j(\mathbf{r}, t) = \int f_j(\mathbf{r}, \mathbf{v}, t)\,d\mathbf{v} \quad (2.6)$$
The fluid velocity is defined as:
$$\mathbf{v}_j(\mathbf{r}, t) = \frac{1}{n_j(\mathbf{r}, t)}\int \mathbf{v}\,f_j(\mathbf{r}, \mathbf{v}, t)\,d\mathbf{v} \quad (2.7)$$
The previous paragraph introduced the main concepts and the formalism in use in plasma physics. In the following chapters, the physics of electromagnetic waves in a plasma will be of prime importance in the models. We therefore derive in this section a fundamental concept regarding the propagation of electromagnetic waves in a plasma: the critical density.
Electromagnetic waves in plasmas
To study how electromagnetic waves propagate in plasmas, one has to start from equation (1.11). For a monochromatic wave of frequency 𝜔:
$$\nabla\left(\nabla\cdot\mathbf{E}\right) - \Delta\mathbf{E} - \frac{\omega^2}{c^2}\epsilon_r(\omega)\mathbf{E} = \mathbf{0}$$
We recall the relation between $\mathbf{E}$ and $\mathbf{j}$: $\mathbf{j}(\mathbf{r}) = i\omega\epsilon_0\chi(\omega)\mathbf{E}(\mathbf{r})$, with $\epsilon_r(\omega) = 1 + \chi(\omega)$.
We obtain the current density from the linearized, non-relativistic equation of motion of the plasma electrons, $m\,d\mathbf{v}/dt = -e\mathbf{E}$, which for a time dependence $e^{i\omega t}$ gives $\mathbf{j} = -en_0\mathbf{v} = \frac{e^2 n_0}{i\omega m}\mathbf{E}(\mathbf{r})$. The definition of $\chi$ along with this last equation writes:
$$i\omega\epsilon_0\chi(\omega)\mathbf{E}(\mathbf{r}) = \frac{e^2 n_0\,\mathbf{E}(\mathbf{r})}{i\omega m}$$
And we obtain for 𝜖 𝑟 (𝜔):
$$\epsilon_r(\omega) = 1 - \frac{\omega_p^2}{\omega^2}$$
Purely electromagnetic waves verify ∇ • 𝑬 = 0, so that equation (1.11) becomes:
$$\Delta\mathbf{E} + \frac{\omega^2}{c^2}\epsilon_r(\omega)\mathbf{E} = \mathbf{0} \quad (2.12)$$
Inserting the plane wave expression $\mathbf{E} = \mathbf{E}_0\,e^{i(\omega t - \mathbf{k}\cdot\mathbf{r})}$ into equation (2.12) leads to the dispersion relation for electromagnetic waves in a cold, collisionless plasma:
$$\omega^2 = \omega_p^2 + k^2c^2 \quad (2.13)$$
Equation (2.13) indicates in particular that the wave vector $k$, which appears in $\mathbf{E} = \mathbf{E}_0\exp(i(\omega t - \mathbf{k}\cdot\mathbf{r}))$, has an imaginary part if $\omega < \omega_p$. In that case, the $\mathbf{E}$ field is exponentially damped and therefore the electromagnetic wave cannot propagate in the plasma: it is an evanescent wave. In Fig. 2.4 is plotted the dispersion relation in the $(k, \omega)$ plane.
The discussion above indicates that 𝜔 𝑝 is the limit between two regimes:
• If $\omega_p > \omega$, the electromagnetic wave cannot propagate: the plasma is said to be overdense.
• If $\omega_p < \omega$, the electromagnetic wave can propagate: the plasma is underdense.

In this thesis, we only consider the second kind of plasmas and electromagnetic fields. In terms of density, for a fixed electromagnetic wavelength, one has $\omega_p < \omega \Leftrightarrow n_e < n_c = \frac{\omega^2 m\epsilon_0}{e^2}$. When the electron density is lower than the critical density for the frequency of the electromagnetic wave, the wave can propagate. This concludes the preliminary results necessary to derive the theory of wakefield excitation by a drive beam in a plasma. The linear theory will be presented in detail in the following chapter, starting from the conventions and concepts introduced so far.
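As a closing numerical illustration of this condition, the minimal sketch below evaluates the critical density for an assumed 800 nm laser and checks the underdense condition for a typical gas-jet density; both values are illustrative.

```python
# Minimal sketch: critical density n_c = omega^2 m eps0 / e^2 and the
# underdense check for an assumed 800 nm laser.
import math

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
omega = 2 * math.pi * c / 0.8e-6            # laser angular frequency [rad/s]
n_c = omega**2 * m_e * eps0 / e**2          # ~1.7e21 cm^-3 in SI [m^-3]

n_e = 1e18 * 1e6                            # plasma density, 1e18 cm^-3
print(n_c / 1e6, "cm^-3")
print("underdense:", n_e < n_c)             # True: the wave propagates
```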
Plasma-based accelerators
The plasma-based acceleration schemes LWFA and PWFA can be understood as the combination of three physical phenomena. First is the excitation of a plasma wave, which is the accelerating structure of plasma-based accelerators. Second is the injection of particles in this accelerating structure. Injected particles can come from the drive particle bunch itself (in PWFA), from electrons of the plasma, or from an externally injected "trailing bunch". And third is the acceleration of the injected particles by the accelerating structure. To accurately describe the excitation of plasma waves along the full length of the plasma accelerator, it is also necessary to know how drive beams (laser pulse in LWFA, particle beam in PWFA) propagate and evolve while moving through a plasma. We will see as well that depending on the parameters of the drivers, plasma waves can be excited in two different regimes, a linear one which has an analytical solution, and a nonlinear "blow-out" regime.
Laser pulse propagation in a plasma
Several processes occur to a laser pulse propagating in a plasma. First is presented a relativistic correction to equation (2.12), significant for very intense drivers. Second is a description of self-focusing and guiding of laser pulses, a phenomenon responsible for the efficient propagation and driving of plasma waves by high power laser pulses. Last, a short paragraph discusses the phase velocity of the plasma wave driven by a laser pulse propagating in a plasma, which will be important in the following.
Relativistic transparency
When a relativistic electromagnetic wave propagates in a plasma, the response of the medium depends on the polarization of the wave. Only in the case of a circularly polarized beam does a simple solution exist. We briefly derive in this section the dispersion relation of a relativistic electromagnetic plane wave with circular polarization [Mora 13].
Using normalized quantities, in particular 𝒂 = 𝑒𝑨/𝑚𝑐 and 𝒖 = 𝒑/𝑚𝑐, and using Maxwell-Ampere equation, the Coulomb gauge 𝜵. 𝒂 = 0 and the relativistic equation of motion for an electron fluid element, we have:
$$\Delta\mathbf{a} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\mathbf{a} = -e\mu_0\,\mathbf{j}_\perp/mc \quad (3.1)$$
$$\mathbf{u}_\perp = \mathbf{a} \quad (3.2)$$
$$\mathbf{j}_\perp = -en_0\mathbf{u}_\perp c/\gamma \quad (3.3)$$
Inserting (3.3) into (3.1) leads to:
$$\left(\frac{\partial^2}{\partial z^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{a} = \frac{\omega_p^2}{\gamma c^2}\,\mathbf{a} \quad (3.4)$$
Inserting the expression of a plane wave 𝒂(𝒓, 𝑡) = 𝒂 𝟎 𝑒 𝑖(𝜔𝑡-𝑘𝑟) into (3.4), we obtain:
$$\omega^2 = \frac{\omega_p^2}{\gamma} + k^2c^2 \quad (3.5)$$
Under the condition 𝑛 0 < 𝛾𝑛 𝑐 , the wave vector 𝑘 is real, which means that the wave can propagate in the plasma. To conclude, in the relativistic regime, the wave can propagate through plasmas of higher densities than the critical density (2.13).
Self-focusing
Due to the perturbation of the refractive index they induce, laser pulses can self-focus when they propagate into plasmas [Sprangle 92]. In fact, when the refractive index has a maximum on-axis, the phase front of a laser propagating along the axis bends and the beam self-focuses [Esarey 97]. For intense laser pulses, relativistic effects must be taken into account [Litvak 69, Sprangle 87].
The refractive index can be deduced from 𝜂 = 𝑐/𝑣 𝜙 . The phase velocity 𝑣 𝜙 = 𝜔/𝑘 is obtained from equation (3.5). Linearizing 𝑣 𝜙 and expressing the local plasma frequency (that takes into account the local plasma electron density 𝑛 𝑝 = 𝑛 0 + 𝛿𝑛 instead of 𝑛 0 ) as a function of 𝜔 𝑝 2 = 𝑛 0 𝑒 2 /𝑚𝜖 0 leads to [Esarey 96b]:
$$\eta(\omega) = 1 - \frac{\omega_p^2}{2\omega^2}\left(1 - \frac{a^2}{2} + \frac{\delta n}{n_0}\right) \quad (3.6)$$
If the laser intensity is peaked on axis, equation (3.6) satisfies the requirements for self-focusing. In addition, in that case the paraxial equation of the laser in the plasma takes the form of an equation with a third-order nonlinearity, as in nonlinear optics. Therefore, self-focusing will occur when the power of the laser exceeds a critical power $P_c$ [Sprangle 87]. The theory of relativistic optical guiding provides the result:
$$P_c\,[GW] = 17.4\,(\omega/\omega_p)^2$$
When $P < P_c$, beam diffraction dominates the behavior of the pulse: the laser quickly defocuses. On the contrary, when $P > P_c$, a focusing effect occurs and guided propagation is not possible either. Last, when $P = P_c$ optical guiding occurs: in Fig. 3.1 (a), curve (iii), the laser beam radius has an initial "plateau", compared to the case $P < P_c$ (ii). Curve (iv) shows that a preformed plasma channel is extremely interesting, as the channel guides the laser over many Rayleigh lengths.
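The critical power is readily evaluated; the sketch below does so for an assumed 800 nm pulse at two plasma densities, showing that denser plasmas make relativistic self-focusing easier to reach.

```python
# Minimal sketch: critical power for relativistic self-focusing,
# P_c[GW] = 17.4 (omega/omega_p)^2, for an assumed 800 nm laser.
import math

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
omega = 2 * math.pi * c / 0.8e-6

def P_c_GW(n0_cm3):
    omega_p = math.sqrt(e**2 * (n0_cm3 * 1e6) / (m_e * eps0))
    return 17.4 * (omega / omega_p)**2

print(P_c_GW(1e18))   # ~30 GW: exceeded by a ~1 J, 30 fs pulse
print(P_c_GW(1e17))   # ~300 GW: ten times higher at this lower density
```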
Phase velocity of the plasma wave
Knowing the phase velocity of a laser-driven plasma wave is of prime importance to discuss how particles can be "trapped" inside the wave and increase their energy by staying in the accelerating 𝐸 𝑧 field. The dispersion relation (2.13) gives access to the group velocity of the laser, and thus to the phase velocity of the plasma wave in the linear regime.
b. Electron beam propagation in a gas plasma
When an electron beam propagates in a plasma, several physical phenomena perturb its evolution. Although these processes are far different from those occurring to a laser pulse, the experimental observables are quite similar [Joshi 02]. An electron beam entering a plasma expels plasma electrons close to its propagation axis. In turn, the induced charge density in the plasma perturbs the balance between the self-magnetic and self-electric forces of the beam. If $n_b < n_0$, the beam is self-pinched by the self-magnetic force; the focusing is highly nonlinear [Geraci 00, Humphries 02]. On the contrary, if $n_b > n_0$ (also called the underdense condition for an electron beam), the beam expels the electrons in its wake, creating an ion cavity. This cavity is responsible for beam focusing; the focusing force can be linear in this regime and its effect can be described in more detail. Some of the processes are listed below.
Beam focusing
The first effect of the plasma on the beam is a strong focusing due to the cavity transverse force introduced above. Before any description of the exact longitudinal structure of the ion cavity, if plasma electrons are fully expelled, it can be noticed that the transverse force will be linear in r. As said in chapter 2, a linear focusing force does not create any aberration, and does not induce emittance growth.
Betatron oscillations of the beam envelope
In the case evoked above, when a strong focusing happens with a linear force, the beam can undergo several oscillations, called Betatron oscillations of the beam envelope [Clayton 02]. This process is due to the successive effects of the emittance term and of the linear focusing term that appear in (1.17). When the particles oscillate in the plasma, equation (1.17) writes:
$$\sigma_r''(z) + \left[\frac{\omega_p^2}{2\gamma c^2} - \frac{\epsilon_n^2}{\gamma^2\sigma_r^4(z)}\right]\sigma_r(z) = 0 \quad (3.7)$$
where $\sigma_r$ is the transverse radius of the beam and $k_\beta = \sqrt{\frac{\omega_p^2}{2\gamma c^2}}$ is the Betatron wavenumber. The beam is said to be matched in the plasma when $\beta = 1/k_\beta = \lambda_\beta/2\pi$ and $\alpha = 0$. If this condition is fulfilled, the beam propagates along the whole plasma without any evolution of its radius, because the focusing force is exactly balanced by the emittance term. The corresponding matched beam radius is:
$$\sigma_r = \left(\frac{\epsilon_n}{\gamma k_\beta}\right)^{1/2} \quad (3.8)$$
The spatial period of the envelope oscillations is 𝜆 𝛽 /2 . The beams used in PWFA experiments do not always have a circular symmetry in the transverse plane, therefore their sizes and emittances can differ in transverse directions. Fig. 3.1 (b) illustrates these oscillations.
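The matching condition (3.8) can be evaluated numerically; the sketch below uses illustrative PWFA-like parameters (a 20 GeV beam, a $10^{17}\,cm^{-3}$ plasma and an assumed normalized emittance), which are not the parameters of the experiments reported here.

```python
# Minimal sketch: Betatron wavenumber and matched beam radius (3.8) for
# assumed parameters: 20 GeV electrons, n0 = 1e17 cm^-3, eps_n = 50 mm.mrad.
import math

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
n0 = 1e17 * 1e6                                  # plasma density [m^-3]
gamma = 20e9 / 0.511e6                           # beam Lorentz factor
eps_n = 50e-6                                    # normalized emittance [m.rad]

omega_p = math.sqrt(e**2 * n0 / (m_e * eps0))
k_beta = omega_p / (c * math.sqrt(2.0 * gamma))  # Betatron wavenumber [1/m]
sigma_r = math.sqrt(eps_n / (gamma * k_beta))    # matched radius, ~2.5 um
print(k_beta, 2 * math.pi / k_beta, sigma_r)     # lambda_beta ~ 3 cm
```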
Electron hosing instability
Going further into the details, we have to discuss the electron hosing instability. For long beams, there is a coupling between the beam electrons and the electrons at the border of the ion cavity [Whittum 91]. This coupling leads to growing transverse perturbations of the beam, a growth that can lead to a transverse breaking of the bunch. The beam centroids of the bunch longitudinal slices undergo harmonic oscillations due to this coupling. The equations describing the coupling can be solved numerically and describe oscillations whose amplitudes grow quickly.

Some of the phenomena occurring to a bunch of electrons propagating in a plasma were described. A comprehensive description would also include other phenomena such as the individual Betatron oscillations and radiation emission of particles, or the collective refraction of the beam if it encounters a plasma boundary [Muggli 01, Joshi 03]. Individual Betatron oscillations will be discussed in further detail in Part III, as they were an important concept for the hybrid experiment carried out at LOA and reported in this thesis. The description of these other phenomena is not needed to understand the experiments of Parts II and III.
Regarding positron bunches, the problem of their propagation is more complex and no theoretical model explains the evolution of a positron bunch in a plasma. However, some phenomena have already been identified and described [Hogan 03] and will be introduced in Part II of this manuscript.
After the introduction of the phenomena occurring to some drivers when they propagate in a plasma, we can now describe the main concept of this chapter: the driving of plasma waves.
Solution of plasma waves in the linear regime

a. Plasma wave excitation
In this section, we wish to derive first a general equation for laser or beam excitation of charge density waves in a plasma. This will be a general introduction to the two schemes studied in the work reported here. The derivation uses the fluid approximation for electrons and assumes immobile ions and a cold plasma. The following notations will be used throughout the manuscript: $n_0$ is the unperturbed plasma density, $n_p = n_0 + \delta n$ is the perturbed plasma electron density, $n_b$ is the drive particle beam density, $q$ its particle charge (for the PWFA case), and $\mathbf{v}$ the velocity of the plasma electron fluid element. The definitions for the electromagnetic potentials read:
$$\mathbf{E} = -\nabla V - \frac{\partial\mathbf{A}}{\partial t} \quad (3.9)$$
$$\mathbf{B} = \nabla\times\mathbf{A} \quad (3.10)$$
Coulomb gauge is a convenient gauge choice for plasma- and laser-related problems, which we will use in the following: $\nabla\cdot\mathbf{A} = 0$. Furthermore, we will introduce the normalized plasma wave potential $\phi = eV/mc^2$. When a beam propagates in a plasma, Poisson equation writes:
$$\nabla^2\phi = -k_p^2\left(\frac{q}{e}\frac{n_b}{n_0} - \frac{\delta n}{n_0}\right) \quad (3.11)$$
The first term on the right-hand side is the source term due to the particle drive beam. The linearity of equation (3.11) allows us to define the potential associated with the beam particles only, $\phi_b$: when the beam propagates in free space, Poisson equation is $\nabla^2\phi_b = -\frac{k_p^2}{n_0}\frac{q}{e}n_b$. The remaining term on the right-hand side of (3.11) describes the charge density arising from the ion background and the perturbed electron density of the plasma.
We now need to express the motion of an electron fluid element of the plasma. Defining the normalized momentum $\mathbf{u} = \mathbf{p}/mc$, the equation of motion of a plasma electron fluid element writes:
$$\frac{\partial\mathbf{u}}{\partial t} = \frac{\partial\mathbf{a}}{\partial t} + c\nabla(\phi - \gamma) \quad (3.12)$$
The term $c\nabla\phi$ is the electric force due to the charge density of the plasma and of the drive particle beam (PWFA case), and the term $-c\nabla\gamma$ is the relativistic laser ponderomotive force (LWFA case), which pushes the plasma electrons towards the regions of lower $\gamma$. The next derivations are performed by linearizing these two terms. $a = |\mathbf{a}|$ and $n_b/n_0$ are supposed to be small compared to 1, and the induced perturbation of the electron density $\delta n = n_p - n_0$ is assumed to be small compared to $n_0$. In particular, the LWFA source term can be rewritten $c\nabla\gamma = c\nabla(a^2/2)$. Under these assumptions, the regime of excitation of plasma waves will be called the linear regime.
Linearizing equation (2.9) and taking the time derivative leads to:
$$\frac{\partial^2\delta n}{\partial t^2} + n_0 c\,\nabla\cdot\left(\frac{\partial\mathbf{a}}{\partial t} + c\nabla\left(\phi - \frac{a^2}{2}\right)\right) = 0$$
Coulomb gauge ensures $\nabla\cdot\mathbf{a} = 0$; therefore, replacing $\phi$ using Poisson equation (3.11), one gets:
$$\left(\frac{\partial^2}{\partial t^2} + \omega_p^2\right)\frac{\delta n}{n_0} = \omega_p^2\frac{q}{e}\frac{n_b}{n_0} + c^2\nabla^2\frac{a^2}{2} \quad (3.13)$$
This equation describes the excitation of a plasma wave by a particle beam (first term of the right-hand side) or a laser (second term of the right-hand side), and shows the similarity of the laser-driven (LWFA) and beam-driven (PWFA) schemes in the linear regime.
To help readers imagine how plasma waves can be excited in a real experiment, details of a LWFA experiment are depicted in Fig. 3.2. The laser pulse (orange), of typical transverse dimension 20 µm, drives a plasma wave in its wake, as seen in the figure.
b. Beam driven plasma density waves
Equation (3.13) relates $\delta n/n_0$ to the source terms for both particle and laser drivers. As we shall see in the next section for laser-driven waves, equation (3.13) leads directly to an equation over $\phi$ that can be solved for a Gaussian laser driver, and thus the $\mathbf{E}$ and $\mathbf{B}$ fields can be derived directly in that case. For a particle driver, deriving the electromagnetic fields is more difficult and requires writing and solving directly the equations over $\mathbf{E}$ and $\mathbf{B}$. This is done in this section, by following the calculation first published by Keinigs and Jones [Keinigs 86].
The outline of the derivation is the following: we first write the equations describing the evolution of the 𝑬 and 𝑩 fields; then we switch to Fourier space for the longitudinal variable. The Fourier transformed equations can then be solved for a point-like source, which leads to the corresponding Green function solution. Coming back to real space with an inverse Fourier transform, a convolution of the point-like solution with the real drive beam profile leads to the general solution. This strategy is applied to the E_z field, to the transverse force W = E_r - cB_θ and to the plasma electron density n_p.
In this section, 𝜌 is the charge density of the problem: 𝜌 = 𝑒𝑛 0 -𝑒𝑛 𝑝 + 𝑞𝑛 𝑏 = -𝑒𝛿𝑛 + 𝑞𝑛 𝑏 , the velocity of the beam is 𝑣 𝑏 and there is no laser.
Applying the operator ∇ to Poisson's equation, combining the Maxwell–Faraday and Maxwell–Ampère equations, and using the linearized cold plasma current response,

∂𝒋_p/∂t = ε_0 ω_p² 𝑬 (3.17)

where the total current is 𝒋 = 𝒋_p + 𝒋_b, the equation for 𝑬 can be rewritten:
(𝛥 - 𝜔 𝑝 2 𝑐 2 - 1 𝑐 2 𝜕 2 𝜕𝑡 2 ) 𝑬 = 𝜇 0 𝜕𝒋 𝒃 𝜕𝑡 + 𝜵𝜌 𝜖 0 (3.18)
The curl of equation (3.17) along with Maxwell-Faraday equation leads to the equation for 𝑩:
(𝛥 - 𝜔 𝑝 2 𝑐 2 - 1 𝑐 2 𝜕 2 𝜕𝑡 2 ) 𝑩 = -𝜇 0 𝛁 × 𝒋 𝒃 (3.19)
These two equations describe the evolution of the fields in the plasma; the source terms appear on the right-hand side. In contrast with the electric field 𝑬, the 𝑩 field evolution depends only on the drive beam current. This means that in the linear regime, the response of the plasma is purely electrostatic.
An important approximation is needed at this point: we now assume that the fields in the wake depend only on the variable ξ = v_b t - z. This assumption is called the "quasi-static approximation" and means that the drive beam envelope evolves much more slowly than the plasma electrons move. The change of variables from (t, z) to ξ = v_b t - z leads to the following equations:
(𝛥 ⊥ - 𝜔 𝑝 2 𝑐 2 + 1 𝛾 𝑏 2 𝜕 2 𝜕𝜉 2 ) 𝑬 = 𝜇 0 𝜕𝒋 𝒃 𝜕𝑡 + 𝜵𝜌 𝜖 0 (3.20) (𝛥 ⊥ - 𝜔 𝑝 2 𝑐 2 + 1 𝛾 𝑏 2 𝜕 2 𝜕𝜉 2 ) 𝑩 = -𝜇 0 𝛁 × 𝒋 𝒃 (3.21)
With the assumption of cylindrical symmetry, the right-hand side of equation (3.20) can be written:
𝜇 0 𝜕(𝒋 𝒃 ) 𝜕𝑡 + 𝜵𝜌 𝜖 0 = 1 𝜖 0 𝜕 𝜕𝜉 (𝑞 (( 𝑣 𝑏 𝑐 ) 2 -1) 𝑛 𝑏 + 𝑒𝛿𝑛) 𝒖 𝒛 + 1 𝜖 0 𝜕 𝜕𝑟 (𝑞𝑛 𝑏 -𝑒𝛿𝑛)𝒖 𝒓 (3.22)
Besides the linearization of the equations, another assumption can be made to go further in the derivation: it can be assumed that v_b = c and therefore γ_b = ∞. Under this assumption:
(𝛥 ⊥ -𝑘 𝑝 2 )𝑬 = 𝑒 𝜖 0 𝜕𝛿𝑛 𝜕𝜉 𝒖 𝒛 + 𝑒 𝜖 0 𝜕 𝜕𝑟 ( 𝑞 𝑒 𝑛 𝑏 -𝛿𝑛) 𝒖 𝒓 (3.23) (𝛥 ⊥ -𝑘 𝑝 2 )𝑩 = 𝑞 𝑐𝜖 0 𝜕𝑛 𝑏 𝜕𝑟 𝒖 𝜽 (3.24)
The limit we just considered illustrates an important property of relativistic particle beams: before taking the relativistic limit, there was a term containing n_b in the equation for E_z, which disappeared when v_b → c. This means that the 𝑬 field of a relativistic beam has the shape of a disk, or a pancake: it is purely transverse, and in the case of a cylindrically symmetric beam, it is also cylindrically symmetric.
Taking the Fourier transform of equations (3.23) and (3.24) with respect to ξ (with conjugate variable k, transformed quantities being denoted with a hat), one obtains:

(Δ_⊥ - k_p²) Ê_z = -(e/ε_0) ik δn̂ (3.25)

(Δ_⊥ - 1/r² - k_p²) Ê_r = (e/ε_0) ∂/∂r [(q/e) n̂_b - δn̂] (3.26)

(Δ_⊥ - 1/r² - k_p²) B̂_θ = (q/(cε_0)) ∂n̂_b/∂r (3.27)
We need to express the right-hand side of the three equations (3.25) – (3.27) using only n̂_b; therefore, we express δn̂ using equation (3.13):
( 𝜕 2 𝜕𝜉 2 + 𝑘 𝑝 2 ) 𝛿𝑛 = 𝑘 𝑝 2 𝑞 𝑒 𝑛 𝑏 → (𝑘 𝑝 2 -𝑘 2 )𝛿𝑛 ̂= 𝑘 𝑝 2 𝑞 𝑒 𝑛 ̂𝑏
Therefore, δn̂ can be replaced by [k_p²/(k_p² - k²)] (q/e) n̂_b in the equations over 𝑬:
(𝛥 ⊥ -𝑘 𝑝 2 )𝐸 ̂𝑧 = 𝑞 𝜖 0 𝑖𝑘𝑘 𝑝 2 𝑘 2 -𝑘 𝑝 2 𝑛 ̂𝑏 (3.28) (𝛥 ⊥ - 1 𝑟 2 -𝑘 𝑝 2 ) 𝐸 ̂𝒓 = 𝑘 2 𝑘 2 -𝑘 𝑝 2 𝑞 𝜖 0 𝜕𝑛 ̂𝑏 𝜕𝑟 (3.29)
The equation over 𝐵 𝜃 can be replaced by a more interesting one, over the transverse force 𝐹 𝑟 (divided by the elementary charge here) on the beam, that can be evaluated thanks to the Lorentz force expression:
𝐹 𝑟 𝑒 = (𝑬 + 𝑣 𝑏 𝒖 𝒛 × 𝑩). 𝒖 𝒓
The source is a beam whose speed is considered in the relativistic limit: 𝒗 𝒃 = 𝑐𝒖 𝒛 . Therefore, the transverse force is:
𝑊 = 𝐹 𝑟 𝑒 = 𝐸 𝑟 -𝑐𝐵 𝜃 (3.30)
Combining the equations for 𝐸 𝑟 and 𝐵 𝜃 , gives:
(𝛥 ⊥ - 1 𝑟 2 -𝑘 𝑝 2 ) 𝑊 ̂= 𝑘 𝑝 2 𝑘 2 -𝑘 𝑝 2 𝑞 𝜖 0 𝜕𝑛 ̂𝑏 𝜕𝑟 (3.31)
We are now going to replace the source term by a point-like source; the solution of the corresponding equation is the Green function of our problem. We will then convolve this solution with the general source profile to reach the general solution of the problem.
The point-like source term written in cylindrical coordinates with azimuthal symmetry, has the following expression:
𝑛 𝑏 (𝑟, 𝜉) = 𝛿(𝑟 -𝑟′) 2𝜋𝑟 𝛿(𝜉) → 𝑛 ̂𝑏(𝑟, 𝑘) = 𝛿(𝑟 -𝑟′) 2𝜋𝑟
The equations are now:
(Δ_⊥ - k_p²) Ê_z = (q/ε_0) [ik k_p²/(k² - k_p²)] δ(r - r′)/(2πr) (3.32)

(Δ_⊥ - 1/r² - k_p²) Ŵ = (q/ε_0) [k_p²/(k² - k_p²)] ∂/∂r [δ(r - r′)/(2πr)] (3.33)
The solution of the radial equation

(Δ_⊥ - k_p²) g_0 = -(4π/r) δ(r - r′) (3.35)

is g_0(r, r′) = 4π I_0(k_p r_<) K_0(k_p r_>), where I_0 and K_0 are modified Bessel functions, the normalization factor 4π is given by the discontinuity of the equation at r′, and where r_< = min(r, r′) and r_> = max(r, r′).
Therefore, the most general solution for 𝐸 ̂𝑧,𝐺 will be:
𝐸 ̂𝑧,𝐺 (𝑘, 𝑟) = - 𝑞 𝜖 0 𝑖𝑘𝑘 𝑝 2 𝑘 2 -𝑘 𝑝 2 1 2𝜋 𝐼 0 (𝑘 𝑝 𝑟 < )𝐾 0 (𝑘 𝑝 𝑟 > ) (3.36)
The inverse Fourier transform of the solution reads:
𝐸 𝑧,𝐺 (𝑟, 𝜉) = - 𝑞 𝜖 0 𝐼 0 (𝑘 𝑝 𝑟 < )𝐾 0 (𝑘 𝑝 𝑟 > ) 2𝜋 𝑝. 𝑣. ∫ 𝑖𝑘𝑘 𝑝 2 𝑘 2 -𝑘 𝑝 2 +∞ -∞ 𝑒 𝑖𝑘𝜉 𝑑𝑘 (3.37)
The usual method to calculate the integral term uses the residue theorem, with a well-chosen contour. The contour must take into account the two real poles of the integrand, and is closed at infinity in the complex plane. Our contour γ is depicted in Fig. 3.3 (a). For positive values of ξ, the complex exponential e^{ikξ} decays at infinity when Im(k) > 0; we therefore close the contour in the upper half-plane, as depicted in Fig. 3.3 (a).
The poles of the integrand, ik e^{ikξ}/(k² - k_p²), lie on the real axis at k = ±k_p. The contribution of the large arc of radius R vanishes as R → ∞:

lim_{R→∞} ∫_0^π [(iRe^{iθ})²/((Re^{iθ})² - k_p²)] e^{-R(sin θ - i cos θ)ξ} dθ = 0

while the two small semicircles of radius ε around the poles contribute -iπ times the sum of the residues:

lim_{ε→0} ∫_π^0 [i(±k_p + εe^{iθ})/((±k_p + εe^{iθ})² - k_p²)] e^{i(±k_p + εe^{iθ})ξ} iεe^{iθ} dθ = -iπ [(i/2)(e^{ik_pξ} + e^{-ik_pξ})]

The remaining part of the contour gives, in the limit ε → 0, the principal value:

lim_{ε→0} ∫_{|k∓k_p|>ε} [ik/(k² - k_p²)] e^{ikξ} dk = p.v. ∫_{-∞}^{+∞} [ik/(k² - k_p²)] e^{ikξ} dk

Since no pole is enclosed by the contour, the sum of all these contributions vanishes.
As a result:
𝑝. 𝑣. ∫ 𝑖𝑘 𝑘 2 -𝑘 𝑝 2 𝑒 𝑖𝑘𝜉 𝑑𝑘 +∞ -∞ = -𝜋 cos(𝑘 𝑝 𝜉)
The Green function solution is therefore in real space, with H the Heaviside function:
𝐸 𝑧,𝐺 (𝑟 ′ , 𝜉) = 𝑞 2𝜖 0 𝑘 𝑝 2 𝐼 0 (𝑘 𝑝 𝑟 < )𝐾 0 (𝑘 𝑝 𝑟 > )cos(𝑘 𝑝 𝜉)𝐻(𝜉) (3.38)
The general solution is the convolution between the solution (3.38) and the real source profile:
E_z(r, ξ) = (q k_p²/(2ε_0)) ∫_{-∞}^{∞} ∫_0^{∞} n_b(r′, ξ′) I_0(k_p r_<) K_0(k_p r_>) cos(k_p(ξ - ξ′)) H(ξ - ξ′) r′ dr′ dξ′
If we consider a source profile of the form:
𝑛 𝑏 = 𝑁 (2𝜋) 3/2 𝜎 𝑟 2 𝜎 𝑧 𝑒 -𝑟 2 /2𝜎 𝑟 2 𝑒 -𝜉 2 /2𝜎 𝑧 2
, where N is the bunch particle number and σ_r, σ_z are the transverse and longitudinal extents of the bunch, we obtain:
𝐸 𝑧 (𝑟, 𝜉) = 𝑞 𝜖 0 𝑘 𝑝 2 𝑁 2(2𝜋) 3 2 𝜎 𝑟 2 𝜎 𝑧 ∫ 𝑒 - 𝜉 ′ 2 2𝜎 𝑧 2 cos (𝑘 𝑝 (𝜉 -𝜉 ′ )) 𝑑𝜉 ′ 𝜉 -∞ ∫ 𝑒 -𝑟′ 2 /2𝜎 𝑟 2 𝐼 0 (𝑘 𝑝 𝑟 < )𝐾 0 (𝑘 𝑝 𝑟 > )𝑟′𝑑𝑟′ ∞ 0
(3.39)
The solution (3.39) is plotted in Fig. 3.3 (b): in the linear regime, the E_z field in the wakefield has the expected cosine shape. The wave period is given by the plasma wavelength λ_p; we will see in the following that the transverse extent of E_z is larger than that of the electron density perturbation of the plasma.
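The double integral (3.39) can also be evaluated numerically. The sketch below computes E_z on axis, where r_< = 0 (so I_0(0) = 1) and r_> = r′, for an illustrative Gaussian positron bunch; the bunch charge and sizes are assumed for illustration and are not the exact experimental values.

```python
# Numerical evaluation of the general solution (3.39) on axis (r = 0):
# the radial and longitudinal integrals factorize and are computed with quad.
import numpy as np
from scipy.integrate import quad
from scipy.special import k0
from scipy.constants import e, m_e, epsilon_0, c

n0 = 1e22                                              # plasma density [m^-3]
kp = np.sqrt(n0 * e**2 / (m_e * epsilon_0)) / c        # plasma wavenumber [1/m]
q, N = e, 1e10                                         # positron driver (illustrative)
sr, sz = 35e-6, 30e-6                                  # r.m.s. transverse/longitudinal sizes

pref = q * kp**2 * N / (2 * epsilon_0 * (2 * np.pi)**1.5 * sr**2 * sz)
# the radial integral does not depend on xi and is computed once
radial, _ = quad(lambda r: np.exp(-r**2 / (2 * sr**2)) * k0(kp * r) * r,
                 1e-12, np.inf)

def Ez_on_axis(xi):
    longi, _ = quad(lambda x: np.exp(-x**2 / (2 * sz**2))
                    * np.cos(kp * (xi - x)), -np.inf, xi)
    return pref * longi * radial

xi = np.linspace(-3 * sz, 4 * 2 * np.pi / kp, 400)
Ez = np.array([Ez_on_axis(x) for x in xi])
print("max |Ez| on axis [GV/m]:", np.max(np.abs(Ez)) / 1e9)
```

Behind the bunch, Ez oscillates with period λ_p = 2π/k_p, reproducing the cosine shape of Fig. 3.3 (b).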
Solution for 𝑾:
The equation for 𝑊 is the following:
1 𝑟 𝜕 𝜕𝑟 (𝑟 𝜕 𝜕𝑟 𝑊 ̂) -( 1 𝑟 2 + 𝑘 𝑝 2 ) 𝑊 ̂ = 𝑘 𝑝 2 𝑘 2 -𝑘 𝑝 2 𝑞 𝜖 0 𝜕𝑛 ̂𝑏 𝜕𝑟 (3.40)
Because of the additional term 1/r², the solution is a combination of modified Bessel functions of first order, I_1 and K_1. The general solution for a point-like source reads [Jackson 62]:
g_1(r, r′) = I_1(k_p r_<) K_1(k_p r_>)

Ŵ_G = -(q/ε_0) [k_p²/(k² - k_p²)] (1/(2π)) I_1(k_p r_<) K_1(k_p r_>) (3.41)
where 𝑟 < = min(𝑟, 𝑟′) and 𝑟 > = max (𝑟, 𝑟′).
To calculate the inverse Fourier transform, we should consider the same contour as for 𝐸 𝑧 , and we obtain the following expression:
𝑊 𝐺 = - 𝑞 𝜖 0 1 2𝜋 𝑘 𝑝 2 𝐼 1 (𝑘 𝑝 𝑟 < )𝐾 1 (𝑘 𝑝 𝑟 > ) 𝑝. 𝑣. ∫ 𝑒 𝑖𝑘𝜉 𝑘 2 -𝑘 𝑝 2 𝑑𝑘 +∞ -∞
The complex integral is:
𝑝. 𝑣. ∫ 𝑒 𝑖𝑘𝜉 𝑘 2 -𝑘 𝑝 2 𝑑𝑘 +∞ -∞ = - 𝜋 𝑘 𝑝 sin (𝑘 𝑝 𝜉)
In real space, with H the Heaviside function, the solution reads:
𝑊 𝐺 = 𝑞 2𝜖 0 𝑘 𝑝 𝐼 1 (𝑘 𝑝 𝑟 < )𝐾 1 (𝑘 𝑝 𝑟 > )sin (𝑘 𝑝 𝜉)𝐻(𝜉) (3.42)
The convolution formula for 𝑊 𝐺 and the whole source distribution is the following:
W(r, ξ) = -(q k_p/(2ε_0)) ∫_{-∞}^{ξ} ∫_0^{∞} (∂n_b/∂r)(r′, ξ′) I_1(k_p r_<) K_1(k_p r_>) sin(k_p(ξ - ξ′)) r′ dr′ dξ′
which is a more explicit formula for W. Replacing the source profile considered above, n_b = [N/((2π)^{3/2} σ_r² σ_z)] e^{-r²/2σ_r²} e^{-ξ²/2σ_z²}, the previous equation becomes:
𝑊(𝑟, 𝜉) = 𝑞 𝜖 0 𝑘 𝑝 𝑁 4(2𝜋) 3 2 𝜎 𝑟 4 𝜎 𝑧 ∫ 𝑒 - 𝜉 ′2 2𝜎 𝑧 2 sin(𝑘 𝑝 (𝜉 -𝜉 ′ ))𝑑𝜉 ′ 𝜉 -∞ ∫ 𝑟 ′2 𝑒 -𝑟′ 2 /2𝜎 𝑟 2 𝐼 1 (𝑘 𝑝 𝑟 < )𝐾 1 (𝑘 𝑝 𝑟 > )𝑑𝑟′ ∞ 0 (3.43)
𝑊 is plotted in Fig. 3.4 (a), the force has a symmetry with respect to the propagation axis 𝑧.
The radial integral term in formula (3.43), different from the one in (3.39), explains the sign difference on each side of the axis. The sinusoidal shape is visible; the wakefield has alternating focusing and defocusing regions for the accelerated particles.
Solution for 𝜹𝒏:
The derivation for the plasma electron density is more direct, from equation (3.34):
δn̂(r, k) = [k_p²/(k_p² - k²)] (q/e) n̂_b (3.44)

δn_G(r, ξ) = -(q/e) (k_p²/(2π)) p.v. ∫_{-∞}^{+∞} [e^{ikξ}/(k² - k_p²)] dk
The same calculation, with the same contour as performed for 𝐸 𝑧 and 𝑊 leads to:
𝑝. 𝑣. ∫ 𝑒 𝑖𝑘𝜉 𝑘 2 -𝑘 𝑝 2 𝑑𝑘 +∞ -∞ = - 𝜋 𝑘 𝑝 sin (𝑘 𝑝 𝜉)
Therefore, with H the Heaviside function, the solution for a point-like (in ξ) source reads:
𝛿𝑛 𝐺 (𝑟, 𝜉, 𝜉′) = 𝑞 2𝑒 𝑘 𝑝 sin(𝑘 𝑝 𝜉) 𝐻(𝜉) (3.45)
Replacing the source distribution profile n_b = [N/((2π)^{3/2} σ_r² σ_z)] e^{-r²/2σ_r²} e^{-ξ²/2σ_z²} and convolving the Green function with the source profile:

δn(r, ξ) = (q/e) [k_p N e^{-r²/2σ_r²}/(2(2π)^{3/2} σ_r² σ_z)] ∫_{-∞}^{ξ} e^{-ξ′²/2σ_z²} sin(k_p(ξ - ξ′)) dξ′ (3.46)

c. Laser driven plasma density waves

For a laser driver (n_b = 0 in equation (3.13)), it is possible to solve the equation over 𝜙 directly. From the explicit formula of 𝜙, the fields can be obtained easily. In this section, the results are recalled briefly.
Combining equations (3.11) and (3.13) with n_b = 0 leads to a driven harmonic oscillator equation for 𝜙, with a source term proportional to 𝒂². We consider a Gaussian laser pulse, 𝒂² = a_0² e^{-2r²/w_0²} e^{-ξ²/l²}, where the length l is proportional to cτ_0, τ_0 being the FWHM duration of the Gaussian laser pulse. After averaging the source term over a laser period, one obtains for 𝜙, when ξ ≫ cτ_0:

𝜙(r, ξ) = -(k_p/4) ∫_{-∞}^{+∞} 𝒂² sin(k_p(ξ - ξ′)) dξ′

𝜙(r, ξ) = -(k_p/4) ∫_{-∞}^{+∞} 𝒂² (cos(k_p ξ′) sin(k_p ξ) - cos(k_p ξ) sin(k_p ξ′)) dξ′

𝜙(r, ξ) = -a_0² (k_p/4) e^{-2r²/w_0²} [∫_{-∞}^{+∞} e^{-ξ′²/l²} cos(k_p ξ′) dξ′] sin(k_p ξ) (3.48)
as the integral of the term containing 𝑠𝑖𝑛(𝑘 𝑝 𝜉 ′ ) is null. The result is:
𝜙(𝑟, 𝜉) = -√𝜋𝑎 0 2 𝑘 𝑝 𝑙 4 𝑒 - 2𝑟 2 𝑤 0 2 𝑒 - 𝑘 𝑝 2 𝑙 2 4 sin(𝑘 𝑝 𝜉)
Therefore, Poisson's equation (3.11) leads to the density in the wake of the laser pulse:
𝛿𝑛 𝑛 0 = -√𝜋𝑎 0 2 𝑘 𝑝 𝑙 4 𝑒 - 𝑘 𝑝 2 𝑙 2 4 𝑒 - 2𝑟 2 𝑤 0 2 sin(𝑘 𝑝 𝜉) (3.49)
One also gets the E_z and E_r fields after the laser pulse from the electrostatic potential, 𝑬 = -(m_e c²/e)∇𝜙:

E_z(r, ξ) = -E_0 √π a_0² (k_p l/4) e^{-k_p² l²/4} e^{-2r²/w_0²} cos(k_p ξ) (3.50)

E_r(r, ξ) = -E_0 √π a_0² (r l/w_0²) e^{-k_p² l²/4} e^{-2r²/w_0²} sin(k_p ξ) (3.51)

with E_0 = m_e c ω_p/e. The density perturbation, like the electrostatic potential, depends on ξ as ∝ sin(k_p ξ), while the longitudinal field is in quadrature, ∝ cos(k_p ξ).
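As a quick numerical illustration of these linear scalings, the sketch below evaluates the peak density perturbation from (3.49) and the corresponding peak longitudinal field from (3.50); the laser and plasma parameters are illustrative choices, not values taken from a specific experiment.

```python
# Peak amplitudes of the linear laser wakefield, equations (3.49)-(3.50).
import numpy as np
from scipy.constants import e, m_e, epsilon_0, c

n0 = 1e24                      # plasma density [m^-3] (1e18 cm^-3, illustrative)
wp = np.sqrt(n0 * e**2 / (m_e * epsilon_0))
kp = wp / c
E0 = m_e * c * wp / e          # cold wavebreaking field (see next section)

a0 = 0.5                       # normalized laser amplitude (linear regime)
l = 5e-6                       # laser pulse length [m], illustrative

# common amplitude factor of (3.49) and (3.50) on axis
amp = np.sqrt(np.pi) * a0**2 * (kp * l / 4) * np.exp(-kp**2 * l**2 / 4)
print("peak |delta_n/n0| on axis :", amp)
print("peak |Ez| on axis [GV/m]  :", amp * E0 / 1e9)
```

Note the resonance-like factor (k_p l/4) e^{-k_p² l²/4}: the wake amplitude is maximized when the pulse length is comparable to the plasma wavelength.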
In many experiments, the drive beams are very strong and the wakefield regime is no longer linear: the description above does not hold anymore and the regime is nonlinear. Furthermore, the nonlinear regime of wakefield acceleration has interesting properties for the applications of particle beams.
One-dimensional solution of plasma waves in the nonlinear regime
The relativistic nonlinear theory of laser driven plasma waves can be solved analytically in one dimension [Akhiezer 56, Dawson 59].
Regarding laser driven waves, from the general equation of motion in the relativistic case (3.4), in the quasi-static approximation, we can reach the following equation for 𝜙, with β_p = v_p/c, γ_p = (1 - β_p²)^{-1/2} and v_p the phase velocity of the plasma wave:

∂²𝜙/∂ξ² = k_p² {γ_p² [β_p (1 - (1 + ⟨a²⟩)/(γ_p²(1 + 𝜙)²))^{-1/2} - 1]} ≅ (k_p²/2) [(1 + ⟨a²⟩)/(1 + 𝜙)² - 1] (3.52)
In this specific one-dimensional case, equation (3.52) has been extensively studied [Dalla 93, Dalla 94, Teychenné 94]. The solution for the potential, the corresponding longitudinal field E_z and the density perturbation δn/n_0 are plotted in Fig. 3.5 (a). The shape of the longitudinal electric field is no longer sinusoidal: it has a typical sawtooth shape, with steep gradients. Its plasma wavelength λ_NL depends on the intensity of the driver. The plasma electron density profile has peaks of high amplitude at the points where the electric field changes sign. At these very points, plasma electrons have velocities close to the plasma wave velocity, and therefore they stay in these regions for a long time. This behaviour is in clear contrast with the behaviour of electrons in the regions where δn/n_0 is minimal, whose velocities are close to -v_p.
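Equation (3.52) in its approximate form is straightforward to integrate numerically; the sketch below does so for a Gaussian laser envelope (illustrative a_0 and pulse length, with k_p set to 1), and recovers the steepened, non-sinusoidal wake described above.

```python
# Integration of the approximate 1D quasi-static equation (3.52):
# phi'' = (kp^2/2) * ((1 + a^2)/(1 + phi)^2 - 1), with kp = 1
# (xi measured in units of 1/kp). a0 and l are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

a0, l = 1.0, 1.0

def a2(xi):
    # Gaussian laser envelope squared, centred at xi = 0
    return a0**2 * np.exp(-xi**2 / l**2)

def rhs(xi, y):
    phi, dphi = y
    return [dphi, 0.5 * ((1.0 + a2(xi)) / (1.0 + phi)**2 - 1.0)]

# start ahead of the pulse (xi < 0) with an unperturbed plasma
sol = solve_ivp(rhs, (-5.0, 40.0), [0.0, 0.0], max_step=0.01, rtol=1e-8)
Ez = -sol.y[1]   # longitudinal field (up to sign convention), in units of E0
print("phi range:", sol.y[0].min(), "to", sol.y[0].max())
print("Ez extrema:", Ez.min(), Ez.max())
```

Plotting phi and Ez against xi reproduces the sawtooth field and the sharp density spikes where the field changes sign, as in Fig. 3.5 (a).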
Regarding beam driven waves, in the linear regime, formulas (3.39), (3.43) and (3.46) are identical for positrons and electrons. In fact, the charge of the drive beam particles in each of the explicit formulas for 𝑛 𝑝 , 𝑊 and 𝐸 𝑧 only appears as a pre-factor. In both electron and positron driven wakefields, the beam gives an initial kick to plasma electrons either by pulling or pushing them. In addition, the initial displacement is quite weak. When the beam density increases, the excited waves will not be described by the linear model and the formulas above will not hold any more. The difference in the interaction for positrons and electrons leads to very non-symmetrical behaviours of plasma electrons. In electron driven nonlinear waves, a one-dimensional model provides an equation over 𝛽, the speed of the plasma electrons, first derived in 1959 [Akhiezer 59]:
n_p = n_0/(1 - β) (3.53)

d²/dξ² [√((1 - β)/(1 + β))] = ω_p² (β/(1 - β) + n_b/n_0) (3.54)

written here for simplicity under the hypothesis that the drive beam velocity is the speed of light. This equation is the beam-driven analogue of (3.52), and its solution is similar. In Fig. 3.5 (b), the solution for an electron driver is displayed. The wakefield has the same properties as the laser driven one.
A qualitative description of plasma electron motion in three-dimensional waves for positron driven wakes will be given in Part II. The next section is dedicated to the model of plasma electron motion in three-dimensional laser or electron beam driven waves.
Nonlinear "Blow-out" regime a. The Bubble regime
A model exists to predict the behaviour of three-dimensional plasma waves driven by an extremely intense laser or an electron driver. With increasing laser power available in laser facilities around the world and high electron beam densities in conventional accelerators, the nonlinear regime of wakefield acceleration, a_0 ≫ 1 or n_b ≫ n_0, became experimentally achievable. In this regime, drive beams do not just create small perturbations of the density but rather expel all electrons from a cavity in the plasma (Fig. 3.6 (a) and (b)): this is the "blow-out" regime [Sun 87, Pukhov 02].
The nonlinear "blow-out" regime is all the more interesting that from 2006 on, a phenomenological model provides several useful scalings of the characteristics of the accelerating cavity with the drive beam and plasma parameters. It is the first comprehensive three-dimensional theory of nonlinear plasma waves [Lu 06a, Lu 06b].
As said in the first paragraph, the model assumes that for very intense drive beams, all background electrons within a blow-out radius are totally expelled outward from the axis. These electrons then form a dense sheath surrounding an ion cavity, before crossing the axis and closing the cavity (see Fig. 3.6 (a) and (b)). In addition, the "bubble model" predicts that when using an intense laser pulse as the driver, with a size matched to the condition k_p w_0 = 2√a_0, the cavity takes a quasi-spherical shape. This spherical shape gave the model its name. In fact, the model provides an ellipse equation for the shape of the cavity, from which an order of magnitude of the cavity length can be retrieved: L ≈ √(1 + β) 2k_S r_m. The model also shows that this matching condition is the requirement for a stable optical guiding of the laser; fulfilling it therefore ensures that acceleration can occur over a significant distance. Furthermore, the model provides an approximate solution for the slope of the field E_z in the cavity: dE_z/dξ ≈ -1/2.
Inside the cavity, there is a transverse focusing force for electrons due to the background ions, which scales as F_⊥ ∝ -r, r being the distance to the axis. The focusing force is purely linear in r, should in principle not introduce aberrations, and thus should not lead to emittance growth.
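The matched-spot-size condition k_p w_0 = 2√a_0 quoted above translates into a simple calculator; the (a_0, n_0) pairs below are illustrative choices, not parameters of a specific experiment.

```python
# Matched laser spot size in the blow-out regime: kp * w0 = 2 * sqrt(a0).
import numpy as np
from scipy.constants import e, m_e, epsilon_0, c

def matched(a0, n0_cm3):
    n0 = n0_cm3 * 1e6                           # convert to m^-3
    wp = np.sqrt(n0 * e**2 / (m_e * epsilon_0))
    kp = wp / c
    w0 = 2.0 * np.sqrt(a0) / kp                 # matched spot size
    lam_p = 2 * np.pi / kp                      # plasma wavelength
    return w0, lam_p

for a0, n0 in [(2.0, 1e18), (4.0, 5e18)]:
    w0, lam_p = matched(a0, n0)
    print(f"a0 = {a0}, n0 = {n0:.0e} cm^-3 -> matched w0 = {w0*1e6:.1f} um, "
          f"lambda_p = {lam_p*1e6:.1f} um")
```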
b. Wavebreaking limit
It appears that the coherent motion of electrons in a plasma wave is limited, and the determination of this limit is of prime importance, especially for applications of plasma-based accelerators to high energy physics, for which a limitation of the accelerating gradients would be a major drawback.
When the amplitude of a plasma wave becomes higher than a limiting value, the coherent motion of plasma electrons disappears. If some particle trajectories have too high an amplitude, they may cross other particles' trajectories: this is the situation when wavebreaking takes place. In 1D for instance, if plasma particle displacements occur along the axis Oz, wavebreaking means that plasma particles do not stay ordered along Oz as they were before the perturbation reached them. This limit corresponds as well to the limit of the fluid approximation: when wavebreaking occurs, some plasma electrons share the same position but have different velocities.
The model derived here is a simple one-dimensional model [Mori 90], and considers a cold plasma in the fluid approximation.
Starting with the case of a wave with a non-relativistic phase velocity, we consider the particle conservation equation in one dimension. The change of variables z(z_0, τ) = z_0 + ζ(z_0, τ) and t = τ, ζ being the fluid particle displacement from the initial position z_0, leads to:

n(z_0, τ) = n_0 / (1 + ∂ζ/∂z_0)

The density diverges when neighbouring particle trajectories cross, and the maximum electric field that can be sustained by a coherent oscillation of phase velocity v_φ is E_max = m_e v_φ ω_p / e, which is the non-relativistic wavebreaking limit. It must be pointed out that wavebreaking occurs when there is no unique correspondence between the Lagrangian and Eulerian coordinates.

Replacing v_φ by c in E_max defines E_0 = m_e c ω_p / e, a convenient order of magnitude of the accelerating gradient in a plasma wakefield. It appeared in formulas (3.50) and (3.51) for the field values in the linear regime of LWFA, and some of the plots of the fields in this thesis are normalized to E_0. For a wave with a relativistic phase velocity, the same approach leads to the maximum field E_WB = √(2(γ_p - 1)) E_0, with γ_p the Lorentz factor associated with the phase velocity of the wave [Mori 90]. This is the relativistic cold plasma wavebreaking limit. As said in the derivation for the non-relativistic case, when the plasma density wave is driven to an amplitude larger than this limit, the speed of the electrons can outrun the phase velocity of the wave and a longitudinal trajectory crossing occurs, that is, wavebreaking occurs.
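These limits reduce to compact formulas that are easy to evaluate; the sketch below computes E_0 and the relativistic limit for a few densities (the plasma-wave Lorentz factor γ_p = 10 is an illustrative choice).

```python
# Cold wavebreaking field E0 = m_e * c * wp / e and the relativistic limit
# sqrt(2 * (gamma_p - 1)) * E0 quoted above.
import numpy as np
from scipy.constants import e, m_e, epsilon_0, c

def E0(n0_cm3):
    wp = np.sqrt(n0_cm3 * 1e6 * e**2 / (m_e * epsilon_0))
    return m_e * c * wp / e

gamma_p = 10.0   # illustrative plasma-wave Lorentz factor
for n0 in (1e16, 1e17, 1e18):
    print(f"n0 = {n0:.0e} cm^-3: E0 = {E0(n0)/1e9:.1f} GV/m, "
          f"relativistic limit = {np.sqrt(2*(gamma_p - 1))*E0(n0)/1e9:.1f} GV/m")
```

At n_0 = 10^16 cm^-3, the density used in the experiment described later, E_0 is already of the order of 10 GV/m, orders of magnitude above conventional RF gradients.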
Experimental physicists in plasma based acceleration rely on the theory presented in this chapter: beam propagation in a plasma, plasma wave excitation by a drive beam and acceleration regimes are key concepts. The experiment discussed in the next part takes place in this theoretical context. This experiment relied on a positron driven wakefield to accelerate particles from a second, distinct bunch of positrons.
Part II Plasma wakefield acceleration of positrons
The core experiment of my thesis is the first demonstration of the acceleration of a distinct bunch of positrons in a plasma wakefield accelerator. Part II is dedicated to the presentation of the experiment and of its results. This chapter gives a more comprehensive introduction to positron driven nonlinear plasma waves, as the experiment was accomplished, for technical reasons, with a positron driver instead of an electron or laser one. A presentation of the state-of-the-art experiments regarding positron acceleration is given in the second section, followed by a presentation of the SLAC and FACET facilities and of their beam parameters.
Chapter 4
Positron driven plasma wakefield
We evoked briefly in Part I the difference between positron and electron driven plasma wakefield, in the nonlinear regime. This difference was illustrated in Fig. 3.6 (b) and (c).
Positron drive bunches pull plasma electrons inward instead of pushing them outward. The phenomenon is called the "suck-in regime" [Lee 01].
We focus now only on positron drivers. When the initial kick due to the driver is weak, then plasma electrons oscillate with a small amplitude around their initial positions. This is the linear regime of positron driven wakefields. The positron driven wakefield in this regime has the same analytic solution as the electron driven linear plasma wakefield. When the positron driver density increases, the force on plasma electrons becomes accordingly stronger, the motion induced to the electrons can cause them to reach and cross the axis as seen in Fig. 3.6 (c). This major difference has consequences on the wakefield that explain the difficulty to use positron driven nonlinear wakefield for plasma-based particle acceleration.
First, electrons starting from different initial radii cross the axis at different times: the on-axis compression is not optimal [Lee 01, Sahai 15]. This is due, first, to the different distances between the sucked-in electrons and the driver and, second, to the fact that the force experienced by each ring of electrons decreases with the distance from the ring to the axis. Consequently, the compression area where electrons are densely pulled around the axis is not optimally localized, and the wakefield is accordingly weaker: accelerating fields in a positron driven wakefield are weaker.
Second, in the wake of the driver, when plasma electrons undergo radial oscillations, accelerating cavities are formed. In the first cavity for instance, the focusing and accelerating region for positrons is very limited; for each period of the wave, this region lies at the back of the cavity. In fact, plasma electrons are being focused towards the axis over a larger length, as explained in the above paragraph. However, in the area where they are not being focused, there are unshielded background ions whose fields are defocusing for positrons [Sahai 15]. In Fig. 4.1 is drawn a positron driven nonlinear wakefield, with the focusing area for positrons highlighted. It is therefore very challenging to use a positron driven wave to efficiently accelerate an externally injected bunch of positrons.
In addition to these difficulties, it must be added that the process of creating a positron bunch, detailed in section 3 is not energetically efficient compared to a laser pulse or an electron bunch. This is an important drawback to the use of positron driven waves in plasma wakefield experiments compared to laser or electron beam driven ones.
PWFA technology can be applied to colliders either as an "energy booster" for a conventional collider (also called an "afterburner") or as a multi-staged PWFA accelerator [Lee 02] for an all-plasma collider. To stage plasma accelerator sections, synchronization between a drive beam and an externally produced beam is necessary; this is still a major challenge for the scientific community. Up to now, and as in the experiment described in this chapter, the drive and trailing bunches come for technical reasons from the same initial bunch, which is reshaped.
Positron acceleration experiments
As said in the previous section, the acceleration of positrons is a crucial result for future plasma-based 𝑒 + /𝑒 -colliders. However, plasma-based positron acceleration lags far behind electron acceleration [Downer 14] for the reasons listed above. Some experiments were already performed to study either acceleration in a positron driven wakefield itself or physical phenomena occurring to the positron drive bunch when it propagates in a plasma.
Regarding propagation and transverse forces acting on a positron bunch traversing a plasma, an experiment reported in 2003 [Hogan 03] studied the focusing effect of a 1.4 𝑚-long plasma on a long positron beam (𝜎 𝑧 = 3 𝑚𝑚), whose density was higher than the plasma electron density. The transverse effects of a positron wakefield can be more complicated due first to the incoherent convergence of plasma electrons toward the axis described in section 1. Furthermore, the transverse force is also impacted by the large quantity of surrounding electrons converging as opposed to the limited number of background ions inside the volume blown-out by an electron drive beam. The authors of this work showed a focusing of the tail of the beam (Fig. 4.2 (a)) and a clear dependence on the plasma density and identified an optimal density value. The first experimental demonstration of positron acceleration was accomplished concomitantly, and also at SLAC [Blue 03]. The experiment relied on a 1.5 𝑚𝑚 -long positron bunch, in a 1.40 𝑚 -long plasma of density 2 10 14 𝑐𝑚 -3 . The authors showed a reasonable agreement between theory and experiment, proving that the front of the drive bunch deposits energy in the plasma while the positrons at the back of the bunch gain energy. The spectrum of the outgoing bunch is continuous in this experiment, with a maximal energy gain of about 85 MeV and an energy loss of 70 MeV, for an initial energy spread of 20 MeV.
As concluded by the authors, this precursor work on acceleration opens prospects towards either a multi-stage plasma accelerator device or an extremely high gradient single stage, also known as an "afterburner" or "energy booster".
In the context of creating an "afterburner", positron driven waves can be used to increase the energy of some particles from the drive beam itself. However, in such a scheme, it is necessary to preserve emittance of the initial beam when accelerating some of its particles.
The transverse force of a positron driven wakefield is not linear in r, and depends on the longitudinal coordinate ξ [Lee 01]. An experiment performed at SLAC in 2008 [Muggli 08] studied the consequences of the nonlinearity of the transverse forces on the emittance of a 700 μm-long beam in a plasma with a 5×10^14 cm^-3 electron density. The authors observed an accumulation of accelerated positrons on a ring surrounding the axis during the acceleration, and an increase of the emittance of the beam during the interaction with the plasma.
More recently, an experimental result published in Nature [Corde 15] demonstrated very high acceleration gradients for positrons in a self-loaded plasma wakefield. Using 1.4×10^10 particles in the initial positron bunch and a plasma of density 8×10^16 cm^-3, the authors demonstrated the acceleration of positrons coming from the back of the bunch, forming an accelerated bunch with good properties, as can be seen in Fig. 4.2 (b). The accelerated bunch had an energy spread of around 2%, a charge in the range 100 – 200 pC, and an energy gain ranging from 3 GeV to 10 GeV. Such good properties were due to a phenomenon the authors identified through comparison with numerical simulations: longitudinal and transverse beam loading. Longitudinal beam loading is described in detail in the next chapter of the manuscript. This self-loaded scheme demonstrated the feasibility of an "energy booster" using positrons, with very high accelerating gradients.
SLAC and FACET Facilities a. Accelerator facility
The PWFA experiments described in this chapter took place at SLAC National Accelerator Laboratory. The main SLAC accelerator is a linear radiofrequency accelerator located in Menlo Park, California; it operated for the first time in 1966. In 1972, an extension, SPEAR, opened: the new project consisted in building two storage rings, which were used to collide electrons and positrons of 3 GeV and to produce X-ray beams. A platform called the Stanford Synchrotron Radiation Lightsource started in 1973; this platform used the synchrotron radiation emitted by the particles moving in the rings for molecular imaging. After the shutdown of FACET in April 2016, LCLS-II, an X-ray free electron laser operating at MHz repetition rate and using superconducting technology, is being built and is expected to give first light in 2020. The LCLS-II accelerator will use the first kilometer of the historical SLAC 3-km linac tunnel. In parallel, FACET is being upgraded to FACET-II, a new facility expected to provide state-of-the-art electron and positron beam parameters that could make many discoveries possible in the field of PWFA. FACET-II will use the second kilometer of the historical SLAC 3-km linac.
SLAC National Accelerator Laboratory is one of the ten laboratories of the US Department of Energy and is operated on its behalf by Stanford University. As said in the general introduction about particle accelerators of Chapter 1, SLAC led three of its users to be awarded a Nobel Prize, in 1976, 1990 and 1995.
In the linear accelerator, klystrons provide the RF fields used to accelerate the particles. The 2 𝑘𝑚-long straight line is where the acceleration takes place.
When it penetrates into the W-chicane area at the end of the accelerator, the particle beam can be reshaped to have a two-bunch longitudinal structure. This can be necessary for some experiments, such as the one described in the next chapter. In this section of the accelerator, successive dipoles and quadrupoles manipulate the beam. Starting from a beam with an initial correlation between the energy of the particles and their longitudinal position, an energy-dependent transverse position (and therefore a correlation between longitudinal and transverse position) is introduced. After the first dipole, a tantalum bar can be inserted to block the central particles, those in the middle of the bunch energy range. The correlation between transverse position and energy is cancelled by the rest of the chicane. The minimum longitudinal distance between the two bunches that can be created with this technique is ~100 μm. In our experiment, the drive beam energy was centred on 20.55 GeV, and the trailing beam on 20.05 GeV. Furthermore, for technical reasons, the drive beam was also the first beam in time to reach the experimental area. For the experiment described in this part, the two-bunch structure consists of a first bunch, the drive bunch of energy 20.55 ± 0.25 GeV, followed 100 – 150 μm later by a second bunch, the trailing bunch, of energy 20.05 ± 0.15 GeV.
In the W-chicane, a non-invasive energy spectrometer called SYAG is set up. This setup uses the synchrotron X-ray radiation produced by the horizontally-dispersed beam when it goes through a magnetic wiggler system composed of three dipoles deflecting the beam vertically.
In particular, this energy spectrometer, combined with toroid charge diagnostics setup along the beamline, allows us to measure the charge in each of the two bunches and will be described in more details in Chapter 5.
Before the Interaction Point, a final focus device composed of five quadrupole magnets focuses the beam to the transverse size indicated in Fig. 4.4. This section also allows the beta function of the beam at the entrance of the plasma to be adjusted.
c. Plasma source
To set up the PWFA experiments at FACET, it was necessary to build a gas column whose density could reach 10^14 – 10^17 cm^-3 or even more. It was also necessary to employ a gas whose first ionization energy would be low (of a few eV) and whose second ionization potential would be much higher (a few tens of eV). Such a gas ensures that when the plasma is generated by the passage of a laser in the gas, each atom releases exactly one electron. Lithium fulfills these requirements; it is vaporized in an oven, and the neutral density follows from the ideal gas law:

n [cm^-3] = 9.66×10^18 × P/T (4.1)

where T is the temperature in Kelvin and P is the pressure in Torr. The density in the gas is therefore directly given by the performances of the oven [Vafaei-Najafabadi 12]; Fig. 4.5 depicts the temperature and density profiles along the length of a plasma oven. There is almost no density fluctuation in the plateau.
For the oven used in the experiments, the density plateau is 1.15 m long, with 15 cm-long up and down ramps on each side. The FWHM length of the oven is therefore 1.30 m.
The plasma electron density can be established from the models of the ionization processes introduced in Chapter 1. A femtosecond TW laser pulse is focused on a line by an axicon over the full length of the oven to ionize the lithium vapor. With a lithium vapor, Multiphoton Ionization is the process to be taken into account. The ionization rate for an ionization potential 𝑈 𝐼 is given by the formula for Multiphoton Ionization, in a k-photon process [Bruhwiler 03]:
𝑊(𝑠 -1 ) = 𝜎 𝑘 ( 𝐼 ℎ𝜈 ) 𝑘 (4.2)
where I is the photon flux, ν is the photon frequency, and σ_k is the multiphoton ionization cross-section for a k-photon process. The electron density is given by n_0 = n_Li(1 - exp(-∫W dt)). With a laser pulse containing ~100 mJ of energy, we can easily ionize all lithium atoms and reach n_0 = n_Li.
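A small sketch combining (4.1) and the ionization fraction n_0/n_Li = 1 - exp(-∫W dt) is given below. The rate W and the effective duration used are hypothetical placeholders: the actual multiphoton cross-section σ_k for lithium is not quoted in the text, so no real value of W is implied here.

```python
# Lithium vapor density from the ideal gas law (4.1), and the multiphoton
# ionization fraction n0/n_Li = 1 - exp(-W*dt) for a constant rate W.
# W below is a hypothetical placeholder, not a measured value.
import numpy as np

def n_li_cm3(P_torr, T_kelvin):
    # equation (4.1): n [cm^-3] = 9.66e18 * P[Torr] / T[K]
    return 9.66e18 * P_torr / T_kelvin

def ionization_fraction(W, dt):
    # n0/n_Li for a constant ionization rate W [s^-1] applied during dt [s]
    return 1.0 - np.exp(-W * dt)

print("n_Li(P = 1 Torr, T = 1000 K) =", f"{n_li_cm3(1.0, 1000.0):.2e}", "cm^-3")
print("fraction for W = 1e14 s^-1, dt = 200 fs:",
      f"{ionization_fraction(1e14, 200e-15):.4f}")
```

For W·dt ≫ 1 the fraction saturates at 1, which corresponds to the fully-ionized condition n_0 = n_Li stated above.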
d. FACET laser systems
The laser system available for the PWFA experiments at FACET operates at 10 𝐻𝑧, and has a central wavelength of 800 𝑛𝑚 [ Green 14]. A schematic of the whole laser chain is depicted in Fig. 4.6.
The femtosecond laser chain begins with a Ti:Sapphire femtosecond oscillator, providing a 800 nm beam at 68 MHz with a 60 nm bandwidth. Next in the chain is a Regenerative Amplifier that is set up to deliver a beam of 1 mJ, at a repetition rate of 10 Hz. A preamplifier increases the main beam energy up to 30 mJ and is pumped by a YAG laser of 130 mJ, at 532 nm. The outgoing beam is then delivered to the main amplifier stage, which consists of a Ti:Sapphire crystal pumped by two Thales SAGA YAG lasers of 1.8 J at 532 nm. The amplified beam reaches an energy of up to 1 J after the main amplifier and is not yet compressed.
The beam is transported over 28 𝑚 from the laser room down to the Interaction Point Table.
During transport, two telescopes each bring the beam through an intermediate focus, which can cause distortion of the beam profile; this is why a vacuum with a pressure below 10^-5 Torr is maintained in the transport line. The compressor stands next to the Interaction Point Table, and the pulses can be compressed down to 50 fs. However, to prevent self-phase modulation it is necessary to slightly modify the distance between the gratings in the compressor in order to obtain 100 – 200 fs long pulses.
The laser pulse is synchronized with the linac so as to ionize the plasma around 100 ps before the e± beam from the FACET accelerator passes through the oven. The maximum laser energy available after compression is about 500 mJ. This energy is high enough to fully ionize a column of lithium vapor 1.30 m long with a diameter of the order of 1 mm.
Previous results in the field of beam driven plasma wakefields were presented in chapter 4, along with the difficulties of accomplishing particle acceleration using positron driven wakefields. In the following, the experimental apparatus of the trailing positron bunch experiment is described in detail before the results themselves are discussed.
Acceleration of a distinct positron bunch in a plasma wakefield accelerator
The first section of this chapter is dedicated to a comprehensive description of the experimental apparatus that led to the acceleration of a distinct bunch of positrons; the different diagnostics are described in detail. In the second section, the proof of acceleration is given, along with results regarding beam loading effects occurring in the process. Last, a study of the acceleration regime in the experiment is accomplished; in particular, the existence of a transition from a very nonlinear to a quasilinear regime is discussed. The experimental setup of the trailing positron bunch acceleration experiment is depicted in Fig. 5.1. On this schematic, the main process and all the associated diagnostics appear.
A femtosecond laser pulse is used to pre-ionize a column of plasma. After it exits the compressor, the laser pulse is focused on a line by an axicon, before being superimposed on the positron beamline by the holed mirror. The axicon angle determines the length of the focal line.
In our case, the laser is focused on a line all along the plasma oven, and the peak intensity is approximately constant in the gas. As said in chapter 4, in the section dedicated to the plasma oven, the electron density is almost constant over 1.15 m, and two linear up and down ramps of 15 cm lie on both sides of the oven density profile. The FWHM of the oven is therefore approximately 1.30 m. The laser and axicon parameters used in the experiment are given in the table of Fig. 5.2. A schematic of the principle of an axicon lens is given in Fig. 5.3. Such an optic is used to obtain complex laser intensity distributions over large areas; in our experiment, an axicon was needed to produce a cylindrical plasma all along the plasma oven. The on-axis intensity at a distance z from an axicon is the result of constructive interference of the light emerging from a small annulus on the axicon. The analytic calculation shows that the intensity distribution in the transverse plane Oxy at distance z created by a plane wave going through the axicon is given by a zeroth-order Bessel function J_0(kαr), where α is the angle of the rays after the axicon (calculated from the axicon angle and material indices), k is the wavevector and r is the distance to the axis in the transverse plane. For simplicity, we consider here the case of a Gaussian beam incident on the axicon; the intensity after the optic is then given by the formula:
𝐼(𝑟, 𝑧) = 𝐼 0 (4𝜋 2 𝛼 2 𝑧 𝜆 ) exp (- 2(𝛼𝑧) 2 𝑤 0 2 ) 𝐽 0 2 (𝑘𝛼𝑟) (5.1)
Where 𝜆 is the wavelength of the laser and 𝑤 0 and 𝐼 0 are respectively the waist and the on-axis intensity of the Gaussian beam illuminating the axicon.
In our setup, a mask blocks the light near the center of the optic, allowing the light to be focused on a line starting at the beginning of the plasma oven for an appropriate choice of mask radius and axicon-to-plasma distance. This is why the axicon line focus in Fig. 5.3 begins at a distance from the optic.
In reality, the transverse shape of the FACET laser pulse is closer to a flat-top than a Gaussian beam. The laser intensity after the axicon can be approximated in this case by simply removing the exponential term in (5.1). The intensity of the laser pulse incident on the axicon is:
𝐼 0 = 𝐸 𝜋𝑟 0 2 𝜏 ~ 5 10 10 𝑊 𝑐𝑚 -2 (5.2)
where 𝐸 = 120 𝑚𝐽 is the laser energy, 𝑟 0 is the radius of the laser and 𝜏 = 200 𝑓𝑠 is the laser pulse duration.
On axis, the maximal intensity obtained after the axicon is:

I_max = I_0 (4π² α r_0/λ) ~ 2×10^14 W cm^-2 (5.3)

for a wavelength λ = 800×10^-9 m and a convergence angle α = 0.28°. Such a peak laser intensity ensures that a mm-diameter column of lithium vapor is fully ionized all along the oven [Green 14]. The compressor has its own vacuum chamber; the rest of the optics (axicon, delay line, holed mirror) are in a separate vacuum chamber called the picnic basket, shown in Fig. 5.4 (b). The plasma oven is in the same volume as the picnic basket.
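The two estimates (5.2) and (5.3) can be checked numerically. Since the incident beam radius r_0 is not quoted in the text, the value used below (r_0 = 2 cm) is an assumption chosen to be consistent with I_0 ~ 5×10^10 W cm^-2.

```python
# Numerical check of the axicon intensity estimates (5.2)-(5.3).
# r0 = 2 cm is an assumed incident beam radius (not quoted in the text).
import numpy as np

E_laser = 0.120            # [J] laser energy
tau = 200e-15              # [s] pulse duration
r0 = 2e-2                  # [m] assumed incident beam radius
lam = 800e-9               # [m] wavelength
alpha = np.deg2rad(0.28)   # convergence angle after the axicon

I0 = E_laser / (np.pi * r0**2 * tau)            # (5.2), flat-top beam
Imax = I0 * (4 * np.pi**2 * alpha * r0 / lam)   # (5.3), on-axis Bessel peak

print(f"I0   = {I0*1e-4:.2e} W/cm^2")    # ~5e10 W/cm^2
print(f"Imax = {Imax*1e-4:.2e} W/cm^2")  # ~2e14 W/cm^2
```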
Another diagnostic is set up in the picnic basket: the electro-optic sampler (EOS), which provides a direct longitudinal profile measurement of the positron beam. The two-bunch positron beam passes above the EOS crystal and modifies its properties, then goes through the holed mirror and enters the plasma, 100 ps after the laser pre-ionizes the gas. The beam-plasma interaction process can then take place. As described in the section dedicated to the EOS diagnostic, a fraction of the main laser intensity is used as the EOS probe. This probe is synchronized with the passage of the bunches above the crystal; it is the laser beam on the bottom left of Fig. 5.1 that goes through the crystal.
The EOS crystal is set up on a mobile mount, and can be moved aside to be replaced by a Titanium wedge, used in the second part of the experiment to modify the emittance of the bunch. The mount is visible in Fig. 5.4 (a). The modification of the particle beam parameters when it passes through the metallic wedge is calculated in detail in section 3.
b. Energy spectrometer
The energy of the particles is measured by a Cherenkov light imaging spectrometer, appearing on the right in Fig. 5.1. The details of this setup are given in Fig. 5.5 (a). First, a dipole magnet with an equivalent length of 97.8 cm and an equivalent field of 0.8 T deflects the positrons vertically, according to their energy. The particles then reach a device composed of two silicon wafers separated by a 5 cm-long air gap, and emit Cherenkov light when they move through the air in the gap, as depicted in Fig. 5.5 (b). In addition, error estimation is of prime importance in experimental plasma wakefield acceleration. FACET offers the possibility to acquire data at 10 Hz; therefore, it is extremely convenient to record a large amount of data when performing a parametric study. As a result, statistical errors can be efficiently reduced. However, systematic errors remain.
When using the Cherenkov light spectrometer system, the limitation in the resolution of the energy measurement can have two origins. The resolution can be limited by the Cherenkov optical system itself: the conceptual design of the spectrometer used at FACET leads to an estimated resolution of 76 MeV [Adli 15]. The resolution can also be limited by the betatron beam size in the vertical dimension; this contribution is estimated to be of the order of 50 MeV. A systematic error is introduced by the calibration of the energy axis on the Cherenkov system camera. However, when measuring energy differences, the error on the origin of the axis cancels out and the remaining error is only due to the uncertainty on the dispersion η_0.
c. EOS diagnostic
The Electro-Optic Sampler diagnostic provides a measurement of the longitudinal profile of the beam before it passes through the plasma. As seen in Fig. 5.1, the EOS crystal is set up on a mount in the picnic basket, before the plasma.
The crystal is a 50 μm-thick GaP crystal standing one millimeter below the beam path. A probe laser beam passes through the crystal at an angle, with an adjustable delay relative to the positron beam passage [Litos 11]. The birefringence of the GaP electro-optic crystal is modified by the pancake-shaped electric field of the positron bunches [Steffen 07]. The femtosecond probe laser pulse passes through the crystal to sample its instantaneous properties. The modification of the crystal birefringence induced by the positron beam field results in a relative phase delay between the two orthogonal components of the laser probe pulse. Two polarizers are used in the laser path: a first one before the crystal, with a main direction at a 45° angle relative to the crystal axis, and a second one, located after the crystal and before the camera, aligned for laser extinction in the absence of a positron beam. A higher laser signal on the detector is observed in the areas in which birefringence was modified by the positron beam electric field; those areas correspond to a perfect synchronicity between beam field and probe pulse in the crystal. The two bunches (trailing and drive) are not synchronized with the probe at the same position on the crystal, as the probe comes at an angle on the crystal. Therefore, the longitudinal profile of the positron beam can be reconstructed from a single measurement, with a temporal resolution of about 150 fs. An example of the measurement of the interbunch distance for a whole dataset is given in Fig. 5.6.
The longitudinal separation between the drive and the trailing bunch can be modified experimentally by varying the arrival time of the positron beam in the linear accelerator. The EOS diagnostic monitors the resulting change.
d. Beam charge diagnostics
Toroidal charge monitors are set up at 30 stations along the beam line of the SLAC linear accelerator. These devices act as current transformers when the charged particles move through them; their output signals make it possible to monitor the charge of the beam [Larsen 71]. Close to the experimental area, the toroids have the highest precision and provide a measurement of the beam total charge with an accuracy of ±0.1%. However, this technique does not permit monitoring of the charge in each of the two bunches composing the beam when the notch collimator is used to produce a two-bunch structure.
Another system, a non-invasive energy spectrometer, is used to monitor efficiently the charge distribution within the bunch. We refer to this system as the SYAG system. After the notch collimator, the two bunches have different energies: the drive bunch is centered on 20.55 ± 0.25 𝐺𝑒𝑉 and the trailing bunch is centered on 20.05 ± 0.15 𝐺𝑒𝑉. At the far end of the Wchicane, the beam is horizontally dispersed: there is a correlation between the energy of the particles and their transverse position. Then, a magnetic vertical wiggler leads the beam to produce synchrotron X-rays. A scintillating YAG screen collects the X-ray radiation due to the wiggling, reproducing the energy spectrum of the two-bunch beam. A camera records the signal on the screen for each shot, the energy spectrum of the two bunches leads to two distinct spots on the camera. The toroid monitors are used to calibrate this signal. As the result, the SYAG camera provides a direct measurement of the charge in each of the two bunches.
Beam charge is one of the beam parameters that fluctuates the most. These natural fluctuations can be useful, as they are used to identify correlations between the beam parameters and the parameters of the plasma acceleration. For example, it is possible to plot the energy peak of the accelerated bunch of positrons as a function of the initial trailing beam charge. The natural fluctuations ensure that the initial trailing beam charge spans the range [150 pC, 400 pC].
e. Optical Transition Radiation (OTR) screens
A perfect alignment of the beam and of the laser axis in the plasma oven is necessary. As seen in Fig. 5.1, a holed mirror is setup on the beam line. The positron beam passes through the hole, and the laser is superimposed thanks to the mirror. Fine alignment is performed thanks to insertable thin OTR foils, which are set up on both sides of the plasma oven.
Optical Transition Radiation is a radiation emission process that occurs when relativistic charged particles pass from one medium to another. In fact, the electric field of the bunch of particles is not the same in each of the media, and the difference gives birth to optical transition radiation [Dolgoshein 93]. In the experiment, the foils are set up at an angle of 45° relative to the beamline. A part of the OTR radiation is emitted from the front face of the foil, at 90° relative to the beam line, where a camera collects it. The alignment method in this experiment uses the cameras and the motors mounted on the optics in the picnic basket to superimpose the beam and laser spots on the OTR camera screens on both sides of the oven.
f. Simulations
Numerical simulations play a major role in plasma-based acceleration studies. Regarding the experiment described in this section, a parametric study was accomplished before the experiment, using a particle-in-cell (PIC) code, to examine whether the experiment could work for a choice of parameters.
Simulations were performed using the particle-in-cell QuickPIC code [Huang 06, An 13b].
QuickPIC is a kinetic simulation code in which the quasi-static approximation is made. This hypothesis assumes that the particle beam is evolving slowly compared to the time scale of the plasma response.
In the simulations performed for the trailing positron bunch experiment, the simulation box moves at the speed of light in the beam propagation direction. The coordinate system used in this chapter is 𝑥, 𝑦 (in the transverse plane) and 𝜉 = 𝑐𝑡 -𝑧 (in the longitudinal dimension).
The simulation box has a size of 400 × 400 × 400 𝜇𝑚 3 (in 𝑥 , 𝑦 and 𝜉 ) divided into 512 × 512 × 1024 cells.
In the simulations, the drive and trailing bunches initially have Gaussian profiles in all dimensions. In most simulations presented in this part, the initial r.m.s. spot sizes used are 𝜎 𝑥 × 𝜎 𝑦 = 35 × 25 𝜇𝑚 2 and an r.m.s. bunch length of 𝜎 𝑧 = 30 𝜇𝑚 (for the drive beam) and 𝜎 𝑧 = 40 𝜇𝑚 (for the trailing beam) is always used.
Both bunches in the simulation have no initial energy spread and their central energies are 20.55 𝐺𝑒𝑉 (for the drive bunch) and 20.05 𝐺𝑒𝑉 (for the trailing bunch). These values correspond to the measurements accomplished on the Cherenkov spectrometer, without plasma, but neglecting the energy spread of the beams.
As will be seen in the experimental results, the charge of the bunches used in the simulations is chosen to be 50% of the charge measured experimentally. In fact, this reduced charge shows better agreement with the experimental data. This adjustment is justified by the presence of possibly large transverse tails in the positron charge distribution that do not take part in the interaction [Litos 14]; it compensates for the hypothesis of perfectly Gaussian bunches in the simulations.
Acceleration of a trailing positron bunch a. Proof of acceleration
The demonstration of acceleration of a trailing positron bunch took place in several steps. As said earlier, the electron density of the plasma was chosen to be 10 16 𝑐𝑚 -3 , the laser energy was set to 120 𝑚𝐽, in order to ionize the whole 1.3 𝑚-long oven. The positron beam has a twobunch longitudinal shape, where the two bunches, the drive and the trailing are separated by 100 𝜇𝑚. The bunches are centered on the energies 20.05 ± 0.15 𝐺𝑒𝑉 (trailing bunch, second in time) and 20.55 ± 0.25 𝐺𝑒𝑉 (drive bunch, first in time).
Before the bunches passed through the oven, it was possible in the experiment to choose not to preionize the gas by blocking the laser. In that case, the energy spectrum of the beam after the oven was unmodified: no interaction occurred. This unmodified spectrum can be used as a reference to illustrate energy gain or loss. In addition, it was also possible in the experiment to block either of the bunches when the beam was being reshaped in the W-chicane. As a result, it was possible to send into the oven either both the drive and trailing bunches, only the trailing bunch, or only the drive bunch.
One affirmation can already be made: causality implies that the trailing bunch (second bunch in time) cannot influence the drive bunch. The bunches propagate with an ultrarelativistic velocity, and only the drive bunch (or the trailing bunch itself) can influence the spectrum of the trailing bunch.
As a result, the demonstration of the acceleration of particles from the trailing bunch, using a plasma density wave excited by the drive bunch requires:
1. To demonstrate that when the trailing only, or the drive bunch only is sent, some beam particles lose energy: they deposit energy in the plasma by exciting the density wave, but no acceleration is observed.
2. To demonstrate that when both bunches are sent, some beam particles reach an energy higher than the upper-limit energy of the drive beam.
The remark regarding causality, added to proposition 1. and 2., would demonstrate that if we observe accelerated particles in the experiment, then those particles must come from the trailing: if only the presence of both bunches give birth to particles above 20.8 𝐺𝑒𝑉 then we can conclude that the demonstration is successful.
In Fig. 5.7 (a), two integrated spectra are displayed, when only the drive bunch was sent into the oven. When the plasma was preformed, some particles lost energy, but no acceleration was observed.
In Fig. 5.7 (b), two integrated spectra are displayed, when only the trailing bunch was sent into the oven. When the plasma was preformed, some particles lost energy, but no acceleration was observed.
In Fig. 5.7 (c), two integrated spectra are displayed, when both the trailing and the drive bunch were sent into the oven. When the plasma was preformed, some particles lost energy, and some particles reached an energy above 20.8 GeV. When the plasma was not preformed, no interaction occurred and the spectrum was unchanged.
As a result, we can conclude that the particles observed above 20.8 GeV are positrons from the trailing bunch, accelerated by the plasma wave driven by the drive bunch. This result by itself is a first accomplishment: a distinct bunch of positrons had never been accelerated before in any type of plasma-based accelerator (beam-driven or laser-driven).
The shots displayed in Fig. 5.7 come from a stack of 500 shots for each case: the case in which the trailing bunch only is sent, the case in which the drive bunch only is sent and the case in which both bunches are sent. Fig. 5.7 (c) presents a successful acceleration of the trailing bunch. The accelerated bunch displayed in this figure has especially good properties. The r.m.s. energy spread associated to the fit of the peak is 1.0%. The charge contained in the peak is 85 𝑝𝐶 and the peak energy is 21.5 𝐺𝑒𝑉. The initial trailing bunch energy was centered on 20.05 𝐺𝑒𝑉, which corresponds to an energy gain of 1.45 𝐺𝑒𝑉. Considering a plasma length of 1.3 𝑚, this is an accelerating energy gradient of 1.12 𝐺𝑒𝑉 𝑚 -1 . The wake-to-bunch energy extraction efficiency can also be estimated from the data. This parameter is defined as the total amount of energy gained by all the particles in the trailing bunch with final energy above 20.8 𝐺𝑒𝑉 relative to the total amount of energy lost by all the particles in the drive bunch with final energy below 19.9 𝐺𝑒𝑉. It is estimated to be 40% for this shot of Fig. 5.7 (c). This parameter describes the fraction of the energy transferred to the plasma wake that is extracted by the trailing bunch.
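For definiteness, a minimal sketch of this efficiency estimate is given below, applied to a synthetic spectrum. The function and the spectrum are ours: the initial central energies (20.05 and 20.55 GeV) and the thresholds (20.8 and 19.9 GeV) are taken from the text, while the rest is illustrative.

```python
# Sketch of the wake-to-bunch energy extraction efficiency defined above:
# energy gained by particles ending above 20.8 GeV over energy lost by
# particles ending below 19.9 GeV, with respect to the initial central
# energies of the two bunches. The spectrum is synthetic.
import numpy as np

def extraction_efficiency(E, dQdE, E_trail0=20.05, E_drive0=20.55):
    # E [GeV]: uniform energy axis; dQdE: charge density spectrum (arb. units)
    dE = E[1] - E[0]
    gain = E > 20.8
    loss = E < 19.9
    gained = np.sum(dQdE[gain] * (E[gain] - E_trail0)) * dE
    lost = np.sum(dQdE[loss] * (E_drive0 - E[loss])) * dE
    return gained / lost

E = np.linspace(18.5, 22.5, 2000)
dQdE = (np.exp(-(E - 19.4)**2 / (2 * 0.3**2))            # energy-depleted drive
        + 0.35 * np.exp(-(E - 21.5)**2 / (2 * 0.2**2)))  # accelerated trailing
print("efficiency ~", f"{extraction_efficiency(E, dQdE):.2f}")
```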
Going further into the description of the acceleration mechanism, a physical process was observed in the data recorded during the experiment: beam loading of the wakefield.
b. Beam loading, theory and experimental observation
Theory
Beam loading is an effect due to the wakefield driven by the accelerated bunch itself. This physical process is not limited to the specific trailing positron bunch acceleration scheme performed at FACET. In a plasma wakefield, if the accelerating field is sampled by a particle beam, the wakefield driven by this accelerated bunch modifies the initial driven wakefield.
In conventional accelerators, the phenomenon of beam loading exists as well. In a conventional accelerating cavity, a bunch of particles surfing on the cavity radiofrequency field can drive a wakefield [ Ng 06].
In the linear theory of plasma wakefield acceleration, the calculation of the modified wakefield due to the drive beam and to the accelerated beam (in the experiment described in this chapter, due to the trailing beam) is straightforward: for the linearized equations, the wakefield is the sum of the wakefields due to each of the two bunches.
For a beam density:
For a beam density:

n_b(ξ, r) = (n_d(ξ) + n_t(ξ)) · exp(-r²/(2σ_r²))

where the longitudinal profiles of the drive and trailing bunches are Gaussian,

n_d(ξ) = [N_d/((2π)^{3/2} σ_r² σ_z,d)] e^{-ξ²/(2σ_z,d²)}, n_t(ξ) = [N_t/((2π)^{3/2} σ_r² σ_z,t)] e^{-(ξ-Δξ)²/(2σ_z,t²)} (5.6)

with Δξ the distance between the two bunches, the loaded wakefield is the sum of the wakefields driven by each of the two bunches, each given by formula (3.39).
An example of a loaded wakefield in the linear regime is shown in Fig. 5.8 (c). Two observations regarding this wakefield can be made. First, when the wakefield is not modified by beam loading, as seen in Fig. 5.8 (a), the field is steep in the grey area indicating the trailing bunch. As a result, the maximal and minimal fields that the trailing particles sample are very different from each other. In Fig. 5.8 (c), the difference between the maximum and minimum field in the grey area is lower. That is why beam loading leads to a lower maximum accelerating field and to a lower energy spread of the accelerated bunch. Beam loading of electron driven plasma waves was studied in detail: it was shown [Katsouleas 87] that, in an electron beam driven linear wakefield, an accelerated bunch whose current increases linearly with ξ leads to an almost flat field and therefore prevents energy spread growth during the acceleration process.
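In the linear regime this superposition is easy to evaluate. The sketch below sums the on-axis ξ-dependence of the two wakes (same Green function as in (3.39)), using the bunch lengths and the 100 μm separation given in the text, an illustrative trailing-to-drive charge ratio, and an arbitrary overall amplitude.

```python
# Linear beam loading sketch: the loaded wake is the sum of the two bunches'
# wakes; only the on-axis xi-dependence of (3.39) is kept (arbitrary units).
import numpy as np
from scipy.integrate import quad

kp = 2 * np.pi / 334e-6   # plasma wavelength ~334 um at n0 = 1e16 cm^-3

def wake_shape(xi, sz, xi_c):
    # on-axis xi-dependence of the wake of one Gaussian bunch centred at xi_c
    lo = xi_c - 6 * sz
    if xi <= lo:
        return 0.0
    val, _ = quad(lambda x: np.exp(-(x - xi_c)**2 / (2 * sz**2))
                  * np.cos(kp * (xi - x)), lo, xi)
    return val

xi = np.linspace(-150e-6, 500e-6, 400)
drive = np.array([wake_shape(x, 30e-6, 0.0) for x in xi])
trail = np.array([wake_shape(x, 40e-6, 100e-6) for x in xi])
loaded = drive + 0.5 * trail   # trailing-to-drive charge ratio of 0.5
print("unloaded field extremum:", drive.min())
print("loaded field extremum  :", loaded.min())
```

The loaded curve illustrates the flattening of the field across the trailing bunch. In the nonlinear positron regime studied here, no such analytic superposition exists, which is why the experimental study below matters.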
A nonlinear theory of beam loading in laser or electron driven wakefields also exists and was written by Tzoufras et al. [Tzoufras 08]. It describes, in the very nonlinear blow-out regime, the perturbation the accelerated beam makes to the cavity shape, and therefore to the fields. However, as explained earlier in the manuscript, there is no analytical theory of positron driven nonlinear plasma waves and of positron nonlinear beam loading.
An experimental study of beam loading in positron driven nonlinear waves, as considered in this manuscript, therefore provides valuable insights into the acceleration process.
Experiment
Two observables are particularly related to beam loading. The main observable is the peak energy: as explained above, the maximum energy decreases when beam loading increases. The second observable is the energy spread of the accelerated beam. Without beam loading, the trailing bunch would sample an E_z field varying rapidly with ξ from the front to the back of the bunch. However, as beam loading effects occur, the field becomes flatter, until it reaches an optimum (dependent on the trailing bunch current profile). If beam loading effects keep increasing, the field progressively becomes strongly reduced, or even decelerating for the particles at the back of the trailing bunch: the wakefield becomes overloaded, as shown in Fig. 5.8 (d).
During the experiment described in this section, the initial parameters of the beams slightly changed from shot to shot. This was due to the natural fluctuations of the conventional accelerator. In particular, the compression of the beam, strongly related to the entrance time of the beam in the accelerator, caused the charge in each of the two bunches to fluctuate from shot to shot. One dataset was particularly better than the others in terms of the energy spread of the accelerated beam, of the accelerated charge and of the peak energy. The quality of the data allowed us to make a Gaussian fit of the accelerated beam, which is a repeatable and rigorous parameter measurement method. From the EOS data of this dataset, the estimated mean interbunch distance was 100 𝜇𝑚, which is shorter than the usual value. This may account for the better results of this dataset.
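As an illustration of this fitting procedure, the sketch below extracts the peak energy and the r.m.s. energy spread from a Gaussian fit; the spectrum used here is synthetic, a placeholder for one integrated spectrometer lineout:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, a, E0, sigma, offset):
    return a * np.exp(-(E - E0) ** 2 / (2 * sigma ** 2)) + offset

# Synthetic stand-in for a measured spectrum above the drive bunch
# energy upper limit (arbitrary units per GeV)
energy = np.linspace(20.8, 23.5, 300)                 # GeV
spectrum = gaussian(energy, 80.0, 21.75, 0.33, 2.0)
spectrum += np.random.normal(0.0, 1.0, energy.size)   # measurement noise

p0 = [spectrum.max(), energy[np.argmax(spectrum)], 0.3, 0.0]
popt, _ = curve_fit(gaussian, energy, spectrum, p0=p0)
peak_energy, rms_width = popt[1], abs(popt[2])
print(f"peak energy = {peak_energy:.2f} GeV, "
      f"r.m.s. energy spread = {100 * rms_width / peak_energy:.1f} %")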
The dataset is composed of 160 shots, which are displayed in Fig. 5.9 (a). Fig. 5.9 (a) is a waterfall plot of the spectra of all the shots, sorted by increasing initial trailing bunch charge. The left y-axis is the shot number, the top x-axis represents the initial charge of the trailing bunch, while the bottom x-axis is the energy of the particles. Each horizontal line is an image of the spectrometer, integrated over the horizontal position, the dimension perpendicular to the dipole dispersion (which is vertical). Qualitatively, two correlations appear: the peak energy and the energy spread decrease with the initial trailing charge. The colorful waterfall plot is a first tool to identify correlations of this kind.
Going further into the quantification of the process: the trailing charge spanned in this dataset from 150 to 400 𝑝𝐶, the mean charge was 260 𝑝𝐶, and the standard deviation was 55 𝑝𝐶. The average energy of the accelerated peak was 21.75 𝐺𝑒𝑉, a 1.70 𝐺𝑒𝑉 gain on average, and the mean r.m.s. energy spread was 1.5%. On the waterfall plot depicted in Fig. 5.9 (a), energy spectra are sorted by increasing trailing bunch charge. Accelerated particles appear on the right, above 20.8 𝐺𝑒𝑉 (upper limit of the drive bunch). Two correlations can be seen: the peak energy and the energy spread of the accelerated bunch decrease when the charge of the trailing bunch increases. The correlations with the trailing bunch charge are plotted in Fig. 5.9 (b) (for the energy of the accelerated peak) and Fig. 5.9 (c) (the energy spread is the width of the Gaussian fit of the peak). As the trailing bunch charge reaches its maximum value in the dataset, the energy spread decreases from 2% down to 1% while the energy gain reduces from 1.95 𝐺𝑒𝑉 to 1.45 𝐺𝑒𝑉. Beam loading implies that the maximum longitudinal electric field is reduced as the charge of the trailing bunch is increased, as demonstrated with Fig. 5.7 (c). Beam loading also results in a flattening of the electric field; both effects are observed and depicted in Fig. 5.9 (b) and (c).
The correlations of Fig. 5.9 (b) and (c) are consistent with a beam loading process. However, they could in principle also be due to correlations of the peak energy and energy spread with the interbunch distance. By looking at Fig. 5.7 (a) we can anticipate the hypothetical effect of the interbunch distance on the peak energy and the energy spread. When the interbunch distance increases, the peak energy should increase, as the maximum accelerating field seen by the trailing particles increases. On the other hand, the energy spread should decrease, as the slope decreases "on top" of the 𝐸_z field oscillation. This is not the effect observed in Fig. 5.9 (b) and (c).
In addition, the data contain clear evidence that the interbunch distance is correlated neither with the accelerated bunch parameters nor with the initial trailing charge. Fig. 5.10 (a) and (b) show the peak energy and the energy spread as functions of the interbunch distance: no clear correlation is visible. Furthermore, Fig. 5.10 (c) shows that the trailing bunch initial charge is not correlated with the measured interbunch distance, and that the clear correlations observed in Fig. 5.9 are therefore not due to a coincidental correlation between interbunch distance and trailing charge in the incoming beam. These three clear results demonstrate that the initial trailing bunch charge alone is responsible for the correlations shown in Fig. 5.9.
Therefore, it is indeed only a modification of the initial beam charge that accounts for the change in the peak energy and the energy spread of the accelerated positron bunch.
In the light of the theoretical models described in the beginning, we reached the conclusion that a clear beam loading phenomenon occurred during the acceleration process. The specific parameters of the beams (low interbunch distance, of order of 100 𝜇𝑚) during this dataset seem to explain the relatively good results. The acceleration of a distinct positron bunch has been accomplished.

Figure 5.10: Absence of correlations with the interbunch distance. Plot of the peak energy of the accelerated bunch as a function of the interbunch distance (a), plot of the energy spread of the accelerated bunch as a function of the interbunch distance (b), plot of the interbunch distance of the accelerated bunch as a function of the trailing bunch charge (c).
However, the wakefield is expected to be very nonlinear in the result presented in this section. Therefore, this regime does not open the prospect of using a laser or electron driven wakefield to accelerate a distinct bunch of positrons. In fact, positron driven nonlinear wakefields are specific to positron drivers.
Acceleration regimes
In the electron or laser driven case, the transition from the linear to the nonlinear regime of wakefield driving has been studied both theoretically and experimentally. The parameters responsible for the regime transition were discussed. It appeared that the ratio 𝑛 𝑏 /𝑛 0 plays a major role in the wakefield regime. When this ratio becomes much larger than 1, the regime becomes necessarily nonlinear [Lu 05]. Furthermore, another parameter was shown to be important in predicting the regime (relativistic versus non-relativistic), it is the normalized peak current of the drive bunch 𝛬 = 2𝐼 𝑝 /𝐼 𝐴 [Lu 05, Lu 10]. However, these results do not apply to positron drivers.
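For the electron or laser driven case, these two parameters are easily evaluated. The sketch below uses illustrative bunch and plasma values (placeholders, not the FACET beam parameters), with the Alfvén current 𝐼_A ≈ 17 𝑘𝐴:

import numpy as np

e, c = 1.602e-19, 2.998e8
I_A = 17.0e3                       # Alfven current (A)

# Illustrative bunch and plasma parameters (placeholders)
N = 3.1e9                          # particles (~500 pC)
sigma_r, sigma_z = 10e-6, 30e-6    # m
n0 = 1e16 * 1e6                    # plasma density (m^-3)

# Peak density of a tri-Gaussian bunch, and normalized peak current
n_b = N / ((2 * np.pi) ** 1.5 * sigma_r ** 2 * sigma_z)
I_p = N * e * c / (np.sqrt(2 * np.pi) * sigma_z)
print(f"n_b/n0 = {n_b / n0:.1f}")                  # >> 1: nonlinear regime
print(f"Lambda = 2 I_p / I_A = {2 * I_p / I_A:.2f}")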
In fact, for positron driven waves, numerical simulations seemed to show that, for an initial emittance achievable at FACET, 𝜖_x × 𝜖_y = 100 × 10 𝑚𝑚².𝑚𝑟𝑎𝑑², the beam would self-focus in the plasma and drive a nonlinear wakefield whatever its initial diameter.
This section will be dedicated to the study of a wakefield regime transition. We will compare this experimental result with numerical simulations to show that initial emittance and beam diameter are both important parameters to reach a positron driven quasilinear wakefield.
a. Emittance manipulation system
A convenient method to modify the emittance of the beam when it enters the plasma relies on the insertion of a block of metal in the beam line, to "spoil" the emittance just before the plasma oven. The system is visible in Fig. 5.1: a titanium wedge can be inserted in the beam, and the variable thickness of the wedge makes it possible to choose the degree of emittance spoiling.
Emittance growth when a charged particle beam goes through a dense material is a well described process. In fact, when a beam of charged particles crosses a dense block of matter, the beam particles undergo multiple scattering. The beam Twiss parameters and emittances are therefore modified [Reid 91, Olive 14].
The r.m.s. angle at which a particle will be scattered after traversing a thickness 𝐿 of material is given by the standard multiple-scattering formula [Olive 14]:

𝜃_rms(𝐿) = (13.6 𝑀𝑒𝑉 / 𝛽𝑐𝑝) √(𝐿/𝑋₀) [1 + 0.038 ln(𝐿/𝑋₀)]

where 𝑋₀ is the radiation length of the scattering material; for titanium, 𝑋₀ = 3.56 𝑐𝑚. The material we consider in our experiment is Titanium, as it is convenient to use and has a radiation length which modifies the emittance of the beam by factors acceptable for our studies. In the following, the subscript "Ti" indicates that the parameter value is given at the Titanium wedge position along the beam line, and the prime indicates that the Titanium effect has been taken into account. The Twiss parameters are modified according to the formulas:
𝛽′_Ti = 𝛽_Ti / √(1 + 𝜉)  (5.8)

𝛼′_Ti = 𝛼_Ti / √(1 + 𝜉)  (5.9)

𝛾′_Ti = 𝛾_Ti (1 + 𝜉₀) / √(1 + 𝜉)  (5.10)

In formula (5.11), which defines the spoiling parameter 𝜉, 𝜖 is the geometrical emittance and 𝛽₀ is the Twiss parameter 𝛽, taken at focus before spoiling the beam. The modification of the emittance is given by the formula:

𝜖′ = 𝜖 √(1 + 𝜉)  (5.12)
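A short numerical sketch of this spoiling chain is given below. It assumes the multiple-scattering formula quoted above, and it assumes 𝜉 = 𝛽_Ti 𝜃_rms²/𝜖, a form consistent with formulas (5.8)-(5.12) since the exact definition (5.11) is not reproduced here; the Twiss 𝛽 and emittance values are illustrative placeholders:

import numpy as np

X0_TI = 3.56e-2          # radiation length of titanium (m)

def theta_rms(L, p_mev, beta=1.0):
    """Multiple-scattering r.m.s. angle (rad) after a thickness L (m)."""
    x = L / X0_TI
    return 13.6 / (beta * p_mev) * np.sqrt(x) * (1 + 0.038 * np.log(x))

def spoiled_emittance(eps, beta_twiss, L, p_mev=20350.0):
    """Spoiled geometric emittance, assuming xi = beta_Ti * theta_rms^2 / eps."""
    xi = beta_twiss * theta_rms(L, p_mev) ** 2 / eps
    return eps * np.sqrt(1 + xi)

# Illustrative values: geometric emittance 2.5e-9 m.rad, Twiss beta of 5 m at the wedge
for L in (100e-6, 179e-6, 297e-6):
    growth = spoiled_emittance(2.5e-9, 5.0, L) / 2.5e-9
    print(f"{1e6 * L:.0f} um of Ti -> emittance multiplied by {growth:.2f}")

With these placeholder values the growth factors range from about 1.6 to 2.5, of the same order as the factors quoted below for the experiment.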
In the table of Fig. 5.11 are listed the modifications of the beam parameters due to the presence of the Titanium wedge in the beam line. The wedge is set up 𝐷 = 156 𝑐𝑚 before the entrance of the plasma, in the picnic basket, but before the holed mirror, along the beam line.
The mount was set up to insert the wedge at several fixed positions that correspond to the Titanium thicknesses given in the first column of the table of Fig. 5.11. The emittances of the beam are therefore multiplied by factors of 1.7 to 3 in 𝑥 and 3.4 to 7 in 𝑦.

Figure 5.11: Titanium wedge thickness and modification of the beam parameters.
The Twiss parameters were also modified; these modifications were taken into account in the simulations. The corresponding experimental and numerical results will be introduced and discussed in the next section.
b. Nonlinear to quasilinear positron driven waves
Experimental results
When the beam emittance was increased, the beam-plasma interaction became accordingly weaker, and the accelerated bunch energy became closer to the drive bunch initial energy upper limit of 20.8 𝐺𝑒𝑉. Therefore, in order to quantify the energy gain, we used the maximum energy of the accelerated particles, defined as the energy at which the accelerated spectrum crosses the 10 𝑝𝐶 𝐺𝑒𝑉⁻¹ threshold.
The comparison between the experimental results and the simulations is displayed in Fig. 5.12. This plot shows the maximum energy of the accelerated particles, as a function of the Titanium thickness. It shows that, in the experiment, the maximum trailing bunch energy decreases when the titanium thickness is increased. These experimental results show similar trends with particle-in-cell simulations.
A dedicated dataset was recorded to acquire these data; each step of the thickness scan contained 500 shots. The average interbunch distance measured on the EOS system was 135 𝜇𝑚. The initial charge in the beam was on average 480 𝑝𝐶 in the drive beam and 140 𝑝𝐶 in the witness bunch.
Wakefield regime evolution
The plasma wakefield and its evolution are computed using the particle-in-cell code QuickPIC, with beam and plasma parameters similar to those of the experiment. The beam and the plasma wakefield evolve in the density up-ramp and in the first ten centimeters of the density plateau and then reach a quasi-steady state with negligible evolution.
In Fig. 5.13 is displayed the shape of the plasma wakefield in the middle of the plasma (at 𝑧 = 72.5 𝑐𝑚 , i.e. after the quasi-steady state is reached), for emittances of 100 × 10 𝑚𝑚 2 . 𝑚𝑟𝑎𝑑 2 (no titanium), of 171 × 34 𝑚𝑚 2 . 𝑚𝑟𝑎𝑑 2 (100 𝜇𝑚 of titanium), of 214 × 46 𝑚𝑚 2 . 𝑚𝑟𝑎𝑑 2 (179 𝜇𝑚 of titanium) and of 270 × 60 𝑚𝑚 2 . 𝑚𝑟𝑎𝑑 2 (297 𝜇𝑚 of titanium).
In the lower emittance cases, Fig. 5.13 (a) and (b), the wake has a strongly nonlinear structure: 𝑛_beam/𝑛_plasma = 14.2 in (a) and 𝑛_beam/𝑛_plasma = 3.0 in (b) for the drive bunch in the plasma. The longitudinal fields show very steep and asymmetric gradients, and the shape of the transverse force (Fig. 5.13 (a)) strongly depends on the longitudinal coordinate 𝜉 = 𝑧 − 𝑐𝑡. To be specific, the transverse wakefield is non-separable (it cannot be written as the product of a function of 𝑥 and a function of 𝜉). The plasma wakefield in the 𝜉𝑦 plane shows a nonlinear structure as well. In this regime, the accelerated bunch takes a specific arrowhead shape due to the shape of the pseudo-potential [ Corde 15]. The pseudo-potential well confines the trailing bunch in the transverse direction and permits its acceleration over the whole plasma length. Off-axis, the pseudo-potential well has two minima that lead to a high trailing bunch density around these two positions. This is the source of the arrowhead shape. In the higher emittance cases, Fig. 5.13 (c) and (d), the wakefield shows a quasi-linear structure.
In addition, 𝑛_beam/𝑛_plasma = 1.9 and 𝑛_beam/𝑛_plasma = 1.2 for the drive bunch in the plasma. The maximum longitudinal electric field 𝐸_z is reduced from about 1.6 𝐺𝑒𝑉 𝑚⁻¹ in Fig. 5.13 (a) to 0.8 𝐺𝑒𝑉 𝑚⁻¹ in Fig. 5.13 (d). In this last figure, the wakefield is more regular and the transverse force takes a separable and sinusoidal form. Such a shape is characteristic of the linear regime of PWFA. In the experiment, acceleration of positrons from the trailing bunch was observed on the spectrometer over the whole range of achievable titanium thicknesses. However, when the emittance was progressively increased, the interaction of the beam with the plasma became accordingly weaker, and the energy of the accelerated bunch became close to the initial drive bunch energy, leading to a much less pronounced spectral peak on the spectrometer.
To conclude this section, the use of a Titanium wedge allowed us to accomplish acceleration of a distinct positron bunch in regimes spanning nonlinear to quasilinear. This is the first time that such a scheme is successfully demonstrated, and it also opens the prospect of accelerating a distinct positron bunch with an independent laser or electron driver.
Regime transition and emittance
To search for a linear wakefield in the case of positron driven waves, numerical simulations seemed to show at first that, when the emittance of the SLAC positron beam, (𝜖_x, 𝜖_y) = (100 𝜇𝑚, 10 𝜇𝑚), was used in the simulations, the beam would always self-focus, for any initial density. This evolution would always lead to a nonlinear wakefield regime. The previous paragraph showed that spoiling the beam led to the acceleration of particles first in a very nonlinear regime and progressively in a more linear one, until the regime could be qualified as "quasilinear". It seems that the emittance played a major role in this transition. A brief discussion regarding the regime transition and the role of emittance is therefore presented in this section.
In the experiment as it was performed at SLAC, the beam parameters at the plasma entrance were (𝜎_x, 𝜎_y) = (35 𝜇𝑚, …); the other configuration was obtained by setting the beam sizes to the values corresponding to 297 𝜇𝑚 of Titanium, (𝜎_x, 𝜎_y) = (89 𝜇𝑚, 86 𝜇𝑚). The four simulations are referred to as simulations A, B, C and D, and the corresponding beam parameters are listed in Fig. 5.15. In case B, where the initial beam density is decreased while the initial emittance is kept constant, the regime is still nonlinear but the field amplitude is reduced compared to case A, Fig. 5.14 (a). The transverse force does not exhibit any sinusoidal shape and the variables 𝜉 and 𝑥 are not separable: this is still a nonlinear wakefield regime. By contrast, increasing only the emittance, as in C, leads to a quasilinear regime. In fact, the large emittance prevents a strong self-focusing, so that the beam reaches larger spot sizes and contains more charge in its central spot. Most of the charge spreads transversally and does not contribute to driving a strong wakefield, as can be seen in the table of Fig. 5.15. In case D by contrast, the central spot has a lower relative charge than in C, but is also much smaller due to the strong self-focusing. Therefore, the drive bunch in case D can excite a more intense wakefield, and as a result, the regime remains nonlinear.
To conclude this discussion: while the change in the initial beam density does have the effect of reducing the field amplitude, the emittance seems to be mainly responsible for the change of regime, from nonlinear to quasilinear.
The second part of the manuscript was dedicated to the report of the trailing positron bunch experiment. After a description of the context, the SLAC facility was presented, and the results were discussed thoroughly. This experiment served the ambition of the plasma based acceleration research community to build a plasma based particle collider. The next chapter will be dedicated to an experiment that serves the same purpose: in order to facilitate research regarding beam driven wakefields, it would be convenient to accomplish such experiments in a small-scale university laboratory. This is why a hybrid LWFA-PWFA experiment was set up at LOA: it intended to use a laser produced electron beam to drive a plasma wakefield.
Part III Hybrid LWFA-PWFA experiment at Laboratoire d'Optique Appliquée
In this chapter I report on the first hybrid LWFA-PWFA experiment performed at LOA. The goal of this experiment was to study the interaction of an electron beam created by LWFA with a plasma and to see how it can be used for PWFA purpose. I first introduce the theory of physical processes involved in Laser Wakefield Acceleration. In the second section, I give a short presentation of the Salle Jaune facility in which LWFA experiments take place. The last section is dedicated to the experimental setup of the hybrid LWFA-PWFA and to the results obtained during the 2017 campaign.
Chapter 6

1. Acceleration, trapping and injection of particles in plasma wakefield
Laser Wakefield Acceleration is a plasma-based scheme in which a plasma wave is excited by a very intense and ultrashort laser pulse. The theory of laser driven plasma waves was derived in Chapter 3. Unlike in some PWFA experiments (see Chapters 4 and 5), when LWFA experiments are performed in facilities such as the Salle Jaune at LOA, the accelerated electrons come from the plasma itself. Important physical processes related to LWFA experiments, such as electron trapping, need to be presented. The theory of injection is summarized below. It explains how some electrons can be trapped in the plasma wave and gain a substantial amount of energy.
a. Phase velocity of plasma density waves
It is important to recall first the wakefield velocity expression in the case of a LWFA scheme.
In fact, the phase velocity of the plasma wave is a concept of prime importance to understand how some particles can be "trapped" inside the wakefield and increase their energy by staying in the accelerating 𝐸 𝑧 field. The 1D dispersion relation (2.12) provides the group velocity of the laser, and thus the phase velocity of the plasma wave. From the relation 𝜔 2 = 𝜔 𝑝 2 + 𝑘 2 𝑐 2 , one gets:
𝑣_g = 𝑐 (1 − 𝜔_p²/𝜔²)^(1/2)

The phase velocity of the laser is given by 𝑣_𝜙 = 𝑐 (1 − 𝜔_p²/𝜔²)^(−1/2).

The phase velocity of the plasma wave is the group velocity of the laser. It is therefore smaller than 𝑐, and smaller than the velocity of the relativistic electrons produced in LWFA experiments. Note that corrections to the phase velocity of the plasma wave are required in the case of very strong drivers [Decker 94, Lu 07].
For instance, in a plasma of density 𝑛 0 = 10 19 𝑐𝑚 -3 , the group velocity of the laser is ~ 0.997 𝑐 , by contrast, electrons whose energy is 100 𝑀𝑒𝑉 have a velocity of ~ 𝑐 (1 -1.3 10 -5 ).
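These two numbers follow directly from the dispersion relation; the short computation below reproduces them:

import numpy as np

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

n0 = 1e19 * 1e6                                 # plasma density (m^-3)
omega_p = np.sqrt(n0 * e ** 2 / (m_e * eps0))   # plasma frequency
omega = 2 * np.pi * c / 810e-9                  # laser frequency (810 nm)
v_g = c * np.sqrt(1 - (omega_p / omega) ** 2)   # phase velocity of the wake

gamma_e = 1 + 100.0 / 0.511                     # 100 MeV electron
v_e = c * np.sqrt(1 - 1 / gamma_e ** 2)
print(f"v_g/c = {v_g / c:.5f}")                 # ~0.997
print(f"1 - v_e/c = {1 - v_e / c:.2e}")         # ~1.3e-5: the electron outruns the wake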
b. Acceleration, trapping and LWFA phase detuning
Trapping and acceleration of particles in the plasma wake
In 1D, the dynamics of particles in the wake can be simply described using the model shortly presented here. This model already gives insight into the physics and allows one to understand behaviors that occur in 3D problems. Effects such as transverse motions, evolution of the laser driver or beam loading require 3D PIC simulations. Starting from the equations of motion of a test electron in the phase space (𝜉, 𝛾), where 𝛽 = 𝑣/𝑐 is the normalized electron velocity, the Hamiltonian for a test electron can be calculated; it is given by [Esarey 95, Esirkepov 06]:
𝐻( 𝜉, 𝛾) = 𝛾 -𝛽 𝑝 √𝛾 2 -1 -𝜙(𝜉) (6.3)
Where 𝜙(𝜉) is the potential of the wakefield, oscillating between 𝜙_min and 𝜙_max. 𝐻 does not depend explicitly on the variable 𝑧 and is therefore constant over the orbit of a test electron. The relation 𝐻(𝜉, 𝛾) = 𝐻₀ = 𝛾 − 𝛽_p √(𝛾² − 1) − 𝜙(𝜉) makes it possible to draw the trajectory of a test electron in phase space. The phase portrait, i.e. the trajectory of particles in phase space, provides information about the electrons that can extract energy from the plasma wave. It is depicted in Fig. 6.1.
The phase portrait can be understood thanks to the study of the Hamiltonian. The Hamiltonian 𝐻 has fixed points, corresponding to 𝛾 = 𝛾_p and to the extrema of the potential 𝜙(𝜉). In particular, the minimum and maximum energies for particles whose orbits are infinitesimally close to the separatrix are:

𝛾_min,max = 𝛾_p (1 + 𝛾_p 𝛥𝜙) ∓ 𝛾_p 𝛽_p √((1 + 𝛾_p 𝛥𝜙)² − 1)

with 𝛥𝜙 = 𝜙_max − 𝜙_min. The picture in phase space helps to understand how electrons can be trapped. When an electron is at 𝜉 = 𝜉_max with an energy higher than 𝛾_min 𝑚𝑐² and smaller than 𝛾_max 𝑚𝑐², it is simply trapped.
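A minimal numerical sketch of these separatrix energies follows; the values of 𝛾_p and of the normalized potential amplitude 𝛥𝜙 are illustrative:

import numpy as np

def separatrix_energies(gamma_p, delta_phi):
    """Min/max Lorentz factors on orbits infinitesimally close to the
    separatrix of the 1D Hamiltonian (6.3)."""
    beta_p = np.sqrt(1 - 1 / gamma_p ** 2)
    a = 1 + gamma_p * delta_phi
    root = gamma_p * beta_p * np.sqrt(a ** 2 - 1)
    return gamma_p * a - root, gamma_p * a + root

# gamma_p ~ 13 corresponds to n0 ~ 1e19 cm^-3 for an 810 nm laser
g_min, g_max = separatrix_energies(13.0, 0.5)
print(f"gamma_min = {g_min:.1f}, gamma_max = {g_max:.0f}")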
When one considers a thermal model for the plasma, it appears that a fraction of the electrons from the background, those whose energy is greater than the trapping energy, can be trapped [Esarey 09]. However, this is not a reliable solution to produce high charge accelerated beams. Different concepts have been proposed and demonstrated to control and to improve electron injection.
The maximum energy the accelerated particles can reach can be estimated using the expression of 𝛾 𝑚𝑎𝑥 [ Esarey 95].
Phase detuning
Phase detuning is a fundamental limitation of Laser Wakefield Acceleration schemes. This limit comes from the difference between the phase velocity of laser driven plasma waves (approximately equal to the group velocity of the laser) and the speed of accelerated particles that becomes close to 𝑐 as soon as their energy is of the order of 𝑚𝑐 2 .
If an electron is injected at the back of a plasma period, and remains trapped in the wakefield, it will be accelerated further. Its speed will become close to 𝑐 and will outrun the plasma wave [Joshi 84]. The particle will eventually reach the front half of the plasma period, where the field is no longer accelerating, as depicted in Fig. 6.2. In the front half of the plasma period, the particle begins to decelerate; this phenomenon limits the maximum energy reached by the accelerated particles in LWFA schemes. The relevant Lorentz factor of the wake is 𝛾_p = (1 − 𝑣_p²/𝑐²)^(−1/2).
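An order-of-magnitude estimate of the corresponding dephasing length follows from the slippage rate 𝑐 − 𝑣_p ≈ 𝑐/(2𝛾_p²); the sketch below uses illustrative values:

# An electron moving at ~c slips forward through the wake at a rate
# c - v_p ~ c / (2 gamma_p^2); it crosses the accelerating half of the
# bucket (lambda_p / 2) after a dephasing length L_d ~ gamma_p^2 * lambda_p.
gamma_p = 13.0          # wake Lorentz factor for n0 ~ 1e19 cm^-3
lambda_p = 10.6e-6      # plasma wavelength (m)
L_d = gamma_p ** 2 * lambda_p
print(f"L_d ~ {1e3 * L_d:.1f} mm")   # ~1.8 mm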
c. Injection techniques
Laser driven nonlinear wakefields are usually produced in plasmas whose densities are in the range 10 18 -10 19 𝑐𝑚 -3 . Therefore, the corresponding plasma wavelengths are in the range 10 -40 𝜇𝑚. In such wakefields, particles have to be injected in an area whose dimension is of the order of the plasma wavelength. Injection in such cavities is a real challenge. Several physical mechanisms leading to this process were studied and demonstrated, the most important ones are listed below.
Optical or ponderomotive injection
A scheme using a secondary laser pulse was proposed to inject electrons in a plasma wakefield driven by a primary intense laser [Umstadter 96]. A second, perpendicular laser beam forces a fraction of plasma electrons into motion thanks to its transverse ponderomotive force. These accelerated electrons are trapped in the wakefield driven by the primary laser. A similar scheme was suggested in the so-called colliding laser pulse scheme [ Esarey 97] and experimentally demonstrated with a counter-propagative beam [Faure 06], which made it possible to produce high quality electron beams with controllable parameters. It was shown that the beam energy and charge of the accelerated particles were controlled by changing the intensity and delay of the secondary laser pulse (the injection pulse) [Rechatin 09]. Optical injection can provide extremely stable, quasi-monoenergetic bunches of electrons. [Rechatin 09] accomplished it with two lasers of parameters 𝑎₀ = 1.3 and 𝑎₁ = 0.4, in a helium plasma of density 𝑛_e = 7.5 × 10¹⁸ 𝑐𝑚⁻³. The bunches, for a specific choice of synchronization position in the gas [Rechatin 09], have a central energy of 206 𝑀𝑒𝑉, with an energy spread of 14 𝑀𝑒𝑉 ± 3 𝑀𝑒𝑉, and a total charge of 13 𝑝𝐶.
The colliding pulse optical injection scheme we just mentioned is responsible for a longitudinal injection of particles in the wakefield. Another optical-injection scheme was discovered through numerical simulations by LOA researchers [Lehe 13], but it relies on transverse optical injection. This scheme is expected to provide low emittance beams (0.17 𝑚𝑚. 𝑚𝑟𝑎𝑑), higher charge (50 -100 𝑝𝐶) and a low energy spread as well (2 %).
Density downramp injection
An inhomogeneity in the plasma that sustains the density oscillations can be the source of local injection of particles. It occurs in the case of very non-linear plasma waves driven by a laser [Bulanov 98]. It was first demonstrated theoretically, and then studied experimentally.
In this scheme, a density gradient is used to change locally the wake phase velocity to lower the threshold velocity required to inject plasma electrons into the wake [Geddes 08].
Longitudinal self-injection
The injection schemes presented above are all controlled injection techniques. They rely on active and complex systems to inject electrons in the wakefields. The following two techniques are self-injection methods; they are simpler and therefore more convenient to realize.
Longitudinal self-injection of electrons from the plasma background is a process similar to longitudinal wavebreaking in 1D. It is the result of the relativistic lengthening of the plasma wake that follows strong relativistic self-focusing effects. In this case, injected electrons are those that are the closest to the propagation axis before the perturbation of the plasma reaches them. Those electrons undergo the whole first period of the plasma wave, and when they reach the rear of the first bucket, i.e. the first period of the wave, their velocities slightly exceed the phase velocity of the plasma wave. Therefore, they are trapped and can gain energy from the wave. It was first observed and studied at LOA [Corde 13]. The experiment was accomplished in Salle Jaune, with a laser peak power of 30 𝑇𝑊. The main laser pulse, which was driving the wakefield, contained 1 𝐽 of energy and lasted 35 𝑓𝑠. The laser parameter was then 𝑎₀ = 1.4. In the experiment, a gas cell of adjustable length was providing a plasma with a density of 𝑛₀ = 10¹⁹ 𝑐𝑚⁻³. Longitudinal injection was demonstrated to occur after a few hundred micrometers at any plasma density and to stay dominant at lower density (𝑛₀ < 10¹⁹ 𝑐𝑚⁻³). The charge of the longitudinally injected bunches is low, ~ 2 - 10 𝑝𝐶; the spectrum is quasi-monoenergetic and very stable from shot to shot. The trajectories of trapped electrons in the wakefield, for the transverse and longitudinal self-injection cases, are depicted in Fig. 6.3 (a) and (b). Spectra obtained from the experiment of [Corde 13] with longitudinal self-injection are shown in Fig. 6.3 (c).
Transverse self-injection
Transverse self-injection is another self-injection scheme, which results in lower quality bunches than the previous one, and relies on a different type of wavebreaking. It can have three variants.
In the bubble regime, transverse self-injection can occur from electrons moving backwards in the wave, inside the sheath. In fact, when those electrons cross the axis and contribute to closing the bubble structure, they can be trapped [Lu 07]. A trajectory is displayed in Fig. 6.3 (b). It was also shown that abrupt changes in the cavity frontier radius could lead to sudden trapping of electrons [Kalmykov 09]. In both cases, injected electrons are those that are far from the propagation axis (at a distance of about a cavity radius) before the arrival of the laser driver.
An experimental demonstration of a transverse self-injection process was accomplished at LOA [ Corde 12] along with the longitudinal self-injection demonstration described in the previous paragraph. It was shown that transverse self-injection can occur with the same experimental parameters, but after a longer propagation of the laser in the gas. Transverse self-injection provides higher charge bunches, of order 50 -100 𝑝𝐶, with a broad spectrum.
The maximal electron energy in this experiment was 250 𝑀𝑒𝑉 as well. The process has a low stability and is very sensitive to the shot-to-shot fluctuations of the laser intensity profile. The produced electron bunches have higher emittance compared to longitudinal self-injected ones.
Ionization injection
Generally speaking, ionization injection such as the one performed in a mixture of helium with a small percentage of nitrogen [Pak 10, McGuffey 10] uses the successive ionization thresholds of a large Z gas to inject particles in the wakefield. This process allows to ionize and release electrons near the peak intensity of the laser in the plasma, which can then be easily trapped by the wake as they are born inside the wake at an optimal location. However, ionization injection occurs mostly continuously in the wake, which implies a higher energy spread than other injection techniques.
These physical processes are necessary for LWFA experiments such as the one performed at Laboratoire d'Optique Appliquée. These LWFA experiments rely on the Salle Jaune facility that is described in details in the next section.
Salle Jaune facility

a. Facility
Salle Jaune is an experimental facility of rather modest dimensions compared to accelerators such as SLAC National Accelerator Laboratory. It consists of two rooms on top of each other, each of dimensions about 20 × 30 𝑚 drawn in Fig. 6.4 (a).
The facility relies on a 2 × 50 𝑇𝑊 laser system. Two laser pulses named P1 and P2, each containing 1.5 𝐽, are used. An additional beam named P3, with a lower energy, is available and can be used as an optical probe for various diagnostics. The central wavelength of the main laser is 810 𝑛𝑚, and the bandwidth is 40 𝑛𝑚. The main system can operate at 1 𝐻𝑧; however, data acquisition at full power is generally performed at 0.1 or 0.2 𝐻𝑧. The pulses are compressed down to 30 𝑓𝑠, thanks to the Chirped Pulse Amplification technique described in the section dedicated to lasers. A picture of the laser chain is reported in Fig. 6.4 (b).
The laser chain (Fig. 6.4 (c)) begins with a Ti:Sapphire oscillator that provides a 9 𝑓𝑠 pulse with a 𝑛𝐽 of energy. At that stage, the spectrum is broad and centred on 800 𝑛𝑚, and the repetition rate is much higher than the final laser rate: 88 𝑀𝐻𝑧. The pulse is then stretched temporally to reach a length of 20 𝑝𝑠, so that it can be amplified to a total energy of 2 𝑚𝐽. After this pre-amplification, a first compressor system brings the pulse back to 20 𝑓𝑠, and the rate is at that stage of 10 𝐻𝑧. An XPW system filters the pulse: contrast is a key parameter to ensure a good laser-plasma interaction. The contrast reached at the end of the laser chain thanks to the XPW is 10¹⁰, 100 𝑝𝑠 before the main pulse, and 10⁷, 10 𝑝𝑠 before the pulse. After the XPW, the beam energy is reduced to 35 𝜇𝐽. The laser pulse is then stretched again to 500 𝑝𝑠 and enters a Dazzler system, an acousto-optic modulator that manipulates its spectral phase. This setup is important to prevent the spectral shortening due to the main amplifier stage at the next step of the chain. Several 532 𝑛𝑚 Nd:YAG lasers pump the five amplification stages. The pulse is amplified at each stage: it is brought by the first stage to 1 𝑚𝐽, then to 20 𝑚𝐽 after the second. Successively, the pulse energy increases to 600 𝑚𝐽, then 3 𝐽 and finally 6 𝐽. At the end of the amplification chain, the beam is divided into three beams: P1 and P2 with the same energy, and P3, used as a probe, containing very little energy. All these processes take place on the second floor of Salle Jaune. The beam is then transported downstairs by sets of afocal lens systems, to enter the compressors. In the experimental chamber, the beams have a diameter of almost 6 𝑐𝑚 for P1 and P2 and 3 𝑐𝑚 for P3. The beams are compressed independently to 25 - 30 𝑓𝑠, FWHM.
During the experiments, as depicted in Fig. 6.5, P1 is focused thanks to a parabola, and P2 thanks to cylindrical lenses. The quality of the focal spots is corrected thanks to deformable mirrors for P1 and P2. This adaptive optics system relies on a phase-front sensor on which the surface of the deformable mirror is imaged. The HASO phase-front sensor measures the phase-front aberrations and reconstructs them by using the decomposition of the transverse phase profile on Zernike polynomials. The deformable mirror can then compensate for each component on the Zernike basis. As a result, the focal spot quality can be greatly improved. Around 60% of the beam total energy (total energy available in the experimental chamber) is contained in the focal spot.
The optics in the chain deteriorate when the beams are used at full power. As a result, the energy on target can be reduced from one experimental campaign to another. When the optics in the chain are perfectly clean, about 60 % of the energy after the final amplification stage arrives in the experimental chamber where the target is.
LWFA experiments in Salle Jaune take place either in the cubic chamber ROSA, or in the circular chamber ZITA. A typical experiment setup is depicted in Fig. 6.5, for the circular chamber ZITA. This is a schematic of the 2017 hybrid LWFA/PWFA experiment in the circular chamber ZITA. As can be seen in Fig. 6.5:
P1 is the main beam that is used for LWFA. It is deflected by two mirrors onto an off-axis parabola. In the experiment depicted in this part, the parabola is a 1-m focal length parabolic mirror with a diameter of 6 𝑐𝑚.
P2 is used in the hybrid experiment as a pre-ionizing beam for the second gas jet. An optical elevator brings the beam up where two cylindrical lenses focus it in the second gas jet, along a line. P2 propagates in the second gas jet moving downwards from the top of the chamber, with a 10° angle. In the first experimental campaign described in this chapter, P2 was not used.
P3 is used as a probe beam for a side-view diagnostic. In the hybrid experiment, it is used with the Wollaston cube interferometer diagnostic system. P3 probes the interaction region in the horizontal plane, and is collimated with a diameter of 3 𝑐𝑚.
Two diagnostics are constantly used in the experiment in Salle Jaune. The first one, the Nomarski side-view interferometer, allows to monitor the laser and electron beam propagation in the gas jet targets. The second one, the electron spectrometer, is used to monitor the electron energy and divergence distributions.
b. Energy spectrometer
The electron spectrometer relies on a permanent dipole, a magnet whose field deflects the electron beam in the horizontal plane, and on a Lanex screen that emits light when the electrons interact with it. The Lanex is a Kodak fine scintillating screen that measures 35 × 180 𝑚𝑚². A 16-bit Hamamatsu camera is used to collect the light emitted by the Lanex after the particles reach it. In Fig. 6.6 (a) is a schematic of the setup, and Fig. 6.6 (b) and (c) illustrate two examples of electron spectra.
For a particle with an energy 𝐸 at the exit of the gas jet, a simple code that solves the equation of motion of an electron in a B field makes it possible to calculate the position at which the particle reaches the screen. The map of the B field must be known to draw the curve 𝑠(𝐸), the position of the electrons on the screen as a function of their energy. It is then possible to replace the x-axis of an image with the result 𝑠(𝐸). To do that, a calibration of the axis origin on the camera image is required. In addition, the magnet is set up on a translation stage, which makes it possible to remove it and therefore to record the position on the screen that corresponds to undeflected particles (which would correspond, in the presence of the dipole, to infinite energy particles). The exact geometry of the setup is necessary to compute the curve 𝑠(𝐸). The resolution is calculated while supposing a constant divergence of 3 𝑚𝑟𝑎𝑑 for all energies. Depending on the choice of setup, the Salle Jaune spectrometer makes it possible to detect electrons of energy in the range 40 - 400 𝑀𝑒𝑉.
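A minimal version of such a code is sketched below for a hard-edge dipole model; the field, magnet length and drift distance are placeholders, not the measured Salle Jaune geometry (which requires the full field map):

import numpy as np

e, c = 1.602e-19, 2.998e8

def deflection(E_MeV, B=1.0, L_mag=0.10, L_drift=0.40):
    """Horizontal displacement (m) on the screen of an electron of energy
    E_MeV crossing a hard-edge dipole of field B (T) and length L_mag (m),
    then drifting L_drift (m) to the Lanex. Geometry values are placeholders."""
    p = np.sqrt((E_MeV + 0.511) ** 2 - 0.511 ** 2) * 1e6 * e / c  # kg m/s
    rho = p / (e * B)                       # bending radius
    theta = np.arcsin(L_mag / rho)          # exit angle from the dipole
    x_mag = rho * (1 - np.cos(theta))       # displacement inside the magnet
    return x_mag + L_drift * np.tan(theta)

for E in np.linspace(40, 400, 10):          # detectable range (MeV)
    print(f"{E:6.1f} MeV -> s = {1e3 * deflection(E):6.2f} mm")

Inverting this monotonic curve 𝑠(𝐸) then gives the energy axis of the spectrometer image.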
The calibration of the Lanex light emission and light collection by the camera was accomplished in the laboratory in the past [Glinec 06] for the previous system of Salle Jaune. However, the calibration of the new Hamamatsu camera is yet to be done and will probably be accomplished during the next experimental campaign. For now, the number of counts on the camera chip will be used to quantify the signal intensity, without, at the moment, an absolute calibration of the charge.
c. Side-view interferometer
During the experiments, the gas is delivered through a nozzle using a pulsed electro-valve. The density profile of the jet is determined by the shape of the nozzle, whereas the maximal density is given by the backing pressure. To measure the plasma density, we use a single-shot measurement method. This method relies on a Nomarski interferometer, based on the use of a Wollaston cube [ Small 72]. The probe beam, P3, propagates horizontally, and exits the chamber where an imaging system composed of two lenses of focal length 60 𝑐𝑚 projects it onto a CCD camera chip, as depicted in Fig. 6.7 (a). The plane of the plasma is conjugated with the plane of the CCD chip. A Wollaston prism is set up just after the second lens and separates the components of the incoming light onto its two axes. Therefore, on the chip of the camera appears a spot for each polarization component. The incoming light is projected by the first polarizer, before the Wollaston cube, to obtain the same intensity on each of the two spots. From the second lens to the CCD chip the beams are collimated: they overlap on the camera. At that point, the light of the spots cannot interfere with each other as their polarizations are orthogonal. That is why the second polarizer is set up, forming a 45° angle with each beam polarization. The beams emerging from the second polarizer can interfere.
The distance between the two focal spot centers is 𝑎 = 𝛼 · 𝑓_L2, where 𝛼 is the Wollaston cube angle (multiplied by the refractive indices difference), and 𝑓_L2 is the focal length of the second lens. The inter-fringe distance is then given by the usual formula for interference from two coherent point-like sources interfering at infinity: 𝛿 = 𝜆 𝑓_L2 / 𝑎. The fringe spacing can therefore be written as a function of the wavelength of the probe and the Wollaston angle only: 𝛿 = 𝜆/𝛼.
The choice of the lens focal lengths was made to image a window of 1 𝑐𝑚² onto the 1 𝑐𝑚² CCD chip. The Wollaston angle was chosen to resolve spatially the plasma density gradient and to ensure that at least 5 pixels are used for one inter-fringe spacing on the chip. For a probe at 𝜆 = 800 𝑛𝑚, a Wollaston with an angle of 𝛼 = 1.5°, and a 14-bit, 4240 × 2824 pixel Point Grey camera, the resolution is of ~10 𝑝𝑖𝑥𝑒𝑙𝑠/𝑓𝑟𝑖𝑛𝑔𝑒.
The gas flow is very slow compared to the passage of the 30 𝑓𝑠-long probe beam moving at the speed of light. When the probe samples the gas after the main beam (P1) created a plasma, the probe accumulates a phase shift in the area where ionization occurred. The plasma refractive index is given by the formula:
𝜂 = √1 -𝑛 𝑒 /𝑛 𝑐 (6.6)
The fringe spacing on the camera chip depends on the plasma and gas refractive index. Under the hypothesis of an axi-symmetrical plasma, an Abel inversion algorithm makes it possible to convert the axial phase shift profile into a radial plasma density profile [ Kalal 88]. A rough estimate of the density can be reached using a simpler calculation. In Fig. 6.6 (c), the plasma channel is roughly 𝑙 = 250 𝜇𝑚 wide, and the pattern is shifted at the center of the channel over about two fringes. As a result, the phase difference at the center provides the equation:
(2𝜋/𝜆)(𝜂_plasma − 1) 𝑙 = −2 × 2𝜋  (6.7)

which gives an estimate of the density of the plasma: 𝑛_e = 2.2 × 10¹⁹ 𝑐𝑚⁻³. In this calculation, we made the hypothesis that the density was constant in the plasma column. This assumption is not strictly correct; however, the peak density for a similar nozzle and a backing pressure of 11 𝑏𝑎𝑟 can reach 2.6 × 10¹⁹ 𝑐𝑚⁻³ [Guillaume 15], which is close to the estimate above.
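The rough estimate of equation (6.7) can be reproduced numerically; the only assumptions are the underdense expansion 𝜂 − 1 ≈ −𝑛_e/(2𝑛_c) and a constant density across the channel:

import numpy as np

m_e, e, eps0, c = 9.109e-31, 1.602e-19, 8.854e-12, 2.998e8
lam = 0.8e-6                                               # probe wavelength (m)
n_c = m_e * eps0 * (2 * np.pi * c / lam) ** 2 / e ** 2     # critical density

# A shift of N fringes across a channel of width l:
# (2*pi/lam) * (n_e / (2*n_c)) * l = 2*pi*N  ->  n_e = 2 * n_c * N * lam / l
N_fringes, l = 2.0, 250e-6
n_e = 2 * n_c * N_fringes * lam / l
print(f"n_e ~ {n_e * 1e-6:.2e} cm^-3")                     # ~2.2e19 cm^-3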
Hybrid LWFA-PWFA experiment and results
a. Experimental setup
The experimental setup is depicted in Fig. 6.5. P1 is focused in the first gas jet and produces an electron bunch by Laser Wakefield Acceleration. The second gas jet is aligned a few millimeters after jet one. Both nozzles are mounted on three-axis translation stages; the gas jet positions can therefore be adjusted, for instance by moving gas jet two from touching jet one to at least 20 𝑚𝑚 downstream. Between the gas jets are placed two thin steel disks, screwed to each other, which hold a thin aluminum foil. The thickness of the foil will be varied in the following sections. The disks are mounted on a two-dimension translation stage and fixed to a goniometer to adjust their orientation. The adjustment of the position of the nozzles can be controlled thanks to the side-view diagnostic. An example of an image recorded on the camera is displayed in Fig. 6.7 (b).
The first run of the hybrid experiment aims at studying the interaction of the LWFA electron beam with a gas jet. The thin foil located between the two gas jets is used to prevent laser interaction in the second gas jet. The effect of this foil on the electron beam propagation needs to be elucidated first. P2 is intended to ionize the second gas jet, before the electron beam passes in the preformed plasma. However, the study of this scheme will be pursued in future experimental campaigns.
b. Effects of the second gas jet on the electron beam
In this section, we describe the electron beam produced by the LWFA stage. We report then on the effect of the second gas jet on the spectrum of the electron beam, when no plasma was pre-formed in gas jet two.
Electron beam production and spectrum
The electron beam of the LWFA stage is produced by focusing the main laser beam (P1) into gas jet 1 to excite a nonlinear wakefield. A gas density of ~2 × 10¹⁹ 𝑐𝑚⁻³ was used, with 𝑎₀ ≈ 2.1. Ionization injection is used in this experimental campaign, with a gas mixture of 99% helium and 1% nitrogen. An example of a spectrum is given in Fig. 6.8 (a) and (b). The spectrum has features common to most ionization injection schemes: it spreads here from 220 𝑀𝑒𝑉 down to 80 𝑀𝑒𝑉 and is almost uniform over the whole range. In this section, the wheel is removed, and the second gas jet can be switched on or off. We characterize here the electron bunch created in the first jet.
Averaging over several shots, the mean maximal energy -defined as the highest energy at which some signal was recorded on the camera -was 215.60 ± 5.23 𝑀𝑒𝑉. The average peak energy was 182.04 ± 5.56 𝑀𝑒𝑉 in the dataset considered. The spectrum of Fig. 6.8 (a) belongs to the dataset used to produce these typical average values.
The calibration of the charge in the bunch, measured by the camera chip signal is yet to be accomplished. However, the charge measured in the same context, in Salle Jaune of LOA revealed bunch charge of the order of 50 -100 𝑝𝐶 [ Guillaume 15].
The divergence in the transverse plane is assumed to be identical in x and y dimensions. However, it can only be measured in y, as the spectrometer deflects the particles in the x direction. The divergence on the screen appears as displayed in Fig. 6.8 (a). For the shot displayed, the divergence spans from 3.22 𝑚𝑟𝑎𝑑 at 100 𝑀𝑒𝑉 to 2.01 𝑚𝑟𝑎𝑑 at 200 𝑀𝑒𝑉 (FWHM).
The average beam energy values quoted above are essentially unaffected by the second gas jet: without the second gas jet, the maximum and peak energies are 232.53 ± 8.19 𝑀𝑒𝑉 and 209.45 ± 18.20 𝑀𝑒𝑉. When the second gas jet is used, those values become 237.31 ± 5.55 𝑀𝑒𝑉 and 221.66 ± 6.41 𝑀𝑒𝑉. The slight increase in the maximal energy can be explained, for example, by a beam self-generated deviation due to an asymmetry in its transverse profile.
The plasma in the second gas jet is ionized by the self-field of the LWFA electron bunch created in the first gas jet. Self-ionization of a neutral gas by a bunch of particles has been studied previously in the PWFA literature. However, the schemes considered in those works are often different from the hybrid platform described in this section: first, the gas they consider is usually lithium or cesium; second, the neutral gas and particle bunch densities they consider are lower by at least an order of magnitude. Numerical results obtained considering the same scheme as ours were reported [Heinemann 17]. In this last work, numerical simulations compared the wakefield driven in the second gas jet when the gas was pre-ionized by a laser or self-ionized by the drive beam. First, this work showed that self-ionization occurred with a LWFA electron drive bunch comparable to ours. Second, the accelerating electric field in the wakefield driven by this bunch, although smaller in the self-ionized case than in the pre-ionized one, could reach 300 𝐺𝑉/𝑚 at a gas density of 𝑛₀ = 10¹⁹ 𝑐𝑚⁻³. Such a study could be accomplished experimentally in Salle Jaune during the next experimental campaign in early 2018.
Effect of the second gas jet on the bunch spectrum
In this part, we consider shots for which the wheel contained a Mylar film of 13 𝜇𝑚. The wheel with the foil could be inserted to block the laser after the first stage. Several materials and thicknesses for the film were tried in the experiments. The choice of the thickness and the study of the material effect will be given in the next section.
When the wheel is inserted, the laser beam is blocked, and the electron beam can ionize a thin column of plasma in the second gas jet on its own, as shown in Fig. 6.6 (b). In addition, the second gas jet has a focusing effect, particularly strong at low energies. This effect can be seen in Fig. 6.9 (a) and (b). A quantification of this phenomenon is provided in Fig. 6.9 (c). On that plot, the red dots depict the divergence along the spectrum for the LWFA created electron bunches, when they pass through the wheel, but without the second gas jet. By contrast, the blue dots depict the bunch divergence, after the beam passes through the wheel, when the second gas jet is used.
The self-focusing phenomenon occurring when a bunch of particles, electrons or positrons, excites a plasma wakefield in a gas is a process studied for several decades, which was called from the beginning the plasma lens. This phenomenon first attracted attention in the context of setting up a final stage at the end of a conventional accelerator, to use the self-pinching effect to increase the luminosity of the particle source [ Chen 87]. The principle of the self-focusing phenomenon is simple: when a charged particle bunch propagates in a gas, the front particles ionize the medium and excite a wakefield that is focusing for the following bunch particles. Head erosion usually occurs. A progressively stronger focusing force exists along the bunch, which reduces the global divergence of the beam. In the context of conventional accelerators, the plasma lens was studied theoretically [Chen 87] and experimentally [Nakanishi 91, Ng 01].
The plasma lens phenomenon next attracted the attention of LWFA researchers. In fact, LWFA-produced electron bunches fundamentally have an extremely low emittance, of the order of 1 𝑚𝑚.𝑚𝑟𝑎𝑑. When such bunches drift in free space, the finite energy spread of the particles they contain has the negative consequence of increasing the emittance of the beam. If a plasma based collider is created someday, several plasma stages distant from each other will have to successively accelerate the beams. It will therefore be important to collimate the bunches after each plasma cell before they propagate to the next. However, preserving the emittance implies that the lenses operate on the emerging beam over the first few millimeters, while the bunches are still transversally small. Quadrupoles used in conventional facilities are not strong enough to provide the focusing gradients needed for LWFA beams. The plasma lens technique could in principle accomplish this.
The major drawback of self-focusing is the inhomogeneity of the force along the bunch. Two schemes in the context of LWFA were suggested to exploit the advantages of the plasma lens technique while getting rid of the drawbacks. The schemes were proposed theoretically [ Lehe 14], and a first demonstration was accomplished at LOA [ Thaury 15]. Both schemes rely on a two-jet setup. However, these schemes rely on the wakefield driven by a laser pulse in the second jet, whose transverse field is used to refocus the electron beam emitted in the first jet. This is incompatible with the concept of the hybrid LWFA/PWFA project.
The focusing phenomenon reported in this section is the "conventional plasma lens" technique quoted above, that was reported in conventional facilities [ Nakanishi 91].
In the experiment described in this chapter, the laser is blocked by a Mylar foil after the first jet; only the particle beam is responsible for the focusing effect in the second stage. It is therefore a scheme comparable to the original plasma lens design. The comparison accomplished in Fig. 6.9 (c) confirms that the focusing effect is an interaction between the electron beam and the plasma: the laser is blocked by the wheel, and only the electrons and the gas interact in jet 2. The effect is the strongest at 100 𝑀𝑒𝑉, where the divergence is reduced by 32 %. At higher energy, 200 𝑀𝑒𝑉, the divergence is only reduced by 16 %. The maximal energy of the beam is slightly modified, as noted in the previous section. With the Mylar foil and without the second gas jet, the maximal energy is 𝐸_max = 232.53 ± 8.19 𝑀𝑒𝑉. When the second gas jet is added, the measured maximal energy is 𝐸_max = 237.31 ± 5.5 𝑀𝑒𝑉.
The magnetic field gradients in the transverse plane can be estimated. Starting from the simple model of an electron of the bunch whose initial energy is 100 𝑀𝑒𝑉 (𝛾 ~ 198), at the distance 𝜎_x,rms from the axis, facing a focusing and constant field, we can estimate B. The particle speed is nearly 𝑐, and is not modified during the motion. The deflection of this particle is supposed to be 𝜃₀ ~ 1 𝑚𝑟𝑎𝑑 (the reduction of the beam divergence at 100 𝑀𝑒𝑉 is ~ 2 𝑚𝑟𝑎𝑑 in Fig. 6.9 (c)). We suppose as well that the field is constant for this electron along the length 𝑙 = 3 𝑚𝑚 of the second jet. The equation of motion is:

𝛾𝑚 𝑑𝒗/𝑑𝑡 = −𝑒 𝒗 × 𝑩

The numerical application gives: 𝐵 = 0.74 𝑇. Assuming a beam with an initial divergence of 6 𝑚𝑟𝑎𝑑, that propagates 2 𝑚𝑚 between the jets, the transverse size is 12 𝜇𝑚 when the bunch enters the plasma lens. The B field is null on axis, and has an amplitude of 0.74 𝑇 at the edge of the bunch. The corresponding transverse magnetic gradient is therefore |∇𝐵| = 6.2 × 10⁴ 𝑇 𝑚⁻¹. This is two orders of magnitude higher than the transverse gradients of the highest-performance permanent magnets (~ 500 𝑇 𝑚⁻¹) [ Thaury 15].
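The gradient estimate can be reproduced from the values quoted above (a minimal sketch; it simply combines the stated edge field and beam size):

# Transverse gradient of the plasma lens, from the values quoted in the text:
# B is null on axis and reaches B_edge at the edge of the bunch.
B_edge = 0.74                   # T, estimated field at the bunch edge
divergence = 6e-3               # rad, assumed initial divergence
drift = 2e-3                    # m, propagation between the two jets
r_edge = divergence * drift     # ~12 um transverse size at the lens entrance
grad_B = B_edge / r_edge
print(f"|grad B| ~ {grad_B:.1e} T/m")   # ~6.2e4 T/m, vs ~5e2 T/m for permanent magnets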
To conclude, it is clear that an interaction happens between the electron beam and the second gas jet. However, no energy deposition is seen yet. Furthermore, another comment must be made: the wheel itself has a defocusing effect at all energies. This effect can be seen in Fig. 6.9 (d). The blue dots depict the divergence along the spectrum for the LWFA created electron bunch, with the Mylar foil. The red dots depict the bunch divergence, when the wheel is inserted. The effect is almost constant: at 100 𝑀𝑒𝑉 and at 200 𝑀𝑒𝑉, the divergence is increased by 36 %. The origin of the defocusing effect will be studied in the next section.
c. Effects of the foil on the electron beam
In this part, we study the effect of the wheel on the electron beam. We use Aluminum foils with thicknesses of tens of micrometers, or a Mylar foil with a thickness of 13 𝜇𝑚.
When a charged particle beam traverses matter, multiple scattering occurring in the material leads to an increase in the divergence. This phenomenon is characterized by the radiation length of the material. These additional angles will from now on be automatically subtracted from the measured data during image processing. The studies below aim at identifying other effects that could explain the increase of the divergence.
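A sketch of this correction is given below; it assumes that the measured and multiple-scattering angular spreads are independent and Gaussian, and therefore add in quadrature (an assumption about the processing, the actual pipeline may differ):

import numpy as np

def deconvolve_divergence(theta_measured, theta_scattering):
    """Remove a multiple-scattering contribution from a measured r.m.s.
    divergence, assuming independent Gaussian angular spreads."""
    return np.sqrt(np.maximum(theta_measured ** 2 - theta_scattering ** 2, 0.0))

# Example: 3.2 mrad measured, 1.0 mrad predicted from the foil radiation length
print(deconvolve_divergence(3.2e-3, 1.0e-3))   # ~3.0e-3 rad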
At that point, the remaining "scattering" of the electron beam due to the foil can have three origins. It could be an effect of the plasma in front of the foil, and/or it could be a volumetric plasma effect in the foil, or it could be due to the reflection of the laser pulse on the foil. Setting up the wheel at a 45° angle should test the first hypothesis. The second hypothesis, related to the volume, should be tested by a parametric study of the foil thickness and material. Following the last conjecture, the "scattering" effect could be located on the surface of the wheel, where laser-foil interaction could give birth to chaotic electromagnetic fields at the surface. Varying the distance between the foil and the first jet should then reduce the scattering: if this last hypothesis is valid, the defocusing effect should decrease rapidly as the gas jet-foil distance increases.
The hypothesis of a plasma effect was studied first, by changing the orientation of the wheel. An example of the side-view diagnostic is displayed in Fig. 6.11 (a) to illustrate the modification of the setup for this test. The plot of Fig. 6.11 (b) does not show any difference with or without the angle given to the wheel. The volumetric effect study is more likely to show an effect on the divergence of the beam. Fig. 6.12 reports on the divergence of the beam as a function of energy for different Aluminum thicknesses of the wheel, and for a Mylar foil as well. Several plots are shown, which correspond to different distances between the first gas jet and the wheel. The reference position is chosen as the position where the foil touches the downstream edge of the first nozzle. At each position, it seems that the thickness of the material correlates slightly with the scattering effect. In fact, the curves show on each plot that the defocusing grows as the Aluminum thickness increases. In addition, using a Mylar foil instead of an Aluminum one seems to increase the effect at the closest position (Fig. 6.12 (a) and (b)). The error bars in Fig. 6.12 are quite large. Data analysis reveals that more data is required to draw a firm conclusion regarding the influence of the material and foil thickness. It is nevertheless clear that the effect of the thickness is non-negligible.
As a conclusion for this second study, we can say that the effect of the volume of the foil exists, but is modest. A larger amount of data would bring clearer results regarding the volumetric effects in the foil. During the experimental campaign, the amplitude of the defocusing effect seemed to be larger; this is why the last parametric study is likely to bring positive and clear results.
The last study, related to the effect of the distance between the foil and the first jet, is reported in Fig. 6.13. Fig. 6.13 (a) and (b) show the results for an aluminum foil with different thicknesses. The reference position is chosen in the same way as in the measurements reported in Fig. 6.12: the position for which the foil touches the downstream edge of the first nozzle. As can be seen, the trend is similar for all thicknesses. Generally speaking, the divergence at 120 𝑀𝑒𝑉 and at 160 𝑀𝑒𝑉 increases very slightly with the thickness of the foil. In addition, when the distance increases, the scattering of the electron beam decreases rapidly.
When the foil is made of Mylar, the behavior is more chaotic. In Fig. 6.13 (c) are reported the measurements at 120 𝑀𝑒𝑉 and at 160 𝑀𝑒𝑉 for a Mylar foil. Two steps seem to appear on this graph: when the distance between the foil and the first jet exceeds 700 𝜇𝑚, a clear reduction of the scattering effect occurs, and the effect is almost constant for longer distances. The main conclusion of Fig. 6.13 is the following: the strong decrease of the divergence with the distance illustrates that the plasma-foil interaction plays a role in the deterioration of the beam properties. This effect is much greater than the influence of the material or of the thickness of the foil blocking the laser.
To conclude, a first characterization of the hybrid LWFA-PWFA platform was accomplished in this chapter. The results of the last section illustrate the difficulty of building a two-stage LWFA-PWFA experiment. Evidence of the interaction between the electron bunch and the second gas jet was obtained. In addition, a parametric study of the scattering effect of the wheel was realized. This led to rather complex results and seems to indicate that laser-foil interaction at the surface of the material is responsible for the effect. This conclusion opens the prospect of several interesting new experiments to explore further this laser-foil interaction. We will discuss these prospects in the general conclusion of the manuscript.
Betatron X-ray radiation in LWFA experiments

a. Radiation from charged particles
The origin of radiation by accelerated charged particles can be understood by a simple picture [Khan 08, Ferri 16]. Electromagnetic emission fundamentally comes from the finite speed at which information (electric and magnetic fields here) propagates, the speed of light 𝑐 . Considering a single particle in vacuum, at rest, an observer could feel its electric field everywhere in space. If the particle is moving at a constant speed, the observer could always switch to the particle reference frame, in which the particle is not moving.
Let us consider a particle initially at rest. If the particle starts moving, a change of its electric field should be felt in all space. However, this information needs to propagate from the particle to the observer at the speed of light. The retarded information, the sudden change in the position of the charged particle and of its electric field, propagates toward the observer. This propagating perturbation is the electromagnetic radiation emitted by the particle. This is illustrated in Fig. 7.1 (a). This simple picture shows the link between the acceleration of charged particles and the emission of radiation. In the following, the exact formulas for the radiation are introduced first, followed by the process that explains why LWFA-produced electron bunches emit radiation.
In the following, we consider the fields generated by a relativistic particle 𝑃. 𝑡′ is the time, in the laboratory frame, at which the particle emitted some radiation that propagated at speed 𝑐 and reached the observer at position 𝑀(𝒓_𝑀) at time 𝑡. 𝑡′ is referred to as the retarded time, and precedes the time 𝑡 at which the radiation is received by the observer. The two times are related by the light cone condition:

\[ c(t - t') = \left| \mathbf{r}_M - \mathbf{r}_P(t') \right| = R(t') \qquad (7.1) \]

where 𝜷 = 𝒗/𝑐 is the normalized velocity of the particle and 𝒏 is the unit vector collinear to 𝒓_𝑀 − 𝒓_𝑃(𝑡′). The wave equation, written in terms of the electromagnetic four-potential 𝑨^𝛼 = (𝜙/𝑐, 𝑨) and of the four-current 𝑱^𝛼 = (𝑐𝜌, 𝑱), reads in the Lorentz gauge (𝜕_𝜇 𝑨^𝜇 = 0):
\[ \left( \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 \right) A^\mu = \mu_0 J^\mu \qquad (7.3) \]
where 𝜙, 𝑨, 𝜌, 𝑱 are the usual scalar potential, vector potential, charge density and current density. Using the Green function of (7.3), the solution of this inhomogeneous linear differential equation for the four-current associated with a single accelerated electron, described by its four-position 𝒓_𝑃 and four-velocity 𝑢^𝛼, can be expressed as [Jackson 62]:
\[ A^\mu(\mathbf{r}) = -\frac{e}{4\pi\epsilon_0 c} \, \frac{u^\mu(t')}{u(t') \cdot \left[ \mathbf{r} - \mathbf{r}_P(t') \right]} \qquad (7.4) \]
where 𝒓 is the four-vector position at which the four-potential is evaluated (i.e. the position of the observer). This formula is called the Liénard-Wiechert potential. Using the light cone condition (7.1) and the definition of the 𝑬 and 𝑩 fields in terms of the potentials, one can reach the following expressions, where all terms are expressed as a function of 𝑡 ′ :
\[ \mathbf{E}(\mathbf{r}, t) = -\frac{e}{4\pi\epsilon_0} \left[ \frac{\mathbf{n}-\boldsymbol{\beta}}{\gamma^2 (1-\boldsymbol{\beta}\cdot\mathbf{n})^3 R^2} + \frac{1}{c} \, \frac{\mathbf{n} \times \left[ (\mathbf{n}-\boldsymbol{\beta}) \times \dot{\boldsymbol{\beta}} \right]}{(1-\boldsymbol{\beta}\cdot\mathbf{n})^3 R} \right]_{t'} \qquad (7.5) \]

\[ \mathbf{B}(\mathbf{r}, t) = \frac{1}{c} \left[ \mathbf{n} \times \mathbf{E} \right]_{t'} \qquad (7.6) \]
In (7.5) and (7.6), 𝜷 ̇= 𝑑𝜷/𝑑𝑡 is the acceleration and 𝛾 is the Lorentz factor of the particle, defined as 𝛾 = (1 -𝛽 2 ) -1/2 .
The first term in (7.5) does not depend on the acceleration of the radiating electron; it is a static term that decreases quickly with distance, ∝ 1/𝑅². This term is sometimes called "quasi-Coulombian", as it shares similarities with the expression of the electric field created by a charged particle at rest. The second term in the expression of 𝑬 is particularly interesting, as it depends on the acceleration of the radiating electron. This term represents the electromagnetic field generated by the acceleration of the particle, and it decreases only as ∝ 1/𝑅.
If the distance 𝑅 is much larger than the size of the radiation source (the spatial extent covered by the trajectories of a bunch of particles, for example) and than the wavelength of the radiation, then the second term in formula (7.5) is dominant. In addition, in that case 𝒏 can be considered as a constant, and the following approximation can be used:
\[ R(t') \approx R_0 - \mathbf{n} \cdot \mathbf{r}_P(t') \qquad (7.7) \]
It is possible, from (7.5) and (7.6), to derive the expression of the energy radiated per unit of solid angle and frequency by the particles of a LWFA electron bunch. The Poynting vector, which transports the energy of the electromagnetic field, is defined as:

\[ \mathbf{S} = \frac{1}{\mu_0} \, \mathbf{E} \times \mathbf{B} \]

The integration over time of the corresponding energy flux leads to the total energy radiated per unit of solid angle, which can be rewritten in the frequency domain using Parseval's theorem:

\[ \frac{dI}{d\Omega} = \int_{-\infty}^{+\infty} \left| \sqrt{c\epsilon_0}\, R\, \mathbf{E}(t) \right|^2 dt = \int_{-\infty}^{+\infty} \left| \sqrt{c\epsilon_0}\, R\, \mathbf{E}(\omega) \right|^2 d\omega = 2 \int_{0}^{+\infty} \left| \sqrt{c\epsilon_0}\, R\, \mathbf{E}(\omega) \right|^2 d\omega \qquad (7.11) \]

where 𝐄(ω) is the Fourier transform of the radiated field:

\[ \mathbf{E}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} \mathbf{E}(t)\, e^{i\omega t}\, dt \qquad (7.12) \]

Injecting (7.12) into (7.11) leads to the general formula for the radiated energy, per unit of solid angle and frequency:
\[ \frac{d^2 I}{d\Omega\, d\omega} = \frac{e^2}{16\pi^3 c\, \epsilon_0} \left| \int_{-\infty}^{+\infty} e^{i\omega \left( t' - \mathbf{n}\cdot\mathbf{r}_P(t')/c \right)} \, \frac{\mathbf{n} \times \left[ (\mathbf{n}-\boldsymbol{\beta}) \times \dot{\boldsymbol{\beta}} \right]}{(1-\boldsymbol{\beta}\cdot\mathbf{n})^2} \, dt' \right|^2 \qquad (7.13) \]
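To make this formula concrete, here is a minimal numerical sketch in Python that evaluates (7.13) for a particle trajectory sampled in time. The function name, the sampling and the direct quadrature are choices made for this illustration only; they are not part of the original analysis.

```python
import numpy as np

# Physical constants (SI units)
e = 1.602176634e-19      # elementary charge [C]
c = 2.99792458e8         # speed of light [m/s]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def radiated_spectrum(t, r, beta, beta_dot, n_obs, omegas):
    """Evaluate Eq. (7.13): energy radiated per unit solid angle and
    frequency, for one particle trajectory sampled at times t.
    r, beta, beta_dot have shape (len(t), 3); n_obs is a unit vector."""
    one_m_bn = 1.0 - beta @ n_obs                       # (1 - beta.n) at each time
    # Vector part: n x [(n - beta) x beta_dot] / (1 - beta.n)^2
    vec = np.cross(n_obs, np.cross(n_obs - beta, beta_dot))
    vec /= one_m_bn[:, None] ** 2
    phase_t = t - (r @ n_obs) / c                       # retarded phase t' - n.r_P/c
    prefac = e**2 / (16 * np.pi**3 * c * eps0)
    spectrum = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        integrand = vec * np.exp(1j * w * phase_t)[:, None]
        integral = np.trapz(integrand, t, axis=0)       # time integral of Eq. (7.13)
        spectrum[i] = prefac * np.sum(np.abs(integral) ** 2)
    return spectrum
```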
This formula gives the expression of the radiation emitted by an electron, in the direction of observation 𝒏, as a function of the particle position, velocity, and acceleration. This formula is only valid for an observer far from the source. Four important aspects of this expression characterize the radiation emitted by accelerating charged particles [Corde 13]:
If 𝜷 ̇= 0, then the radiated energy is null. In addition, the dominance of the term containing 𝜷 ̇ in our calculation indicates that the acceleration is responsible for the electromagnetic field far from the charged particle.
The radiated power is maximum for 𝜷 ∥ 𝒏, and for 𝛽 ≈ 1. Relativistic electrons will radiate much more energy than non-relativistic particles, by several orders of magnitude. In addition, the theory of special relativity teaches us that the radiation is emitted in the direction of propagation, in a cone of angle 𝛥𝜃~1/𝛾.
Knowing the scalings 𝛽 ̇∥ ∝ 𝐹 ∥ /𝛾 3 and 𝛽 ̇⊥ ∝ 𝐹 ⊥ /𝛾, we can conclude that transverse forces will be much more efficient than longitudinal ones to force relativistic electrons to produce radiation.
Last, the phase term leads to the frequency of the radiation a wiggling electron produces: 𝜔_{𝑒-𝑤𝑖𝑔.} = 𝜔_{𝑋-𝑟𝑎𝑦𝑠}/(2𝛾²), where 𝜔_{𝑒-𝑤𝑖𝑔.} is the electron oscillation frequency and 𝜔_{𝑋-𝑟𝑎𝑦𝑠} is the frequency of the emitted radiation. This 2𝛾² frequency upshift illustrates the interest of manipulating very relativistic particle beams.
b. Radiation in LWFA experiments
Moving now to the purpose of this theoretical presentation: X-ray radiation has many applications, for instance in medical imaging or non-destructive testing. In the previous paragraphs, we saw that forcing relativistic particles to wiggle transversally is a very efficient way to produce X-ray radiation. Based on this principle, Betatron X-ray production in LWFA is one of the most promising schemes towards the realization of a high-brilliance, high-energy X-ray source, which is why many X-ray studies are carried out in the context of LWFA.
Before introducing the natural source of transverse oscillations of LWFA electrons, we start with a presentation of an important concept regarding these transverse oscillations: the regime of oscillations, which can be either wiggler or undulator. We define 𝜓 as the maximal angle between the velocity of the particle and the main axis of propagation. The fundamental parameter used to distinguish and define the two regimes is 𝐾 = 𝜓𝛾 = 𝜓/𝛥𝜃, where 𝛾 is the Lorentz factor of the electrons. It can be shown that if the transverse oscillation has a spatial periodicity 𝜆_𝑢, the emitted radiation is also periodic, with the wavelength \( \lambda = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2} + \gamma^2\theta^2\right) \), where 𝜃 is the angle of emission. When 𝐾 ≪ 1 the regime is the undulator regime, and when 𝐾 ≫ 1 it is the wiggler regime. The specificities of the two regimes are summarized in the schematic of Fig. 7.2. The spectrum always contains the fundamental frequency 𝜔 = 2𝜋𝑐/𝜆 and, in some cases, harmonics of this frequency. In the undulator regime, only the fundamental frequency is present. In the wiggler case, harmonics are present up to a critical frequency 𝜔_𝑐.
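As an illustration of these definitions, the short sketch below evaluates 𝐾, the corresponding regime and the on-axis fundamental wavelength. The numerical values are assumed, order-of-magnitude parameters chosen for the example, not values taken from the experiments of this chapter.

```python
import numpy as np

# Illustrative parameters (assumed values)
gamma = 400.0          # electron Lorentz factor (~200 MeV electrons)
lambda_u = 30e-6       # oscillation period lambda_u [m], of the order of lambda_p
psi = 10e-3            # maximum oscillation angle [rad]

K = psi * gamma        # wiggler parameter K = psi * gamma
theta = 0.0            # on-axis observation
# Fundamental wavelength of the emitted radiation
lam = lambda_u / (2 * gamma**2) * (1 + K**2 / 2 + gamma**2 * theta**2)
E_photon_eV = 1.23984193e-6 / lam      # photon energy hc/lambda in eV (lambda in m)
regime = "wiggler" if K > 1 else "undulator"
print(f"K = {K:.1f} ({regime}), lambda = {lam*1e9:.2f} nm, "
      f"E_fund = {E_photon_eV/1e3:.2f} keV")
```

With these numbers, 𝐾 = 4 (wiggler regime) and the fundamental photon energy is about 1.5 keV; in the wiggler regime, harmonics extend this spectrum up to the critical frequency.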
In the bubble regime of LWFA, the plasma cavity can act as a wiggler. The electrons accelerated in the ion cavity also experience a restoring focusing force which makes them oscillate around the main propagation axis. The dynamics of the electrons in the cavity is slightly more complicated than the transverse oscillation described above with fixed periodicity, amplitude and particle energy: in particular, the Betatron amplitude, the frequency of oscillation and the particle energy depend on time. Similarly, the transverse positions of the particles depend only on time, and not on the longitudinal coordinate. The discussion above is still relevant in the context of Betatron radiation, provided the properties of the radiation account for the instantaneous oscillation amplitude, oscillation frequency and particle energy. The amplitude of the Betatron oscillation scales as 𝐴 ∝ 𝛾(𝑡)^{−1/4}, while the frequency of the Betatron oscillation scales as 𝜔_𝛽 ≈ 𝜔_𝑝/√(2𝛾(𝑡)) ∝ 𝛾^{−1/2}. The theoretical description is more complex for electrons that radiate while being accelerated, which is the case relevant for practical LWFA experiments like the ones performed at LOA. The model shows in particular that the number of photons emitted in an oscillation period depends very weakly on 𝛾, while the radiated energy is strongly dependent on 𝛾, as the photon frequency increases quickly with 𝛾. As a result, the energy contained in the observed radiation is dominated by the contribution from the radiation emitted by particles at the end of their acceleration, when they have the highest energy. For a LWFA-produced bunch, this usually happens when dephasing occurs.
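These scalings can be turned into rough numbers. The sketch below assumes the wiggler-regime estimate 𝐸_𝑐 ≈ (3/2)𝐾𝛾²ℏ𝜔_𝛽 with 𝐾 = 𝛾𝑘_𝛽𝑟_𝛽; the densities, the Lorentz factor and the oscillation amplitude are illustrative values, not measurements.

```python
import numpy as np

e = 1.602176634e-19; m_e = 9.1093837015e-31
eps0 = 8.8541878128e-12; c = 2.99792458e8; hbar = 1.054571817e-34

def omega_p(n_e_cm3):
    """Plasma frequency [rad/s] for an electron density given in cm^-3."""
    return np.sqrt(n_e_cm3 * 1e6 * e**2 / (eps0 * m_e))

def betatron(n_e_cm3, gamma, r_beta_um):
    """Instantaneous Betatron frequency, wiggler parameter and critical
    photon energy (wiggler-regime estimate), for illustrative parameters."""
    w_p = omega_p(n_e_cm3)
    w_beta = w_p / np.sqrt(2 * gamma)                # omega_beta = omega_p / sqrt(2 gamma)
    K = gamma * (w_beta / c) * r_beta_um * 1e-6      # K = gamma * k_beta * r_beta
    E_c_keV = 1.5 * K * gamma**2 * hbar * w_beta / e / 1e3  # E_c = (3/2) K gamma^2 hbar w_beta
    return w_beta, K, E_c_keV

# Example: moderate vs. high plasma density (assumed values)
for n_e in (1e19, 1e20):
    w_b, K, Ec = betatron(n_e, gamma=400, r_beta_um=1.0)
    print(f"n_e = {n_e:.0e} cm^-3 : K = {K:.1f}, E_c = {Ec:.0f} keV")
```

The critical energy grows linearly with the plasma density at fixed 𝛾 and 𝑟_𝛽, which anticipates the interest of wiggling high-energy electrons in a dense plasma, as discussed in the next section.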
The first demonstration of Betatron electron oscillations in a bubble ion cavity and Betatron X-ray production was accomplished in the context of PWFA. A plasma wakefield was driven with an electron bunch of 1.8 × 10 10 particles in a preformed plasma of density 1.7 × 10 14 𝑐𝑚 -3 [Wang 02]. The beam electrons oscillated in the transverse field as well and were responsible for X-ray emission. The oscillation parameter was 𝐾 = 16.8 and the peak spectral brightness was 7 × 10 18 𝑝ℎ𝑜𝑡𝑜𝑛𝑠/𝑠/𝑚𝑟𝑎𝑑 2 /𝑚𝑚 2 /0.1 % 𝐵𝑊 at 14.2 𝑘𝑒𝑉 in this experiment.
The first experiment relying on a LWFA-produced electron bunch was accomplished at LOA [Rousse 04]. The Salle Jaune 50 𝑇𝑊 laser system was used to produce electrons from a 3 𝑚𝑚 long helium gas jet providing a plasma density of 10 19 𝑐𝑚 -3 . The bunch of electrons emerging from the gas was deflected by a 1 𝑇 permanent dipole magnet, and a CCD camera placed on axis detected the Betatron X-ray light emitted by the electron bunch during its acceleration in the ion cavity. The peak spectral brightness was 2 × 10 22 𝑝ℎ𝑜𝑡𝑜𝑛𝑠/𝑠/ 𝑚𝑟𝑎𝑑 2 /𝑚𝑚 2 /0.1 % 𝐵𝑊.
As said in the general introduction of the manuscript, several applications rely on bright, spatially-coherent X-ray sources. Maximizing the production of X-ray light in LWFA experiments is one of the main efforts from the laser-plasma acceleration community. In fact, Betatron X-ray light could be a convenient and technically superior method of imaging. However, its main disadvantage comes from the relatively low total energy radiated by the source and the energy of the produced photons, which need to be increased to make this stateof-the-art technique really efficient for a broad range of application experiments. The scheme presented in the following is a promising attempt to bring the current performances of X-ray plasma sources to the next level, both in terms of brightness and spectral range.
Design and numerical characterization of a two-stage hybrid LWFA-PWFA X-ray source
A comprehensive numerical study of a new scheme was accomplished by researchers from LOA, as a part of a project that aimed at decoupling the LWFA production of an electron beam and the generation of X-ray light through Betatron oscillations. The theoretical work was published in a recent article [Ferri 17], this section introduces its main results.
a. Motivations for a decoupled scheme
As explained in the first section, electrons in a LWFA experiment emit a so-called Betatron X-ray radiation during their acceleration in the ion cavity. The radiation originates from the transverse oscillations of the charged particles, around their main axis of propagation. In the bubble regime of LWFA, the particles wiggle and emit radiation with a broadband photon energy spectrum that extends up to the critical energy 𝐸 𝑐 = ℏ𝜔 𝑐 ∝ 𝛾 2 𝑛 𝑒 𝑟 𝛽 . Due to its very short fs duration, micrometer source size and natural synchronization with the laser system, this radiation source is very promising for many applications such as high-resolution imaging or temporally-resolved absorption spectroscopy.
Many applications require very high energy X-ray photons, and the energy of Betatron photons produced in LWFA is still rather low. This photon energy limitation comes from the dephasing limit, which prevents the Lorentz factor of the LWFA-produced electrons from growing further. The dephasing length scales as L_deph ∝ n_e^{-3/2} with the plasma density; therefore, a lower gas density is more suitable for optimizing the LWFA acceleration of electrons and reaching higher electron energies. However, Betatron radiation is favored by a stronger transverse wiggling (higher transverse acceleration, higher K) and a shorter Betatron oscillation period, which both require a higher plasma density. The shorter Betatron oscillation period at high plasma density simply results from the higher plasma frequency, a parameter that scales as ω_p ∝ n_e^{1/2}. The stronger wiggling results from the higher transverse restoring force, which scales as ω_p² ∝ n_e. Electron energy gain and strong wiggling thus have opposite behaviors with respect to the plasma density of the gas in which they occur. This explains why decoupling electron acceleration and Betatron X-ray generation is very promising: both a high energy gain and a strong wiggling could be obtained in a two-stage scheme, relying on two gas jets with different plasma densities, independently optimized for electron acceleration and for Betatron production.
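A minimal sketch of these opposite scalings, normalized between two densities (taken here from the simulation parameters of [Ferri 17] quoted in the next section), reads:

```python
# Opposite density scalings between the two stages (illustrative sketch)
n1, n2 = 1.75e18, 1.1e20         # stage densities [cm^-3], from [Ferri 17]
ratio = n2 / n1
print(f"Density ratio n2/n1          : {ratio:8.1f}")
print(f"Dephasing length  ~ n^-3/2   : x {ratio**-1.5:.1e}")   # collapses at high density
print(f"Plasma frequency  ~ n^+1/2   : x {ratio**0.5:.1f}")    # faster oscillations
print(f"Focusing force    ~ n^+1     : x {ratio:.1f}")         # stronger wiggling
```

With a density ratio of about 60 between the two stages, the dephasing length drops by almost three orders of magnitude while the transverse restoring force grows by the same factor of 60, which is exactly why the two processes cannot be optimized in a single plasma.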
The theoretical design described above would rely experimentally on a two gas-jet setup. The first jet would have a low density and be the source of relativistic electrons, while the second jet, with a much higher density, would allow the production of Betatron radiation to be optimized. The scheme is depicted in Fig. 7.3. In such a scheme, Betatron radiation is produced by high-energy electrons accelerated in the first jet and wiggling in the high-density plasma of the second jet. In fact, because of the dephasing and laser depletion limits at high plasma densities, the interaction in the second jet is expected to occur in the PWFA regime, the laser being quickly depleted and the electron beam exciting the wakefield itself. The use of the PWFA regime here is very interesting, as it is much more favorable at high plasma densities, not being limited by dephasing and laser depletion. In this case, this optimized scheme for Betatron radiation can be referred to as a two-stage hybrid LWFA-PWFA X-ray source.
b. Numerical results
The article [Ferri 17] relies on a numerical study of the two-stage hybrid Betatron source described above. The simulation of the LWFA stage considered a 500 TW laser pulse, containing 15 J of energy in a 30 fs (FWHM) duration, focused to a spot size of 23 μm (FWHM) at the entrance of a plasma with an electron density of 1.75 × 10¹⁸ cm⁻³. The plasma has a linear entrance ramp of 200 μm. The laser wavelength is 𝜆 = 800 nm and 𝑎₀ = 6 at the vacuum focus. From theoretical scaling laws, the dephasing length was estimated to be 15.3 mm.
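As a cross-check, a commonly used blowout-regime estimate of the dephasing length, L_deph ≈ (2/3)(ω₀/ω_p)² R with a bubble radius R = 2√a₀/k_p, can be evaluated with the quoted parameters; it gives the same order of magnitude as the 15.3 mm mentioned above (the exact scaling law used in the article may differ).

```python
import numpy as np

e = 1.602176634e-19; m_e = 9.1093837015e-31
eps0 = 8.8541878128e-12; c = 2.99792458e8

# LWFA stage parameters quoted above
n_e = 1.75e18 * 1e6       # plasma density [m^-3]
lam0 = 800e-9             # laser wavelength [m]
a0 = 6.0

w0 = 2 * np.pi * c / lam0
w_p = np.sqrt(n_e * e**2 / (eps0 * m_e))
k_p = w_p / c
# Blowout-regime estimate: L_deph = (2/3) (w0/w_p)^2 * R, with R = 2 sqrt(a0) / k_p
L_deph = (2 / 3) * (w0 / w_p) ** 2 * 2 * np.sqrt(a0) / k_p
print(f"L_deph ~ {L_deph*1e3:.1f} mm")   # ~13 mm, same order as the quoted 15.3 mm
```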
The simulation shows that after 15 mm of propagation, an electron beam is obtained with an energy spectrum peaked at 1.8 𝐺𝑒𝑉 and with 5 𝑛𝐶 of charge above 350 𝑀𝑒𝑉. The length of the bunch is 30 𝜇𝑚 (FWHM) in the longitudinal dimension.
The simulation of the first stage was accomplished thanks to the CALDER-CIRC quasicylindrical code, using a box of 3200 × 200 𝑐𝑒𝑙𝑙𝑠, with spatial steps of 𝛥𝑧 = 0.25 𝑐/𝜔 0 , 𝛥𝑟 = 4𝑐/𝜔 0 . The time step was chosen as 𝛥𝑡 = 0.249 𝜔 0 -1 .
The electron bunch was sent after its acceleration up to the dephasing limit into the second stage. In the second stage (PWFA), the plasma density was set to 1.1 × 10 20 𝑐𝑚 -3 with a 25 𝜇𝑚 linear entrance ramp.
The simulation in the second stage was accomplished with the 3D CALDER code, using a simulation of 800 × 200 × 200 𝑐𝑒𝑙𝑙𝑠. The spatial steps were 𝛥𝑧 = 0.5 𝑐/𝜔 0 , 𝛥𝑥 = 𝛥𝑦 = 0.5 𝑐/𝜔 0 . The time step was chosen as 𝛥𝑡 = 0.288 𝜔 0 -1 .
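To give a sense of the physical scales of the two simulations, the normalized grid steps can be converted to physical lengths, using c/ω₀ = λ₀/2π for λ₀ = 800 nm:

```python
import numpy as np

lam0 = 800e-9
c_over_w0 = lam0 / (2 * np.pi)     # normalization length c/omega_0 [m]
print(f"c/w0 = {c_over_w0*1e9:.1f} nm")
# First (quasi-cylindrical) stage grid: dz = 0.25 c/w0, dr = 4 c/w0
print(f"Stage 1: dz = {0.25*c_over_w0*1e9:.1f} nm, dr = {4*c_over_w0*1e9:.0f} nm")
# Second (3D) stage grid: dz = dx = dy = 0.5 c/w0
print(f"Stage 2: dz = dx = dy = {0.5*c_over_w0*1e9:.1f} nm")
```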
The simulation showed a maximum radiated power of 50 GW in the second stage, and a produced photon beam containing a total energy of 140 mJ. The critical energy of the photon beam was 𝐸_𝑐 = 9 MeV, and the photon energy spectrum peaked at 1 MeV. In comparison, a simulation was run with the usual single-stage LWFA setup, with a plasma density of 10¹⁹ cm⁻³ and with the same laser parameters as the two-stage hybrid simulation. In this case, both electron acceleration and Betatron radiation occur in the same LWFA stage and are therefore not decoupled. This reference simulation led to a photon energy spectrum peaked at 30 keV and to a photon beam containing in total 7.5 mJ of energy. The comparison with the two-stage result is outstanding. The choice of laser parameters, although realistic, fully exploits the potential of the two-stage concept, in particular because it allows widely different plasma densities to be used in the two stages of the hybrid LWFA-PWFA scheme.
In the two-stage scheme, the laser to X-ray and gamma-ray beam conversion efficiency was as high as 0.9%. The divergence of the X-ray beam was 14 × 15 mrad² (FWHM), and under the hypothesis that the source (the e⁻ bunch) was 2 μm wide, the brilliance of the source was B = 4.4 × 10²³ photons/s/mm²/mrad²/0.1% BW at 1 MeV.
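These quoted numbers are self-consistent, as the following back-of-the-envelope check shows; the photon number is only an order of magnitude, obtained by assuming all photons sit at the 1 MeV spectral peak:

```python
# Cross-check of the quoted conversion efficiency (illustrative sketch)
E_laser = 15.0          # laser energy [J], quoted above
E_xray = 0.14           # radiated photon beam energy [J], quoted above
eff = E_xray / E_laser
N_photons = E_xray / (1e6 * 1.602176634e-19)   # assuming ~1 MeV per photon
print(f"Conversion efficiency: {eff*100:.2f} %")        # ~0.9 %, as quoted
print(f"Photon number (order of magnitude): {N_photons:.1e}")
```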
These figures are very promising, and they suggest that it is possible to reach even higher brilliance and to produce photon beams with higher photon energies from the Betatron X-ray source. The laser considered here is a 0.5 PW laser pulse, which exceeds what is available with the laser system at LOA; however, many facilities with this class of lasers are either already operating or will open soon around the world.
Experiment in Salle Jaune at LOA
An attempt was made at LOA to accomplish the experimental decoupling of the LWFA production of electrons and of the Betatron X-ray emission. The setup was the same as the one presented in Fig. 6.7, only the foil between the two jets was removed.
The experiment relied on the Salle Jaune laser system at LOA, which delivers laser pulses with 50 TW peak power, 30 fs (FWHM) pulse duration and 1.5 J of energy. The backing pressure of each gas jet could be set manually by adjusting the pressure in the supply gas pipe. 𝑃₁ is the backing pressure of the first jet and 𝑃₂ is the backing pressure of the second jet. Ionization injection was used to inject electrons in the first jet, to increase the reproducibility and shot-to-shot stability of the LWFA accelerator stage, and for its simplicity. The first jet therefore used a gas mixture of 99% helium and 1% nitrogen. The second jet, where Betatron radiation is expected to be produced by the electrons accelerated in the first jet, used pure helium.
The X-ray emission was detected by a Quad-RO Princeton Instruments X-ray CCD camera with indirect detection. The camera indirectly detects X-ray photons with energies ranging from a few keV to a few hundred keV, using a Gd2O2S:Tb scintillator screen and a 1:1 fiber optic coupler in front of a low-noise visible CCD camera with a quantum efficiency of 70% at 550 nm. Each X-ray photon incident on the scintillator has a probability to interact with the scintillator through photoelectric absorption, scattering or electron-positron pair production, and to deposit energy in the scintillator. A fraction of the deposited energy is converted to photons emitted at ~550 nm and detected by the visible CCD camera, giving a count value on the camera proportional to the energy deposited in the scintillator. The detector is a square of 50 × 50 mm², containing 2084 × 2084 pixels, each pixel being of dimension 24 × 24 μm². The camera was set up along the beam line axis, at the exit of the vacuum chamber. A 75 μm mylar window allowed the X-ray beam to exit the vacuum chamber. The gas jet to mylar window distance was 73 cm, and the Quad-RO camera was located 2.5 cm after the window in open air, protected by a 500 μm beryllium window.
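From this geometry, the angular acceptance of the detector can be estimated; the small sketch below uses only the distances and sizes given above:

```python
import numpy as np

# Angular acceptance of the X-ray CCD, from the quoted geometry
L = 0.73 + 0.025          # gas jet to detector distance [m]
side = 50e-3              # detector side [m]
half_angle = np.arctan(side / 2 / L) * 1e3   # half acceptance angle [mrad]
pixel = 24e-6 / L * 1e3                      # angular size of one pixel [mrad]
print(f"Half acceptance: {half_angle:.1f} mrad, one pixel: {pixel:.3f} mrad")
```

The resulting half acceptance of about 33 mrad comfortably covers Betatron beam divergences of the order of 10 to 20 mrad.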
As discussed earlier, the interest of the two-stage scheme relies on the independent control of the plasma density of each stage, which can be varied experimentally through the jet backing pressures 𝑃 1 and 𝑃 2 . Therefore, the experiment had to rely on the optimization of electron acceleration and Betatron radiation using 𝑃 1 and 𝑃 2 . All the other parameters (laser focus, compression, third order spectral phase, laser and jet positioning…) would also need to be optimized to obtain the best electron beam parameters possible.
The main result of this work is presented in Fig. 7.4. The laser parameters and the first gas jet backing pressure had been optimized to reach the best electron beam parameters (highest electron energy and highest total charge). The optimized backing pressure for jet 1 was 𝑃₁ = 17 bars. The X-ray radiation due to Betatron oscillations occurring in the first gas jet alone is drawn as a dashed line, with the standard error displayed as a grey box. The level of X-ray signal with the first jet only is 99 ± 6 counts on the X-ray CCD camera. The optimized electron beam typically contains of the order of 100 pC of charge, with a maximum electron energy in the 150 to 200 MeV range. Keeping 𝑃₁ = 17 bars and using the second gas jet, Fig. 7.4 shows that the peak X-ray signal can be strongly increased and reaches a maximum when the backing pressure in the second jet 𝑃₂ is 40 bars. Starting from 99 ± 6 counts generated in the first jet alone, the X-ray signal grows to 1390 ± 94 counts when the pressure in the second jet is optimized at 𝑃₂ = 40 bars, which corresponds to an impressive increase by a factor of 14. An example of a shot for the highest signal (for which parameters were 𝑃₁ = 17 bars and 𝑃₂ = 40 bars) and for the reference signal (for which parameters were 𝑃₁ = 17 bars and 𝑃₂ = 0 bars) is shown in Fig. 7.5. The triangular shapes on the figure are aluminum filters set up in front of the screen, initially intended to be used to evaluate the spectrum of the X-ray beam.
However, this result was obtained by optimizing the laser and jet 1 parameters for the best electron beam parameters. The same procedure can be repeated while optimizing the X-ray signal instead: both 𝑃₁ and 𝑃₂ are then tuned to obtain the highest X-ray signal from the two-stage scheme. The best X-ray signal, 3880 ± 307 counts, is obtained for 𝑃₁ = 17 bars and 𝑃₂ = 41.5 bars. The scan of 𝑃₂ for 𝑃₁ = 17 bars is displayed in Fig. 7.6 (a), the red data points corresponding to the two-stage results and the black dashed line to the single-stage (jet 1 only, jet 2 switched off) result with 𝑃₁ = 17 bars (X-ray signal of 1020 ± 45 counts). This value differs from the single-stage result of Fig. 7.4, mainly because of a different setting of the laser focus when optimizing the Betatron X-ray radiation directly instead of the electron beam parameters.
Finally, to compare the two-stage scheme to the reference single-stage scheme, the single-stage scheme should be optimized for the highest Betatron X-ray radiation, even if this configuration is not favorable for the two-stage scheme. The result is reported in Fig. 7.6 (b), for different values of 𝑃₁, showing an optimum at 𝑃₁ = 25 bars. This optimized pressure for the single-stage Betatron source is higher than the setting 𝑃₁ = 17 bars used for optimizing the two-stage Betatron source, which is expected: in that case X-rays come from electrons oscillating in the wakefield driven by the laser in jet 1, and a higher pressure in the first jet implies a stronger wiggling of electrons, and therefore a higher X-ray signal. This comes at the expense of the electron beam quality, which can be strongly degraded. The black dashed line in Fig. 7.6 (b) shows the X-ray signal from the optimized two-stage source, which is about two times higher than the X-ray signal from the optimized single-stage source.
In light of this experimental study, several conclusions can be drawn. The results presented in Fig. 7.4 clearly demonstrate the ability to decouple electron acceleration and Betatron radiation production. Indeed, for 𝑃₁ = 17 bars and 𝑃₂ = 40 bars, electron acceleration is well optimized in the first stage (jet 1) and 93% of the Betatron X-ray radiation originates from the second stage (jet 2). The two processes, acceleration and radiation production, are well decoupled, as intended. However, the increase by a factor of 14 in Fig. 7.4 cannot be considered as a source improvement compared to the usual single-stage source, where acceleration and radiation production are coupled. The experimental data of Fig. 7.6 (b) shows instead that, when comparing the optimized two-stage source to the optimized single-stage source, the X-ray signal is increased by a factor of 2. To conclude, the decoupling of electron acceleration and Betatron X-ray radiation production was experimentally demonstrated, with attractive results. The X-ray source signal was increased by a factor of 2 using the two-stage scheme, which is already a promising result that could be further improved by a better coupling between the two stages, improved laser parameters and possibly higher electron charges, a possibility currently being investigated in Salle Jaune by correcting laser chromatic aberrations. The numerical work of section 2 indeed relied on better laser parameters than the ones available at LOA, for which the two-stage scheme is the most interesting, as the difference in plasma density between the two stages is very large (two orders of magnitude). The real potential of the hybrid two-stage scheme has not been fully demonstrated experimentally yet, and certainly requires experiments at a PW laser facility, where the expected gain is much higher.
Conclusion

Summary of the results
This section is a summary of the main results of my thesis.
I reported the first demonstration of the acceleration of a distinct positron bunch in a Plasma Wakefield Accelerator. Such a demonstration was challenging, as the phase range that is both focusing and accelerating for positrons in a positron-driven wakefield is short. Even though the experiment was difficult to accomplish, a clear proof of acceleration was reported, along with a proof of longitudinal beam-loading that provided an insight into the energy transfer from the wave to the trailing bunch. In this experiment, the trailing bunch, initially containing 350 pC at 20.05 GeV, extracted energy from the wakefield, leading to a typical accelerated bunch containing 85 pC with a peak energy of 21.50 GeV. An energy gain of 1.45 GeV was obtained, which represents an accelerating gradient of 1.12 GeV m⁻¹.
Two regimes of acceleration are of interest for the PWFA community, the nonlinear "bubble regime" and the linear or quasilinear regime. The former is not favorable for positron acceleration because, in this regime, the motion of plasma electrons in a positron-driven wave is not symmetric to their motion in an electron-driven wave. In addition, producing positron beam drivers is not energetically favorable. This is why PWFA positron acceleration would highly benefit from a regime in which a positron bunch extracts the energy of a laser-driven or electron-driven wakefield. The use of the universal quasilinear regime, which relies on a similar wakefield for laser, electron and positron drivers, was considered but never demonstrated for positrons. In chapter 5, I brought clear evidence of the realization of such a regime. By increasing the emittance of the drive bunch, we overcame the tendency of positron drivers to evolve toward a nonlinear regime. A good agreement between numerical simulations and experimental results sustained the evidence that acceleration of trailing positrons was occurring in a quasilinear wakefield.
In chapter 6, I reported the results of the first experimental attempt to explore a compact hybrid approach. The corresponding M-PAC project aims at creating a LWFA-PWFA two-stage experiment. Such a scheme relies on the production of LWFA electron bunches in a first gas jet, which are then used to drive wakefields in the gas jet of the second stage. The first experimental campaign of the project provided the proof that the LWFA electron bunches can interact with the second-stage gas jet. In fact, a clear self-focusing effect was seen in the second stage, with a reduction of the divergence of the electron beam spanning from 32% at 100 MeV to 16% at 200 MeV. This result provides an experimental demonstration of a beam-driven wakefield in the second stage. In addition, the two stages were separated by a thin foil that blocked the laser driving the wakefield of the first stage. A side effect of this thin foil is the defocusing of the electron bunch. I contributed to the study of this effect by conducting experimental tests that allowed us to exclude some physical interpretations, such as the reflected laser ponderomotive force or instabilities within the bulk of the foil, and to propose a credible origin for this effect, related to the existence of strong electromagnetic fields at the surface of the material affecting the electron beam.
Last, in chapter 7, I reported the experimental confirmation of the theoretical work of Julien Ferri. His theoretical and numerical proposal suggested a scheme that exploits the advantages of a two-stage hybrid LWFA-PWFA setup to produce bright, high-energy Betatron X-rays from an LWFA beam. In fact, a two-stage setup allows the acceleration of LWFA electrons, favorable in plasmas with an electron density of 10¹⁸ cm⁻³ or lower, to be decoupled from the Betatron oscillations of the electrons, favorable in plasmas with an electron density of 10²⁰ cm⁻³ or higher. I reported the results of the experiment accomplished at LOA to illustrate this concept, with the 50 TW, 30 fs laser pulse of Salle Jaune producing LWFA bunches with broadband electron energy spectra and a maximum energy of 200 MeV. The laser parameters available at LOA allowed us to clearly decouple electron acceleration and Betatron X-ray production, with an enhancement of the X-ray emission by a factor of 2 compared to the optimized single-stage Betatron X-ray source.
Future prospects
These results have opened many promising prospects: they brought a confirmation of the scientific advances the plasma accelerator community is making towards an all-plasma collider. They also suggest several new studies within the project of building the hybrid LWFA-PWFA platform.
Acceleration of a distinct positron bunch: towards a plasma-based electron-positron collider. For the prospects of applying plasma-based accelerator technology to high-energy physics, it is crucial that not only electrons are successfully accelerated to high energies in plasmas, but also their antimatter counterpart, the positron. In the context of plasma-based positron acceleration, the experimental work of [Corde 15] demonstrated the possibility to accelerate positrons in a nonlinear plasma wakefield at high field and with high energy transfer efficiency from the wake to the accelerated positrons. The work relied on a single positron bunch sent into the plasma, and has opened the prospect for an "afterburner" or "energy booster" to an existing conventional linear electron-positron collider. However, performing drive-trailing two-bunch experiments in which a trailing positron bunch extracts the energy deposited in the plasma by a drive bunch is of paramount importance if one wants to accelerate from the start a distinct bunch of positrons in multiple plasma accelerator modules, thereby opening the prospect for an all-plasma collider. The demonstration of the acceleration of a distinct positron bunch in plasma-based particle accelerators, reported in this thesis, is now accomplished.
Because the use of positron bunches to drive plasma waves is not energetically favorable, our demonstration should be extended to electron and laser drivers. In the quasilinear regime, this should be straightforward as the plasma response is nearly symmetrical in this case. After the shutdown of the FACET facility, the advanced accelerator test facility is being upgraded to FACET-II to deliver much better beam parameters and new capabilities, and the possibility to deliver pairs of electron drive bunch and positron trailing bunch with longitudinal separation of less than 100 𝜇𝑚 is being considered. This would allow for the experimental study of positron bunch acceleration in electron-driven plasma wakes.
Proposing a plasma-based collider scheme whose final beam parameters match all the requirements of a high-energy particle collider remains an extreme challenge in 2017. Two ordeals will have to be overcome: the issues of emittance and charge conservation along acceleration, for both positron and electron beams, while keeping a high wall-plug energy efficiency. Preserving ultralow emittance is a particularly challenging task for positrons. The nonlinear regime allows for high efficiency, but the transverse force experienced by the positrons is not linear with the radial coordinate, which leads to emittance growth unless the positron bunch has a radial equilibrium distribution. Such a transverse equilibrium distribution, accounting for the self-consistent wakefields (that is, beam loading), is slice-dependent, non-separable and highly nontrivial. While the quasilinear regime is promising for positrons, the charge of the accelerated ultralow emittance positron bunch is limited, to prevent the collapse of the positron bunch leading to nonlinear beam loading. This sets strong limits on the energy efficiency that can be achieved. Another alternative for positron acceleration is the hollow plasma channel (a tube of plasma), which suppresses the transverse focusing force inside the channel, thereby opening the prospect for preservation of ultralow emittance. But hollow plasma channels suffer from very strong transverse instabilities, and solutions for their mitigation have to be found. Achieving simultaneously the preservation of ultralow emittance and a high energy efficiency in plasma-based positron acceleration will certainly be the most challenging milestone of the next decade.
A hybrid LWFA-PWFA platform: PWFA studies in small facilities. The cost and footprint of conventional accelerator facilities are a major barrier to particle beam applications and plasma acceleration experiments. The realization of a hybrid LWFA-PWFA two-stage experiment would offer the possibility to run plasma wakefield acceleration of electrons in a small-size facility such as Salle Jaune at Laboratoire d'Optique Appliquée. Other groups in the world are also considering the development of such a two-stage hybrid LWFA-PWFA setup [Chou 16].
In our setup, a thin foil between the two gas stages blocks the laser driving the wakefield in the first stage, ensuring that the electrons alone interact in the second stage. In comparison to other experimental designs that use a large distance between the two stages (typically 5 to 10 mm) to ensure the wakefield in the second stage is driven by the electron beam (PWFA) and not by the laser pulse (LWFA), using the thin foil offers the prospect of a much more compact setup, with sub-mm distance between the LWFA and PWFA stages, and hence a much better coupling of the electron beam into the second stage. But the potentially detrimental effect of the thin foil, observed in experiments to be much larger than the multiple-angle scattering prediction, is a very important effect that needs to be studied and understood before considering the physics of beam-plasma interaction in the second stage. While the experimental tests we have conducted allowed us to exclude some physical interpretations and to put experimental constraints on physical models, a theoretical and numerical study is now necessary and is underway in our group at LOA. From my analysis of chapter 6, the effect of strong electromagnetic fields at the surface of the foil on the electron beam is a potential candidate for the development of a physical model, and will be investigated in further detail in the near future.
Last, the beam-plasma interaction occurring in the second jet must be improved, with the prospect of observing energy loss or gain of the particles in the electron beam. The effect of pre-ionizing the plasma in the second stage will be considered, and can potentially lead to substantial improvements in the beam-plasma interaction, as the ionization of the plasma by the electron beam itself is known to have important limitations, in particular related to beam head erosion. The use of quasi-monoenergetic shock-injected electron bunches could also bring new information. The optimization of the hybrid LWFA-PWFA platform could also strongly benefit from in-situ optical visualization of the beam-driven wakefield and of its magnetic fields, using optical shadowgraphy snapshots with a 5 fs probe laser pulse and using Faraday rotation measurements of the magnetic field.
Plasma-based X-ray sources. Among the applications of plasma-based particle beam acceleration, X-ray source production focuses the attention from multiple communities. In fact, X-ray radiation is a fundamental tool for bioimaging, material science and atomic physics. The unique properties of plasma based X-ray sources make them unrivaled choices for use in high resolution X-ray phase contrast imaging. The scheme suggested and studied theoretically by Julien Ferri [Ferri 17] showed very promising prospects for plasma-based Betatron X-ray sources. Its experimental investigation with the parameters of the laser system of Salle Jaune at LOA brought a first proof of principle for this scheme. Accomplishing a new experimental campaign could however bring more impressive results. The very promising laser to X-ray energy transfer efficiency of the order of 1 % in Julien Ferri's work should be demonstrated experimentally, by using a laser power closer to the numerical choice of the article, 0.5 𝑃𝑊, an order of magnitude higher than the laser power in Salle Jaune. Doing so, one could exploit the full potential of this two-stage hybrid scheme for the first time.
In the context of plasma-based acceleration research, my thesis contributed to the scientific journey towards innovative plasma technologies for particle accelerators and X-ray sources.
Even though many challenges are still to be overcome before producing particle beams with parameters matching collider requirements, the accomplishments of the recent years invite us to remain optimistic regarding the promises plasma-based acceleration has brought. Some researchers even suggest that a plasma-based collider could be built within decades [Esarey 16]. On a shorter time scale, the Betatron X-ray source or the Compton source [Ta Phuoc 12] already offer unique opportunities for bioimaging, atomic physicists and material scientists to accomplish convenient and low-cost experimental campaigns. The properties of the Betatron source already make it suitable for commercial development. An optimized two-stage scheme, reported in my thesis, could bring the available performances of such a system to the next level.
[Geraci 00] Geraci, A. A., Whittum, D. H., Transverse dynamics of a relativistic electron beam in an underdense plasma channel, Physics of Plasmas, 7, 8, 2000.

[Gibney 15] Gibney, E., CERN's next director-general on the LHC and her hopes for international particle physics, News: Q&A, Nature, 2015.
Extended summary (Résumé)
This thesis is mainly devoted to particle acceleration in wakefields driven by a particle beam. It is an experimental research work, during which three experiments took place. Each experiment had a specific goal, but all of them belong to the framework of particle acceleration in plasma wakefields. Part of this work belongs in particular to the broad project of one day building a particle collider based on plasma acceleration technology, and of exploiting the specific features of this technology for the scientific applications of such a machine.
Wakefields driven by a particle beam
Particle accelerators are scientific inventions with many applications, recalled at the beginning of the manuscript. A few examples are given, such as cancer treatment, medical imaging, non-destructive testing or, more fundamentally, the study of the constituents of matter through the collision of very high energy particles. This last application requires particles of ever higher energy to probe ever smaller constituents of matter. The cost and size of the latest colliders suggest that a new technology is needed to allow fundamental research to pursue its development. Plasma wakefield acceleration would make it possible to increase particle acceleration gradients. Indeed, when this technology is used, the accelerating fields are sustained by plasmas, which are not subject to the electric breakdown limit.
Applications of the particle beams produced by laser wakefields
The acceleration of electrons in a wakefield is accompanied by the emission of X-rays. In particular, laser wakefields make it possible to produce, in laboratories such as Laboratoire d'Optique Appliquée, X-ray sources with an interesting potential for medical or industrial applications.
The experiment conducted in 2016 and presented in the last part of the manuscript proposed to use a hybrid scheme to optimize the emission of radiation. The experimental setup relied again on two stages, composed of two gas jets. The first jet provided gas at a moderate pressure, which favors the production of high-energy electrons in a laser wakefield accelerator. The second jet had a higher pressure, which made it possible to obtain a more intense X-ray emission, composed of higher-energy photons.
Decoupling particle acceleration and X-ray production is therefore a promising scheme, which was demonstrated during the 2016 experimental campaign. Moreover, the system relying on two gas jets led to a doubling of the overall amount of X-rays emitted.
Conclusions and prospects of this work
The main experiment of my thesis work, the acceleration of a distinct positron bunch in a beam-driven wakefield, opens several prospects for the scientific community. It would now be interesting to accelerate a positron bunch by driving a plasma wave with a laser or with an electron bunch. It is also important to work from now on towards preserving the beam quality during acceleration, and towards increasing the accelerated charge. These challenges are the next steps before approaching a beam comparable to that of a conventional accelerator.
The hybrid LWFA-PWFA experiment showed that the details of the physical phenomena currently limiting the performance of the second beam-plasma interaction stage must be understood. Numerical simulation work is certainly necessary. Moreover, other conditions may be of interest: for instance, pre-ionizing the second stage, or using mono-energetic beams to obtain a clearer interaction, could bring new results to this experimental study.
Finally, the scheme decoupling electron acceleration and X-ray production in a two-stage system also leaves room for new experimental campaigns. It would be useful to the scientific community to study and demonstrate the high energy-transfer efficiency predicted in the theoretical article related to this scheme. It would also be interesting to perform this experiment with a laser system as powerful as the one proposed by the theoreticians, in order to obtain a more intense radiation emission and higher-energy X-rays, and thus to exploit the decoupling scheme in depth.

Summary: Plasma wakefield accelerators (PWFA) and laser wakefield accelerators (LWFA) are new particle accelerator technologies that are particularly promising, as they can provide accelerating fields of hundreds of gigaelectronvolts per meter while conventional facilities are limited to about a hundred megaelectronvolts per meter. In the Plasma Wakefield Acceleration scheme (PWFA) and the Laser Wakefield Acceleration scheme (LWFA), a bunch of particles or a laser pulse propagates in a plasma, creating an accelerating structure in its wake: an electron density wake associated with electromagnetic fields in the plasma. The main achievement of this thesis is the very first demonstration and experimental study, in 2016, of the Plasma Wakefield Acceleration of a distinct positron bunch. In the scheme considered in the experiment, a lithium plasma was created in an oven, and a plasma density wave was excited inside it by a first bunch of positrons (the drive bunch) while the energy deposited in the plasma was extracted by a second bunch (the trailing bunch). An accelerating field of 1.12 GeV/m was reached during the experiment, for a typical accelerated charge of 85 pC. The present manuscript also reports the feasibility of several regimes of acceleration, which opens promising prospects for plasma wakefield accelerator staging and future colliders. Furthermore, this thesis reports the progress made regarding a new scheme: the use of a LWFA-produced electron beam to drive plasma waves in a gas jet. In this second experimental study, an electron beam created by laser-plasma interaction is refocused by particle bunch-plasma interaction in a second gas jet. A study of the physical phenomena associated with this hybrid LWFA-PWFA platform is reported. Last, the hybrid LWFA-PWFA scheme is also promising for enhancing the X-ray emission by the LWFA electron beam produced in the first stage of the platform. The last chapter of this thesis reports the first experimental realization of this last scheme, and its promising results are discussed.
Figure 0.2: (a) A typical laser wakefield experiment, a parabolic mirror focuses a laser beam into a gas jet. The emerging electron beam is characterized by a spectrometer. (b) A plasma wakefield experiment, in which two particle bunches are sent into a laser pre-ionized plasma.A spectrometer displays the energy of the particle bunch emerging from the plasma. This experiment can be carried out with electron or positron bunches.
Contents

1. Particle accelerators: technology and applications
   a. A century long history
   b. Particle beams and applications
2. Laser physics concepts and formalism
   a. Laser fields and Gaussian pulses
   b. Relativistic regime
   c. Maxwell equations
   d. Chirped pulse amplification
3. Beam physics concepts and formalism
   a. Emittance
   b. Transfer matrices and beam transport
   c. Twiss parameters and beam envelope equation
   d. Evolution of the trace-space ellipse in free space
   e. Periodic focusing systems
   f. Sources of emittance growth
Figure 1.1: (a) CERN 1956 synchrocyclotron producing bunches of protons with an energy of 600 MeV. (b) The Large Hadron Collider facility nowadays, which can provide 7 TeV particles. As displayed in the figure, the main ring has a diameter of 8.6 km. From CERN.
[Coutard 37]. X-ray therapies are now quite common to treat most kinds of cancer, even if the shortcomings of this technology fueled scientific research on other kinds of radiation: particle beams. Particle therapy started during World War II and is still widely used nowadays [Thwaites 06]. Electron therapy has been considered [Klein 08], but proton therapy seems more promising [Levin 05]. Facilities such as the synchrocyclotron of Orsay, France (Fig. 1.3 (a) and (b)) perform proton therapy.
Figure 1.3: (a) Gantry of the proton therapy center in Orsay, France. The patient lies on the bed (center); the ionizing radiations flow from the mobile green and white device (left). (b) Moving structure of the gantry. The chamber of (a) is inside the circular shape on the top left. The grey device at the bottom of the picture is one of the brakes of the moving structure. Its total height is approximately 4 m. Courtesy of E. Bayard.
[Malka 02]. Although accelerator physicists still face many challenges, a plasma-based facility design has already been considered [Adli 13].
Figure 1.4: Examples of applications of particle beams, some of which can already be accomplished with plasma-based particle accelerators. (a) Simulation of an electron beam dose deposition for cancer treatment. Research is being done on the use of the different kinds of particles; here the electron beam also burns the body (blue and green areas) around the tumor (red area). (b) Gamma-ray internal imaging of a metallic sample accomplished at LOA. From [Ben-Ismail 11]. (c) Imaging of a bee body using X-rays from a plasma-based accelerator particle beam, accomplished at the ALLS facility of the INRS-EMT laboratory. From [Fourmaux 11].
cτ₀ is the laser pulse length in vacuum, measured as the Full Width at Half Maximum of the beam in the propagation direction z. R(z) is the curvature radius; this is an additional quadratic term that takes into account the curvature of the phase front at distance z from the focal spot. w(z) is the transverse size of the laser pulse. The graph (z, w(z)) is plotted in Fig. 1.5, where the asymptotic evolution of the waist dimension appears clearly.
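As a minimal numerical sketch, the standard Gaussian-beam relations w(z) = w₀√(1 + (z/z_R)²) and R(z) = z(1 + (z_R/z)²), with the Rayleigh length z_R = πw₀²/λ, can be evaluated directly; the wavelength and waist below are assumed, illustrative values:

```python
import numpy as np

lam = 800e-9      # laser wavelength [m] (assumed value)
w0 = 20e-6        # waist at focus [m] (assumed value)
z_R = np.pi * w0**2 / lam               # Rayleigh length

def w(z):
    """Transverse beam size w(z) of a Gaussian beam."""
    return w0 * np.sqrt(1 + (z / z_R) ** 2)

def R(z):
    """Curvature radius R(z) of the phase front (flat at focus)."""
    return z * (1 + (z_R / z) ** 2)

print(f"z_R = {z_R*1e3:.2f} mm, w(z_R) = {w(z_R)*1e6:.1f} um")
```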
Figure 1.5: A Gaussian beam, near its focus. The plot represents the waist dimension as a function of the position z. The meanings of the parameters w₀, θ and z_R are also visible.
[Humphries 02]. Particles flow from the cathode, are accelerated by the potential gradient between the two electrodes and emerge through holes in the anode. The beam is then spatially and spectrally shaped downstream in the facility. Positron bunches are produced by sending a high-energy electron beam on a thick tungsten alloy target. Electron/positron pairs are generated, and positrons are selected and accelerated in the facility [Humphries 02].
Figure 1.6: Schematic of the chirped pulse amplification technique.
Figure 1.7: (a) Particle distribution in trace-space. The black line is the ellipse whose equation is given above. Its area is πε. (b) Dimensions of the ellipse expressed with the Twiss parameters.
Choosing the origin of the 𝑧 axis at the beam waist leads to:
Figure 1.8: Evolution of the trace-space ellipse of a particle beam moving in real space without focusing force, when the beam crosses a focal spot. (a) Before focus. (b) At focus. (c) After focus. The area of the ellipse stays constant, and so does the extremum of x'.
Figure 1.9: Periodic focusing system. The abscissa is the position along the line, normalized to the distance between two consecutive lenses; the vertical axis represents the transverse dimension of the beam. From [Humphries 02].
Figure 2.1: Electronic perturbation in one dimension. From [Rax 05].
Figure 2.2: Ionization processes. (a) Coulomb potential of an atom; the valence electron has an ionization energy U_I. (b) Low-Field Ionization process: the energy of the incoming photon is high enough to ionize the atom. (c) Multi-Photon Ionization: the atom is ionized under the combined effect of multiple photons. When the total energy of the photons is U_I, the process is called Multi-Photon Ionization. In the specific case where the total energy is higher than U_I, the process is called Above-Threshold Ionization, case (c).
Figure 2.3: Ionization processes. (a) Coulomb potential of an atom; the valence electron has an ionization energy U_I. (b) Tunnel Ionization: when the laser field is strong enough, the potential is tilted and makes the tunneling effect (transition through the barrier depicted with a red dashed line) possible. For extremely strong fields, the potential bends enough to permit the suppression of the potential barrier.
Figure 2.4: Dispersion diagram for electromagnetic waves in a plasma. ω_p appears as the cut-off frequency. Waves of frequency lower than ω_p cannot propagate in the plasma [Rax 05].
Contents

1. Propagation of the driver in a plasma
   a. Laser pulses propagation in a plasma
   b. Electron beams propagation in a plasma
2. Solution of plasma wave excitation in the linear regime
   a. Plasma wave excitation
   b. Beam driven plasma electron density waves
   c. Laser driven plasma electron density waves
3. One-dimensional solution of plasma wave excitation in the nonlinear regime
4. Nonlinear "Blow-out" regime
   a. The bubble regime
   b. Wavebreaking limit
is responsible for relativistic optical guiding [Sprangle 87], and the term δn/n₀ is due to the plasma density perturbation induced by the laser pulse in the plasma [Sun 87].
One gets for the group velocity: \( v_g = \frac{d\omega}{dk} = c\sqrt{1 - \frac{\omega_p^2}{\omega^2}} \). Corrections have to be made to this formula in the case of very strong laser drivers [Decker 94].
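For illustration, the sketch below evaluates this group velocity and the associated Lorentz factor γ_g ≈ ω/ω_p for two assumed plasma densities (chosen as typical LWFA values, not taken from a specific experiment):

```python
import numpy as np

e = 1.602176634e-19; m_e = 9.1093837015e-31
eps0 = 8.8541878128e-12; c = 2.99792458e8

lam0 = 800e-9                                     # assumed laser wavelength [m]
w0 = 2 * np.pi * c / lam0
for n_e in (1e18, 1e19):                          # densities in cm^-3
    w_p = np.sqrt(n_e * 1e6 * e**2 / (eps0 * m_e))
    v_g = c * np.sqrt(1 - (w_p / w0) ** 2)        # linear group velocity
    gamma_g = 1 / np.sqrt(1 - (v_g / c) ** 2)     # Lorentz factor of the wake, ~ w0/w_p
    print(f"n_e = {n_e:.0e} cm^-3: 1 - v_g/c = {1 - v_g/c:.2e}, gamma_g = {gamma_g:.0f}")
```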
Figure 3.1: (a) Laser spot size as a function of the normalized propagation distance in the plasma. (i) Vacuum diffraction. (ii) L = λ_p/4. (iii) L = λ_p/4, P = P_c, a₀ = 0.9 and λ_p = 0.03 cm. (iv) Guiding in a preformed plasma channel. From [Sprangle 92, Esarey 09]. (b) Betatron oscillations of the drive beam of an electron driven PWFA experiment. Transverse beam dimensions as a function of the phase advance: (i) beam spot size in the horizontal plane (σ_x), (ii) beam spot size in the vertical plane (σ_y). From [Clayton 02].
possible to normalize and simplify an equation.
It can be shown that if \( \mathbf{v} \times \boldsymbol{\nabla} \times (\mathbf{u} - \mathbf{a}) \) is initially null, it stays null as the system evolves [Gorbunov 97]. This remark and the identity \( c\boldsymbol{\nabla}\gamma = (\mathbf{v} \cdot \boldsymbol{\nabla}_r)\mathbf{u} + \mathbf{v} \times \boldsymbol{\nabla} \times \mathbf{u} \) were used to simplify the second line.
Fig 3.2 (a). Each bucket has a size of order λ_p (for instance n₀ = 10¹⁸ cm⁻³ corresponds to λ_p ≈ 30 μm). The wave displayed on picture (a) is magnified compared to image (b), and picture (a) comes from a numerical simulation. LWFA experiments such as the ones carried out at LOA take place as depicted in Fig 3.2 (b): an intense, femtosecond laser pulse is focused into a gas jet.
The Maxwell-Ampère equation written for the vector potential leads to the following two equations. The sum of (3.13) and (3.14), along with the relation 𝑬 = −∇V − ∂𝑨/∂t, leads to equation (3.15).
Figure 3.2: (a) A plasma electron density wave driven by a laser pulse (orange). A bunch of accelerated electrons extracts energy from the accelerating cavity (red). PIC simulation, from [Réchatin 10]. (b) Details of the LWFA setup of Fig. 0.2 (a). The plasma density wave is driven in the gas jet. Each ion cavity has a typical dimension of λ_p ≈ 30 μm for n₀ = 10¹⁸ cm⁻³.
Since the poles are not included in this contour, the integral over the contour is null. The integral can be calculated thanks to the Residue Theorem, which writes, with the contour of Fig 3.3 (a), for any values of R and ε, assuming ξ > 0. Using I = [−R; R] \ ([−k_p − ε; −k_p + ε] ∪ [k_p − ε; k_p + ε]), this integral can be divided into the sum of four terms, whose limits are:
Figure 3.3: (a) Contour γ for the calculation of p.v. ∫ …
δn(ξ) ∝ ∫ exp(−ξ′²/(2σ_z²)) sin(k_p(ξ − ξ′)) H(ξ − ξ′) dξ′ = ∫_{−∞}^{ξ} exp(−ξ′²/(2σ_z²)) sin(k_p(ξ − ξ′)) dξ′   (3.46)

Formula (3.46) is the plasma electron density, displayed in Fig. 3.4 (b). The radial extent of the density perturbation is more limited than the extent of the E_z field. The E_z field also lags the density perturbation by a quarter period, or a 90° phase.

c. Laser driven plasma electron density waves

Laser driven linear wakefields have been studied through many articles and thesis reports [Glinec 06, Rechatin 10, Corde 12, Lehe 14]. The calculation of the fields is simpler in the case of LWFA. In fact, the equation over 𝜙, (3.48), is straightforward to reach from the equation over δn (3.13). For a Gaussian laser pulse (and a Gaussian source term c²∇²𝒂²/2) …
Figure 3.4: (a) Transverse force in the wakefield in the linear regime of plasma wakefield generation. (b) Plasma electron density perturbation in the linear regime of plasma wakefield generation. Parameters are n₀ = 10¹⁶ cm⁻³, N = 10⁸, σ_z = 15 μm, σ_r = 20 μm.
Figure 3.5: (a) Laser driven nonlinear plasma density wave. (red) Laser field (a/a₀), (blue) longitudinal electric field on axis (E_z/E₀), (yellow) plasma electron density n_p/n₀ (a₀ = 2). (b) Electron beam driven nonlinear density wave. (red) Beam current n_b/n₀, (blue) E_z/E₀ field on axis, (yellow) plasma electron density n_p/n₀ (n₀ = 10¹⁶ cm⁻³).
(Fig 3.6 (a)). Beyond this sheath is a linear response region.
Figure 3.6: (a) Bubble half-cavity. The drive (red) clears all plasma electrons from the bubble (black); a thin sheath of electrons circulates around the cavity (green) and crosses on the axis at the back of the bubble. Propagation to the left, from [Lu 06b]. (b) Behavior of plasma electrons in the case of an electron driver: "blow-out" regime. (c) Behavior of plasma electrons in the case of a positron driver: "suck-in" regime [Hogan 03].
Following the derivation from Ref. [Mora 13], the density must stay positive and finite: this condition writes ∂ζ/∂z₀ > −1. To obtain the expression of E we have to find a relation between ζ and E, using the Maxwell-Gauss equation and equation (3.57), which leads to the very simple relation: E = e n₀ ζ / ε₀. On the other hand, the change of variable turns the Euler equation into a harmonic oscillator equation. Therefore, solutions for ζ and E are of the form ζ = ζ₀ sin(ω_p τ − k z₀) and E = E₀ sin(ω_p τ − k z₀), and the condition ∂ζ/∂z₀ > −1 leads to E₀ = e n₀ ζ₀ / ε₀ = m v_φ ω_p / e, with v_φ = ω_p / k the phase velocity of the wave. The parameter E₀ = m v_φ ω_p / e can be expressed as E₀ [V/m] = 96 √(n₀ [cm⁻³]) [Dawson 59, Mori 90, Esarey 95]. For a plasma with an electron density of n₀ = 10¹⁷ cm⁻³, one gets m c ω_p / e = 30 GV/m, which is still two to three orders of magnitude higher than the fields in conventional accelerators. However, the phase velocity of the waves can become relativistic. A correction must be applied to the wavebreaking limitation of the electric field [Mori 90]. The maximum value for E in that case writes: E₀,r = (ω_p m c / e) √2 (γ_φ − 1)^(1/2), where γ_φ is defined with the phase velocity: γ_φ = (1 − (v_φ/c)²)^(−1/2).
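A short numerical check of these scalings; the choice γ_φ = 10 below is an arbitrary illustration, not a value taken from the experiments:

```python
import numpy as np

e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12

def E0_cold(n0_cm3):
    """Cold nonrelativistic wavebreaking field E0 = m c omega_p / e [V/m],
    equivalent to the scaling E0[V/m] ~ 96 sqrt(n0[cm^-3])."""
    omega_p = np.sqrt(n0_cm3 * 1e6 * e**2 / (eps0 * m_e))
    return m_e * c * omega_p / e

def E0_relativistic(n0_cm3, gamma_phi):
    """Relativistic wavebreaking limit E0_r = (m c omega_p / e) sqrt(2) (gamma_phi - 1)^(1/2)."""
    return E0_cold(n0_cm3) * np.sqrt(2.0) * np.sqrt(gamma_phi - 1.0)

print(f"E0(1e17 cm^-3) = {E0_cold(1e17) / 1e9:.1f} GV/m")                 # ~30 GV/m, as in the text
print(f"E0_r(1e17, gamma_phi=10) = {E0_relativistic(1e17, 10) / 1e9:.0f} GV/m")
```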
Contents
1. Positron driven plasma wakefields
2. Positron acceleration experiments
3. SLAC and FACET Facilities
Figure 4.1: Plasma electron density in a positron wakefield from a PIC simulation. The code QuickPIC was used, with n₀ = 10¹⁶ cm⁻³, and a drive bunch of charge Q_drive = 480 pC and dimensions (σ_x, σ_y, σ_z) = (35 μm, 25 μm, 30 μm). The focusing and accelerating area is very limited; by contrast, the large blue area is very defocusing for positrons.
Figure 4.2: (a) Transverse size in x of the bunch after the plasma. Red squares are with plasma, white circles are without plasma. A strong focusing of the tail occurs. From [Hogan 03]. (b) Accelerated positron bunch in a self-loaded positron PWFA experiment. From [Corde 15].
From 1980 until 1990, SPEAR was replaced by the Positron-Electron Project (PEP) that reused SPEAR rings and the linear accelerator to collide electrons and positrons with an energy of 29 GeV. PEP was upgraded in 1994 to become the PEP-II project, in which larger storage rings were built and sheltered the BaBar experiment, which aimed at demonstrating Charge Parity violation. The PEP-II system was in use until 2008. The SLC (Stanford Linear Collider) was another experimental platform at SLAC, completed in 1987. It was an electron-positron collider that relied on SLAC's 3-km-long linac to accelerate both kinds of particles and then force them to collide thanks to two "arcs", curved final cavities. The center-of-mass energy of the colliding particles was 90 GeV. The SLC hosted experiments for a decade. From 2009 on, the Linac Coherent Light Source (LCLS) became the main user facility operated at SLAC. LCLS was the first hard X-ray free electron laser in the world. In 2012, FACET, the Facility for Advanced aCcelerator Experimental Tests, opened and provided the opportunity to work on advanced accelerator concepts such as PWFA schemes. The schematic of the current facility is depicted in Fig. 4.3.
Figure 4.3: Schematic of the FACET beam line and of its main components.
Figure 4.4: Beam parameters at FACET.
FACET relies on a lithium plasma for most of the PWFA experiments. The first ionization potential of lithium is 5.4 eV and the second 75.4 eV. Lithium vapor is contained during the experiments in a pressure heat-pipe oven, whose internal lithium pressure is controlled by the temperature of the vapor [Muggli 99]. It is necessary to maintain the gas at a high temperature to reach neutral densities of about 10¹⁶ cm⁻³. A temperature of around 900 °C is needed; this was made possible by the plasma oven, where the pressure is [Mozgovoi 86]: P = exp(−2.05 ln(T) − …
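Independently of the truncated empirical fit above, the vapor pressure required to reach a given neutral density can be estimated from the ideal gas law. This is only a rough sketch; the 900 °C operating point is the one quoted in the text:

```python
k_B = 1.381e-23  # Boltzmann constant [J/K]

def vapor_pressure_needed(n0_cm3, T_celsius):
    """Ideal-gas estimate of the vapor pressure P = n k_B T needed
    to reach a neutral density n0 at temperature T."""
    n0 = n0_cm3 * 1e6        # m^-3
    T = T_celsius + 273.15   # K
    return n0 * k_B * T      # Pa

P = vapor_pressure_needed(1e16, 900.0)
print(f"P ~ {P:.0f} Pa  (~{P / 133.3:.1f} Torr)")  # ~160 Pa, about 1.2 Torr
```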
Figure 4.5: A plasma oven temperature profile (left) and density profile (right), from [Vafaei-Najafabadi 12].
Figure 4.6: Schematic of FACET femtosecond laser systems.
Contents
1. Experimental setup and diagnostics
   a. Experimental setup
   b. Energy Spectrometer
   c. EOS longitudinal diagnostic
   d. Beam charge diagnostics
   e. Optical Transition Radiation (OTR) screens
   f. Simulations
2. Acceleration of a trailing positron bunch
   a. Proof of acceleration
   b. Beam loading, theory and experimental observation
3. Acceleration regime
   a. Emittance manipulation system
   b. Nonlinear to quasilinear positron driven waves
Figure 5.2: Axicon and laser beam parameters in the experimental area.
Figure 5.1: Experimental setup of the trailing positron bunch acceleration.
Figure 5.3: An axicon focusing an incident laser beam. In the transverse plane, the intensity profile is a Bessel function of order 0, J₀(kαr).
Figure 5.4: (a) EOS crystal and titanium wedge mount in the picnic basket. (b) Overview of the picnic basket with the positron beam and laser beam paths.
Fig 5.5 (a). The wafers have a 45° angle compared to the beam trajectory, so that the second wafer reflects the Cherenkov light that is then recorded by a camera [Adli 15]. The unmodified two-bunch beam spectrum leads to the image of Fig 5.5 (b) on the spectrometer. The dispersion due to the dipole leads the particles to spread on the vertical dimension of the screen. The vertical position can then be related to the energy of the particles thanks to the following formula, where y is the vertical position along the screen, y₀ is the nominal position (position of the particles at energy E₀) and η₀ the dispersion from the dipole at the nominal beam energy E₀: E(y) = E₀ η₀ / (η₀ + y − y₀), which is nonlinear in y, the position along the screen. In Fig 5.5 (b), the conversion from the vertical coordinate to the energy axis has already been made, and the image was stretched accordingly.
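As a sketch of how such a conversion can be implemented, assuming the dispersion relation takes the standard form y − y₀ = η₀ (E₀/E − 1) used above; the numerical values below are hypothetical, chosen for illustration only:

```python
def energy_from_screen_position(y_mm, y0_mm, eta0_mm, E0_GeV):
    """Invert y - y0 = eta0 * (E0/E - 1) for the particle energy E.
    The relation is nonlinear in the screen coordinate y."""
    return E0_GeV * eta0_mm / (eta0_mm + (y_mm - y0_mm))

# Hypothetical numbers: a particle 5 mm above the nominal position,
# with a 100 mm dispersion and a 20.35 GeV nominal energy.
print(energy_from_screen_position(y_mm=-5.0, y0_mm=0.0, eta0_mm=100.0, E0_GeV=20.35))
```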
Figure 5.5: (a) Schematic (top view) of the Cherenkov light spectrometer. (b) A processed image from the spectrometer showing the two-bunch energy structure when no plasma is present. The vertical position on the Cherenkov screen is converted into the energy axis. The left y-axis is the horizontal position. The right y-axis is the integrated spectrum axis. The black plain line is the integrated spectrum that appears on the background image.
Figure 5.6: (a) Waterfall (columns of integrated signal) of the EOS signal measured during a dataset. The images are sorted by increasing interbunch distance. (b) Corresponding interbunch distance. Calibration of the EOS camera: 8.06 μm px⁻¹.
Figure 5.7: (a) Integrated spectra for a shot when only the drive bunch was sent into the plasma, with plasma (red plain line), without plasma (blue dashed line) (b) Integrated spectra for a shot when only the trailing bunch was sent into the plasma, with plasma (red plain line), without plasma (blue dashed line) (c) Integrated spectra for a shot when both bunches were sent into the plasma, with plasma (red plain line), without plasma (blue dashed line). Acceleration is clear with a secondary red peak on the right. From [Doche 17]
Figure 5.8: (a) E_z field on axis, in the linear regime of positron driven wakefield. n₀ = 10¹⁶ cm⁻³, N_drive = 3 × 10⁸ particles. Drive beam at ξ = 0. (b) E_z field on axis. n₀ = 10¹⁶ cm⁻³, N_trailing = 2 × 10⁸ particles. Trailing beam initially at ξ = 130 μm. (c) E_z field on axis, with a loaded wakefield. n₀ = 10¹⁶ cm⁻³. Drive beam at ξ = 0 μm and trailing beam at ξ = 130 μm; the total wakefield is the superposition of the previous two. The wake is still accelerating at the trailing position (grey area): the wake is not optimally loaded. N_drive = 3 × 10⁸ particles and N_trailing = 2 × 10⁸ particles. (d) E_z field on axis, with an overloaded wakefield: the wake becomes decelerating at the trailing position. n₀ = 10¹⁶ cm⁻³. Drive beam at ξ = 0 μm and trailing beam at ξ = 130 μm. N_drive = 3 × 10⁸ particles and N_trailing = 4.5 × 10⁸ particles.
Figure 5.9: (a) Waterfall plot of the 160 spectra in which the beam loading effects are particularly visible. Each line is an integrated spectrum from the Cherenkov spectrometer. (b) Correlation between the trailing bunch charge and the peak energy of the accelerated bunch. (c) Correlation between the trailing bunch charge and the energy spread of the accelerated bunch.
Figure 5.12: Maximum energy of the accelerated particles as a function of the titanium thickness.
Figure 5.13: E_z field map (left) and transverse force F_x (right) after 72.5 cm of propagation in the plasma, for the case without spoiling (a), with 100 μm of titanium (b), with 179 μm of titanium (c), with 257 μm of titanium (d).
The regime of Fig. 5.13 (d) is referred to as the quasilinear regime, similarly to what is used by the LWFA community to qualify wakefields driven by a laser pulse of a₀ ~ 1 showing properties very close to the linear regime [Schroeder 10]. The quasilinear regime is interesting because of its more symmetrical properties for electrons and positrons, and its regularity may be an advantage for preserving positron beam quality during acceleration to high energies [Schroeder 10, Cros 16].
Figure 5.14: E_z field map (left) and transverse force F_x (right) after 72.5 cm of propagation in the plasma, for the four following cases: no spoiling (a), with 297 μm of titanium (b), with the initial parameters σ_x × σ_y = 35 × 25 μm² and ε_x × ε_y = 270 × 60 μm² (c), with the initial parameters σ_x × σ_y = 89 × 86 μm² and ε_x × ε_y = 100 × 10 μm² (d).
In simulation C, corresponding to Fig 5.14 (c), increasing the emittance only (the initial beam density being kept constant) effectively led to a quasi-linear regime. The field map is very symmetrical and has a sinusoidal dependence in ξ. The longitudinal fields have a cosine-like shape. By contrast, simulation D, which corresponds to Fig 5.14 (d) …
Figure 5.15: Simulation parameters used to study the respective effects of emittance and initial beam density on the acceleration regime. The central spot sizes and charge percentages are calculated by taking into account all the particles initially in the drive bunch used in the simulation.
Contents
1. Acceleration, trapping and injection of particles in plasma wakefield
   a. Phase velocity of plasma density waves
   b. Acceleration, trapping and LWFA phase detuning
   c. Injection techniques
2. Salle Jaune facility
   a. Facility
   b. Energy spectrometer
   c. Side-view interferometer
3. Hybrid LWFA-PWFA experiment and results
   a. Experimental setup
   b. Effect of the second gas jet on the electron beam
   c. Effect of the foil on the electron beam
The fixed points are stable states of the system for 𝜙 = 𝜙_max and unstable for 𝜙 = 𝜙_min, with each time γ = γ_p = (1 − β_p²)^(−1/2). A curve has a particular importance in the phase-space picture we consider: the separatrix. It distinguishes the closed orbits of trapped electrons and the open orbits of untrapped electrons that flow from the right to the left of the phase portrait. Both behaviors appear in Fig. 6.1 (b). The equation for the separatrix can be obtained from the relation: H(ξ, γ) = H(ξ_min, γ_p) (6.4), which can be solved to find the formula for γ(ξ): γ(ξ) = γ_p (1 + γ_p(𝜙(ξ) − 𝜙_min)) ± γ_p β_p [(1 + γ_p(𝜙(ξ) − 𝜙_min))² − 1]^(1/2).
Figure 6.1: (a) Longitudinal electric field of a nonlinear laser driven wakefield. (b) Phase portrait of the particles in the plasma. The separatrix (red line) distinguishes the trapped particles from the particles traversing the wakefield. The vertical coordinate is the speed. Figure from [Réchatin 10].
Figure 6.2: (a) In a laser produced wakefield, some particles (electrons) are injected at the back of the bubble, they face an accelerating wakefield. (b) Accelerated particles move faster than the plasma wave, they reach the end of the accelerating field. (c) The second half of the bubble forces the particles to decelerate, the maximal energy reached is limited: this is the dephasing limit.
Figure 6.3: (a) Trajectory of a particle injected through longitudinal injection. (b) Example of a trajectory of a particle injected through transverse injection. (c) Example of longitudinal self-injected electron bunches. Figure from [Corde 13].
Figure 6.4: (a) 3D map of the Salle Jaune facility. Upstairs is most of the laser chain while the compressors and the experimental chambers are downstairs. (b) Picture of the laser chain. (c) Schematic of the whole laser chain, with the evolution of the beam parameters.
Figure 6.5: Schematic of the hybrid LWFA/PWFA experiment.
Figure 6.6: (a) Schematic of the spectrometer system. (b), (c) Examples of electron bunch spectra measured by the spectrometer. The images were processed: the energy axis is established thanks to a code which calculates the trajectories of the electrons in the field of the magnetic dipole.
Figure 6.7: (a) Schematic of the side-view diagnostic. (b) Example of an image recorded on the camera. On the left, a plasma wakefield is created in the first jet (nozzles are not visible). The wheel intercepts the laser beam (center), and the electron beam ionizes a thin column of gas in the second jet (on the right). (c) Perturbations of fringe spacing in both jets allow to retrieve the plasma density.
Figure 6.8: (a) Example of a spectrum recorded on the camera. Vertical axis is the divergence of the beam, in the direction of the dipole field. (b) Spectrum integrated over the whole vertical axis (in arbitrary units).
Figure 6.9: (a) Example of an electron bunch spectrum, for a shot with a Mylar window to block the laser, but no gas in the second jet. (b) Same, but with gas in the second jet. (c) Divergence of the beam on the spectrometer as a function of the particle energy. The focusing effect of the second gas jet is visible. (d) Divergence of the beam on the spectrometer as a function of the particle energy. This is an evidence of the defocusing effect of the Mylar foil.
It has the solution: 𝒗 = c (cos(ω_c t), sin(ω_c t)) (6.9), leading to ω_c t₀ = θ₀ at the exit of the second jet. This particle propagates along l = 3 mm; assuming the deviation is very small, we have the relation c t₀ = 3 mm. This leads to the value of B seen by this electron: B = γ m_e c θ₀ / (e l).
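A minimal numerical sketch of this estimate, assuming the relativistic cyclotron frequency ω_c = e B / (γ m_e); the deflection angle θ₀ = 5 mrad is a hypothetical value chosen for illustration, not a measured one:

```python
e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8

def b_field_estimate(theta0_rad, E_MeV, l_m=3e-3):
    """Magnetic field inferred from a deflection theta0 accumulated over a length l:
    omega_c * t0 = theta0 with t0 = l / c and omega_c = e B / (gamma m_e)."""
    gamma = E_MeV / 0.511  # electron Lorentz factor (gamma >> 1)
    return gamma * m_e * c * theta0_rad / (e * l_m)

# Hypothetical 5 mrad deflection of a 150 MeV electron over 3 mm:
print(f"B ~ {b_field_estimate(5e-3, 150.0):.2f} T")
```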
Figure 6.10: Scattering angles due to the effect of the foil. Only the angles at E = 150 MeV are given here.
Figure 6.11: (a) Side-view image that displays the 45° oriented wheel. (b) Plot of the divergence of the beam as a function of the energy of the particle. No effect of the 45° wheel is seen, compared to the 0° usual one.
Figure 6.12: Plots of the divergence evolution for different materials and thicknesses of the foil in the wheel. Each image corresponds to a distinct nozzle-foil distance. (a) Reference position. (b) 533 μm between the foil and the reference position. (c) 806 μm between the foil and the reference position. (d) 1220 μm between the foil and the reference position. (e) 2114 μm between the foil and the reference position.
Figure 6.13: Plots of the divergence evolution for different materials and thicknesses of the foil in the wheel. (a) Aluminum foil of different thicknesses. The divergence is measured at 120 MeV. (b) Aluminum foil of different thicknesses. The divergence is measured at 160 MeV. (c) Mylar foil, divergence at 120 MeV and at 160 MeV.
Figure 7.1: (a) Perturbation of the electric field propagating from a charged particle. (b) Definition of the retarded time 𝑡′ and of various notations.
leads to the formula for the radiation power received by an observer in the direction 𝒏 and in a solid angle dΩ: dP(t)/dΩ = |𝑨(t)|² = |√(cε₀) R 𝑬(t)|² (7.9)
Where the second equation is justified by the Parseval-Plancherel theorem, and the factor of 2 in the last equation arises from considering only positive frequencies. As a result, the radiated energy per unit frequency and solid angle can be expressed easily using the Fourier Transform of 𝑬(t):

d²I/(dω dΩ) = (e²/(16π³ε₀c)) | ∫ e^{iω(t′ + (R₀ − 𝒏·𝒓_P(t′))/c)} 𝒏 × [(𝒏 − 𝜷) × 𝜷̇] / (1 − 𝒏·𝜷)² dt′ |²
Figure 7.2: (a) Electron trajectory and X-ray emission cone in the undulator regime. (b) Electron trajectory and X-ray emission cone in the wiggler regime. Figure from [Corde 13].
Figure 7.3: Schematic of the two-stage LWFA-PWFA scheme for X-ray production. Figure from [Ferri 17].
Figure 7.4: Plot of the peak X-ray signal in count number measured on the X-ray CCD camera as a function of the backing gas pressure of the second jet. The black dashed line is the level of Betatron radiation using the first jet only, with P₁ = 17 bars, corresponding to the optimization of the electron beam parameters. Calibration of the camera signal is to be accomplished at LOA in 2018.
Figure 7.5: (a) X-ray signal on the CCD chip; parameters for this shot were P₁ = 17 bars and P₂ = 40 bars (highest signal in Fig. 7.4). (b) X-ray signal on the CCD chip; parameters for this shot were P₁ = 17 bars and P₂ = 0 bars (reference signal in Fig. 7.4).
Figure 7.6: (a) Plot of the peak X-ray signal in count number as a function of the backing pressure of the second gas jet, for P₁ = 17 bars. The black dashed line is the count level when only jet 1 is switched on, also for P₁ = 17 bars. (b) Plot of the peak X-ray signal in count number as a function of the backing pressure in the first gas jet, with the second jet switched off and for laser and jet 1 parameters optimized for the highest X-ray signal. The black dashed line shows the count level of the fully-optimized two-stage X-ray source (P₁ = 17 bars and P₂ = 41.5 bars), which is twice higher than the single-stage optimum. A calibration of the X-ray signal is to be accomplished again in 2018 at LOA.
Particle acceleration in wakefields is a research field that has already seen more than fifty years of development. This technology relies on sending a beam to perturb the electron density in a plasma, thereby creating accelerating fields in the wake of this beam. These fields can be exploited to accelerate particles. The theory of wakefield excitation has analytical solutions in the case of a weak perturbation of the plasma electrons, that is, in the linear case. These solutions have the same form for excitation by a laser beam or by a beam of positively or negatively charged particles. These results are derived in the manuscript, and the differences between laser-driven and beam-driven excitation are highlighted. In the nonlinear case, there is no analytical solution; however, numerical simulations make it possible to understand the behavior of the plasma electrons when a nonlinear wake is created. This plasma behavior differs between positively charged particle beams and other drivers. These differences explain why positron acceleration, and the use of wakefields driven by positron beams, lag behind electron acceleration. The major result of my thesis work was the demonstration of the acceleration of a distinct positron bunch. This was a result hoped for by the scientific community as a step toward an electron-positron collider based on plasma technology. In the experiment, the wakefields were driven by a first positron bunch. The acceleration of a second bunch in the wake was demonstrated in a nonlinear regime, specific to positron beams, but also in a quasilinear acceleration regime, common to all types of drivers. This notably opens the prospect of using laser wakefields to accelerate positrons.

Performing a plasma wakefield acceleration experiment in a university laboratory

The previous experiment, like most beam-driven wakefield acceleration experiments, took place at a conventional accelerator facility: the Stanford Linear Accelerator. It is an institution employing several hundred people, which required an investment of more than one billion dollars. The size and cost of these research centers limit scientists in their research. This is why a major advance would be to make beam-driven wakefield acceleration experiments possible in a university laboratory. The M-PAC project and the experiment carried out during my thesis in 2017 had this objective. The experiment relied on a first laser wakefield acceleration (LWFA) stage, after which an aluminum foil blocked the laser beam. After this foil, a second plasma interaction stage was positioned, consisting of a gas jet in which the electron beam created in the first stage drove a wakefield. The results of this experiment and the first obstacles are presented and analyzed in this manuscript. This experimental work will continue at the Laboratoire d'Optique Appliquée from 2018 on.
Title: Particle acceleration with beam driven wakefield
Keywords: Plasmas, plasma and electromagnetic waves, laser-matter interaction, Plasma Wakefield Acceleration, QuickPIC (Particle in Cell) code.
Abstract: Plasma wakefield accelerators driven by particle beams (PWFA) or by laser pulses (LWFA) belong to a new and particularly promising type of particle accelerator. They make it possible to exploit accelerating fields of up to several hundred gigaelectronvolts per meter, whereas conventional devices are limited to about one hundred megaelectronvolts per meter. In the plasma wakefield or laser wakefield acceleration scheme, a particle beam or a laser pulse propagates in a plasma and creates an accelerating structure in its wake: an electron density wave with which electromagnetic fields are associated in the plasma. One of the main results of this thesis was the demonstration of plasma wakefield acceleration of a distinct positron bunch. In the scheme used, a lithium plasma was created in an oven, a plasma wave was driven by a first positron bunch (the drive bunch), and the energy was extracted by a second bunch (the trailing bunch). An accelerating field of 1.12 GeV/m was thus obtained during the experiment, for a typical accelerated charge of 85 pC. We also show here the possibility of using different acceleration regimes that appear very promising. Laser wakefield acceleration, for its part, makes it possible, starting from a femtosecond laser pulse, to produce a quasi-monoenergetic electron beam with a typical energy of the order of 200 MeV. We present the results of an experimental campaign combining this laser wakefield acceleration scheme with a plasma wakefield acceleration scheme. In this experiment, a laser-created electron beam is refocused during an interaction in a second plasma. A study of the phenomena associated with this hybrid LWFA-PWFA platform is also presented. Finally, the hybrid LWFA-PWFA scheme is promising for optimizing the X-ray emission by the electrons of the beam created in the LWFA stage of the platform. We finally present the first experimental realization of such a scheme and its promising results.
If one considers Betatron oscillations of particles in the blow-out regime of Plasma Wakefield Acceleration, particles in trace-space x − x′ rotate around the origin at the frequency ω_b = ω_p / √(2γ) [Michel 06]. ω_b depends on gamma; therefore particles with different energies will rotate at different velocities. As many Betatron oscillations occur during the acceleration process, this will contribute to distort the beam ellipse in trace-space [Lu 06b, Corde 13].
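As an order-of-magnitude sketch of this rotation rate; the 200 MeV energy and 10¹⁸ cm⁻³ density below are illustrative assumptions, not parameters from a specific dataset:

```python
import numpy as np

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

def betatron_wavelength(n0_cm3, gamma):
    """Betatron wavelength lambda_b = 2 pi c / omega_b with omega_b = omega_p / sqrt(2 gamma)."""
    omega_p = np.sqrt(n0_cm3 * 1e6 * e**2 / (eps0 * m_e))
    omega_b = omega_p / np.sqrt(2.0 * gamma)
    return 2 * np.pi * c / omega_b

# e.g. a ~200 MeV electron (gamma ~ 400) in a 1e18 cm^-3 plasma:
print(f"lambda_b ~ {betatron_wavelength(1e18, 400) * 1e3:.2f} mm")  # ~1 mm
```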
Contents
1. Plasmas
   a. Electronic plasma frequency
   b. Debye length
2. Ionization
   a. Low-Field Ionization
   b. Multi-Photon Ionization
   c. Tunnel Ionization and Barrier Suppression Ionization
3. Fluid description of a plasma
4. Electromagnetic waves in plasmas
This equation introduces a new parameter in addition to the fluid velocity: the pressure. The third-order moment of the Vlasov equation would provide a new equation involving P_j; however, it would also introduce a new fluid parameter such as the heat flux, and so would the fourth-order moment of the Vlasov equation. Physicists usually rely on a closing hypothesis to put an end to this endless suite of equations. The cold plasma hypothesis (P_j = 0) is a common hypothesis and the one used in the following chapters.
n_j(𝒓, t) 𝒗_j(𝒓, t) = ∫ 𝒗 f_j(𝒓, 𝒗, t) d𝒗   (2.7)
The pressure tensor is defined as:
P_j(𝒓, t) = m_j ∫ (𝒗 − 𝒗_j(𝒓, t)) (𝒗 − 𝒗_j(𝒓, t)) f_j(𝒓, 𝒗, t) d𝒗   (2.8)
Where m_j is the mass of constituent j. Integrating equation (2.5) over 𝒗 leads to the particle number conservation equation for the j particles in the fluid:
∂n_j/∂t + ∇·(n_j 𝒗_j) = 0   (2.9)
The first moment of equation (2.5) leads to the Euler equation, also called the momentum conservation equation:
(∂/∂t + (𝒗_j · ∇_r)) 𝒗_j = −∇·P_j / (m_j n_j) + (q_j/m_j)(𝑬 + 𝒗_j × 𝑩)   (2.10)
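To make the moment-taking concrete, here is the zeroth-moment computation behind equation (2.9), written as a minimal LaTeX sketch. It assumes only that f_j vanishes as |𝒗| → ∞, so the force term integrates to zero by the divergence theorem in velocity space:

```latex
\int \frac{\partial f_j}{\partial t}\,d\boldsymbol{v}
+ \int \boldsymbol{v}\cdot\nabla_{\boldsymbol{r}} f_j\,d\boldsymbol{v}
+ \frac{q_j}{m_j}\int \left(\boldsymbol{E}+\boldsymbol{v}\times\boldsymbol{B}\right)\cdot\nabla_{\boldsymbol{v}} f_j\,d\boldsymbol{v}
= \frac{\partial n_j}{\partial t}
+ \nabla_{\boldsymbol{r}}\cdot\!\left(n_j\boldsymbol{v}_j\right)
+ 0
```

The last term vanishes because ∇_v·(𝑬 + 𝒗 × 𝑩) = 0 and f_j → 0 at infinity, which yields (2.9).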
[Jackson 62]:
… k_p² ∂/∂r (δ(r − r′)/(2πr))   (3.33)
δn̂ = (q/e) · k_p²/(k_p² − k²) · δ(r − r′)/(2πr)   (3.34)

Solution for 𝑬_z

Following the derivation from [Jackson 62], §3.11, the solution of the radial equation (3.32) is a linear combination of modified Bessel functions. To keep Jackson's notation, let's denote ψ the solution, a combination of I_m and K_m, the modified Bessel functions. Two solutions will coexist: ψ₁, which satisfies the boundary conditions for r < r′, and ψ₂, which satisfies the boundary conditions for r > r′. As demonstrated in [Appendix 1], the Green function is symmetric in r, r′, which implies that ψ₁ and ψ₂ can be exchanged. Therefore, the solution of the equation …
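A minimal numerical sketch of this Green's-function structure, assuming the azimuthally symmetric (m = 0) case, for which ψ₁ ∝ I₀ and ψ₂ ∝ K₀; the density and source radius are illustrative values:

```python
import numpy as np
from scipy.special import i0, k0

kp = 2 * np.pi / 30e-6  # plasma wavenumber [1/m] (lambda_p ~ 30 um)
r_src = 20e-6           # source radius r' [m] (illustrative)

def green_m0(r, r_prime, kp):
    """Radial Green's function I0(kp r<) K0(kp r>) for the m = 0 mode."""
    r_lt = np.minimum(r, r_prime)
    r_gt = np.maximum(r, r_prime)
    return i0(kp * r_lt) * k0(kp * r_gt)

r = np.linspace(1e-7, 100e-6, 200)
profile = green_m0(r, r_src, kp)  # radial profile, continuous at r = r'
```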
The derivation of equations (3.53) and (3.54) relies on the Maxwell-Vlasov system [Noble 83, Rosenzweig 87a, Rosenzweig 87b]. Equation (3.54) can be expressed with the normalized potential [Krall 91]:
∂²𝜙/∂ξ² = (k_p²/2) [2n_b/n₀ + 1/(1 + 𝜙)² − 1]   (3.55)
(3.55) is the equation of generation of a one-dimensional nonlinear wakefield in a cold plasma by a non-evolving beam. It is easy to solve equation (3.55) numerically and compare …
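A minimal sketch of such a numerical solution, assuming a Gaussian non-evolving bunch profile; the bunch parameters below are illustrative only. If the driver is made too strong, 𝜙 approaches −1 and the 1/(1 + 𝜙)² term blows up, the one-dimensional signature of wavebreaking:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from a specific experiment):
lambda_p = 30e-6              # plasma wavelength [m] (n0 ~ 1e18 cm^-3)
kp = 2 * np.pi / lambda_p     # plasma wavenumber [1/m]
sigma_z = 5e-6                # rms bunch length [m]
nb0 = 0.3                     # peak n_b / n0

def rhs(xi, y):
    """y = [phi, dphi/dxi] for equation (3.55) with a Gaussian driver."""
    phi, dphi = y
    nb = nb0 * np.exp(-xi**2 / (2 * sigma_z**2))
    d2phi = 0.5 * kp**2 * (2 * nb + 1.0 / (1.0 + phi)**2 - 1.0)
    return [dphi, d2phi]

# Integrate from the unperturbed plasma ahead of the bunch backward into the wake.
sol = solve_ivp(rhs, (4 * sigma_z, -5 * lambda_p), [0.0, 0.0],
                max_step=lambda_p / 100, dense_output=True)
xi = np.linspace(4 * sigma_z, -5 * lambda_p, 1000)
phi = sol.sol(xi)[0]  # normalized potential along the wake
```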
Table 5.2:
Parameter                        | Value
Axicon angle                     | 0.6°
Laser convergence angle          | 0.28°
Laser pulse energy in plasma     | 120 mJ
Laser peak power                 | 0.6 TW
Bunch length in plasma           | 200 fs
Laser intensity at plasma center | ~2 × 10¹⁴ W cm⁻²
are reported in the following table.

Titanium thickness (μm) | L/X₀    | θ_rms (μrad) | θ_rms · D (μm) | ε_x (μm) | ε_y (μm)
0                       | 0       | 0            | 0              | 100      | 10
100                     | 0.00281 | 27.5         | 45.9           | 171      | 34.3
139                     | 0.00391 | 33.0         | 55.1           | 195      | 40.1
179                     | 0.005   | 37.7         | 63.0           | 214      | 46.1
218                     | 0.00613 | 42.2         | 70.5           | 234      | 51.3
257                     | 0.00723 | 46.2         | 77.1           | 253      | 56
297                     | 0.00834 | 49.9         | 83.4           | 270      | 60.3
382                     | 0.0107  | 57.3         | 94.5           | 298      | 69.1
The simulation parameters are summarized in Fig. 5.15.

Simulation | Initial size (x × y) | Initial emittance (x × y) | FWHM size at middle plasma | % charge in central spot | n_beam/n₀
A          | 35 × 25 μm²          | 100 × 10 μm²              | 16 × 4 μm²                 | 14.8                     | 14.2
B          | 89 × 86 μm²          | 270 × 60 μm²              | 61.5 × 17.5 μm²            | 17.9                     | 1.2
C          | 35 × 25 μm²          | 270 × 60 μm²              | 46 × 15.5 μm²              | 25.8                     | 2.1
D          | 89 × 86 μm²          | 100 × 10 μm²              | 31 × 4.5 μm²               | 10.2                     | 4.8
[Esarey 95, Esarey 96a]. A correction has to be added in the case of non-linear plasma waves: L_d ≈ (2√a₀ / (3π)) · λ_p³/λ² [Lu 07]. A solution to this problem was found two decades ago [Sprangle 01]. It was suggested to spatially tailor the plasma density so that the accelerated bunch of electrons would see an accelerating field whose phase velocity is c. A successful experimental demonstration of this technique was accomplished at LOA [Guillaume 15]. The authors introduced a density step to reduce suddenly the plasma wavelength and therefore "force" the bunch to stay in an accelerating region for a longer time.
Phase detuning competes with two other limitations: laser diffraction and laser energy
depletion in the plasma [Esarey 96b]. However, phase detuning remains probably the most
serious challenge for applications of LWFA.
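A quick numerical illustration of the dephasing scaling, using the formula as reconstructed above, with the illustrative parameters a₀ = 2, λ_p = 30 μm and a Ti:Sapphire wavelength:

```python
import numpy as np

def dephasing_length_nl(a0, lambda_p_um, lambda0_um=0.8):
    """Nonlinear dephasing length L_d ~ (2 sqrt(a0) / (3 pi)) lambda_p^3 / lambda0^2 [Lu 07]."""
    return (2.0 * np.sqrt(a0) / (3.0 * np.pi)) * lambda_p_um**3 / lambda0_um**2  # [um]

print(f"L_d ~ {dephasing_length_nl(2.0, 30.0) * 1e-4:.1f} cm")  # ~1.3 cm
```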
The radiation length of Aluminum is 8.897 cm [PDG 17], and the radiation length of Mylar is 50.3 cm [Adler 06]. The scattering angle due to the foil is non-negligible. The corresponding additional angles are reported in Fig. 6.10.

Material | Thickness | θ_s (E = 150 MeV)
Mylar    | 13 μm     | 0.28 mrad
Aluminum | 8 μm      | 0.56 mrad
Aluminum | 15 μm     | 0.79 mrad
Aluminum | 30 μm     | 1.2 mrad
Aluminum | 60 μm     | 1.7 mrad
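The values in this table, and in the titanium table of the previous chapter, are consistent with the Highland multiple-scattering formula. A minimal sketch follows; the 20.35 GeV beam energy used for the titanium entry is an assumption about the FACET beam, and the use of this formula is my reading of how these numbers were obtained:

```python
import numpy as np

def theta_rms_highland(L_over_X0, p_MeV, beta=1.0):
    """Highland formula for the rms multiple-scattering angle:
    theta = (13.6 MeV / (beta c p)) * sqrt(L/X0) * [1 + 0.038 ln(L/X0)]."""
    return (13.6 / (beta * p_MeV)) * np.sqrt(L_over_X0) * (1.0 + 0.038 * np.log(L_over_X0))

# Mylar, 13 um at 150 MeV (X0 = 50.3 cm):
print(f"Mylar 13 um: {theta_rms_highland(13e-4 / 50.3, 150.0) * 1e3:.2f} mrad")  # ~0.28 mrad
# Titanium, 100 um (L/X0 = 0.00281), assuming a 20.35 GeV beam:
print(f"Ti 100 um:   {theta_rms_highland(2.81e-3, 20350.0) * 1e6:.1f} urad")     # ~27.5 urad
```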
Acknowledgements

… of the work presented here was accomplished thanks to the …

Decoupling of LWFA electron acceleration and Betatron radiation in a two-stage experiment

As an introduction to radiation emission by electrons in LWFA experiments, we will first recall the origin of radiation emission by charged particles; we will then briefly introduce the main concepts and the formalism of Betatron emission by LWFA electron bunches. The second section will be dedicated to the simulation results regarding the decoupling of electron acceleration and X-ray emission in a two-stage LWFA-PWFA scheme. In fact, for reasons that will be discussed in detail, the two-stage scheme is promising to enhance the usual Betatron emission of LWFA experiments. The last section will be dedicated to the corresponding experimental campaign performed at LOA in 2016 that aimed at realizing this conceptual scheme.

Bibliography

[Adli 13] Adli, E., Delahaye, J-P., Gessner, S. J., Hogan, and al., A beam driven plasma-wakefield linear collider: from Higgs Factory to Multi-TeV, SLAC-PUB-15426, 2013.
[Adli 15] Adli, E., Gessner, S. J., Corde, …
[Esarey 97] Esarey, E., Sprangle, P., Krall, J. and Ting, A., Self-focusing and guiding of short laser pulses in ionizing gases and plasmas, IEEE Journal of Quantum Electronics, 33, 1997.
[Esarey 09] Esarey, E., Schroeder, C. B. and Leemans, W. P., Physics of laser-driven plasma-based electron accelerators, Reviews of Modern Physics, 81, 1229, 2009.
[Esirkepov 06] Esirkepov, T., Bulanov, S. V., Yamagiwa, M. and Tajima, T., Electron, positron and photon wakefield acceleration: trapping, wake overtaking and ponderomotive acceleration, Physical Review Letters, 96, 014803, 2006.
[Fainberg 56] Fainberg, I. B., The use of plasma waveguides as accelerating structures in linear accelerators, Ukrainian Academy of Science, Kharkov, 1956.
[Faure 04] Faure, J., Glinec, Y., Pukhov, A., Kiselev, S. and al., A laser-plasma accelerator producing monoenergetic electron beams, Nature, 431, 541-544, 2004.
[Faure 06] Faure, J., Rechatin, C., Norlin, A., Lifschitz, and al., Controlled injection and acceleration of electrons in plasma wakefields by colliding laser pulses, Nature, 444, 737-739, 2006.
[Faure 07] Faure, J., Accélération de particules dans les plasmas, 2007.
[Ferri 16] Ferri, J., Etude des rayonnements Bétatron et Compton dans l'accélération par sillage laser, thesis, 2016.
[Fourmaux 11] Fourmaux, S., Corde, S., Ta Phuoc, K., Lassonde, and al., Single shot phase contrast imaging using laser-produced Betatron x-ray beams, Optics Letters, 36, 2426-2428, 2011.
[Frederico 16] Frederico, J., Theory and measurements of emittance preservation in Plasma Wakefield Acceleration, thesis manuscript, 2016.
[Geddes 04] Geddes, C. G. R., Toth, C., van Tilborg, J., Esarey, and al., High-quality electron beams from a laser wakefield accelerator using plasma-channel guiding, Nature, 431, 538-541, 2004.
[Geddes 08] Geddes, C. G. R., Nakamura, K., Plateau, G. R., Toth, Cs., and al., Plasma-density-gradient injection of low absolute-momentum-spread electron bunches, Physical Review Letters, 100, 215004, 2008.
Keywords: Amino-Terminal; APP Intracellular; Amyloid Precursor Protein; Ascorbate; Asn; Aspartate; Aβ, Amyloid-β peptide; AβDPs, Aβ-degrading proteases; Aβox, Oxidized Amyloid-β Peptide; CCA, Coumarin-3-Carboxylic Acid; CID, Collision Induced Dissociation; CNS, Central Nervous System; CTF, Carboxyterminal Fragments; Cu …
A thesis is a great adventure made of encounters, opportunities and new experiences. It is a whole new world that opens up to you: unknown, uncomfortable, but exhilarating. During my sometimes turbulent thesis journey, I had the chance to make wonderful scientific and human encounters.
I would like to thank all these people, and I hope I do not forget anyone.
… taught me so much… I could not have been better supervised than with you! Thank you also for giving me the opportunity to take part in activities outside the lab, such as teaching, science outreach, the SCF young chemists club, all those conferences I was able to attend and present at, and the sessions at the ESRF. Not every PhD student is given so many opportunities and so much freedom, and I thank you for giving me this chance.
Fabrice, first of all, thank you for giving me my chance by choosing me for this thesis project.
Thank you for your availability, your precious advice and your contagious optimism. You always knew how to share your passion for chemistry but also your good mood, even when I was at rock bottom, notably when the oxidized peptide was doing whatever it pleased. I also thank you for your patience and your teaching skills. Correcting my first article written in "Franglais" while being careful not to offend me must not have been an easy task!! Thank you for the long scientific discussions, and for the time you spent with me when the experiments were not working (mass spectrometry or otherwise). I had great times with you at conferences and at the ESRF in 007 mode! Christelle, I would like to thank you for (i) the generosity and passion with which you share your scientific knowledge, (ii) your availability to discuss chemistry or more personal matters, even at times when you had already devoted 110% of your time to the ERC and 90% to the team, (iii) having been a great office roommate, ready to pull out chocolate at any moment, and finally (iv) not having fired me when I broke your car! Thank you for making me feel like your equal during our long scientific conversations, during which I was able to develop my critical thinking and to propose all the ideas (sometimes far-fetched) that came to my mind, without fear of ridicule. Thank you also for showing all the women of the team that one can be a great researcher and a great mother of four children at the same time! Peter, first of all I thank you for welcoming me into your team. Thank you for your precious advice, for the scientific discussions that brought me a lot, but also for your constant enthusiasm. A big thank you also for the trust you placed in me and for your availability, even after your departure for Strasbourg. Your passion for science is truly contagious, so spread it as much as you can! This thesis could not have been done without the precious help of:
-Emilien Jamin for the HRMS: I thank you for your precious advice and for the time you gave me. And sorry for the bad luck I (often) brought with me to the Orbitrap! -Christian Bijani for the NMR spectra: thank you for not getting tired of seeing NMR spectra with super broad peaks all the time!! And thank you for taking the time to look at the NMR part of the manuscript.
-Lionel Rechignat for the EPR: thank you for the advice and for the conversation on EPR theory (still as obscure as ever to me!!).
-Vanessa Soldan for the TEM: I would like to thank you for the precious advice and the time you spent with me, probing the sample in search of fibrils (or of an absence of fibrils!).
-The whole FAME team at the ESRF. Thanks to Isabelle Kieffer for all the explanations on XAS and for the guided tour of the new beamline. Thanks to Denis Testemale for his availability and his patience when the beamline crashed in our presence (I swear it was not our fault!), and for the precious advice given, half asleep, at any hour of the night!! -Stéphanie Sayen and Emmanuel Guillon during the ESRF sessions. Thank you for the precious help provided and for all the answers to my vague questions on XANES. Thank you also for sharing the blind-test sessions and the sweets with me! -Petit Poney, who endured our more or less dubious experiments with liquid nitrogen. Thank you for the souvenir photos! On the Pharma-DEV side, I thank Françoise Nepveu, Karine Reybier, Paul-Louis Fabre, Mohamed Haddad, Marieke Vansteelandt and Geneviève Bourdy, as well as all the other members of the Redstress and PEPS teams, with whom I had the chance to work, to share a meal or a coffee.
Thanks to Pierre for his kindness and his daily good mood, as well as for the precious help and the good advice he gave me. Thanks also to Franck, who always has a smile and who helped me a lot with administrative questions. Franck, your punch is to die for! I need the recipe! Thanks to all the interns, PhD students and post-docs of Pharma-DEV who allowed me to work in a great atmosphere: Ennaji, Nambinina, Rémi, Thi Thu, Luyen, Solomiia, Filip, Marion, Lucie, Mireia, Laure-Estelle (LEC), Cynthia. Despite the freezing cold in winter, I loved sharing lunches with you in the "canteen corridor". And I could use getting back to the lunchtime crosswords!! A huge thank you to LEC and Cynthia for the very warm welcome you gave me when I arrived at the laboratory; I felt at home right away, and it is thanks to you. Thank you for your daily encouragement, for the scientific (or not!) discussions and for all the good times spent together. You were my role models of success! On the LCC side, I thank the whole team F, which warmly welcomed me and made me starve until 1:30 pm during my first two years of thesis! Thanks to Manu for his good mood and his great jokes, but also for the discussions about science, administration (and flies!), music and oenology! Thanks to Béatrice, who always has a smile, for her kindness and her advice. Laurent, I thank you for not hating me outright when I broke your car! I am very happy to have been able to get to know you these last months, and I thank you for the help and support you gave me. Viviane, thank you for your contagious laugh, your daily good mood and your kindness! I would also like to thank all the current and former students / post-docs of the team: Olivia, Hélène, Adam, Olena, Carine, Melisa, Daniel ("bon aprèm les garces !"), Mireia ("mais c'est vrai ça ?!"), Megan (so cute!), Gabriel (Mr Marcel), Rufus ("c'est qui Clémence ?!"), Sara (not to be confused with Sara!), Marie, Alex ("petit puits"), Omar ("Ahomalll"), Elena ("a fare l'amore", thanks again for the song!), Valentina ("la mammmmmmmmma") and Amandine (ACDC <3). Thank you for the great atmosphere you bring to the team, for the scientific discussions that help move our struggling experiments forward, but also a big thank you for the many moments spent together outside the lab, the mojito / Tahitian dance evenings at La Plage… The lab has been a great place to make friends! My thanks then go to the whole Chimie pharmaceutique teaching team, alongside whom I had the pleasure of discovering teaching during my two years as a DCE: Geneviève Baziard, Salomé El Hage, Barbora Lajoie, Jean-Luc Stigliani, Fatima El Garah, Christelle Recoche-Gueriot and Laurent Amielet. I learned a lot from you, and I would like to thank you for your welcome and your kindness.
I would like to thank the whole young members' board of the SCF Midi-Py, with whom I had the chance to run scientific workshops and set up nice projects: Claudia, Cécile, Alix, Morgane, Jérémy, Stéphane, and all the students of team F. I will miss the Chimie & Terroir sessions, but not selling the chemistry kits! Thanks also to Lydie Valade and, more broadly, to Chimie et Société for sharing with me their passion for science outreach.
I thank the organizers of the famous and very select "Zumbapéro" sessions, which allowed me to keep fit during these years of hard work! Julie, thank you for always being full of energy and for being there for the chatting sessions necessary for thesis survival! Amandine, I do not even know what to say… Thank you for being there for me during the good but also the bad times, for making me laugh to tears, for being my co-pilot during the missions… and so many other things! In short, thank you for being you!! Thanks also to Laurent for putting up with the two of us; I know we can be unruly sometimes!! Léa, "mon petit", thank you for being there for me, with your boundless energy and your hyperactivity!! Thanks also to my Flow, who is in a faraway country ("une fois !"). Your friendship helped me a lot to get through this thesis.
Thanks to my two little friends from Angers, Noémie and Marine, whom I have the pleasure of seeing again when I go back up to the far north! Thank you for still being there for me after all these years… Pierre, this thesis is also yours. I would never have been able to see it through without you. Thank you for being there during all these years, for supporting me, feeding me (my lunch boxes made people jealous at the lab!!) and putting me back on track when I was losing all motivation. You were endlessly patient and a true rock for me. Finally, a big thank you to my whole family, who always supported me during my long years of study and encouraged me in moments of doubt. I know I am not always easy to follow when I talk about my work; it is as if I were speaking Chinese to you, but you are always interested, and it makes me very happy. In short, all this to tell you: 我愛你 (and this time it really is Chinese, not chemist language!).
Enfin, un grand merci à toute ma famille qui m'a toujours soutenu pendant mes longues années d'études et m'a encouragé dans les moments de doute. Je sais que je ne suis pas toujours facile à suivre quand je parle de mon travail, c'est comme si je vous parlais chinois, mais vous êtes toujours intéressés et ça me fait très plaisir. Bref, tout ça pour vous dire : 我愛你 (et cette fois-ci c'est vraiment du chinois, pas du langage de chimiste !). Orbitrap [7] .. [10] [19] Hirnrinde" ("On an unusual Illness of the Cerebral Cortex") the uncommon case of a 51-yearold patient who was suffering from memory loss, disorientation, hallucinations and cognitive impairment. After the death of the patient, the post-mortem examination showed an atrophic brain with "striking changes of the neurofibrils" and "minute military foci" caused by the "deposition of a special substance in the cortex". [1] One century later, this "unusual illness" named Alzheimer's Disease (AD) has become the most widespread neurodegenerative disease whose etiology is still unknown. [2] I.A.1. Prevalence
Table of Contents
II.D.1. General principles
II.G.1. X-Ray absorption: general principles
General introduction
According to the World Alzheimer Report, [3] 46.8 million people were suffering from dementia worldwide in 2015 and this number is expected to almost double every 20 years.
Approximately 5% - 8% of individuals over age 65, 15% - 20% of individuals over age 75, and 25% - 50% of individuals over age 85 are affected by dementia. [4]

Figure I.A-1: Estimated number of people suffering from dementia in each continent in 2015. [3]

I.A.2. Clinical signs

AD is characterized by a progressive deterioration of cognitive functions and progresses in three stages: mild, moderate and severe. [5] At the early stage of AD, the patient can have memory lapses, difficulties completing familiar tasks, and confusion about time or place. New problems with words in speaking or writing can arise, and the possible changes in mood can be misinterpreted as depression.
At the middle stage, the symptoms become stronger: the patient can get frustrated or angry, and personality and behavior are impacted. The memory troubles worsen: the patient can forget events or parts of his personal history.
At the severe stage, the patient requires around-the-clock assistance. He loses the ability to carry on a conversation and to control movement, and can be bedbound. The individual is then vulnerable to infections (especially pneumonia), and the cause of death is usually external to the disease.
I.A.3. Histopathological signs

a. Brain size [6]

In AD, the volume of the brain is significantly reduced compared to a normal brain (Figure I.A-2). This atrophy results from the degeneration of synapses and the death of neurons. The hippocampus, the brain region playing a role in memory and spatial orientation, is particularly affected. The reduction in brain size and the progression of AD are related.
Figure I.A-2: Neuroanatomical comparison of a normal brain (A, B) and an AD brain (C, D). Prominent atrophy in C compared with A (arrows). B, D: coronal plane of A and C, respectively. The arrow on D shows an enlargement of the ventricles and a selective hippocampal atrophy. [7]

b. Amyloid plaques
The first hallmarks of AD, described by Aloïs Alzheimer in 1907 [1] as "minute miliary foci" caused by the "deposition of a special substance in the cortex", are amyloid plaques (Figure I.A-3a). They are composed of deposits of a peptide named Amyloid-β peptide (Aβ) in aggregated forms, [8] mostly fibrils. Those plaques, also named senile plaques, are found in the extracellular medium of the AD brain, and are more especially located in the hippocampus region. Aβ is mainly a 40 to 42-amino-acid-residue peptide originating from the cleavage of Amyloid Precursor Protein (APP) by two enzymes: β- and γ-secretases (see Section I.B for more details).
c. Neurofibrillary tangles
The other hallmarks of the disease are intracellular neurofibrillary tangles (Figure I.A-3b). Those tangles are also observed in Parkinson's disease (PD) [9] and are composed of hyper-phosphorylated Tau proteins. [10] This microtubule-associated protein interacts with tubulin to stabilize microtubules. In AD and PD, the abnormal phosphorylation of Tau induces accumulation as paired helical filaments that aggregate inside neurons in neurofibrillary tangles, destabilizing the microtubules. Since microtubules are essential to preserve the structure of the neuron, the neuron loses its functionality.

I.A.4. Risk factors

a. Age

The risk of developing the disease reaches 50% for individuals beyond age 85. [4] As life expectancy increases with advances in medicine, more and more people are likely to develop AD.
b. Gender
Women are known to have a higher life expectancy than men, thus being more susceptible to suffering from AD. Furthermore, studies suggest that the decrease in estrogen levels due to menopause could increase the risk of having AD. Indeed, clinical trials have shown that women who had been treated with hormone therapy have a lower risk of AD. [12]

c. Genetic mutations

There are two major forms of AD: the sporadic or late-onset form, which is the most common, and the familial or early-onset form, representing less than 5% of the cases. [13] Individuals living with Down's syndrome (also called trisomy 21) have an increased risk of early-onset AD. Indeed, they carry an extra copy of chromosome 21, in which is located the gene responsible for APP formation. [14] Mutations in the genes coding for APP, Presenilin 1 and Presenilin 2 (parts of the γ-secretase, which is responsible for the cleavage of APP into Aβ) and ApoE (involved in Aβ clearance) increase the risk of developing AD, as these proteins are involved in Aβ regulation in the brain. [13,15]

I.A.5. Diagnosis
The diagnosis of AD is achieved with 70% accuracy through clinical examination and neuropsychological assessment of the patient combined with brain imaging techniques. [7] However, the definitive diagnosis of AD requires histopathologic confirmation and is made post-mortem, based on the observation of specific pathological lesions: intracellular neurofibrillary tangles and senile plaques. [11] Diagnostic criteria for dementia and for AD have been proposed. [16] Medical history, clinical examination, neuropsychological testing and laboratory assessment are the recommended standard methods of examination. Criteria exist for several stages: probable AD dementia, possible AD dementia, and probable or possible AD dementia with evidence of the AD pathophysiological process. The diagnostic criteria were revised in 2011 [17] and are still used, as no specific marker for AD is known. However, they are applied in combination with brain imaging techniques.
b. Neuroimaging [18]

Structural magnetic resonance imaging (MRI) and positron emission tomography (PET) are two brain imaging techniques clinically used for the detection of abnormalities in the brain. [19] Structural MRI is commonly used to visualize brain atrophy caused by neuronal and dendritic losses. However, cerebral atrophy is not specific to AD and can be the result of another pathology. Structural MRI thus has limitations in AD recognition, as it cannot detect the histopathological hallmarks of AD (amyloid plaques and neurofibrillary tangles).
Fluoro-deoxy-D-glucose (FDG) PET is used as an indicator of brain metabolism by measuring the synaptic activity. Nevertheless, as metabolism can be disrupted for different reasons, FDG is not a specific marker for AD either.
Specific biomarkers of AD hallmarks are now under focus. Recently, the European Medicines Agency granted marketing authorization for Florbetapir F18 (18F-AV-45) (also called Amyvid), a PET marker with a good affinity for Aβ plaques (Kd = 3.7 nM [20]). This active substance can be used to detect amyloid plaques in living patients. [20][21] This is a great advance for AD diagnosis.
I.A.6. Current treatments
There is currently no cure for AD, only symptomatic treatments. The U.S. Food and Drug Administration (FDA) has approved four medications that are marketed in France and classified into two groups: acetylcholinesterase inhibitors (Donepezil, Rivastigmine and Galantamine) and N-Methyl-D-Aspartate (NMDA) receptor antagonists (Memantine).
Apart from amyloid plaques and neurofibrillary tangles, AD is characterized by a deficit of acetylcholine, a neurotransmitter that carries the signal across the synapse between two neurons. Acetylcholinesterase inhibitors prevent acetylcholine degradation by inhibiting acetylcholinesterase, the enzyme that catalyzes the breakdown of acetylcholine. The level of acetylcholine thus remains stable, allowing neurotransmission.
An over-concentration of glutamate is also observed in synaptic clefts of AD patients.
This molecule is an excitatory neurotransmitter that plays a role in neural activation, but it can also lead to neuronal death when present above its physiological concentration. NMDA receptor antagonists block the glutamate receptors to avoid the loss of neurons.
These symptomatic treatments only slow down the cognitive deterioration and have no effect on patients in the severe stage of AD. Researchers particularly focus on amyloid plaques and neurofibrillary tangles to find new therapeutic pathways that could prevent or stop the progression of the disease. [22]

I.B. Aβ and the amyloid plaques formation

I.B.1. APP metabolism and Aβ release

APP is a type-1 trans-membrane protein expressed in various tissues of the organism, especially in the central nervous system (CNS). [23] Its major neuronal isoform encompasses 695 amino acid residues. [24] Although its physiological function is still unclear, APP would play an important role in brain development, memory and synaptic plasticity. [24] Two different pathways of APP metabolism can occur: a major, non-amyloidogenic one, and a minor, amyloidogenic one in which the sequential cleavage of APP by the β- and γ-secretases releases Aβ. Thus, Aβ peptides are the product of a minor pathway of APP metabolism, [26] released in the extracellular space of the healthy brain during neuronal activity, without necessarily leading to Alzheimer's pathology. Aβ is subject to proteolytic degradation by Aβ-degrading proteases (AβDPs), which regulates Aβ levels in the brain. [27] Its functions in the brain are still unknown, although Aβ could play a role in synaptic plasticity and memory. [28]
I.B.2. APP and Aβ mutations
Familial AD is caused by mutations in the genes coding for APP, Presenilin 1 (PSEN1) or Presenilin 2 (PSEN2). PSEN1 and PSEN2 are two subunits of the γ-secretase. Mutations in both PSEN1 and PSEN2 lead to a higher Aβ production, PSEN1 mutations specifically leading to an increased Aβ1-42 formation. [13] For APP, 65 mutations are indexed in the Alzheimer Disease & Frontotemporal Dementia Mutation Database, only 15 of which are non-pathogenic. [29] The mutations are divided into three categories: mutations at the β-secretase cleavage site, at the γ-secretase cleavage site and in the mid-domain of the amyloid-β region. [30] The mutations at the γ-secretase cleavage site can alter the cleavage position and lead to an increase of the Aβ1-42/Aβ1-40 ratio. The mutations at the β-secretase cleavage site increase the rate of APP proteolysis by the β-secretase. The mutations in the mid-domain of the Aβ region of APP alter Aβ assembly by increasing the propensity of Aβ to form oligomers and fibrils. [31] Some of these APP mutations are thus located within the Aβ sequence itself.

Figure: Aβ sequence with the indexed mutations (1-letter code). [13]

I.B.3. Amyloid cascade hypothesis

AD is a multifactorial disease and the multiple mechanisms related to the disease remain unclear. However, since Aβ has been found in soluble form in healthy brains but in aggregated form in the brains of AD patients, [8] a hypothesis has been proposed to explain the formation of senile plaques composed of aggregated Aβ. The amyloid cascade hypothesis (Figure I.B-4), formulated in the early 1990s, [32][33][34][35] has become the dominant model of AD pathogenesis, [36] although it is still controversial. [37][38] It proposes that an abnormal extracellular increase of Aβ levels in the brain could lead to its aggregation into β-sheet rich structures. [39] The aggregation starts with the formation of oligomeric species that reorganize into protofibrils and fibrils, which are found in amyloid plaques, a hallmark of AD. In particular, oligomers accumulated in the brains of AD patients [40] are proposed to be the most toxic species for cells, [41][42] as they can permeabilize cellular membranes, thus initiating a series of events leading to cell dysfunction and death. [43] According to this hypothesis, the other events, such as the intracellular formation of neurofibrillary tangles and the disruption of synaptic functions, would ensue from this early and key event. Metal ions such as zinc, iron and copper have been found in amyloid plaques. [44] In addition, Cu and Zn are exchanged within the synaptic cleft of some neurons. They are supposed to play an important role in the aggregation according to the amyloid cascade hypothesis. [45] Actually, metal ions can bind Aβ and thus modulate the aggregation process.
They act either on the kinetics or on the thermodynamics of aggregation, impacting the morphology of the formed aggregates. [46] Furthermore, amyloid aggregates with entrapped redox-active metal ions such as copper ions are considered more toxic, since they can produce Reactive Oxygen Species (ROS), deleterious to biomolecules. [47]

I.B.4. Aggregation
Aβ is a natively unfolded peptide with no defined 3D structure. As this peptide is highly flexible and unstructured, it can easily undergo aggregation into amorphous or ordered structures (Figure I.B-5, green and blue pathways respectively), the β-sheet rich structures being thermodynamically the most stable. [48][49] The fibrils formed during the aggregation process are organized in stacked parallel or anti-parallel β-sheet structures. In Aβ40 fibrils, the 12-24 and 30-40 residues would be responsible for β-sheet formation. [50] The aggregation process of Aβ into β-sheet structures is dynamic and complex, consisting of multiple self-assembly steps. Two different steps are observed over time: nucleation and elongation.

Figure I.B-5: Aβ aggregation pathways. Picture from reference [46].
During nucleation, the unstructured monomers in solution cluster into small aggregates called oligomers that further form nuclei (red pathway in Figure I.B-5). Nucleation is the slower, rate-limiting step of aggregation, since the association of monomers occurring during nucleation is not thermodynamically favorable. [46] The nucleation phase is then followed by a rapid elongation phase in which protofibrils and finally fibrils are formed from the nuclei (orange pathway in Figure I.B-5).
Several techniques have been developed to monitor Aβ aggregation. [46,51] Among them, fluorescence of the Thioflavin T (ThT) dye is widely used. ThT interacts with β-sheets and undergoes a strong fluorescence enhancement upon interaction. [52] This allows the aggregation kinetics to be monitored. Atomic force microscopy (AFM) and transmission electron microscopy (TEM) can also be used to visualize the morphology of the aggregates.

I.C. Metal ions and the Aβ peptide

I.C.1. Coordination of metal ions to the Aβ peptide

Metal ions such as zinc, iron and copper are present in the brain. They are necessary to regulate the neuronal activity in the synapses and are involved in the biological functions of metallo-proteins. In several diseases such as AD, the metal ion homeostasis is disrupted and the concentrations deviate strongly from the physiological ones, with Cu and Zn levels that can reach up to three times the control levels. [53] Moreover, a high content of these metal ions is found in amyloid plaques extracted from AD brains. [44] In addition, such ions can bind to Aβ at physiological concentrations. Thus, knowing the coordination mode of these metal ions with
Aβ is a pre-requisite to understand their role in AD.
a. Zn(II) coordination to the Aβ peptide

Zn exists only as Zn(II) and its coordination to Aβ is still not well established. [54][55] Although it is consensual that a 1:1 complex is formed, [55] the nature of the amino acid residues involved in the coordination sphere is still under debate. A novel binding model has recently been proposed, based on Nuclear Magnetic Resonance (NMR) and X-ray Absorption Spectroscopy (XAS) studies of Zn coordination with mutated and N-terminally acetylated peptides (Figure I.C-1). [56] In this model, Zn(II) would be bound by the imidazole rings of His6 and of either His13 or His14, the carboxylate group of Glu11 and the carboxylate group of Asp1, Glu3 or Asp7.

b. Cu(II) coordination to the Aβ peptide
The Cu(II) coordination to Aβ has been widely studied for years and remains challenging, as several species are formed depending on the pH. Numerous studies have been carried out in the past decade and the results have recently been reviewed, [54,57-59] leading to a consensual model with different Cu(II) binding modes depending on the pH. Four binding modes (called components I to IV) are observed for pH values higher than 6. In component I, Cu(II) is bound to the NH2 terminus of Asp1, the carbonyl function of the Asp1-Ala2 peptide bond and the imidazole rings of His6 and of either His13 or His14. [60][61][62][63][64] For component II, two distinct models have been proposed. In the first one, Cu(II) is bound via the carbonyl function from Ala2-Glu3 and the imidazole rings of the three His. [62,64] In the second one, Cu(II) is bound to the N-terminal amine of Asp1, the amidyl function of Asp1-Ala2, the carbonyl function of the Ala2-Glu3 peptide bond and the imidazole ring of one His. [60,61,65] The first model does not explain the effect of pH on the coordination, as all the residues involved in Cu(II) coordination that can undergo deprotonation are already deprotonated. The second model explains the change of Cu(II) binding mode that occurs around pH 7.8 with the deprotonation of the Asp1-Ala2 amide function, leading to its coordination. Furthermore, Electron Nuclear Double Resonance (ENDOR), Hyperfine Sublevel Correlation (HYSCORE) and NMR studies highlight the involvement of both the NH2 terminus of Asp1 and the deprotonated Asp1-Ala2 amide bond, favoring the second model. [57] Thus, the second proposed model is the favored one. The other two components (called III and IV) are formed at higher pH with the deprotonation of the Ala2-Glu3 and Glu3-Phe4 amide functions, respectively. [65] In component III, Cu(II) is bound via the NH2 terminus, the two amidyl functions between Asp1 and Glu3 and one His residue; in component IV, via the NH2 terminus and the three amidyl functions between Asp1 and Phe4.
A carboxylate group has also been proposed to be involved in apical position for several components, coming from Asp1 [60][61][62] or from the Glu3, Asp7 and Glu11 carboxylates in equilibrium with Asp1 for component I. [61]

c. Cu(I) coordination to the Aβ peptide

Figure: Proposed models of Cu(I) coordination to the Aβ peptide. Adapted from reference [66].
Model A proposes a linear binding of histidines with a dynamic exchange between His6, His13 and His14. Model B involves an equilibrium between the His dyad and the His triad.
NMR studies have shown the implication of the three histidines in the Cu(I) coordination with a dynamic exchange, in line with the two proposed models. [66] However, XAS studies [66][67] and a comparison of synthesized Cu(I) complexes of His-His dipeptides with Cu(I) complexes of the truncated Aβ6-14 and Aβ10-14 peptides [68][69] highlight a linear binding mode with two histidines, corroborating model A.
In addition, according to a tandem mass spectrometry (MS/MS) study of the Cu(I)-Aβ structure, the two histidines mostly involved in Cu(I) coordination would be His13 and His14. [70] Thus, evidence suggests that Aβ binds Cu(I) through histidine residues in a linear fashion, with a dynamic exchange between His6, His13 and His14, the major form being the His13-His14 dyad. This is in line with affinity studies performed on three Cu(I) complexes with one His-to-Ala mutation on the Aβ peptide (named H6A, H13A and H14A) [71][72][73] that point to a slightly lower affinity than for the native peptide, H6A having a stronger affinity than the other two mutants. These results indicate that Aβ only needs two histidines to bind Cu(I), the His13-His14 dyad being the major form.

I.C.2. Reactive Oxygen Species production

a. Reactive Oxygen Species

Reactive Oxygen Species (ROS) are formed by the successive one-electron reductions of dioxygen (Figure I.C-4). They are necessary to maintain the homeostasis in cells and play an important role in signaling, [74] but they are also reactive oxidants, able to damage biomolecules. In cells, endogenous enzymes are in charge of the antioxidant defense to prevent ROS-mediated damage. [47] The superoxide (O2•−) anion, the first ROS produced by the one-electron reduction of dioxygen, is capable of inactivating a few enzymes, [47] but has a poor reactivity with most bio-inorganic substrates due to low rate constants (usually below 10² L·mol⁻¹·s⁻¹). [75][76] A potential excess of O2•− is removed by superoxide dismutases, which catalyze its dismutation into H2O2 and O2 with a rate close to the diffusion limit (k around 10⁹ L·mol⁻¹·s⁻¹). [76][77]
The second one-electron reduction leads to hydrogen peroxide (H2O2). [78] The hydroxyl radical (HO•) is the result of the third one-electron reduction of oxygen.
It can also be produced in the presence of metal ions, from H2O2 or from H2O2 and O2•−, by the Fenton reaction or the Haber-Weiss reaction, respectively (Figure I.C-6). HO• has a very short half-life (10⁻⁹ s) compared with O2•− (10⁻⁶ s) and is thus the most reactive and deleterious ROS, [74] being able to oxidize biomolecules such as proteins, lipids and DNA [79] because of its very high redox potential (E°′ = 2.34 V [80]). To control the quantity of pro-oxidants (ROS) and prevent damage to biomolecules, the body has protecting mechanisms including enzymatic and chemical antioxidants. However, in some diseases such as AD, [81] an imbalance may occur between pro-oxidants and antioxidants, due to a higher ROS production or a reduced activity of the enzymes responsible for ROS degradation, leading to oxidative damage to biomolecules. [82]

b. Metal-catalyzed ROS production

Redox-active metal ions such as copper and iron are involved in ROS production via the Fenton and Haber-Weiss reactions (Figure I.C-6). In the presence of a reducing agent, they can have a catalytic activity. [83] In AD, Cu and Fe can be coordinated to Aβ and the resulting complex could be directly involved in ROS production. ROS production has mostly been studied with Cu-Aβ, as Fe-Aβ has a lower redox activity. [84] Iron is found in the amyloid plaques predominantly in a colloidal form (originating from ferritin); however, histochemical studies indicate that it could also be bound to Aβ. [85] The coordination mode of Fe(II) with Aβ has been characterized, [86] and Fe(III) does not form a stable complex with Aβ because it finally converts into Fe(III)(HO)3 and precipitates. Thus, the stable formation of Fe(III)-Aβ under physiological conditions is unlikely. However, ROS production by Fe-Aβ might still be relevant, as Fe(II)-Aβ is stable and the Fe(III) complex formed during ROS production might not have time to precipitate. As the involvement of iron bound to Aβ in ROS production is still unclear, we focus here only on Cu-Aβ.
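For reference, the two reactions mentioned above (Figure I.C-6) can be written as follows, here with copper as the redox-active metal ion (iron behaves analogously); these are textbook formulations, not taken from the figure itself:

Cu⁺ + H2O2 → Cu²⁺ + HO• + HO⁻        (Fenton reaction)
O2•⁻ + H2O2 → O2 + HO• + HO⁻         (Haber-Weiss reaction, metal-catalyzed)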
In the case of copper, the pro-oxidant role of the Cu-Aβ system is not clearly established, as the complex is more active in ROS production than several biologically relevant Cu-peptides or Cu-proteins [87] but less efficient than loosely-bound copper. [84,87-91] However, in vitro studies have shown that Cu-Aβ is able to catalyze the formation of H2O2 and HO• in the presence of O2 and a reducing agent such as ascorbate (Figure I.C-7). [84,87,88,92] Moreover, although it was generally proposed that H2O2 production by Cu-Aβ occurs via a two-electron process, a recent study has highlighted the formation of superoxide as an intermediate in the production of H2O2 by Cu-Aβ and O2. [93]

c. Mechanism of ROS production: the "in-between" state

The energy required for the rearrangement between the Cu(I) and Cu(II) geometries (linear and square-planar, respectively) being very high, the electron transfer would rather proceed via a low-populated redox-active state in which the Cu(I) and Cu(II) binding modes are highly similar, thus inducing a low reorganization energy. This transient state, called the "in-between" state, is in equilibrium with the resting states (Figure I.C-8, bottom section). It has been studied by calculations [96] and characterized by MS/MS through the identification of the sites of oxidative damage on the peptide. [95] By comparing the non-specific oxidations detected on Aβ28 after radiation-induced ROS production with the copper-mediated oxidations of Aβ28, Asp1, His13 and His14 were found to be the metal-specific targeted amino acid residues.
Furthermore, kinetic studies of the copper-mediated Aβ28 oxidation have shown that Asp1 would be the first amino acid residue damaged. Thus, in this study, the proposed ligands for both Cu(II) and Cu(I) coordination in the in-between state are Asp1, His13 and His14. As they have been found to be the main targets for HO•, they are supposed to be the amino acid residues closest to copper during the metal-catalyzed ROS production.
I.C.3. Metal-catalyzed oxidation of Aβ
During the metal-catalyzed ROS production, the Aβ peptide undergoes oxidative damage. This is in line with the detection of oxidized Aβ in amyloid plaques in vivo. [97] Studies on single amino acid residue oxidations allow a prediction of the residues targeted during the metal-catalyzed oxidation (MCO) of Aβ. [98][99][100] The physiological main targets for HO• are the sulfur-containing amino acids (methionine, cysteine), the basic amino acids (arginine, histidine, lysine) and the aromatic amino acids (phenylalanine, tyrosine, tryptophan). [101] Table I.C-1 provides the main oxidation products of these amino acid residues.
Oxidation of Aβ28 by HO • produced by γ-radiolysis has shown that His and Phe residues are mainly targeted, [95] in line with the oxidations reported previously for free amino acid residues.
However, in the case of MCO of Aβ, the ROS are produced at the metal center. Thus, the oxidations are site-specific and can differ from the amino acid oxidations usually detected without metal.
Table I.C-1: Main oxidation products of the principal amino acid residues undergoing HO• attack. [101]

a. Histidines

The histidine residues of Aβ are oxidized into 2-oxohistidine during the MCO of Aβ in the presence of ascorbate [88,95,102,103] or hydrogen peroxide. [104] His13 and His14 were found to be more sensitive to oxidation, His6 being either not detected in its oxidized form [95,103,104] or affected only after a longer oxidation time. [102] This is in line with the predominant binding mode of Cu(I) to His13 and His14 (see Section I.C.1.c).

b. Aspartate
The Aβ peptide has three aspartate residues, at positions 1, 7 and 23. In the literature, only Asp1 has been found to be oxidized. Actually, as Asp1 is involved in the coordination of Cu(II), [57][58] it is a preferential target for the hydroxyl radical produced at the metal center.
Different damages have been detected at Asp1 during MCO, both in the presence of ascorbate [95,106] and of hydrogen peroxide. [104] The oxidative decarboxylation and deamination of Asp1 leads to the formation of a pyruvate function (Figure I.C-12, blue pathway). [95,104,106] Asp1 is also subject to a backbone cleavage at the α-position of the peptide, leading to an isocyanate function (Figure I.C-12, red pathway). [95,106] Another oxidation of Asp1 into 2-hydroxyaspartate, corresponding to the formal addition of an oxygen atom, has also been described (Figure I.C-12, green pathway). [95]

c. Tyrosine

Although the amino acid residues involved in copper coordination are more vulnerable to oxidation, non-coordinating amino acid residues can also be oxidized. This is the case for Tyr10, which is sensitive to oxidation and is responsible for Aβ peptide cross-linking by dityrosine formation (Figure I.C-13). [98] The latter, induced by Cu(II), has been detected for Aβ in the presence of H2O2. [107] MCO of Tyr10 into dityrosine was found to have an impact on aggregation, as Aβ cross-linking was correlated with the formation of covalent oligomers. [108-109] Furthermore, a study has proposed that Tyr10 acts as a gate that promotes the electron transfer from Met35 to Cu(II) for its reduction into Cu(I). [110]

Figure I.C-13: Tyrosine cross-linking mechanism leading to the formation of dityrosine. [111]

d. Phenylalanines

Three phenylalanines are present in the Aβ sequence, at positions 4, 19 and 20. None of them is involved in Cu(II) or Cu(I) coordination; nevertheless, Phe19 and Phe20 have been found oxidized during the MCO of Aβ in the presence of Cu(II) and ascorbate. [95] Phe19 and Phe20 have been detected with the formal addition of an oxygen atom, likely oxidized into hydroxyphenylalanine (Figure I.C-14). [98] This oxidation seems to occur after the oxidation of Asp1, which is involved in Cu binding. [95]

Figure I.C-14: Structural formula of phenylalanine and the three hydroxyphenylalanines.
e. Methionine
Methionine is an amino acid residue very sensitive to oxidation. In vivo, the enzyme methionine sulfoxide reductase is responsible for the reduction of methionine sulfoxide (Figure I.C-15), the main oxidized form of methionine. [112] Methionine can also be converted into a sulfuranyl / hydroxysulfuranyl radical cation by a one-electron oxidation. [113]
Figure I.C-15: Structural formula of methionine (left) and methionine sulfoxide (right).
Reviews have reported on the oxidation of the methionine residue of the Aβ peptide, located at position 35, and its role in toxicity and oxidative stress. [114][115] Although methionine is very sensitive to oxidation, its conversion into methionine sulfoxide occurs only after the oxidation of His13 and His14 during the in vitro MCO of Aβ in the presence of Cu(II)/ascorbate. [102] This highlights the site-specificity of the amino acid residue oxidation catalyzed by the bound copper.
Met35 has also been found to promote Tyr10 oxidation [116] and to interact with Gly33, inducing its peroxidation by promoting the formation of a carbon-centered radical, leading to a hydroperoxide. [90,117]

f. Other cleavages

Other oxidative cleavages have been reported for Aβ bound to Cu(II) in the presence of H2O2, such as the cleavage of the peptide bond of Asp1/Ala2, Ala2/Glu3, Val12/His13 or His13/His14. [104]
Chapter II: Methodologies
This chapter is a summary of the experimental conditions and techniques implemented for the studies presented throughout the manuscript. For each spectroscopic technique, the general principles are defined and the application of the technique to our study is described as well as the corresponding experimental conditions.
II.A. Preparation of the Aβ peptide

II.A.1. Solubilisation and monomerization
All the peptides are commercially available in the form of a powder (purchased from Genecust, Luxembourg, purity grade > 95 %). The shorter Aβ16 and Aβ28 peptides are weighed and solubilized in water, resulting in an acidic solution (pH 2) due to the presence of trifluoroacetate as counterion in the powder. For the full-length peptide Aβ40, which is subject to aggregation, the solution is freshly prepared. The powder is solubilized in NaOH 50 mM, resulting in a basic solution (pH 13) to remove preformed aggregates and slow down the aggregation, and further purified via Fast Protein Liquid Chromatography (FPLC) when used for aggregation studies (see Section II.A.4). [1] The commercially available Aβ peptide powder usually contains an unknown quantity of counterions (such as trifluoroacetate), so that quantification by weight is not precise. Thus, the quantification is performed by UV-Visible spectroscopy, using the absorption of Tyr10. The pH of each solution was taken into account, as the deprotonation of Tyr at high pH induces a change in the UV spectra obtained (Figure II.A-1).
II.A.2. Quantification
For the UV-Visible quantification, Tyr10 is considered as a free tyrosine. As the peptide can aggregate, some light scattering can occur and has to be taken into account. Thus, to avoid an overestimation of the concentration, a correction is made by subtracting the background absorbance from the maximum of absorbance of Tyr.
Figure II.A-1: pH titration of Tyr in Aβ16 (0.3 mM). UV-Vis absorption spectrum from the tyrosine band (red) to the tyrosinate band (blue). Picture from reference [1].
At low pH (until pH 8.5 according to Figure II.A-1), Tyr10 in the Aβ peptide has a maximum of absorbance at 276 nm, which is corrected with the absorbance at 296 nm, using an extinction coefficient of ε276 − ε296 = 1410 M⁻¹·cm⁻¹. [1] At higher pH, the quantification has to be performed around pH 13, where only the deprotonated form of Tyr is present. Tyrosinate has a maximum of absorbance at 293 nm, corrected by the absorbance at 360 nm, with an extinction coefficient of ε293 − ε360 = 2400 M⁻¹·cm⁻¹. [1] For mutant or shorter peptides without Tyr (e.g. Y10F-Aβ or Aβ1-7), the quantification was performed via the phenylalanine absorption (ε258 − ε280 = 195 M⁻¹·cm⁻¹ per Phe). [2] More details about UV-Visible spectroscopy are given in Section II.C.1.
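As an illustration, the concentration calculation described above reduces to a background-corrected Beer-Lambert relation. The sketch below (in Python) assumes a 1 cm cuvette and uses hypothetical absorbance readings; only the extinction coefficients come from the text:

```python
# Minimal sketch of the Tyr-based quantification of Abeta (assumed 1 cm path length).
# Extinction coefficient from the text: eps(276 nm) - eps(296 nm) = 1410 M-1 cm-1 at low pH.

def abeta_concentration(a_max: float, a_background: float,
                        delta_eps: float = 1410.0, path_cm: float = 1.0) -> float:
    """Return the peptide concentration (in M) from a background-corrected absorbance."""
    corrected = a_max - a_background           # subtract the light-scattering background
    return corrected / (delta_eps * path_cm)   # Beer-Lambert: A = eps * l * C

# Hypothetical readings at pH < 8.5: A(276 nm) = 0.42, A(296 nm) = 0.02
print(f"[Abeta] = {abeta_concentration(0.42, 0.02) * 1e6:.0f} uM")  # ~284 uM
```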
II.A.3. Oxidation and purification of Aβ
The metal-catalyzed oxidation of Aβ is carried out at 60 µM of Aβ28 or Aβ40, with 50 µM of CuSO4 and 0.5 mM of ascorbate, in 50 mM phosphate buffer at pH 7.4, under air. The reaction proceeds for 30 min under stirring and is complete when the ascorbate has fully reacted. Then, the solution is concentrated by centrifugation with an Amicon 3 kDa centrifugal device (Millipore).

II.B. Mass spectrometry

II.B.1. General principles

A mass spectrometer is composed of:
- An ion source, which produces gas-phase ions from the molecules of the studied sample.
- A mass analyzer, whose goal is to sort the ions based on their mass-to-charge ratio (m/z).
-A detector that provides an electric signal related to the number of detected ions.
-A data processing system (IT) to obtain a mass spectrum with the intensity given as a function of m/z.
All the results presented in this manuscript were obtained on mass spectrometers equipped with electrospray ionization (ESI) and ion trap.
II.B.2. Electrospray Ionization (ESI)
The electrospray ionization (ESI) is an atmospheric-pressure ionization process developed by J.B. Fenn (Nobel Prize in Chemistry 2002) in 1984. [3] ESI is considered a soft ionization technique, as little fragmentation occurs in the source. Its main advantage is to directly generate gas-phase ions from a solution, under atmospheric pressure. The liquid sample is infused at low flow rate (usually 1 to 100 µL·min⁻¹) through a capillary subjected to a high electric field, the latter being obtained by the application of a voltage difference (3.0 to 6.0 kV) between the capillary and a surrounding counter-electrode. [4] A constant flow of inert gas (nebulizer gas, usually N2) moves along the tube containing the capillary. At the end of the capillary, a spray is formed via the Taylor cone originating from the high voltage applied to the capillary. The sample is turned into charged micro-droplets in suspension. Those droplets move towards the counter-electrode while their solvent evaporates, leading to smaller droplets. Eventually, the charge density of the droplets becomes too high and gas-phase ions are released during Coulomb explosions. The ions produced are then transported to the analyzer.
In ESI, the ions obtained can be cations or anions, generated by protonation or deprotonation of the molecules, respectively. As explained above, molecules with several ionizable sites can give multiply charged ions.

II.B.3. Mass analyzers

a. Ion trap

In 1953, W. Paul (Nobel Prize in Physics 1989) and H. Steinwedel [5] described an ion trap, which was used for mass spectrometry for the first time in 1984 by Stafford and his colleagues. [6] This analyzer selects ions in the ion flow arriving from the ion source and sends them towards the detector.
Because fragment ions are trapped prior to be detected, another possibility is to isolate one of them and to fragment it: in this case, the mode of operation is MS/MS/MS or MS 3 . This can be After its accumulation and isolation in the ion trap, the ion is excited: a kinetic energy of the order of a few tens of eV is provided to make it resonate. With this energy, the ion collides with neutral atoms (helium) present in the trap and acquires an internal energy, resulting in bond breakage and leading to its fragmentation in smaller fragment ions. This type of fragmentation is known as Collision Induced Dissociation (CID).
b. Orbitrap [7]

The orbitrap, described in 2000 by Makarov, [8] is a high-resolution analyzer in which the ions orbit around a central spindle-shaped electrode; the frequency of their axial oscillations, detected as an image current and converted by Fourier transform, gives access to their m/z ratio with high accuracy.

c. Peptide fragmentation

During fragmentation, the backbone of a polypeptide chain can be cleaved at different bonds, leading to a, b and c ions when the charge is retained on the N-terminal fragment, or to x, y and z ions when it is retained on the C-terminal fragment (Figure II.B-5), according to the nomenclature of Biemann. [9]

Figure II.B-5: Schematic view of the possible fragmentations undergone by a polypeptide chain. [9]

In the ion trap used for our studies, the fragmentation occurs by CID, leading mainly to the formation of b and y ions.
LC-MS and LC-MS/MS conditions
High Performance Liquid Chromatography / Mass Spectrometry (LC/MS) analysis was performed on an ion-trap mass spectrometer (LCQ DECA XP Max, ThermoFisher), equipped with an electrospray ionization source, coupled to an Ultimate 3000 LC System (Dionex, Voisins-le-Bretonneux, France). Samples (10 µL) at 60 µM of Aβ were injected onto the column (Acclaim 120 C18, 50 × 3 mm, 3 µm, ThermoScientific), at room temperature. The gradient elution was carried out with formic acid 0.1% (mobile phase A) and acetonitrile/water (80/20 v/v) formic acid 0.1% (mobile phase B) at a flow-rate of 0.5 mL·min⁻¹. The mobile phase gradient was programmed with the following time course: 5% mobile phase B at 0 min, held 3 minutes, linear increase to 55% B at 8 min, linear increase to 100% of B at 9 min, held 2 min, linear decrease to 5% B at 12 min and held 3 min. The mass spectrometer was used as a detector, working in the full scan positive mode between 150 and 2000 Da followed by data-dependent scans of the first two most intense ions, with dynamic exclusion enabled. Isolation width was set at 1 Da and collision energy at 28% (units as given by the manufacturer), using wideband activation. The generated tandem MS data was searched using the SEQUEST algorithm against the human Aβ peptide sequence.
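For readability, the gradient program described above can be summarized as follows (values taken directly from the text):

Time (min)    % mobile phase B
0 - 3         5 (hold)
3 - 8         5 → 55 (linear)
8 - 9         55 → 100 (linear)
9 - 11        100 (hold)
11 - 12       100 → 5 (linear)
12 - 15       5 (hold)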
Conditions of tryptic digestion
The 100 µL solution of Aβ was filtered by using an Amicon 3 kDa centrifugal device (Millipore).

II.B.4. High Resolution Mass Spectrometry (HRMS)

HRMS is a powerful tool to characterize molecules and biomolecules, as it provides the exact mass of their related ions, thus allowing two molecules of close masses to be differentiated. In our studies, HRMS was used: (i) to identify some of the oxidized amino acid residues of Aβ, (ii) to obtain the chromatographic traces of some specific ions of Aβ and associated peptides, thus allowing relative quantification to be performed. HRMS was also used to check the digestion efficiency, systematically found close to 100 % (non-digested peptide not detected).
LC-HRMS conditions for the characterization of the oxidation sites of Aβ40
The same operating conditions as for LC-MS (column and mobile phase gradient) were used to carry out high resolution mass spectrometry (LC/HRMS) experiments, using a LTQ-Orbitrap XL mass spectrometer (ThermoFisher Scientific, Les Ulis, France) coupled to an Ultimate 3000 LC System (Dionex, Voisins-le-Bretonneux, France). The Orbitrap cell was operated in the full-scan mode at a resolving power of 60 000.

LC-HRMS conditions for the kinetics study of Aβ40

The same LC-HRMS setup was used for the kinetics study of Aβ40 oxidation (see Section III.B.1 for the corresponding experimental protocol).

II.C. UV-Visible spectroscopy

II.C.1. General principles

UV-Visible spectroscopy (UV-Vis) is a technique applied to measure the light absorbance of a compound in the ultraviolet-visible spectral region. This region can be divided into three wavelength ranges: near UV (185-400 nm), visible (400-700 nm) and very-near infrared (700-1100 nm). Usually, the absorbance is not measured below 190 nm, since dioxygen, water vapor and the quartz cuvettes used for the measurements absorb in this range and thus disturb the measurement.

The absorbance originates from the interaction of the photons of the light with the electrons of the bonds of the species present in the sample (such as molecules). Molecules with π-electrons or non-bonding electrons (n-electrons) are likely to absorb in the UV-Vis region.
A spectrophotometer is composed of a light source (usually a combination of a deuterium lamp for the UV range (< 350 nm) and a tungsten filament for the visible range (> 350 nm)), a dispersive system to select the wavelength, a space for the sample (usually held in a cuvette) and a detector which converts the detected light into an electric signal. The selection of the wavelengths can be realized either (i) by a monochromator setup that isolates the different wavelengths of the light and introduces each monochromatic light into the sample, or (ii) by a photodiode array (PDA) detector that detects the different wavelengths of the light passed through the sample after its dispersion by a spectrograph. As seen above, the detector converts the transmitted light into an electric signal. Two values can be extracted from the data:
- the transmittance (T), defined as the ratio of the transmitted intensity (It) to the initial intensity (I0) (Equation II.C-1):
T = It / I0        (Equation II.C-1)
- the absorbance (A), given by Equation II.C-2:
A = −log T        (Equation II.C-2)
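As a numerical illustration of Equations II.C-1 and II.C-2: a sample transmitting 10 % of the incident light (T = 0.10) has an absorbance A = −log(0.10) = 1.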
UV-Vis can also be employed for quantitative analysis since the absorbance of the light can be directly related to the concentration of the compound in certain conditions defined by the Beer-Lambert law (Equation II.C-3).
Aλ = ελ · l · C        (Equation II.C-3)
Aλ is the absorbance (dimensionless), ελ the molar extinction coefficient (in L·mol⁻¹·cm⁻¹), l the optical path length (in cm) and C the concentration of the compound in solution (in mol·L⁻¹), at the wavelength λ where the measurement is performed.
To be valid, the Beer-Lambert Law has to meet the following requirements [10] :
- A monochromatic light has to be used.
Ascorbate consumption conditions
Ascorbate consumption is monitored by UV-Visible spectroscopy at 265 nm, in a 1 cm path length cuvette containing a phosphate-buffered solution (50 mM, pH 7.4) of Aβ (12 µM), CuSO4 (10 µM) and ascorbate (100 µM). The ascorbate absorption is monitored every 10 s under shaking (800 rpm).
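A minimal sketch of how such a kinetic trace can be converted into concentrations and an initial consumption rate (in Python). The extinction coefficient ε265 ≈ 14 500 M⁻¹·cm⁻¹ for ascorbate is a literature value, not given in the text, and the absorbance trace is hypothetical:

```python
import numpy as np

# Hypothetical A(265 nm) trace sampled every 10 s; eps is a literature value (assumption).
EPS_ASC_265 = 14500.0   # M-1 cm-1, ascorbate at 265 nm (assumed)
PATH_CM = 1.0

t = np.arange(0, 60, 10)                               # s
a265 = np.array([1.45, 1.32, 1.20, 1.09, 0.99, 0.90])  # hypothetical readings

c_asc = a265 / (EPS_ASC_265 * PATH_CM)                 # Beer-Lambert, in M
rate0 = -(c_asc[1] - c_asc[0]) / (t[1] - t[0])         # initial consumption rate, M/s
print(f"initial ascorbate consumption rate ~ {rate0 * 1e9:.0f} nM/s")
```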
II.D. Fluorescence spectroscopy

II.D.1. General principles [10]

Fluorescence is an optical phenomenon that usually occurs in polyaromatic, planar or heterocyclic molecules, called fluorophores. The fluorescence lifetime (τ) corresponds to the time the fluorophore stays in the excited state before emitting a photon by radiative deactivation. The lifetime depends on the environment of the molecule. Usually, the lifetime of a fluorophore is in the nanosecond range.
The intensity of fluorescence emission of a very weakly absorbing and diluted sample is directly related to its absorbance and thus to its concentration (Equation II.D-2). [10] This relation makes quantitative experiments possible; however, it is important to establish the maximal concentration above which the fluorescence emission is no longer proportional to the concentration.

II.D.2. Detection of HO• by 7-OH-CCA fluorescence

Coumarin-3-carboxylic acid (CCA) traps HO• radicals to form 7-hydroxycoumarin-3-carboxylic acid (7-OH-CCA), which is fluorescent. In our experimental conditions, the fluorescence intensity is proportional to the number of 7-OH-CCA molecules formed, which in turn is proportional to the HO• radicals trapped by CCA. Thus, monitoring the 7-OH-CCA fluorescence gives information on the amount of HO• radicals exiting the Aβ/copper system when ROS are produced in the presence of ascorbate.
The hydroxyl group of 7-OH-CCA has a pKa of 7.4 [11] and only the deprotonated form emits light at 450 nm. Thus, the pH has to be controlled very precisely during the experiments, especially since the experiments are performed at pH 7.4 in our studies.
7-OH-CCA Fluorescence conditions
The fluorescence of 7-OH-CCA is monitored at 450 nm upon excitation at 395 nm with a microplate reader, at 25 °C. The microplate contains a phosphate-buffered (50 mM, pH 7.4) solution of Aβ with CuSO4, and ascorbate or ascorbate and hydrogen peroxide (H2O2), depending on the studies.
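A sketch of how the HO• production can be compared between samples from such plate-reader data (Python). Using the slope of the initial linear regime of the fluorescence trace as a relative measure is an analysis choice made here for illustration, and all numerical values are hypothetical:

```python
import numpy as np

def initial_slope(t_s: np.ndarray, fluo: np.ndarray, n_points: int = 10) -> float:
    """Linear fit over the first n_points of the trace -> relative HO. production rate."""
    slope, _intercept = np.polyfit(t_s[:n_points], fluo[:n_points], 1)
    return slope  # in a.u./s

# Hypothetical 7-OH-CCA fluorescence traces (a.u.) sampled every 30 s for two samples.
t = np.arange(0, 300, 30)
cu_only = 5.0 * t + 20 + np.random.normal(0, 5, t.size)   # loosely-bound Cu (faster)
cu_abeta = 1.5 * t + 20 + np.random.normal(0, 5, t.size)  # Cu-Abeta (slower)

ratio = initial_slope(t, cu_abeta) / initial_slope(t, cu_only)
print(f"relative HO. release (Cu-Abeta vs Cu alone): {ratio:.2f}")
```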
II.D.3. Thioflavin T fluorescence
The Aβ peptide has the ability to aggregate into structured aggregates such as oligomers and fibrils (see Section I.B.4). The fibrils are known to be composed of β-sheet structures of the Aβ peptide. [12] To monitor Aβ fibril formation, a fluorescent dye capable of interacting with β-sheet structures, with a resulting enhanced fluorescence, is commonly used:
Thioflavin T (ThT). [13][14] ThT is a benzothiazole salt (Figure II.D-5a) first used as a histological marker of amyloid fibrils; owing to its water solubility and its good affinity for fibrils (in the low µM range), [15] it was then used to monitor the aggregation process during in vitro experiments. The increase of the ThT fluorescence yield described above is related to its spatial arrangement. The carbon-carbon bond between the benzothiazole and the aniline moieties rotates freely when ThT is not bound (Figure II.D-5a, bottom panel). This rotation quenches the excited states created upon photon excitation, resulting in a low fluorescence. However, this free rotation is supposed to be precluded upon binding to β-sheet structures such as fibrils. The rotational blockage of the bond between the two moieties preserves the excited states, leading to an enhanced fluorescence yield. [15][16] Figure II.D-5b shows a proposed model of ThT binding to fibrils, named the "Channel" model. In this model, ThT is proposed to bind to fibrils in channel-like motifs along the surface of the fibrils, in a parallel alignment to the long axis of the fibrils. As ThT interacts with fibrils made of the stacking of different amino acid sequences, the interaction seems to be independent of the amino acid chemical nature. Thus, the "Channel" model could be a good hypothesis for ThT binding to fibrils.
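ThT aggregation curves are typically sigmoidal (lag phase, growth, plateau), mirroring the nucleation-elongation process of Section I.B.4. Below is a minimal sketch of how a lag time can be extracted by fitting an empirical Boltzmann sigmoid, a common analysis choice not prescribed by the text; the data are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, f0, fmax, t_half, tau):
    """Empirical sigmoid: baseline f0, plateau fmax, half-time t_half, time constant tau."""
    return f0 + (fmax - f0) / (1.0 + np.exp(-(t - t_half) / tau))

# Hypothetical ThT fluorescence trace (a.u.) over 40 h
t = np.linspace(0, 40, 81)
fluo = boltzmann(t, 1.0, 100.0, 20.0, 2.5) + np.random.normal(0, 2, t.size)

popt, _ = curve_fit(boltzmann, t, fluo, p0=[1, 100, 20, 3])
f0, fmax, t_half, tau = popt
lag_time = t_half - 2 * tau   # usual definition: intercept of the growth-phase tangent
print(f"t1/2 = {t_half:.1f} h, lag time = {lag_time:.1f} h")
```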
Aggregation conditions
For Aβ aggregation studies by fluorescence, the purified peptide (see Section II.A.4) is used.

II.E. Nuclear Magnetic Resonance (NMR) spectroscopy

II.E.1. General principles

In the absence of a magnetic field, the spin moments of the nuclei are uniformly distributed and the resulting total magnetic moment is zero. However, when a magnetic field B0 is applied, a Larmor precession occurs, due to the interaction between the magnetic field and the magnetic moment of the nucleus. The energy difference between the two spin states of a spin-1/2 nucleus is given by Equation II.E-1:

ΔE = h·ν = (γ · h · B0) / 2π        (Equation II.E-1)

where h is the Planck constant (J·s), γ the gyromagnetic ratio (rad·s⁻¹·T⁻¹) and B0 the magnetic field (T). Thus, the higher the magnetic field strength, the higher the energy difference between the two spin states and the higher the population difference between these two states, leading to a more intense magnetic moment. The frequency of precession of each nucleus depends on its chemical environment, such as the electron environment and the neighboring nuclei. A proton close to an electron-withdrawing group will have a higher frequency, and its proximity to other proton nuclei leads to spin-spin coupling. Thus, the frequency of each proton is informative on its chemical environment.
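As a numerical illustration of Equation II.E-1 (using standard constants, not given in the text): for ¹H, γ/2π ≈ 42.58 MHz·T⁻¹, so that at B0 = 11.7 T the resonance frequency is ν = (γ/2π)·B0 ≈ 42.58 × 11.7 ≈ 498 MHz, hence the usual designation of such instruments as "500 MHz" spectrometers.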
The NMR spectrum obtained by Fourier Transform contains the intensity of NMR absorption as a function of the chemical shift δ (in ppm) calculated from Equation II.E-2.
δ = ((ν − νref) / νref) × 10⁶        (Equation II.E-2)
where ν is the frequency of the proton and νref the frequency of the reference.

II.F. Electron Paramagnetic Resonance (EPR) spectroscopy

EPR spectroscopy probes paramagnetic species, i.e. species bearing unpaired electrons. Under an external magnetic field B0, the degeneracy of the two spin states of the electron is lifted (Zeeman effect), and the energy difference between the two levels is given by Equation II.F-1:

ΔE = h·ν = ge · μB · B0        (Equation II.F-1)

where h is the Planck constant (6.626 × 10⁻³⁴ J·s), ν the radiation frequency (in s⁻¹), ge the Landé factor (2.00232 for a free electron), μB the Bohr magneton (9.2740 × 10⁻²⁸ J·G⁻¹) and B0
the external magnetic field (in G).
The application of an electromagnetic wave perpendicular to B0 and with a frequency ν allows the transition between the two Zeeman levels, corresponding to an orientation change of the magnetic moment of the electron.
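As a numerical illustration of Equation II.F-1 (standard constants; ν matches the experimental conditions given below): for a free electron (ge = 2.00232) irradiated at ν = 9.5 GHz (X-band), resonance occurs at B0 = hν / (ge·μB) = (6.626 × 10⁻³⁴ × 9.5 × 10⁹) / (2.00232 × 9.2740 × 10⁻²⁸) ≈ 3390 G, i.e. about 0.34 T.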
In EPR, an absorption spectrum is obtained by detecting the residual electromagnetic field after it has passed through the paramagnetic sample. The frequency is fixed and the magnetic field is varied. When the B0 value of the applied magnetic field matches the frequency of the electromagnetic wave (via Equation II.F-1), the wave is absorbed by the sample, leading to the transition between the Zeeman levels. From the spectrum, the Landé factor (g) can be calculated, which is specific to each system studied. In the case of a free electron, g = ge = 2.00232. For anisotropic systems such as metal ion complexes, the response of the molecule differs depending on its orientation with respect to the magnetic field. Thus, three transitions occur, in the x, y and z directions. When the nucleus bearing the spin density has a nuclear spin I ≠ 0, the electron magnetic moment couples with the magnetic moment of the nucleus; this is called hyperfine coupling. The EPR spectrum is strongly impacted and will present a maximum of 2I + 1 lines for each electronic transition, due to the electron / nuclear magnetic moment coupling. The hyperfine coupling constant of a nucleus (A) is directly related to the spectral line spacing 𝒜 by Equation II.F-3:
A = (𝒜 · g · μB) / (h · c)        (Equation II.F-3)
𝒜 (in G) is the distance between two of the hyperfine lines, g the Landé factor, μB the Bohr magneton (9.2740 × 10⁻²⁸ J·G⁻¹), h the Planck constant (6.626 × 10⁻³⁴ J·s) and c the speed of light (2.9979 × 10¹⁰ cm·s⁻¹). A is expressed in 10⁻⁴ cm⁻¹.
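As a numerical illustration of Equation II.F-3, with hypothetical values typical of a Cu(II) complex: for 𝒜 = 160 G and g = 2.26, A = (160 × 2.26 × 9.2740 × 10⁻²⁸) / (6.626 × 10⁻³⁴ × 2.9979 × 10¹⁰) ≈ 0.0169 cm⁻¹, i.e. A ≈ 169 × 10⁻⁴ cm⁻¹.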
The electron magnetic moment can also interact with the nuclear magnetic moment of another nucleus, resulting in a super-hyperfine coupling.
Thus, an EPR spectrum gives insights into the nature of the paramagnetic center as well as into its environment.
II.F.1. Application to Cu(II) coordination study
As the Cu(II) ion is paramagnetic (d⁹), EPR is a powerful technique for investigating the binding modes of Cu(II) complexes. Cu(II)-peptide complexes have an axial symmetry; two Landé factors then characterize the system: g// when the magnetic field is along the z axis (Equation II.F-4) and g⊥ when the magnetic field lies in the xy plane (Equation II.F-5). The hyperfine coupling constant A// is related to the spectral line spacing 𝒜// by:

A// = (𝒜// · g// · μB) / (h · c)

where 𝒜// (in G) is the distance between two of the hyperfine signals, g// the Landé factor, μB the Bohr magneton (9.2740 × 10⁻²⁸ J·G⁻¹), h the Planck constant (6.626 × 10⁻³⁴ J·s) and c the speed of light (2.9979 × 10¹⁰ cm·s⁻¹). A// is expressed in 10⁻⁴ cm⁻¹.

The hyperfine interaction in the xy plane is generally much weaker than the one along the z axis, resulting in a broadening of the signal related to g⊥, which is thus usually not resolved (see Figure II.F-4). [17]

Figure II.F-4: EPR spectrum of Cu(II) in water.
The determination of the A//, g// and g⊥ values is essential to characterize the coordination mode of Cu(II)-Aβ. They give insights into the nature of the neighboring atoms in the equatorial plane (i.e. the number of oxygen and nitrogen atoms), according to the Peisach and Blumberg empirical correlation. [18] Thus, these EPR parameters will be used for comparison between different Cu(II)-peptide complexes in our studies.
EPR conditions
EPR spectra were recorded on a Bruker Elexsys E500 spectrometer equipped with a continuous-flow cryostat (Oxford). Analyses were performed on aqueous solutions containing 10 % glycerol, ⁶⁵Cu (450 µM) and the Aβ peptide (500 µM). The pH was adjusted with H2SO4 (1 M) and NaOH (1 M). Experimental parameters were set as follows: T = 120 K, ν = 9.5 GHz, microwave power = 20.5 mW, amplitude modulation = 10.0 G, modulation frequency = 100 kHz.
II.G. X-Ray Absorption Near Edge Structure (XANES)

II.G.1. X-Ray absorption: general principles [19]

X-rays are electromagnetic waves with wavelengths between 10⁻¹² m and 10⁻⁸ m and an energy range of 0.1-100 keV, used in X-ray absorption spectroscopy (XAS) at synchrotron radiation facilities. This technique is based on the interaction between X-rays and matter. X-rays can be absorbed by an atom, resulting in the excitation of a core electron from its ground state to a high-energy unoccupied electron orbital or to the continuum. XAS is based on the application of X-rays to a sample and the detection of either the absorption or the fluorescence generated during relaxation. The starting energy of the X-rays is chosen to be lower than the absorption edge of the atom, and is increased during the experiment. XAS spectra are presented as the absorption/fluorescence as a function of the incident X-ray energy (in eV).
Zn data were collected from 9510 to 9630 eV using a 5 eV step and 3 s counting time, from 9630 to 9700 eV using a 0.5 eV step and 3 s counting time, and from 9700 to 10000 eV with a k-step of 0.05 Å⁻¹ and a 3 s counting time.
For each sample, three spectra were averaged, and the resulting XANES spectra were background-corrected by a linear regression through the pre-edge region and a polynomial through the post-edge region, and normalized to the edge jump. All spectra were individually inspected prior to data averaging to ensure that beam damage was not occurring.
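A minimal sketch of the normalization just described (Python/NumPy). The edge position e0 and the fitting ranges are assumptions chosen for illustration and would be adapted to the actual data:

```python
import numpy as np

def normalize_xanes(e_ev, mu, e0=9662.0, pre=(9510, 9630), post=(9700, 10000)):
    """Pre-edge linear / post-edge polynomial background correction,
    followed by edge-jump normalization. e0 and the ranges are assumptions."""
    pre_mask = (e_ev >= pre[0]) & (e_ev <= pre[1])
    post_mask = (e_ev >= post[0]) & (e_ev <= post[1])

    pre_line = np.poly1d(np.polyfit(e_ev[pre_mask], mu[pre_mask], 1))
    post_poly = np.poly1d(np.polyfit(e_ev[post_mask], mu[post_mask], 2))

    edge_jump = post_poly(e0) - pre_line(e0)    # step height at the absorption edge
    return (mu - pre_line(e_ev)) / edge_jump    # normalized spectrum
```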
Zn samples were prepared by adding a solution of ZnSO4 (0.9 mM) and peptide (1 mM), in the presence of 10 % glycerol as cryoprotectant (pH adjusted to 7.1), into the sample holder. The sample was immediately frozen in liquid nitrogen.
Chapter III: Oxidation of the Aβ peptide
This chapter focuses on the oxidation of the Aβ peptide by the Cu/ascorbate/O2 system and the characterization of its oxidation sites. First, the oxidation of Aβ40 is investigated by LC-HRMS and LC-MS/MS after trypsin digestion.

III.A. Characterization of the oxidation sites of Aβ40

The exact nature of the oxidized amino acid residues cannot be determined at this stage, even if, among the residues of the tryptic Aβ6-16 peptide, the histidine and tyrosine residues are the most sensitive to oxidation by HO•. [1] Several peaks are detected on the chromatogram trace of HDGYEVHHQK+16, probably due to the presence of different oxidized species with a mass shift of +16 Da. For HDGYEVHHQK+32, only one peak is detected, with a low intensity.
Only one oxidation has been detected for the tryptic Aβ17-28 peptide. Table III.A-1 summarizes the detected non-oxidized and oxidized tryptic peptides, along with the monoisotopic masses of their corresponding protonated ions. Specific mass shifts of −45 Da and −89 Da have been found on the N-terminal tryptic Aβ1-5 peptide, corresponding respectively to the decarboxylation-deamination and to the oxidative cleavage of Asp1. [2][3] Usually, aromatic amino acid residues are more prone to oxidation than aliphatic ones, [4] in particular because of their high electron density, which favors the attack of hydroxyl radicals. Common oxidations (mass increase of 16 Da) were also detected on every tryptic peptide. As oxidations leading to the formal addition of an oxygen atom can occur on several amino acid residues, we cannot conclude on the nature of the damaged amino acid residue of each tryptic peptide, even if some amino acid residues are more sensitive than others. The same tryptic peptides were thus sequenced by LC-MS/MS in order to identify the oxidized amino acid residues. After oxidation, purification and digestion, the Aβ40 peptide was analyzed by LC-MS/MS in order to determine the site of oxidation on each tryptic peptide. As explained in Section II.B.3.c, by using Collision Induced Dissociation (CID), the peptide is cleaved at the peptide bond, leading to the formation of b and y ions, whose differences in mass are equal to the masses of the residues. [5] These ions are detected and their masses compared to the theoretical masses obtained from the peptide sequence. The oxidized residues are identified when the b or y ion values are increased by +16 Da, as compared to the non-oxidized peptide.
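A minimal sketch of how the theoretical m/z values of such ions, and the extraction windows used for the trace chromatograms, can be computed (Python). The monoisotopic peptide masses would be taken from Table III.A-1; the proton and oxygen masses are standard constants and the example peptide mass is hypothetical:

```python
PROTON = 1.007276   # Da, mass of a proton
OXYGEN = 15.994915  # Da, monoisotopic mass shift for one oxidation (+O)

def mz(mono_mass: float, charge: int, n_ox: int = 0) -> float:
    """m/z of [M + n_ox*O + charge*H]^(charge+) for a peptide of monoisotopic mass M."""
    return (mono_mass + n_ox * OXYGEN + charge * PROTON) / charge

def ppm_window(mz_value: float, ppm: float = 10.0) -> tuple:
    """Extraction window for trace chromatograms (mass accuracy below 10 ppm)."""
    delta = mz_value * ppm * 1e-6
    return (mz_value - delta, mz_value + delta)

M = 1336.06  # hypothetical monoisotopic mass of a tryptic peptide (Da)
for z in (1, 2):
    ion = mz(M, z, n_ox=1)  # singly oxidized form
    print(f"{z}+ ion: m/z {ion:.4f}, 10 ppm window: {ppm_window(ion)}")
```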
The N-terminal tryptic Aβ1-5 peptide was first analyzed by LC-MS/MS, but the spectra obtained were not informative. This small peptide is usually fragmented as a mono-protonated ion, leading to a poor-quality MS/MS spectrum. In addition, the terminal arginine residue (the amino acid residue with the highest pKa value of the side-chain function) tends to preclude proton mobility during fragmentation, resulting in the absence of b ions. [6] Figure III.A-5 shows two MS/MS spectra of the doubly protonated ion of the oxidized Aβ6-16 (+16 Da). In the first one (top), the oxidation is detected on His13, oxidized into 2-oxohistidine, while in the second one (bottom), His14 is detected oxidized into 2-oxohistidine. [7] For both MS/MS spectra, the peptide sequence is well covered by both the b and y ions, allowing a reliable characterization of the oxidation of His13 and His14. No other MS/MS spectrum was obtained, suggesting that only those two amino acid residues are oxidized in the 6-16 moiety of Aβ40. With these results, we can reasonably assume that the mass shift of +32 Da previously detected in HRMS is related to the oxidation of both His13 and His14 into 2-oxohistidine on the same peptide. However, this hypothesis cannot be confirmed here, as no Aβ6-16 ion with a mass increase of 32 Da was fragmented and detected in MS/MS. This could be due to the fact that only a small fraction of the Aβ40 peptide undergoes this double oxidation, resulting in a concentration too low to allow fragmentation.
Two different oxidations were also found on the Aβ17-28 tryptic peptide, as shown in the two MS/MS spectra of the doubly protonated ion corresponding to Aβ17-28 with a mass shift of +16 Da in Figure III.A-6. The two oxidations are located on Phe19 and Phe20, respectively. The +16 Da mass shift on phenylalanine is related to the formation of hydroxyphenylalanine (see Section I.C.3.d for the chemical structure). [8] Although two amino acid residues are targeted by ROS, no double oxidation of Aβ17-28 was detected in LC-HRMS or in LC-MS/MS, as was the case for Aβ6-16. The oxidations of Phe19 and Phe20 may occur on the same peptide, but this double oxidation likely occurs on a minor fraction of the Aβ peptide and is thus not detected in MS. The Aβ29-40 tryptic peptide was finally sequenced and, as assumed above, the oxidation detected with a mass shift of +16 Da is located on Met35, as shown in Figure III.A-7. This was expected and previously described, [9][10][11][12] as the methionine residue is very sensitive to oxidation and can be damaged both by hydrogen peroxide [13][14] and by the hydroxyl radical, [1,14] which are produced during the metal-catalyzed ROS production. [15][16]
oxidative cleavage and decarboxylation-deamination of Asp1, [2][3] His13 and His14 oxidized into 2-oxohistidine [7] and Phe19 and 20 detected as hydroxyphenylalanine. [8] Furthermore, Met35 was also found oxidized into methionine sulfoxide, in good agreement with what is widely described in the literature. [18][19] Figure III.A-8 summarizes the oxidations detected on Aβ40 in the present study along with the chemical structures of the oxidized amino acid residues.
Among the identified oxidized residues, three of them (Asp1, His13 and His14) are proposed to be involved in the coordination of copper in the "in-between" state of Cu-Aβ, [17] i.e. in the redox competent state responsible for ROS production (see Section I.C.2.c). They are thus in close vicinity of copper and are preferential targets for the ROS attack. Their oxidation during MCO of Aβ was expected. This is not the case for Phe or Met residues, whose oxidation could appear as side damages resulting from an escaping of the ROS away from the catalytic center. At this point, we cannot conclude on the identity of the amino acid residues targeted first by HO • because the level of oxidation for each targeted residue identified above is not known. The following part tends to answer this question.
III.B. Kinetics of Aβ40 oxidation
The characterization of the amino acid residues of Aβ targeted by ROS and of the nature of the oxidations, as investigated above, was carried out at the end of the reaction. In order to understand in which order the oxidative attack of ROS happens during MCO and what the preferential targets of the ROS attack are (among those identified above), the kinetics of the Aβ40 peptide oxidation was monitored by LC-HRMS. Among the several oxidized amino acid residues, some could be preferentially targeted by ROS. Thus, the reaction, started by mixing Aβ, copper and ascorbate, was stopped every minute and the resulting reaction mixture was then analyzed by LC-HRMS.
A semi-quantitative approach was used to study the level of each oxidized residue as a function of the reaction time.
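A sketch of one common way to implement such a semi-quantitative readout (Python): the extracted-ion peak area of each oxidized tryptic peptide is normalized by the summed areas of its oxidized and non-oxidized forms at each time point. This normalization scheme is an assumption made for illustration, and all peak areas below are hypothetical:

```python
def relative_oxidation(area_ox: float, area_nonox: float) -> float:
    """Fraction of the oxidized form of a tryptic peptide at one time point."""
    total = area_ox + area_nonox
    return area_ox / total if total > 0 else 0.0

# Hypothetical extracted-ion peak areas for DAEFR (Abeta1-5) over the first minutes
time_min = [0, 1, 2, 3, 4]
ox = [0.0, 2.1e6, 4.0e6, 5.2e6, 5.9e6]
nonox = [9.8e6, 7.9e6, 6.1e6, 5.0e6, 4.3e6]

for t, a, b in zip(time_min, ox, nonox):
    print(f"t = {t} min: {100 * relative_oxidation(a, b):.0f} % oxidized")
```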
III.B.1. Experimental section
Copper(II)-catalyzed oxidation of the Aβ40 peptide was carried out by mixing Aβ40, Cu(II) and ascorbate in phosphate buffer (50 mM, pH 7.4) to reach final concentrations of 60, 50 and 500 μM, respectively (substoichiometry of copper to avoid free Cu(II) in solution), for a reaction mixture volume of 2 mL. Incubation was done at room temperature for a controlled reaction time. A volume of 100 μL of the reaction mixture was taken out every minute between 0 and 14 min, and the reaction was stopped by adding 400 μL of HCl 14.8 mM (final pH 2). At pH 2, copper is no longer bound to Aβ and ascorbate is fully protonated into ascorbic acid (pKa 4.2 [20]); thus, the ROS production is stopped. The 400 µL solution of Aβ was filtered by using an Amicon 3 kDa centrifugal device (Millipore), by centrifugation for 15 min at 13500 rpm.
The final volume is around 100 µL (volume before dilution in HCl). Trypsin digestion was then carried out and the four tryptic peptides described before were obtained (see Section II.B.3 for more details).
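As an aside, preparing such a reaction mixture is a simple C1·V1 = C2·V2 dilution problem; the sketch below (Python) uses hypothetical stock concentrations, while the target concentrations and final volume come from the protocol above:

```python
# Hypothetical stock concentrations (M); target concentrations from the protocol above.
stocks = {"Abeta40": 1.0e-3, "Cu(II)": 5.0e-3, "ascorbate": 50.0e-3}
targets = {"Abeta40": 60e-6, "Cu(II)": 50e-6, "ascorbate": 500e-6}
v_final_ul = 2000.0  # 2 mL reaction mixture

volumes = {name: targets[name] * v_final_ul / stocks[name] for name in stocks}
buffer_ul = v_final_ul - sum(volumes.values())

for name, v in volumes.items():
    print(f"{name}: {v:.0f} uL of stock")
print(f"phosphate buffer: {buffer_ul:.0f} uL (to reach 2 mL)")
```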
III.B.2. Results
The trace chromatograms were obtained for every oxidized tryptic peptide of Aβ40, by using two monoisotopic ions corresponding to two charge states of the peptide. The mass accuracy was systematically set below 10 ppm. The m/z ratios used for the detection are listed in Table III.A-1. The evolution of the Phe oxidation is shown in Figure III.B-1c. The tryptic peptide with the oxidation on both Phe residues was not detected (i.e. LVFFAEDVGSNK+32). Phe oxidation seems to be slower than Asp1 or His oxidation, in particular at the beginning of the reaction, where a lag phase is observed. The curve starts to increase after 2 min of reaction and does not seem to reach a plateau around 8 min.
For several reasons, the evolution of Met35 oxidation as a function of the reaction time does not provide useful information. First, methionine is very sensitive to oxidation, and peptide/protein handling commonly contributes to its oxidation. This is probably the reason why Met35 is already oxidized before the MCO of Aβ has started (Figure III.B-1d). Second, the methionine sulfoxide formed by the ROS attack can be reduced back to non-oxidized methionine by ascorbate. [21] As ascorbate is present at a quite high concentration in the reaction mixture, this phenomenon would be responsible for an underestimation of the level of methionine sulfoxide generated during MCO. However, a slight increase of the curve is observed during the reaction; thus, Met35 seems to be oxidized during the course of the reaction.
To summarize, Asp1, His13 and His14 seem to be the first targets of the ROS attack. Similar tendencies were obtained with the truncated Aβ28 peptide, resulting in a proposition of the Cu-Aβ binding mode in the "in-between" state. [17] As Asp1, His13 and His14 are the first targets of HO• during the ROS production, and because they are involved in Cu(II) and/or Cu(I) coordination in the resting states, [22][23][24][25][26][27] they were proposed to participate in Cu binding in this transient state. The results obtained with Aβ40 are consistent with this hypothesis. With this in mind, the stop of the Asp and His oxidations after 8 min could be interpreted as a change of coordination during MCO. Indeed, if Asp1, His13 and His14 are involved in copper coordination during the ROS production, they are targeted first and quickly oxidized. Thus, once damaged, they may not be good ligands in their oxidized form, leading to a change of coordination.
Phe oxidation starts after a 2-min lag phase and reaches a much lower level of oxidation.
As the Asp1, His13 and His14 residues are targeted first by HO•, the oxidation of the phenylalanine residues is likely to be a side damage of the ROS production by the system, and would start when the amino acid residues targeted first by HO• are sufficiently oxidized to let the ROS escape. This is in line with the lag phase at the beginning of the reaction, indicating that the Phe residues are not directly targeted by HO•, although they are very sensitive to HO• oxidation. [1] The same applies to Met35, which is not coordinated to copper.
III.C. NMR study of Aβox
The above results were obtained by MS-based techniques. In order to corroborate them, we wanted to analyze Aβox using a different technique. The chemical modifications undergone by the amino acid residues of Aβ after MCO were analyzed by ¹H NMR. We chose to use the truncated Aβ28 instead of Aβ40, as the complete assignment of its ¹H NMR spectrum is available. [28] As the only damage observed on the 29-40 moiety of Aβ40 was attributed to Met35 oxidation and is well described, [9][10][11][12] we keep the essential information by working on Aβ28 instead of Aβ40.
III.C.1. Experimental section
The methodology and conditions used for these experiments are detailed in Section II.E.2.
III.C.2. Results
¹H NMR of Aβ28 leads to a spectrum with numerous peaks. In order to be able to observe the impact of oxidation on the targeted amino acid residues (Asp1, His13, His14, Phe19 and Phe20), the present study was carried out at pH 10.5, where the Hα of Asp1 and the aromatic protons of both the His and Phe residues are well separated from the other protons and thus easily detectable.
A complete NMR study of Aβ28 was previously carried out in the team, [28] leading to the attribution of all the protons of Aβ28 (see Annex II for the 1 H chemical shift table and the Aβ28 sequence along with the atom identifiers of each amino acid residue). Although that study was carried out at pH 7.4, it was useful to help us attribute the protons of the amino acid residues of interest. Unlike the C-terminal moiety of Aβox (from Ala21 to Lys28), which is not disturbed by oxidation, the majority of the amino acid residues of the N-terminal moiety are strongly affected. For example, the apolar aliphatic amino acid residues such as the Val and Leu residues have broadened proton signals (around 0.7-0.9 ppm), although they are weakly sensitive to oxidation. [1] Thus, the oxidation of some amino acid residues of the N-terminal moiety of Aβ28 also seems to affect the signals of the neighboring non-oxidized residues.

Another signal is present only in the Aβ28ox spectrum, around 3.6 ppm, with a signature corresponding to two diastereotopic protons. These protons are probably related to an oxidation product of Aβ, but their origin is unknown, as their chemical shift does not correspond to an oxidized amino acid residue listed above (see Section I.C.3) or detected previously in MS (see Section III.A).
III.C.3. Summary
The interpretation of the 1 H NMR spectrum of Aβ28ox was made difficult by the strong broadening of the peaks obtained. The origin of this broadening could tentatively be explained by the facts that (i) different oxidized species are generated during the MCO of Aβ, thus leading to different chemical shifts for a given proton, (ii) the numerous chemical modifications undergone by the peptide may also change the environment of the protons of non-oxidized amino acid residues that are close to oxidized ones, and (iii) the high concentration needed to perform NMR experiments requires Aβox to be highly reconcentrated (from 60 µM to 400 µM), which might result in a modification of the sample (e.g. oligomerization).
However, the presence of unaffected signals has highlighted that the C-terminal moiety of the Aβ28 peptide (between Ala21 and Lys28) is not affected by oxidation. This is in line with the identification of Asp1, His13, His14, Phe19 and Phe20 as the only oxidized amino acid residues detected in MS.
In the 1-20 part of Aβ28, the NMR spectrum of Aβ28ox does not provide information on the oxidized amino acid residues, as the majority of the signals are broad. Thus, the identification of Asp1, His13 and His14 as the main oxidized amino acid residues in MS cannot be confirmed by NMR. However, the presence of a peak in the Aβ28ox spectrum attributed to the methyl group of the pyruvate confirms the oxidation of Asp1 by decarboxylation and deamination.
III.D. Conclusion
In the present study, LC-HRMS, LC-MS/MS and 1 H NMR have been used to investigate the oxidation of the Aβ peptide during the ROS production. These techniques have highlighted the chemical changes undergone by the peptide due to the ROS oxidative attack.
MS experiments made it possible to determine the oxidized amino acid residues of the Aβ sequence and to characterize the nature of the oxidation undergone by each amino acid residue.
Asp1 is found oxidized either by decarboxylation-deamination or by oxidative cleavage, leading to the formation of a pyruvate or an isocyanate function, respectively. His13 and His14 are found oxidized into 2-oxo-histidines, Phe19 and Phe20 into hydroxyphenylalanines, and Met35 into methionine sulfoxide. Furthermore, through a kinetic study of the amino acid residue oxidation, Asp1, His13 and His14 have been found to be the main targets of ROS, while Phe19, Phe20 and Met35 are secondary targets. The results are in line with the previous study carried out on the truncated Aβ28 [17] and complete it by showing (i) that, as expected, Met35 is also oxidized during the MCO of Aβ40, but as a secondary target, and (ii) that Aβ28 is an appropriate model for oxidation studies, as the same main oxidations are detected for both the truncated and the full-length peptides.
Although the 1 H NMR spectrum of Aβ28ox is not easily exploitable, since the proton signals are broadened, the presence of signals not disturbed by oxidation has shown that no oxidation occurs in the 21 to 28 part of Aβ28. Moreover, the decarboxylation and deamination undergone by Asp1, leading to the formation of an N-terminal pyruvate function, is confirmed by the presence of a signal attributed to the protons of the pyruvate methyl group.
The kinetic study of the amino acid residue oxidation has also shown that, besides being the main targets, Asp1, His13 and His14 are strongly oxidized during the first minutes of the MCO and are then not targeted anymore. This phenomenon was also observed in the previous study [17] and has led to the proposition that these three amino acid residues would be the main ligands for Cu(II) and Cu(I) in the "in-between" state (see Section I.C.2.c) during the ROS production.
As they are targeted by the ROS, the chemical modifications they undergo may impact their ability to act as good ligands for copper. Hence, a change of coordination may occur, which would explain why they are not oxidized anymore after a few minutes of MCO.
As addressed above (see Section I.C.1), Asp1, His13 and His14 are also involved in Cu(II) or/and Cu(I) coordination in the resting states. Thus, the MCO of Aβ may have consequences on the copper coordination both in the resting and "in-between" states.
Chapter IV: Consequences of Aβ oxidation
This chapter focuses on the consequences of the Aβ peptide oxidation regarding metal coordination, as well as Reactive Oxygen Species (ROS) production and peptide aggregation, two key events of Alzheimer's Disease (AD). [1] The coordination of Cu(II) and Cu(I) with the oxidized peptide is addressed first, through an article published in Metallomics in 2016 [2] along with a summary of the article, written in French (requirement of the doctoral school). The supporting information related to the article is given at the end of the chapter.
IV.A.1. Article
IV.A.2. French summary
This article, published in Metallomics in 2016, [2] addresses the consequences of Aβ oxidation on copper coordination and on ROS production.
IV.B. Zn coordination with Aβox
Zn ions are found in the amyloid plaques extracted from AD brains [3] and they can form complexes with the Aβ peptide. According to the proposed models (see Section I.C.1.a), Zn(II) would be bound by the imidazole rings of His6 and of either His13 or His14 in equilibrium, the carboxylate group of Glu11 and the carboxylate group of either Asp1, Glu3 or Asp7, with a preference for Asp1. [4] As Asp1, His13 and His14 are found oxidized during the MCO of Aβ, the Zn(II) binding mode is thus likely to be affected by the peptide oxidation.
IV.B.2. Results
The main oxidized amino acid residues of Aβ28ox (Asp1, His13 and His14, see Chapter III) are involved in the coordination of Zn(II). Thus, in order to investigate the impact of Aβ oxidation on Zn(II) coordination, mutated peptides are used to mimic Aβox. The D1N mutated Aβ16 peptide does not have any N-terminal carboxylate group and is employed to mimic the oxidation of Asp1. The H13A mutated Aβ16 peptide bears only two His residues in its sequence and is used to mimic the oxidation of one His residue. In addition, the H6A-H13A and H6A-H14A mutated Aβ16 peptides are used to mimic the oxidation of two His residues. In the above study (Section IV.A), Aβ7 (sequence DAEFRHD) was used to mimic the double His oxidation, but as Zn(II) also binds Glu11, its use would simulate a double His oxidation along with an unavailability of Glu11 for binding.

Oxidation of Asp1 would not contribute much to the Zn-Aβox signature, even though its carboxylate group, involved in Zn coordination, is strongly affected during Aβ oxidation. This could be explained by the dynamic exchange existing between the carboxylate groups of Asp1, Glu3 and Asp7 regarding Zn coordination. [4]

The behavior of Zn-Aβ28ox seems to be close to that of Zn-H13A, Zn-(H6A-H13A) and Zn-(H6A-H14A), since their XANES signatures exhibit some similarities. A broadened peak is observed at 9665-9670 eV for Zn-Aβox and could result from the growth of a peak at 9668 eV, whose shape evolves to become well defined, from Zn-H13A to Zn-(H6A-H14A) and to Zn-buffer. A similar evolution is observed for another growing, broadened peak detected at 9680 eV, which clearly appears in the Zn-buffer spectrum. These elements would suggest that the Zn affinity for Aβ decreases upon Aβ oxidation, because the residues involved in zinc coordination are mainly targeted during MCO. In particular, the oxidation of one or two histidine residues would be responsible for this. This is in line with the affinity values of Zn for the His mutants, which are lower than for Aβ. [4] Unfortunately, as the signatures in the white line region are broad, precisely defining the several binding modes of Zn with the oxidized species is precluded. For the same reason, it was not possible to obtain a reliable linear combination fitting of the Zn-Aβox signature, as we found that different conclusions can be reached depending on the result of the fitting (see Annex III for examples of linear combination fittings).
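For reference, such a linear combination fitting amounts to the non-negative least-squares problem sketched below, where the measured signature is modeled as a weighted sum of reference spectra. The arrays here are mere placeholders standing in for the beamline data, not actual measurements.

import numpy as np
from scipy.optimize import nnls

# Columns would be reference XANES spectra sampled on a common energy grid,
# e.g. Zn-Abeta28, Zn-H13A and Zn-buffer; random placeholders are used here.
refs = np.random.rand(200, 3)
target = refs @ np.array([0.5, 0.3, 0.2])   # synthetic "measured" signature

weights, residual = nnls(refs, target)
weights /= weights.sum()                    # fractional contributions
print(weights, residual)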
It has previously been shown that the intensity of the Zn XANES white line is related to the number of ligands in the Zn(II) complex. [5] In the case of the D1N, Aβ28 and H13A peptides, the white line intensity is 1.3 (or 1.4 for H13A), related to a four-coordination of the metal center. The same results were previously obtained for the other two His mutants (H6A and H14A). [4] The white line intensity of Zn in buffer (around 2.3) allows one to deduce that 6 ligands are coordinated to loosely bound Zn. In the case of H6A-H13A and H6A-H14A, both intensities being equal to 1.7, it is likely that Zn is bound by 5 ligands, probably including carboxylate groups instead of His residues. The white line intensity related to Zn-Aβ28ox being equal to 1.4, the number of ligands is the same as for Aβ28 or H13A.
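As a purely illustrative reading of these numbers, one can interpolate between the quoted white-line intensities (1.3-1.4 for 4 ligands, 1.7 for 5, about 2.3 for 6). The linear relation assumed below is a convenience for illustration, not a calibration taken from reference [5].

import numpy as np

intensity = np.array([1.3, 1.7, 2.3])   # white-line intensities quoted above
n_ligands = np.array([4, 5, 6])         # corresponding coordination numbers

slope, intercept = np.polyfit(intensity, n_ligands, 1)

def estimate_coordination(white_line):
    return slope * white_line + intercept

print(round(estimate_coordination(1.4)))  # Zn-Abeta28ox: ~4 ligands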
In summary, the XANES signature of Zn-Aβ28ox shows that the Zn binding mode is affected when Aβ is oxidized, mainly because of the oxidation of His residue(s). However, the modification is not drastic as the number of ligands does not change. The oxidation of Asp1 would not have a substantial impact on Zn binding mode, in line with the existing dynamic exchange between Asp1, Glu3 and Asp7 in Zn coordination.
In the amyloid cascade hypothesis, the coordination of the Zn ion to Aβ is proposed to modulate the aggregation process (see Section I.B.3) and hence to impact the morphology of the formed aggregates. [6][7] Thus, besides the fact that the chemical modifications undergone by Aβ during the ROS attack could directly impact its ability to aggregate, the change of Zn binding mode due to Aβ oxidation could also have an effect on the aggregation process in the presence of Zn.
IV.C. Aggregation of Aβox
Aggregation of the Aβ peptide is one of the main events of AD, leading to the formation of amyloid plaques. [8] As addressed above (see Section I.B.4), Aβ is an unfolded peptide which is prone to aggregation and can form β-sheet rich structures. [9] In the previous chapter, the oxidized amino acid residues have been identified after MCO of the full-length Aβ40 peptide.
Asp1, His13 and His14 are the main amino acid residues targeted by ROS, but some minor oxidations may also occur on Phe19, Phe20 and Met35. As Aβ40 undergoes chemical modifications during MCO, its propensity to aggregate into β-sheet rich structures might be affected. In the present study, the aggregation capability of both Aβ40 and Aβ40ox is studied and compared. Aggregation into β-sheet structures is monitored by Thioflavin-T (ThT) fluorescence as a function of time, [10][11] and the presence (or absence) of fibrillar species at the end of the fluorescence experiment is investigated by Transmission Electron Microscopy (TEM). [12] This part presents the preliminary results we obtained on Aβ40 and Aβ40ox aggregation without metals.
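ThT kinetic traces of this kind are commonly summarized by fitting a sigmoidal model, from which a half-completion time and a lag time can be extracted. The sketch below, with synthetic data, illustrates the principle only and is not the analysis pipeline used in this work.

import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, f0, fmax, t_half, tau):
    # F(t) = F0 + (Fmax - F0) / (1 + exp(-(t - t_half)/tau))
    return f0 + (fmax - f0) / (1.0 + np.exp(-(t - t_half) / tau))

t = np.linspace(0, 200, 100)                              # time (h)
f = boltzmann(t, 1.0, 50.0, 80.0, 10.0)                   # synthetic trace
f += np.random.default_rng(0).normal(0.0, 0.5, t.size)    # add noise

popt, _ = curve_fit(boltzmann, t, f, p0=[1.0, 50.0, 100.0, 10.0])
f0, fmax, t_half, tau = popt
print(f"t1/2 = {t_half:.1f} h, lag time = {t_half - 2*tau:.1f} h")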
IV.C.1. Experimental section
The preparation of the Aβ40 and Aβ40ox solutions is detailed in Section II.A. The solutions of Aβ40 and Aβ40ox (20 µM) were incubated for 200 h at 37 °C and fibril formation was monitored by ThT fluorescence (the fluorescence conditions are presented in Section II.D.3).
The fluorescence curves obtained are an average of the fluorescence curves of 3 replicates.
Then, the solutions were collected and prepared for TEM using the conventional negative staining procedure: 20 μL of solution was adsorbed on Formvar-carbon-coated grids for 2 min, blotted, and negatively stained with uranyl acetate (1%) for 1 min. Grids were examined with a TEM (Jeol JEM-1400, JEOL Inc, Peabody, MA, USA) at 80 kV. Images were acquired using a digital camera (Gatan Orius, Gatan Inc, Pleasanton, CA, USA) at ×25 000 magnification. As phosphate buffer reacts with uranyl acetate, HEPES buffer was preferred and employed for the TEM experiments, and thus for the ThT fluorescence experiments as well.
IV.C.2. Results
a. Aggregation of Aβ and Aβox
ThT is a fluorescent dye whose fluorescence yield is enhanced when it interacts with β-sheet rich structures such as Aβ fibrils (see Section II.D.3 for more details). [13][14] In addition, it displays a bathochromic shift of both its excitation and emission maximum wavelengths. [15]

The change of the overall charge of the peptide could also be at the origin of the modification of the aggregation process. [16][17] The oxidation of the protonated Asp1 residue can lead to the formation of the neutral pyruvate function. [18][19][20] Thus, although Asp1 is not located on the peptide moiety responsible for aggregation, its alteration could disturb the aggregation process.
The aggregation study of mutated peptides (such as H13A-Aβ40 and H14A-Aβ40 to mimic His oxidation, or the N-terminally acetylated AcAβ to mimic Asp1 oxidation) could be interesting to determine whether the Asp1 or the His oxidation is at the origin of the inhibition of Aβ fibrillization. [21]
IV.C.3. Outlook
The preliminary results obtained by both fluorescence and TEM are very promising as they highlight a strong modification in the aggregation process of Aβ subjected to MCO.
Similar results were previously obtained after catalytic photo-oxygenation of Aβ. [22] However, as aggregation is a very sensitive process, the results have to be taken with caution. A series of controls should be carried out in order to ensure that the different aggregation behavior of Aβox effectively comes from the oxidative damages undergone by Aβ and not from the multistep purification procedure (described in Section II.A) of Aβox. The purification procedure undergone by the oxidized peptide will also be applied to the non-oxidized Aβ40, to ensure that it still aggregates after the procedure. Moreover, aggregation experiments will be carried out with mixtures of oxidized and non-oxidized peptides at different ratios and compared with samples containing the same quantity of non-oxidized peptide as in the mixture. The comparison of the fluorescence curves might be informative about a possible effect of Aβ40ox on the Aβ40 aggregation process. This set of new experiments is being carried out at the laboratory and should complete the presently described results.
IV.D. Conclusion
In the present studies, the impact of the MCO of Aβ on metal ion coordination, ROS production and aggregation has been investigated. As Zn and Cu ions can modulate the aggregation process by binding Aβ, [6][7] the impact of Aβ oxidation on aggregation in the presence of Cu or Zn should also be studied.
The oxidation of the Aβ peptide occurring during ROS production induces strong modifications in ROS production and aggregation, two key events of AD. It seems to favor amorphous-like aggregates such as oligomeric species, which are proposed to be more toxic than fibers. [23][24] It also leads to an enhanced catalytic activity for ROS production. Thus, Aβ oxidation appears to be a detrimental event in AD, for both the aggregation and ROS production processes.
Experimental
Titration of Aβ28, AcAβ28 and Aβ40
All the synthetic peptides were bought from GeneCust (Dudelange, Luxembourg), with a purity grade > 95%. Stock solutions of the Aβ28 (sequence DAEFRHDSGYEVHHQKLVFFAEDVGSNK) and Ac-Aβ28 (sequence Ac-DAEFRHDSGYEVHHQKLVFFAEDVGSNK) peptides were prepared by dissolving the powder in milliQ water (resulting pH ≈ 2). The peptide concentration was then determined by UV-visible absorption of Tyr10, considered as free tyrosine (at pH 2, (ε276 − ε296) = 1410 M−1 cm−1). The stock solution of the Aβ40 peptide (sequence DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVV) was prepared by dissolving the powder in NaOH (50 mM) and purifying the solution by FPLC. The peptide concentration was then determined by UV-visible absorption of Tyr10, considered as free tyrosine ((ε293 − ε360) = 2400 M−1 cm−1) in NaOH (50 mM, resulting pH ≈ 13).
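A minimal numerical sketch of this titration, based on the Beer-Lambert law A = ε·l·c with the difference-absorbance values quoted above, is given below; the absorbance readings are hypothetical examples.

def peptide_concentration(a_peak, a_base, delta_eps, path_cm=1.0):
    # Beer-Lambert: c = (A_peak - A_base) / (delta_eps * l)
    return (a_peak - a_base) / (delta_eps * path_cm)

# Abeta28 stock at pH 2: (eps276 - eps296) = 1410 M^-1 cm^-1
c_ab28 = peptide_concentration(0.85, 0.02, 1410.0)
# Abeta40 stock in NaOH 50 mM: (eps293 - eps360) = 2400 M^-1 cm^-1
c_ab40 = peptide_concentration(0.60, 0.01, 2400.0)
print(f"[Abeta28] = {c_ab28*1e6:.0f} uM, [Abeta40] = {c_ab40*1e6:.0f} uM")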
Proteolytic digestion
The solution of Aβ was filtered using an Amicon 3 kDa centrifugal device (Millipore) by centrifugation for 15 min at 13500 rpm, then washed and centrifuged twice with 200 μL of sodium hydrogenocarbonate (100 mM, pH 8). The concentrated sample (approx. 50 μL) was recovered and transferred to an Eppendorf ProteinLoBind 1.5 mL vial. Trypsin (0.05 ng/μL in formic acid 0.1%) was added to obtain an Aβ/trypsin ratio of 20/1 (w/w) and digestion was carried out at 37 °C for 3 h in a Thermomixer (Eppendorf), with 10 s of mixing at 750 rpm every minute.
Mass spectrometry
High Performance Liquid Chromatography / Mass Spectrometry (HPLC/MS) analysis was performed on an ion-trap mass spectrometer (LCQ DECA XP Max, ThermoFisher), equipped with an electrospray ionization source, coupled to a SpectraSystem HPLC system. The sample (10 µL of Aβ tryptic digest) was injected onto the column (Phenomenex, Synergi Fusion RP-C18, 250 × 1 mm, 4 µm) at room temperature. Gradient elution was carried out with formic acid 0.1% (mobile phase A) and acetonitrile/water (80/20 v/v) formic acid 0.1% (mobile phase B) at a flow rate of 50 µL.min−1. The mobile phase gradient was programmed with the following time course: 12% mobile phase B at 0 min, held 3 minutes, linear increase to 100% B at 15 min, held 4 min, linear decrease to 12% B at 20 min and held 5 min. The mass spectrometer was used as a detector, working in the full scan positive mode between 50 and 2000 Da, followed by data-dependent scans of the two first most intense ions, with dynamic exclusion enabled. The isolation width was set at 1 Da and the collision energy at 28% (units as given by the manufacturer), using wideband activation. The generated tandem MS data were searched using the SEQUEST algorithm against the human Aβ peptide sequence. Dynamic modifications were specified according to the expected mass shifts due to the Aβ peptide oxidation (Supporting Information, Table S1). The same operating conditions (column and mobile phase gradient) were used to carry out high resolution mass spectrometry (HPLC/HRMS) experiments, using an LTQ-Orbitrap XL mass spectrometer (ThermoFisher Scientific, Les Ulis, France) coupled to an Ultimate 3000 LC System (Dionex, Voisins-le-Bretonneux, France). The Orbitrap cell was operated in the full-scan mode at a resolving power of 60 000. HPLC/HRMS was also used to check the digestion efficiency, systematically found close to 100% (non-digested peptide not detected).

HPLC

High Performance Liquid Chromatography analysis was performed on an Agilent 1200 series device (Agilent Technologies) equipped with a DAD detector. The sample (10 µL) was injected onto the column (Acclaim 120 C18, 50 × 3 mm, 3 µm, ThermoScientific) at room temperature. Gradient elution was carried out with formic acid 0.1% (mobile phase A) and acetonitrile/water (80/20 v/v) formic acid 0.1% (mobile phase B) at a flow rate of 0.5 mL.min−1. The mobile phase gradient was programmed with the following time course: 5% mobile phase B at 0 min, held 3 minutes, linear increase to 55% B at 8 min, linear increase to 100% B at 9 min, held 2 min, linear decrease to 5% B at 12 min and held 3 min. Aβ28 was detected through the absorption of Tyr10 at 276 nm.
Tables
Table S1: Monoisotopic masses used for high resolution mass spectrometry. Monoisotopic apparent masses (m/z, in Th) of the mono-, di- and triply-protonated ions of the tryptic peptides of oxidized Aβ16; +16 accounts for the formal addition of one oxygen atom during oxidation (conversion of histidine into oxo-histidine).
Position  Peptide  [M+H]+  [M+2H]2+  [M+3H]3+

… much weaker than in the case of the resting states. As this fleeting state is very sparsely populated (on the order of 0.1% [1]), the direct characterization of its coordination mode by traditional spectroscopic techniques [2] is not possible. The detection by mass spectrometry (MS) of the main oxidations undergone by the Aβ peptide during ROS production led to the proposal of 3 main copper ligands in the IB state. [3] The aspartate located at the first position and the histidines located at positions 13 and 14 were thus proposed as the copper ligands in this state.

Since copper binds the Aβ peptide through a coordination mode involving specific amino acid residues, mutating these residues leads to a change of the coordination mode. The coordination modes of Cu(I) and Cu(II) in the resting states have been studied with numerous Aβ mutants. For Cu(II), the unavailability of the terminal amine of Asp1 (through the use of a peptide with an acetylated N-terminal amine) leads to a profound change of coordination. [4][5] For Cu(I), the presence of a single histidine in the sequence (through the mutation of two histidines into alanines) also impacts the coordination mode. [6] In the same way, the mutation of an amino acid residue involved in the copper coordination sphere in the IB state should induce a variation in ROS production, since this state is responsible for the electron transfer.
ROS production by copper with the different modified peptides was monitored through the fluorescence of 7-hydroxycoumarin-3-carboxylic acid (7-OH-CCA), produced by the trapping of hydroxyl radicals (HO•) by coumarin-3-carboxylic acid (CCA). Blocking the terminal amine, as well as mutating Asp1 so as to change the side chain bearing the Asp1 carboxylate group, strongly decreased the rate of HO• production.
VI.B. French summary
This chapter consists of a communication published in Dalton Transactions in August 2016. [1] The article details the study of the pro- and anti-oxidant properties of ascorbate in the context of Alzheimer's disease. Ascorbate is a reductant present at high concentration in the human brain (up to 10 mM in neurons). [2] It is classified as an anti-oxidant since it is able to react with certain oxidizing species such as Reactive Oxygen Species (ROS), in particular O2•−. The redox reaction between ascorbate and dioxygen is not efficient [3] but becomes so in the presence of a catalyst such as copper. Ascorbate reduces the cupric ion to the cuprous ion, which in turn transfers an electron to dioxygen, thus forming the superoxide anion.
Copper is then reduced again by ascorbate, which sustains the catalytic cycle.
When copper is coordinated to the Aβ peptide, it remains able to cycle between its Cu(II)-Aβ and Cu(I)-Aβ oxidation states, the system thus catalyzing ROS production. [4][5] In this case, ascorbate can therefore be considered as having pro-oxidant properties.
The pro- and anti-oxidant effects of ascorbate were studied by fluorescence spectroscopy, in the presence of free copper or of the Cu-Aβ complex, and of dioxygen.

pKa(I/II) = 7.8 [6]

Figure S2: Schematic view of the proposed Cu(I) coordination site in Aβ. [6]

Fluorescent detection of HO• by CCA

Fluorescence emission of 7-OH-CCA is very sensitive to pH, as the hydroxyl group has a pKa of 7.5 (Figures S2 and S3), and only the deprotonated form emits at 450 nm after excitation at 390 nm. Since the experiment is performed at pH 7.4, i.e. very close to the pKa, the pH has to be strictly stable in order to obtain accurate results. Because the oxidation of ascorbate releases protons, it is not easy to keep the pH constant even with a highly concentrated buffer, this issue being more pronounced at higher ascorbate concentrations.

Since similar results are obtained whatever the length of the peptide (see Figure S10), the MS experiments were performed with Aβ28 instead of Aβ16 for technical reasons.

[1] H. Eury, Etude de l'interaction de la thioflavine T et de complexes de Ru(II) avec le peptide amyloïde bêta dans le cadre de la maladie d'Alzheimer, 2013.

LC-MS Experiments

The kinetics of the amino acid residue oxidation were also studied by high-resolution mass spectrometry, in order to determine whether there were preferred ROS targets among the 6 amino acid residues found oxidized. Figure 3 shows the time courses of oxidation for the 4 peptides obtained after tryptic digestion of the Aβ40 peptide. The oxidation of Asp1 and of the histidines takes place from the very beginning of the ROS production reaction and seems to stop after 8 min (Figures 3a and 3b). In contrast, the oxidation of the phenylalanines only occurs after a few minutes, following the lag phase observed on the curve (Figure 3c). The amount of oxidized phenylalanine keeps increasing during the rest of the reaction. For Met35 (Figure 3d), no conclusion can be drawn, since the ascorbate present during the reaction is able to reduce oxidized methionine back to methionine, which biases the results.

The results thus obtained made it possible to identify Asp1, His13 and His14 as the preferred targets of ROS, since they undergo oxidative damage from the very beginning of the reaction, whereas the phenylalanines, secondary targets, are oxidized after a lag time.
Figure I.A-1 shows the distribution of the 46.8 million people living with dementia. In Europe, 10.5 million people are estimated to suffer from a neurodegenerative disease.
Figure I.A-3: Neuropathological lesions revealed by immunohistochemistry. (a) Senile plaques observed by immunohistochemistry with antibodies against Aβ. (b) Neurofibrillary tangles observed by immunohistochemistry with antibodies against phosphorylated Tau (pictures from reference [11]).
Clinical diagnosis

In 1984, the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) established clinical criteria for the diagnosis of Alzheimer's Disease.
I.B.1. Aβ: Structure and formation

Aβ is a 38 to 43 amino acid residue peptide (Figure I.B-1) derived from the enzymatic cleavage of APP. Depending on the exact location of the cleavage on the C-terminal part, several lengths can be formed, from Aβ1-38 to Aβ1-43. However, the most abundant species produced in the brain are Aβ1-40 and, to a lesser extent, Aβ1-42. Aβ is amphiphilic: the N-terminal moiety is hydrophilic while the C-terminal one is hydrophobic.
Figure I.B-1: Amino acid sequence of Aβ1-43 (1-letter code).
Figure I.B-2: A schematic view of APP proteolytic cleavage. In the non-amyloidogenic pathway, APP is first cleaved by α-secretase and then by γ-secretase to form truncated Aβ17-40/42 peptides, or by β-secretase leading to the formation of the truncated Aβ1-16. In the amyloidogenic pathway, APP is cleaved consecutively by the β- and γ-secretases, leading to the formation of full-length Aβ1-40/42 peptides.
When the mutation is located within the Aβ domain, APP proteolysis by both β- and γ-secretases leads to the formation of a mutated peptide. The possible mutations are shown in Figure I.B-3.
Figure I.B-3: Familial AD mutations on Aβ1-43. The mutated amino acid residues and the names of the mutations are colored (1-letter code). [13]
Figure I.B-4: Schematic representation of the amyloid cascade hypothesis with the intervention of metal ions.
Figure I.B-5: Schematic representation of amyloid aggregation (top section) and AFM images (2 × 2 µm) of the Aβ peptide at the oligomeric and fibrillary stages, superimposed with the typical sigmoid curve of fibril formation (bottom section). Picture from reference [46].
Microscopy techniques such as Atomic Force Microscopy (AFM) and Transmission Electron Microscopy (TEM) give information about the size and shape of the aggregates (see AFM images in Figure I.B-5).

I.C. Aβ, metal ions and Reactive Oxygen Species

I.C.1. Coordination of Aβ with metal ions
Figure I.C-1: Proposed coordination sites of Zn(II) to the Aβ peptide at pH 7.4. [56]
The proposed binding modes of the different components are shown in Figure I.C-2. For component I, it is now established that Cu(II) is bound to the NH2 terminus, the adjacent CO function from Asp1-Ala2, and to the imidazole rings of His6 and of either His13 or His14.
Figure I.C-2: Schematic representation of equatorial Cu(II)-Aβ binding sites depending on the pH. The pKa values of the different components are indicated on the pH scale (picture from reference [65]).
This binding mode (Figure I.C-2) is the most accepted model and it will be used as the component II model thereafter.
Cu(I) coordination to the Aβ peptide

Copper is a redox-active ion which is physiologically present in two redox states: Cu(I) and Cu(II). Cu(I) coordination with Aβ has been investigated more recently than Cu(II) coordination, and the involvement of histidine residues is now a matter of consensus. Several binding models have been suggested, two of them being the most populated (Figure I.C-3).
Figure I.C-3: Schematic view of the two proposed models for Cu(I) coordination in the Aβ peptide. Adapted from reference [66].
I.C.2. Reactive Oxygen Species

a. ROS and oxidative stress

Reactive oxygen species (ROS) are radicals and molecules deriving from the incomplete reduction of dioxygen. They are produced in small quantities during the in vivo metabolism of oxygen, through four successive 1-electron reductions of O2 leading to H2O formation (Figure I.C-4).
Figure I.C-4: Schematic view of the ROS production during oxygen reduction (black pathway) and the enzymes involved in ROS detoxification (blue pathways).
Figure I.C-5: (1) Dismutation of superoxide into dioxygen and hydrogen peroxide, catalyzed in living systems by SOD. (2) Dismutation of hydrogen peroxide into dioxygen and water, catalyzed in living systems by catalase. (3) Hydrogen peroxide reduction catalyzed by the glutathione peroxidase. GSH: glutathione; GS-SG: glutathione disulfide.
Figure I.C-6: (1) Fenton reaction. (2) Haber-Weiss reaction catalyzed by iron ions.
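Since the scheme itself is not reproduced here, the two reactions can be written in their standard textbook forms:

(1) Fenton reaction: Fe2+ + H2O2 → Fe3+ + HO• + OH−
(2) Haber-Weiss reaction (iron-catalyzed): O2•− + H2O2 → O2 + HO• + OH−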
Figure I.C-7: Mechanism of ROS production from a reductant and dioxygen, catalyzed by the Cu-Aβ complex. The ROS produced are the superoxide anion (O2•−), hydrogen peroxide (H2O2) and the hydroxyl radical (HO•).
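As the scheme is likewise not reproduced, a schematic form of the catalytic cycle consistent with this caption is (AscH− denoting the ascorbate reductant):

Cu(II)-Aβ + AscH− → Cu(I)-Aβ + Asc•− + H+
Cu(I)-Aβ + O2 → Cu(II)-Aβ + O2•−
Cu(I)-Aβ + O2•− + 2 H+ → Cu(II)-Aβ + H2O2
Cu(I)-Aβ + H2O2 → Cu(II)-Aβ + HO• + OH−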
Several studies have reported the damages undergone by the Aβ peptide during copper-mediated oxidation. The damaged amino acid residues are summarized in Figure I.C-9 and further described in the following paragraphs.
Figure I.C-9: Schematic view of the different oxidative modifications (black circles), cleavages (blue arrows) and interactions (red arrows) undergone by the Aβ42 peptide during copper-mediated oxidation (from reference [87]).
Figure I.C-10: Structural formula of histidine (left) and 2-oxo-histidine (right).
Figure I.C-11: 2-oxo-histidine formation from the oxidative attack of the hydroxyl radical at the C-2 position of the imidazole ring of histidine. [105]
Figure I.C-12 shows an oxidative mechanism leading to the formation of either a pyruvate (blue pathway), isocyanate (red pathway) or 2-hydroxyaspartate (green pathway) function, through the formation of an alkoxyl radical.
Figure I.C-12: Mechanism of aspartate oxidation with three different pathways starting from the alkoxyl radical and leading to the formation of 2-hydroxyaspartate (green), isocyanate (red) and pyruvate (blue).
The oxidized peptide solution is filtered on an Amicon Ultra 3 kDa membrane (Millipore), washed with ethylenediaminetetraacetic acid (EDTA) (10 equivalents) to remove copper, then with water and finally with a NaOH (50 mM, pH ≈ 13) solution. The oxidized peptide solution is recovered and the concentration is determined by UV-visible absorption of Tyr10, considered as free tyrosine ((ε293 − ε360) = 2400 M−1 cm−1) in NaOH (50 mM, resulting pH ≈ 13). As the oxidized peptide solution shows a background absorbance at 293 nm of unknown origin, the curve is fitted to subtract the absorbance due to the tailing (Figure II.A-2). The Aβ28 peptide was used instead of Aβ16 for technical reasons (membrane cut-off of 3 kDa).
Figure II.A-2: UV-Vis spectrum of oxidized Aβ28 (black curve) and fit (blue curve) used to subtract the background absorbance from the Tyr absorbance.
The electrospray ionization technique was first dedicated to protein studies, as it may produce multiply charged ions from large molecules bearing several ionizable sites, thus improving the sensitivity and extending the mass range of the analyzer for the study of high molecular weight molecules such as proteins and peptides. Nowadays, ESI is also used for the study of smaller molecules. The principle of electrospray ionization is illustrated in Figure II.B-1.
Figure II.B-1: Principle of electrospray ionization.
For molecules bearing several ionizable sites, the formation of multiply charged ions can occur by addition/removal of several protons on the same molecule. Usually, the ions are either of the form [M + nH]n+ in positive mode or [M − nH]n− in negative mode, where n is the number of protons. They are observed at the mass M+1 or M−1 for singly charged ions or, more generally, at the mass (M+n)/n or (M−n)/n for n-times charged ions. Thus, the technique is not suitable for molecules without protonation sites such as apolar compounds. Other types of ions can be detected, such as sodium ([M + Na]+) or potassium ([M + K]+) adducts and, in some cases, the ammonium adduct [M + NH4]+. Furthermore, dimers due to the bridging of two monomers by a proton (or Na+/K+) are likely to be detected. Those dimers are of the form [MHM]+ and are observed at the mass 2M+1 (or 2M−1 in negative mode) for singly charged ions.

II.B.3. Ion trap

a. Principle

In 1960, W. Paul (Nobel Prize in Physics 1989) introduced the ion trap, a device which stores ions and then sends them to the detector by deflecting them through an electric field. The ion trap consists of two hyperbolic electrodes called endcap electrodes and a hyperbolic ring electrode between them (Figure II.B-2). Ions enter through the entrance endcap electrode and are trapped in the ion trap center by an oscillating electric field acting on the ion trajectory, allowing the ions to be isolated in the trap or ejected towards the detector. In practice, an appropriate variation of the electric field specifically destabilizes the ion trajectories in order to eject the ions one by one. This oscillating electric field originates from alternating voltages applied on the endcap electrodes (negative voltage) and on the ring electrode (positive voltage).
Figure II.B-2: Schematic representation of an ion trap.
This isolation/fragmentation process can be repeated several times (up to 10 on most of the commercially available spectrometers) and is called the MSn process (Figure II.B-3). However, for each additional fragmentation, a small fraction of the ions is lost in the trap, so the detected ions are less intense and the technique is less sensitive.
Figure II.B-3: Schematic view of the ion trap operation for the successive fragmentation mode (MSn).
The orbitrap, introduced by A. Makarov and marketed since 2005, is an ion trap analyzer composed of an inner spindle-shaped electrode and an outer barrel-shaped electrode, both subjected to a direct voltage. The ions are injected tangentially into the field, trapped in the orbitrap, and adopt an orbital motion around the spindle-shaped electrode. The specific geometry of the trap and the electrostatic attraction towards the inner electrode, compensated by the ion inertia, force the ions to cycle around the inner electrode in complex spiral patterns (Figure II.B-4, red arrow). The axial component of the resulting oscillations is independent of the initial parameters of the ions, such as the initial kinetic energy of the injected ions. However, its frequency is proportional to (m/z)−1/2. Thus, the axial oscillations of the ions are detected as an image current induced in the outer electrode and transformed into mass spectra using a Fourier transform. The orbitrap delivers low-ppm mass accuracy with high resolution (up to 150 000 for ions produced by laser ablation) and is thus employed to perform High-Resolution Mass Spectrometry (HRMS).
Figure II.B-4: Schematic view of the orbitrap. The red arrow shows the complex spiral trajectory of the ion around the inner electrode.
The energy deposited during Collision-Induced Dissociation (CID) mainly leads to peptide bond breakage, and thus mainly to the formation of b and y ions (Figure II.B-5, orange arrows). By detecting and combining the series of ions, the sequence of the protein/peptide can be determined.

II.B.4. Analysis of Aβ by MS and MS/MS

a. MS/MS

Tandem Mass Spectrometry is a relevant tool for protein/peptide sequencing. It can also provide information and allow for the identification of a modified amino acid residue in a polypeptide chain, since any modification will affect the masses of the corresponding b and y ions. Thus, the characterization of the oxidation sites of the Aβ peptide can be carried out by analyzing the oxidized peptide by MS/MS (usually coupled to liquid chromatography, LC-MS/MS), the detected b and y ions providing information about a mass shift on one amino acid residue of the sequence, related to its oxidation. However, CID fragmentation is usually less efficient as the mass of the molecule increases, and the direct fragmentation of the whole Aβ40 peptide (top-down fragmentation) did not provide reliable results with the spectrometer we used. Thus, a proteolytic digestion was carried out in order to cleave the Aβ peptide into smaller peptides, more easily fragmented (bottom-up fragmentation). Trypsin was chosen for digestion; it cleaves peptides at the carboxyl side of lysine or arginine residues, except when they are followed by a proline residue. The four tryptic peptides obtained from Aβ40 digestion by trypsin are shown in Figure II.B-6, and the monoisotopic masses (m/z) used for their detection are listed in Table II.B-1, along with those of the Aβ16, Aβ28 and Aβ40 peptides. A complete table listing the monoisotopic masses of the studied peptides as well as their oxidized counterparts is given in Annex I.
Figure II.B-6: Sequence of the four tryptic peptides obtained after Aβ40 digestion by trypsin.
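The cleavage rule stated above is easy to reproduce in silico; the short sketch below applies it to the Aβ40 sequence given in the Experimental section and recovers the four tryptic peptides of Figure II.B-6.

import re

ABETA40 = "DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVV"

def trypsin_digest(sequence):
    # Cleave C-terminal to K or R, except when followed by P.
    return re.split(r"(?<=[KR])(?!P)", sequence)

print(trypsin_digest(ABETA40))
# ['DAEFR', 'HDSGYEVHHQK', 'LVFFAEDVGSNK', 'GAIIGLMVGGVV']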
Figure II.C-1 is a schematic representation of a UV-Vis experiment. The UV-Vis light (or beam) with an initial intensity I0 is created at the light source and goes through the sample. Upon light excitation, the energy given by the absorbed photon allows the transition of an electron from its ground state to an excited state. This energy has to correspond to the energy of a permitted transition (Figure II.C-1, green arrow in the energy level diagram). Then, the light exiting from the sample with a transmitted intensity It is analyzed at the detector. A UV-Vis spectrum is obtained with the absorption or transmission of the light (see definitions below) as a function of the wavelength.
Figure II.C-1: A schematic view of a UV-Vis spectroscopy experiment.
The Beer-Lambert law is valid under the following conditions:
- low concentration of the compounds in the solution (A < 1, i.e. at least 10% of the light is transmitted);
- the solution studied is neither fluorescent nor heterogeneous;
- no photochemical transformation of the compounds studied;
- no interaction between the compounds studied and the solvent.

The absorbance is an additive value. If, for a given wavelength, two compounds 1 and 2 absorb, the total absorbance is equal to the sum of the absorbances of each compound, as given by Equation II.C-4.

A = A1 + A2 = l (ε1.c1 + ε2.c2)   Equation II.C-4

II.C.2. Ascorbate consumption

The Reactive Oxygen Species (ROS) produced by the Aβ/copper/ascorbate system can be indirectly monitored by ascorbate consumption. As discussed above (see Section I.C.2.b), ascorbate (Figure II.C-2a) gives an electron to reduce Cu(II)-Aβ and thus produce ROS. The decrease of the ascorbate concentration can be monitored by UV-Visible spectroscopy, as ascorbate has a maximal absorption in the UV at 265 nm (Figure II.C-2b).
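As a worked illustration of Equation II.C-4 above, two concentrations can be recovered from absorbances measured at two wavelengths by solving the corresponding linear system; the extinction coefficients and absorbances below are hypothetical.

import numpy as np

# Rows: two wavelengths; columns: eps of compounds 1 and 2 (M^-1 cm^-1), l = 1 cm
E = np.array([[1410.0,  300.0],
              [ 200.0, 2400.0]])
A = np.array([0.50, 0.80])         # total absorbances at the two wavelengths

c1, c2 = np.linalg.solve(E, A)     # concentrations (M), since A = E @ c
print(c1, c2)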
Figure II.C-2: (a) Structure of the L-ascorbate anion and (b) UV-Visible spectrum of ascorbate (0.1 mM) in phosphate buffer (50 mM, pH 7.4).
Fluorescence is a property of certain cyclic molecules named fluorophores. Upon a specific light excitation, they can re-emit light at a higher wavelength. The Jablonski diagram illustrates the fluorescence process (Figure II.D-1).
Figure II.D-1: Jablonski diagram.
Figure II.D-2: Schematic spectrum of excitation (blue curve) and emission (red curve) of a fluorophore, highlighting the Stokes shift (difference between the maximum wavelengths of emission and excitation).
A fluorophore is characterized by its excitation and emission wavelengths (Figure II.D-2) and by its quantum yield (Φf), lifetime (τ) and fluorescence intensity (If). The quantum yield characterizes the efficiency of fluorescence compared with the other deactivation pathways for a given fluorophore. It is directly related to the ratio between the number of photons emitted and the number of photons absorbed, which translates into the ratio of the fluorescence intensity (If) to the absorbed intensity (Ia):

Φf = If / Ia   Equation II.D-1

The value of the quantum yield is between 0 and 1, the best fluorophores having a quantum yield close to 1. Moreover, the quantum yield varies with the environment of the fluorophore, such as the concentration, the pH and the nature of the solvent.
Figure II.D-3: Coumarin-3-carboxylic acid (left) and 7-hydroxycoumarin-3-carboxylic acid (right) structures.
Figure II.D-4: Excitation spectrum (black curve, λemission = 450 nm) and emission spectrum (blue curve, λexcitation = 395 nm) of 7-hydroxycoumarin-3-carboxylic acid (50 µM) in phosphate buffer at pH 7.4.
Figure II.D-5: (a) Structure of ThT (top) and representation of the rotation of the bond between the benzothiazole moiety (pink) and the aniline moiety (blue) (bottom). (b) "Channel" model of ThT binding to fibril-like β-sheets. ThT is proposed to bind along surface side-chain grooves running parallel to the long axis of the β-sheet. Pictures from reference [16].
The 1 H nucleus can occupy 2 energy levels (2I+1), related to two spin alignments relative to B0: parallel (magnetic quantum number m = ½) or anti-parallel (m = −½) (Figure II.E-1). The lower-energy level is slightly more populated than the other one, resulting in a non-zero magnetic moment vector sum called the magnetization (M), parallel to B0 (Figure II.E-2a).
Figure II.E-1: Energy levels of the spin states for a nucleus with a spin I = ½ (such as 1 H) subjected to a magnetic field B0.
Figure II.E-2: Schematic view of the different phenomena during a 1 H NMR experiment.
Figure II.E-3: Fourier transform of the Free Induction Decay (FID) into an NMR spectrum.
For 1 H NMR experiments, tetramethylsilane (TMS) is used as a reference, its 12 protons having the same chemical shift, far from those of the protons usually studied.

II.E.2. Application to Aβ chemical structure study

a. Sample preparation

Aβ28 peptide: A stock solution of Aβ28 (2 mM) was diluted with D2O to reach a peptide concentration of 0.4 mM, and the pH was adjusted to 10.5 with NaOD (1 mM).

Aβ28ox peptide: A stock solution of Aβ28ox (2.7 mM, for preparation see Section II.A.3) was diluted with D2O to reach a peptide concentration of 0.4 mM, and the pH was adjusted to 10.5 with NaOD (1 mM). A volume of 600 µL is required to run the NMR experiment.

b. NMR conditions

The 1 H NMR experiments were recorded on a Bruker Avance 500 spectrometer equipped with a 5 mm triple resonance inverse Z-gradient probe (TBI 1 H, 31 P, BB). The presaturation of the water signal was achieved with a zqpr sequence (Bruker). The 1 H NMR experiments were performed at 298 K. The acquisition parameters were d1 = 30 and p1 = 6.6 µs.

II.F. Electron Paramagnetic Resonance

II.F.1. General principles

Electron Paramagnetic Resonance (EPR) is an absorption spectroscopy which has similarities with NMR; the electromagnetic waves interact with the magnetic moment associated with electrons rather than with nuclei. Yevgeny Zavoisky and Brebis Bleaney developed EPR independently at the same time, in 1944. EPR is mainly employed for studying compounds with an unpaired electron, such as radicals and paramagnetic metal complexes. An electron is characterized by the spin quantum number S = ½, the magnetic moment μ and the two spin states ms = −½ and ms = +½. In the absence of a magnetic field, the electron spin states are degenerate. However, when subjected to a magnetic field (B0), the magnetic moment of the electron aligns itself parallel (ms = −½) or antiparallel (ms = +½) to B0. This results in a removal of the degeneracy, the spin states thus having two different energies, ms = −½ having the lower one (Figure II.F-1). These energy levels are called Zeeman levels.
Figure II.F-1: Energy levels of the spin states for an electron subjected to a magnetic field B0.
Figure II.F-2 shows a schematic view of an EPR setup. A microwave source (usually a Gunn diode) provides the electromagnetic wave of fixed frequency ν, which goes through an attenuator and is sent to the sample cavity by passing through the circulator. The sample cavity is surrounded by a magnet which generates the static magnetic field B0.
Figure II.F-2: Schematic representation of an EPR setup.
Figure II.F-3: Example of an EPR spectrum. (Top) Absorption and (bottom) first derivative as a function of the magnetic field strength.
Three different Landé factors (gx, gy and gz, described by Equation II.F-2) characterize the system, resulting in 3 signals on the EPR spectrum, at the respective magnetic field values Bx,res, By,res and Bz,res.

g(x/y/z) = hν / (μB . B(x/y/z)res)   Equation II.F-2
In the axial case, two Landé factors (g∥ and g⊥, described by Equations II.F-4 and II.F-5) characterize the system, resulting in two different signals centred at the B∥res and B⊥res values of the magnetic field.

g∥ = hν / (μB . B∥res)   Equation II.F-4

g⊥ = hν / (μB . B⊥res)   Equation II.F-5

As Cu(II) has an electron spin S = ½ and a nuclear spin I = 3/2, the hyperfine coupling results in four signals (2I+1) for each Zeeman transition. The signal related to g∥ is subdivided into 4 distinct signals, as shown in Figure II.F-4. The hyperfine coupling constant A∥ can be calculated from the spectrum with Equation II.F-6.

A∥ = 𝒜∥ g∥ μB / (h c)   Equation II.F-6
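As a numerical illustration of Equations II.F-4/5, the Landé factor follows directly from the resonance field; the constants are CODATA values, while the X-band frequency (9.5 GHz) and the resonance field are example numbers, not data from this work.

H = 6.62607015e-34        # Planck constant (J s)
MU_B = 9.2740100783e-24   # Bohr magneton (J T^-1)

def g_factor(freq_hz, b_res_tesla):
    return H * freq_hz / (MU_B * b_res_tesla)

print(g_factor(9.5e9, 0.320))   # ~2.12 for this example field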
Figure II.G-1 shows the Bohr model, which illustrates the phenomenon. When the electron is excited, a core hole is created, which is filled approximately a femtosecond later by the relaxation of an electron from a high-energy orbital, with a release of energy leading in some cases to fluorescence light emission. Depending on the X-ray energy, the excited electron can come from the K-shell, L-shells or M-shells. The closer the electron is to the nucleus, the higher the energy required to excite it.
Figure II.G-1: Bohr model of an atom with the different electron shells.
As an example, Figure II.G-2 shows the X-ray absorption spectrum of NiO as a function of the energy, along with the X-ray Absorption Near Edge Structure (XANES) and the Extended X-ray Absorption Fine Structure (EXAFS) regions of the spectrum.
Figure II.G-2: NiO X-ray absorption spectrum highlighting the XANES and EXAFS regions.
In this chapter, the oxidized Aβ peptide is analyzed using Liquid Chromatography coupled to Tandem Mass Spectrometry (LC-MS/MS) and to High Resolution Mass Spectrometry (LC-HRMS), in order to characterize the oxidative damages on Aβ and to identify the oxidized amino acid residues. The study is complemented by proton Nuclear Magnetic Resonance (1 H NMR) analysis of the oxidized and non-oxidized Aβ peptide. The kinetics of the oxidation of the amino acid residues of Aβ40 is finally monitored by LC-HRMS. All these studies may provide information on the chemical modifications undergone by the amino acid residues of Aβ during oxidation, and will be further useful for studying the consequences of Aβ oxidation regarding Reactive Oxygen Species (ROS) production and aggregation, two events implicated in Alzheimer's Disease.

III.A. Characterization of the oxidation sites

During the ROS production catalyzed by the Cu-Aβ complex, the Aβ peptide is targeted by ROS. The amino acid residues damaged by ROS undergo chemical modifications, leading to a change of their mass. Thus, the characterization of Aβ oxidation can be investigated by MS. After Metal Catalyzed Oxidation (MCO) of Aβ by the copper/ascorbate/O2 system, purification and tryptic digestion, the sample is analyzed by LC-HRMS in order to measure the exact mass of every tryptic peptide and thus deduce the nature of the chemical modifications undergone. Then, LC-MS/MS analysis of the sample is carried out in order to sequence the peptides and identify the amino acid residues targeted by ROS. The combination of HRMS and MS/MS is a great source of information for the investigation of the oxidation sites on the Aβ peptide.

III.A.1. Experimental section

Copper(II)-catalyzed oxidation of the Aβ40 peptide was carried out by mixing Aβ40, Cu(II) and ascorbate in phosphate buffer (50 mM, pH 7.4) to reach final concentrations of 60, 50 and 500 μM, respectively (sub-stoichiometry of copper to avoid free Cu(II) in solution), in a reaction mixture volume of 100 μL. Incubation was done at room temperature for at least 15 min. Trypsin digestion was then carried out in order to obtain the four tryptic peptides described above (see Section II.B.3). The LC-HRMS and LC-MS/MS conditions are detailed in Section II.B.4.

III.A.2. Results

a. Detection of the oxidized tryptic peptides by LC-HRMS

The digested oxidized Aβ40 was analyzed by LC-HRMS and the specific modifications due to amino acid residue oxidation were searched for. As a reminder, the most common products obtained upon ROS oxidation of Aβ result from: (i) formal addition of one oxygen atom (mass shift of +16 Da), (ii) formal addition of molecular oxygen (+32 Da), (iii) carbonylation (+14 Da), (iv) decarboxylation/deamination of Asp1 (−45 Da), (v) oxidative cleavage of Asp1 (−89 Da), (vi) dityrosine cross-linking (formally −1 Da) (see Section I.C.3 for more information). From the total ion current (TIC) chromatogram obtained, trace chromatograms were extracted using the theoretical monoisotopic masses of each tryptic peptide of Aβ40 and of their oxidized counterparts. Mass spectra were extracted at the retention time where the maximal intensity of the chromatographic peak is detected, showing the experimental monoisotopic masses. The theoretical and experimental monoisotopic masses were then compared to evaluate the accuracy of the detection. All the masses were measured with a mass accuracy of 5 ppm. For the tryptic Aβ1-5 peptide, which contains the N-terminal part of the peptide, four different ions are found (Figure III.A-1).
First, the ion at m/z 637.2925 corresponds to the mono-protonated non-oxidized Aβ1-5. Ions at m/z 592.2715 and 548.2444, related to mass shifts of -45 Da and -89 Da, are specific mass modifications assigned respectively to the decarboxylation and deamination of Asp1 (DAEFRdd) and to the oxidative cleavage of Asp1 (DAEFRox).
Figure III.A-1: (Top, left panel) Trace chromatograms of DAEFR and its oxidation products DAEFRdd, DAEFRox and DAEFR+16, obtained by LC-HRMS for Aβ40 submitted to MCO for 15 min. (Top, right panel) Mass spectra extracted from the corresponding trace chromatograms at the retention time of maximal intensity of the peak. (Bottom) Theoretical (th) and experimental (exp) monoisotopic masses of the native and oxidized Aβ1-5 tryptic peptides and mass difference between experimental and theoretical masses. NL: normalized intensity, RT: retention time (min).
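The mass shifts discussed here can be checked directly from the reported [M+H]+ values; in the sketch below, only the DAEFR+16 value is computed (native mass plus the exact mass of one oxygen atom, 15.9949 Da), the other numbers being those quoted above.

daefr     = 637.2925   # native DAEFR, [M+H]+
daefr_dd  = 592.2715   # decarboxylation/deamination of Asp1
daefr_ox  = 548.2444   # oxidative cleavage of Asp1
daefr_p16 = daefr + 15.9949   # formal addition of one oxygen atom

for name, m in (("DAEFRdd", daefr_dd), ("DAEFRox", daefr_ox), ("DAEFR+16", daefr_p16)):
    print(f"{name}: shift = {m - daefr:+.4f} Da")
# DAEFRdd: -45.0210 Da, DAEFRox: -89.0481 Da, DAEFR+16: +15.9949 Da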
Figure III.A-2 shows the mass spectra obtained for the Aβ6-16 tryptic peptide and its two oxidized homologs. The doubly protonated ion at m/z 668.8032 is the major ion of Aβ6-16 detected in the sample. Two other doubly protonated ions are detected at m/z 676.7997 and 684.7979, corresponding to the Aβ6-16 peptide with the formal addition of one and two oxygen atoms, respectively (+16 Da and +32 Da).
Figure III.A-2: (Top, left panel) Trace chromatograms of HDSGYEVHHQK and its oxidation products HDSGYEVHHQK+16 and HDSGYEVHHQK+32, obtained by LC-HRMS for Aβ40 submitted to MCO for 15 min. (Top, right panel) Mass spectra extracted from the corresponding trace chromatograms at the retention time of maximal intensity of the peak. (Bottom) Theoretical (th) and experimental (exp) monoisotopic masses of the native and oxidized Aβ6-16 tryptic peptides and mass difference between experimental and theoretical masses. NL: normalized intensity, RT: retention time (min).
Figure III.A-3: (Top, left panel) Trace chromatograms of LVFFAEDVGSNK and its oxidation product LVFFAEDVGSNK+16, obtained by LC-HRMS for Aβ40 submitted to MCO for 15 min. (Top, right panel) Mass spectra extracted from the corresponding trace chromatograms at the retention time of maximal intensity of the peak. (Bottom) Theoretical (th) and experimental (exp) monoisotopic masses of the native and oxidized Aβ17-28 tryptic peptides and mass difference between experimental and theoretical masses. NL: normalized intensity, RT: retention time (min).
Figure III.A-4: (Top, left panel) Trace chromatograms of GAIIGLMVGGVV and its oxidation product GAIIGLMVGGVV+16, obtained by LC-HRMS for Aβ40 submitted to MCO for 15 min. (Top, right panel) Mass spectra extracted from the corresponding trace chromatograms at the retention time of maximal intensity of the peak. (Bottom) Theoretical (th) and experimental (exp) monoisotopic masses of the native and oxidized Aβ29-40 tryptic peptides and mass difference between experimental and theoretical masses. NL: normalized intensity, RT: retention time (min).
Overall, the results obtained highlight the particular sensitivity of Asp1 during Aβ oxidation. The two oxidative damages detected on Asp1 strongly affect the chemical structure of the N-terminal part of the peptide (see Section I.C.3.b for the chemical structures). Such a modification would probably have a significant impact on the behavior of the oxidized Aβ peptide regarding copper coordination, ROS production and aggregation.
Figure III.A-5: Fragmentation spectra of the doubly protonated ion (m/z 676.9) of the tryptic Aβ6-16 peptide, allowing for identification of His13 (top) or His14 (bottom) oxidation. The b and y ion series (charge 1+, 2+ and 3+) detected are summarized in the peptide sequence. * indicates an increase of mass of +16 Da on the colored amino acid residue.
Figure III.A-6: Fragmentation spectra of the doubly protonated ion (m/z 671.4) of tryptic Aβ17-28 with oxidation on Phe19 (top) or Phe20 (bottom). The b and y ion series (charge 1+ and 2+) detected are summarized in the peptide sequence. * indicates an increase of mass of +16 Da on the colored amino acid residue.
Figure III.A-7: Fragmentation spectra of the doubly protonated ion (m/z 551.3) of tryptic Aβ29-40 with oxidation on Met35. The b and y ion series (charge 1+ and 2+) detected are summarized in the peptide sequence. * indicates an increase of mass of +16 Da on the colored amino acid residue.
Figure III.A-8: (Top) Peptide sequence of Aβ40 with the oxidized amino acid residues detected (black circles), along with the nature of the oxidation and the change of mass, and (bottom) chemical structures of the oxidized amino acid residues. 3-Hydroxyphenylalanine is shown as an example of a phenylalanine oxidation product, but the 2- and 4-hydroxyphenylalanines can also be formed (see Section I.C.3.d).
Figure III.B-1 shows the area ratio between the oxidized peptide and the equivalent non-oxidized peptide (control at t=0) as a function of the reaction time. Such a ratio gives a general tendency of the evolution of the oxidized species throughout the reaction. It cannot be considered an accurate quantification, because each peptide has a different propensity to get ionized in the ESI source and thus cannot be formally compared to another one. Decarboxylation-deamination of Asp1 (DAEFRdd) seems to be the major oxidative reaction, since the level of DAEFRdd strongly increases with the reaction time and seems to level off after 8 min. DAEFRox (oxidative cleavage of Asp1) also increases with time, but to a much lesser extent (Figure III.B-1a); its tendency is less clear, but its formation rate also seems to slow down after 8 min.
Figure III.B-1: Oxidation of the Aβ40 tryptic peptides as a function of time. Area ratio between the oxidized and the equivalent non-oxidized (control, t=0) tryptic peptide of Aβ40 as a function of the reaction time. (a) Decarboxylation and deamination of Asp1 (DAEFRdd) and oxidative cleavage of Asp1 (DAEFRox); (b) oxidation of His13/His14 (HDSGYEVHHQK+16 and +32); (c) oxidation of Phe19/20 (LVFFAEDVGSNK+16); (d) oxidation of Met35 (GAIIGLMVGGVV+16). Mass tolerance set at 10 ppm. m/z ratios used for detection are specified in Table III.A-1.
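The area-ratio metric of Figure III.B-1 is straightforward to derive from the integrated trace-chromatogram areas. The short sketch below illustrates the calculation with placeholder numbers (they are not the experimental values); the only assumption is that one area per time point is available for the oxidized form, plus the t=0 area of the non-oxidized control.

# Sketch of the kinetic metric of Figure III.B-1: area of the oxidized
# tryptic peptide divided by the area of the non-oxidized control at t = 0.
# All numbers below are placeholders, not experimental data.

control_area_t0 = 1.0e8   # non-oxidized peptide area at t = 0

# (reaction time in min, integrated chromatographic area of the oxidized form)
timepoints = [(0, 0.0), (2, 1.2e7), (4, 2.9e7), (8, 5.1e7), (15, 5.4e7)]

for t, area in timepoints:
    ratio = area / control_area_t0
    print(f"t = {t:2d} min  area ratio = {ratio:.2f}")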
Figure III.C-1 shows the 1H NMR spectra obtained for Aβ28 (black line) and Aβ28ox (blue line) in the four chemical shift ranges of interest: 0.5-2.0, 2.0-3.3, 3.5-4.7 and 6.7-7.9 ppm. Oxidation strongly affects the 1H NMR signal, as the peaks of Aβ28ox are very broad compared with those of Aβ28. Thus, only a qualitative comparison is made between Aβ28 and Aβ28ox. The amino acid residues of the C-terminal moiety are not affected by oxidation: Asp23, Val24, Gly25, Ser26, Asn27 and possibly Ala21 and Lys28. For the latter two, their proton signals have chemical shifts similar to those of Ala2 and Lys16 (around 1.4 ppm): the two doublets of Ala2 and Ala21 form a triplet-like signal, and the two doublets of triplets of Lys16 and Lys28 are hidden under the Ala signals. After oxidation, a doublet corresponding to the Hβ protons of an Ala residue is observed, hiding part of another signal, probably a doublet of triplets of the Hγ protons of a Lys residue. Thus, as the amino acid residues close to Ala21 and Lys28 are not affected by oxidation, it is likely that Ala2 and Lys16 are affected by oxidation while Ala21 and Lys28 are not.
Figure III.C-1: 1H NMR spectra of Aβ28 (a) and Aβ28ox (b) at pH 10.5.
The oxidation of an amino acid residue also seems to affect the proton signals of the neighboring residues. As the Val and Leu residues (except Val24, which is not affected by oxidation) are close to His13, His14, Phe19 or Phe20, which are found oxidized in the MS experiments (see Section III.A), this could explain their broad NMR signals. However, some signals seem to be even wider than those of the Val and Leu residues. In particular, the signals of the Hα and Hβ protons of Asp1 (at 3.68 and 2.45 ppm) are almost erased. Moreover, a signal is observed at 2.6 ppm only in Aβ28ox (Figure III.C-1b, red oval). This is likely to be the signal of the three equivalent protons of the methyl group in the pyruvate function formed by the decarboxylation-deamination of Asp1 (see Section I.C.3.b). Thus, the results are in line with the oxidation of Asp1 by decarboxylation-deamination detected in MS (see Section III.A). Signals assigned to the Hδ and Hε protons of His6, His13 and His14 are observed at chemical shifts around 7.6 and 6.9 ppm, respectively. The Hα and Hβ peaks of His are not noticeable, since they have chemical shifts around 3 and 4.5 ppm, like numerous other protons of the peptide (Figure III.C-1), and are thus difficult to isolate. The 1H NMR peaks of His are also strongly affected by oxidation, and the same tendency is observed for the other aromatic residues (Phe and Tyr).
Cu(I) and Cu(II) coordination to the oxidized Aβ peptide (Aβox) has been investigated by Electron Paramagnetic Resonance (EPR) and X-ray absorption (XANES). The ROS production during Metal-Catalyzed Oxidation (MCO) of Aβ has been studied at different steps of Aβ oxidation. Finally, the impact of Aβ oxidation on fibril formation has been studied by Thioflavin T (ThT) fluorescence and Transmission Electron Microscopy (TEM).

IV.A. Cu coordination and ROS production with Aβox

This section focuses on the impact of Aβ oxidation on the Cu(I) and Cu(II) binding modes and on ROS production. It is composed of an article published in the journal Metallomics, along with a summary of the article.
The article deals with the impact of Aβ peptide oxidation on the production of Reactive Oxygen Species (ROS) and on the coordination with the Cu(I) and Cu(II) ions. In the presence of oxygen and of a reducing agent such as ascorbate, the copper-peptide complex is able to produce ROS by cycling between the Cu(I) and Cu(II) redox states. In the previous chapter, we showed that the Aβ peptide is a preferred target of the hydroxyl radicals thus generated, and we identified the amino acid residues Asp1, His13 and His14 as preferential targets of the hydroxyl radical. All three are known to be involved in the coordination sphere of Cu(II) and/or Cu(I) (see Section I.C.1), as shown in Figure IV.A-1.
Figure IV.A-1: Coordination modes of the Aβ peptide with the Cu(II) (panel a) and Cu(I) (panel b) ions at physiological pH. [2]
Figure IV.A-2: Schematic view of the oxidative attack of the hydroxyl radical on the Aβ peptide during the production of reactive oxygen species. [2]
The binding mode of Zn(II) might be impacted by Aβ oxidation, as previously observed for Cu(II) and Cu(I) (Section IV.A). In the present study, the impact of Aβ oxidation on Zn(II) coordination is investigated by XANES. Several mutated peptides are employed to mimic the oxidation of each amino acid residue. As Zn(II) has a d10 electronic configuration, it is "silent" in most of the classical spectroscopic techniques, such as UV-Vis and EPR. Hence, XANES belongs to the few techniques suitable for investigating the Zn(II) coordination mode.

IV.B.1. Experimental section

Stock solutions of Aβ28 (sequence DAEFRHDSGYEVHHQKLVFFAEDVGSNK), D1N (sequence NAEFRHDSGYEVHHQK), H13A (sequence DAEFRHDSGYEVAHQK), H6A-H13A (sequence DAEFRADSGYEVAHQK) and H6A-H14A (sequence DAEFRADSGYEVHAQK) were prepared and titrated (see Section II.A for more details). The preparation, purification and dosage of Aβ28ox are detailed in Section II.A.3. ZnSO4.H2O was purchased from Strem Chemicals. A Zn(II) stock solution (10 mM) was prepared in ultrapure water. HEPES buffer was bought from Sigma and dissolved in ultrapure water to reach a 0.5 M concentration and a pH of 5.2, adjusted with NaOH (10 M) to obtain a final pH of 7.1. The methodology and the conditions of the Zn XANES experiments are detailed in Section II.G.2.b.
Figure IV.B-1: Zn K-edge XANES spectra of Aβ28-Zn(II) (red curve), AcAβ16-Zn(II) (orange curve), Aβ28ox-Zn(II) (black curve), H13A-Zn(II) (blue curve), (H6A-H13A)-Zn(II) (purple curve), (H6A-H14A)-Zn(II) (green curve) and Zn(II) in buffer (grey curve). Conditions: 0.9 mM Zn(II), 1 mM peptide in a HEPES buffer (0.1 M, pH 7.1) solution.
Figure IV.B-1 shows the Zn K-edge XANES spectra of Zn-Aβ28 (red curve), Zn-Aβ28ox (black curve), Zn-D1N (orange curve), Zn-H13A (blue curve), Zn-(H6A-H14A) (green curve) and Zn in buffer (grey curve). The XANES signatures of the various Zn-peptide complexes are very different around 9670 eV, i.e. in the white-line region, which is very sensitive to the site geometry. In particular, the XANES signatures of Zn-Aβ28ox and Zn-Aβ28 differ between 9660 and 9680 eV, meaning that Aβ oxidation has an impact on Zn coordination, as expected. First, the XANES signature of Zn-D1N is very close to that of Zn-Aβ, suggesting that a modification of the N-terminal part of Aβ does not induce a strong change in Zn coordination.
ThT fluorescence monitoring is well-suited for the kinetic study of Aβ aggregation into fibrils. Both Aβ40ox and Aβ40 solutions incubated at 37 °C in the presence of ThT were monitored by fluorescence. Figure IV.C-1 shows the ThT fluorescence as a function of the reaction time for Aβ40 (grey curve) and Aβ40ox (blue curve). For Aβ40, the fluorescence starts to increase after around 90 hours, meaning that the peptide aggregates and forms β-sheet structures (fibrils). The fluorescence half-time is observed at t½ = 134 hours. For Aβ40ox, the fluorescence remains at the background level throughout the 200 hours of reaction time. This absence of fluorescence increase suggests that oxidation has a strong impact on the aggregation capability of Aβ, since Aβ40ox does not form any β-sheet structures.
Figure IV.C-1: β-sheet formation during Aβ40 (grey curve) and Aβ40ox (blue curve) aggregation. ThT fluorescence as a function of time for a HEPES (50 mM, pH 7.4) buffered solution containing Aβ40 or Aβ40ox (20 µM) and ThT (10 µM), incubated at 37 °C.
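A fluorescence half-time such as the t½ = 134 h quoted above can be extracted by fitting a sigmoid to the ThT curve. The sketch below fits a Boltzmann-type sigmoid with scipy; it is an illustration under our own assumptions (placeholder data, arrays t in hours and f in fluorescence units) and not a description of the actual data processing.

# Minimal sketch: extracting the ThT fluorescence half-time t1/2 by fitting
# a Boltzmann sigmoid F(t) = F0 + (Fmax - F0) / (1 + exp((t_half - t)/tau)).
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, f0, fmax, t_half, tau):
    return f0 + (fmax - f0) / (1.0 + np.exp((t_half - t) / tau))

# Placeholder data mimicking a lag phase followed by a growth phase
t = np.linspace(0, 200, 50)
f = boltzmann(t, 1.0, 100.0, 134.0, 8.0) + np.random.normal(0, 1.0, t.size)

popt, _ = curve_fit(boltzmann, t, f, p0=[f.min(), f.max(), 100.0, 10.0])
print(f"fitted t1/2 = {popt[2]:.0f} h")  # ~134 h for this synthetic curve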
Figure IV.C-2: TEM pictures of Aβ40 (left) and Aβ40ox (right) after 200 h of aggregation at 37 °C.
Figure IV.C-3: Structural model for Aβ40 fibrils, obtained from solid-state NMR constraints. (a) Representation of a single molecular layer constituted by a double-layered structure, with parallel β-sheets formed by residues 12-24 (orange) and 30-40 (blue). The arrow shows the direction of the long axis of the fibril. (b) Aβ40 viewed down the long axis of the fibril. Green colors correspond to hydrophobic side chains, magenta to polar, blue to positive and red to negative ones. Picture from Ref. [21].
Zn(II), Cu(I) and Cu(II) binding modes with Aβox have been probed by XANES for the former two and by EPR for the latter. Aβ oxidation leads to a change in the coordination of the three metal ions. In addition, the Cu(I) XANES and Cu(II) EPR results allowed a relative quantification of the oxidative damages: around 40% of Asp1 would be damaged and no longer available for binding, while around 40% of the oxidized species would have two oxidized His. ROS production catalyzed by Cu-Aβox was also investigated: Cu-Aβox has a higher catalytic activity and releases more HO• than Cu-Aβ. The change in ROS production occurs at a specific step of Aβ oxidation, when the peptide is sufficiently oxidized to induce a change in Cu coordination. Both aggregation monitoring by ThT fluorescence and sample imaging by TEM highlight a strong modification of the Aβ aggregation process, leading to amorphous-like aggregates instead of fibrillar ones. These preliminary results have to be validated by further controls, and future experiments could be carried out with mutated peptides in order to determine whether the oxidation of one specific amino acid residue in particular affects the aggregation process. Zn and Cu binding modes with Aβ were also strongly affected by Aβ oxidation. As Zn
Figure S3: Remaining Aβ28 peptide. Trace chromatograms of Aβ28 (60 µM) after 30 min in the presence of Cu (50 µM) (left panel) or Cu (50 µM) and ascorbate (0.5 mM) (right panel) in phosphate buffer pH 7.4 (50 mM). Aβ28 is detected with the [M+3H]3+, [M+4H]4+ and [M+5H]5+ ions at m/z 1087.8503, 816.1397 and 653.1133. Mass tolerance set at 5 ppm. The ratio of the chromatogram area of Aβ28-Cu(II) with ascorbate to that of Aβ28-Cu(II) without ascorbate is around 0.2, meaning that around 80% of Aβ28 is oxidized (at least one amino acid residue is oxidized).
Figure S4: Linear combination fitting of Aβ28-Cu(I) and Aβ7-Cu(I) to reproduce the Aβox-Cu(I) spectrum. The fitting allows deducing that around 40% of Cu(I) is bound in the same way as to Aβ7 (bearing only one His), and that no Cu(I) is released in buffer (0% of Cu(I)-HEPES contribution). Quantitative results of the linear combination fitting (right panel) and the resulting XANES spectrum combining 42% Cu(I)-Aβ7 and 58% Cu(I)-Aβ28 signatures (blue curve) compared to the Cu(I)-Aβox XANES spectrum (grey curve). The Cu(I)-Aβ7 (green curve) and Cu(I)-Aβ28 (red curve) XANES signatures, weighted according to the result of the linear combination fitting, are also shown (left panel).
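The linear combination fitting used here amounts to a constrained least-squares problem: finding the weights of the reference XANES spectra that best reproduce the Aβox-Cu(I) spectrum. A minimal sketch is given below; the variable names (ref_ab7, ref_ab28, target_abox) are ours for illustration, and the inputs would be the energy-aligned, normalized experimental curves.

# Sketch of the linear combination fitting (LCF) of reference XANES spectra:
# find non-negative weights w such that w1*ref1 + w2*ref2 ~ target.
import numpy as np
from scipy.optimize import nnls

def lcf(target, *references):
    """Non-negative least-squares fit of a target spectrum by references."""
    A = np.column_stack(references)   # one column per reference spectrum
    w, residual = nnls(A, target)     # non-negative least squares
    return w / w.sum(), residual      # weights normalized to 1

# e.g. w, res = lcf(target_abox, ref_ab7, ref_ab28)
# One would expect w ~ (0.42, 0.58), as reported in Figure S4.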
Figure S5: EPR spectra of Cu(II)-Aβox (black line) and Cu(II)-Aβ28 (grey line) at pH 7.4. Aqueous solution with 10% glycerol containing 65Cu (450 µM) and peptide (500 µM). T = 120 K, ν = 9.5 GHz, microwave power = 20.5 mW, amplitude modulation = 10.0 G, modulation frequency = 100 kHz.
Figure S6: Linear combination fitting of 60% Cu(II)-Aβ28 and 40% Cu(II)-AcAβ28 (red line) compared to the Cu(II)-Aβox signature (black line).
Figure S7: Oxidation of Asp1 (DAEFRdd) in the presence or absence of H2O2. Trace chromatograms of the oxidized Aβ tryptic peptide DAEFRdd (decarboxylation and deamination of Asp1), after successive additions of ascorbate (4 nmol, final conc. 20 µM; left panel) or ascorbate/H2O2 (4/10 nmol, final conc. 20/50 µM; right panel). Aβ28 25 µM and Cu(II) 20 µM, phosphate buffered at pH 7.4 (50 mM). Mass tolerance set at 5 ppm; m/z ratios used for detection are specified in Table S1.
Figure S8: Oxidation of Asp1 (DAEFRox) in the presence or absence of H2O2. Trace chromatograms of the oxidized Aβ tryptic peptide DAEFRox (oxidative cleavage of Asp1), after successive additions of ascorbate (4 nmol, final conc. 20 µM; left panel) or ascorbate/H2O2 (4/10 nmol, final conc. 20/50 µM; right panel). Aβ28 25 µM and Cu(II) 20 µM, phosphate buffered at pH 7.4 (50 mM). Mass tolerance set at 5 ppm; m/z ratios used for detection are specified in Table S1.
Figure S9: Oxidation of His (HDSGYEVHHQK+16) in the presence or absence of H2O2. Trace chromatograms of the oxidized Aβ tryptic peptide HDSGYEVHHQK+16 (formal addition of an oxygen atom), after successive additions of ascorbate (4 nmol, final conc. 20 µM; left panel) or ascorbate/H2O2 (4/10 nmol, final conc. 20/50 µM; right panel). Aβ28 25 µM and Cu(II) 20 µM, phosphate buffered at pH 7.4 (50 mM). Mass tolerance set at 5 ppm; m/z ratios used for detection are specified in Table S1.
Figure S10: MCO of Aβ in the presence of H2O2, identification of oxidized His13. Series of b and y ions (charge 1+ and 2+) used for the identification of the oxidation of His13, and corresponding MS/MS spectrum of the doubly protonated ion of HDSGYEVHHQK+16. Same experimental conditions as those of Figure S8.
Figure S11: MCO of Aβ in the presence of H2O2, identification of oxidized His14. Series of b and y ions (charge 1+ and 2+) used for the identification of the oxidation of His14, and corresponding MS/MS spectrum of the doubly protonated ion of HDSGYEVHHQK+16. Same experimental conditions as those of Figure S8.
Figure S12: Fluorescence curves of phosphate buffered solutions (50 mM) containing Cu (20 µM), Aβ peptide (25 µM, except the green curve: 50 µM) and CCA (0.5 mM), with additions of ascorbate (20 µM) and hydrogen peroxide (50 µM) performed every 15 min during 2 hours.
Figure S13: Fluorescence at the plateau of a phosphate buffered solution (50 mM) containing Cu (20 µM), Aβ peptide (25 µM) and CCA (0.5 mM), as a function of the number of ascorbate and H2O2 additions. A total of 8 additions of 2 µL ascorbate (2 mM) and 2 µL hydrogen peroxide (5 mM) were performed, i.e. 4 and 10 nmol respectively for each addition (initial concentrations reaching 20 and 50 µM for the first addition, respectively).
Figure V.B-1: Schematic view of the equilibrium between the "resting" and "in-between" states of the Cu-Aβ complex during ROS production.
Aspartate 1 and the histidines at positions 13 and 14, which are involved in the coordination of Cu(I) and/or Cu(II) at rest (see Section I.C.1), are the main amino acid residues found oxidized after ROS production. It is therefore proposed that these 3 amino acid residues are also the copper ligands in the IB state, which is responsible for ROS production. Figure V.B-2 outlines the rearrangements between the resting states and the IB states, as well as the electron transfers between Cu(II)-Aβ and ascorbate and between Cu(I)-Aβ and hydrogen peroxide.
Figure V.B-2: Schematic representation of the coordination modes of Cu(II)-Aβ and Cu(I)-Aβ at rest (resting states) and of the proposed coordination mode of Cu(I/II)-Aβ in the "in-between" state. [3]
Figure V.B-3: Schematic representation of the proposed coordination mode of Cu(I/II)-Aβ in the "in-between" state during ROS production in the presence of a substrate.
Figure V.B-4: Proposed mechanism of the electron transfers between ascorbate and the Cu(II)-Aβ complex and between hydrogen peroxide and the Cu(I)-Aβ complex, achieved through transient coordination modes of the Cu-Aβ complex called "in-between" states. The terminal amine and the carboxylate group of Asp1 are shown in blue, and the imidazole moiety of the His involved in the IB state is shown in purple.
Figure VI.B-1: Pro-oxidant effects of ascorbate leading to copper-catalyzed ROS production (red pathway) and antioxidant effects of ascorbate leading to the formation of non-radical species (green pathway).
Figure VI.B-2: Chemical structures of coumarin-3-carboxylic acid (named CCA, panel a) and 7-hydroxycoumarin-3-carboxylic acid (named 7-OH-CCA, panel b).
Figure VI.B-3: Schematic representation of the pro- and antioxidant effects of ascorbate and consequences on the surrounding biological targets and on the Aβ peptide. AscH- stands for ascorbate. [1]
Figure S1: Schematic view of the proposed Cu(II) binding site in Aβ in components I (left) and II (right). pKa(I/II) = 7.8 [6]
We established a new methodology to take this possible pH drift into account: (i) for the gradient measurement (Figure 2 in the Full Text), we verified that the pH drift is negligible even at high ascorbate concentration, owing to the weak ascorbate consumption during the first minutes of the experiment; (ii) for the plateau measurement (Figure S5 and Figure 3 in the Full Text), we established a new protocol in which the pH was raised at the end of the reaction to pH 8.5, a region where the 7-OH-CCA fluorescence is weakly sensitive to pH changes. Additionally, we also verified that the rate of HO• trapped by CCA was independent of pH in the range 7.0 to 7.4, where 7.0 is the minimal final pH obtained after an experiment with high ascorbate concentration (Figure S4). Thus, the value of the measured fluorescence is proportional to the HO• trapped at pH 7.4.

Figure S2: 7-hydroxycoumarin-3-carboxylate / 7-oxidocoumarin-3-carboxylate couple (pKa 7.5).
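Since the fluorescent form of 7-OH-CCA is the deprotonated one (pKa 7.5, see above), the fraction detected at a given pH follows the Henderson-Hasselbalch relation. The short sketch below illustrates why raising the pH to 8.5 makes the fluorescence read-out weakly sensitive to small pH drifts; it is a back-of-the-envelope calculation, not part of the published protocol.

# Fraction of deprotonated (fluorescent) 7-OH-CCA as a function of pH,
# from the Henderson-Hasselbalch equation with pKa = 7.5.
def deprotonated_fraction(ph: float, pka: float = 7.5) -> float:
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for ph in (7.0, 7.4, 8.5, 8.6):
    print(f"pH {ph}: {100 * deprotonated_fraction(ph):.1f}% deprotonated")
# Around pH 7.4, a 0.1 pH drift changes the fraction by several percent,
# whereas around pH 8.5 the curve is nearly flat (>90% deprotonated).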
Figure S3: Fluorescence at 450 nm as a function of 7-OH-CCA concentration, after excitation at 390 nm. Phosphate-buffered solution (50 mM, pH 7.4) of 7-OH-CCA (concentration range from 0 to 0.5 mM).
Figure S4: UV-Visible spectra of 7-OH-CCA (50 µM) in a phosphate buffered solution (50 mM) with the pH increased from 5.9 to 9.4 at 25 °C.
Figure S5: Determination of the pKa of the hydroxyl group of 7-OH-CCA. Absorbance is plotted as a function of pH for the acidic form absorbing at λmax = 338 nm (grey crosses) and the basic form absorbing at λmax = 388 nm (black crosses).
Figure S6: Impact of the pH on the formation of 7-OH-CCA during the metal-catalyzed production of HO•. Reaction of Cu (50 µM) and ascorbate (0.5 mM for the grey panel and 1 mM for the dark panel) in phosphate buffered solution (50 mM, pH 6.5, 7.0, 7.4, 8.0 or 8.5) containing CCA (0.5 mM). At the end of the reaction, the pH is adjusted to 8.5 with POPSO buffer (0.4 M, pH 9.0).
Figure S7: Reaction of Cu (50 µM, black curves) or Cu-Aβ16 (50-60 µM, grey curves) with ascorbate (0.5 mM, in phosphate buffered solution, pH 7.4). Left: ascorbate consumption followed by UV (absorption of ascorbate at λmax = 264 nm) as a function of time. Right: fluorescence of the 7-OH-CCA produced by HO• trapping by CCA (0.5 mM) as a function of time. When ascorbate is fully consumed (see left figure), POPSO buffer is added to increase the pH to 8.5. The arrow indicates the addition of POPSO on the dotted curve.
Figure S8: Trace chromatograms of the non-oxidized Aβ28 peptide (60 µM) in the presence of Cu(II) (50 µM) and ascorbate (0, 0.1, 0.2, 0.3, 0.4, 0.5 or 3.5 mM) at the end of the reaction. Mass tolerance set at 5 ppm; m/z ratios used for detection are specified in blue in Table S1.
Figure S10: Initial rates of 7-OH-CCA fluorescence at 450 nm, reflecting the scavenging of HO• by CCA. Phosphate-buffered solution (50 mM, pH 7.4) of CCA (0.5 mM), Cu (50 µM), Aβ16, Aβ28 or Aβ40 (60 µM) and ascorbate (concentration between 0 and 5 mM).
Figure 1: Chemical structure of the Aβ28 sequence along with the atom identifiers of each amino acid residue.
Figure 2: 1H NMR spectra of Aβ28 (top) and Aβ28ox (bottom) at 400 µM, pH 10.5.
Figure 1: Production of Reactive Oxygen Species (ROS) catalyzed by the Cu-Aβ complex in the presence of ascorbate (Asc). The superoxide anion (O2•-), hydrogen peroxide (H2O2) and the hydroxyl radical (HO•) are formed from dioxygen.
Figure 2: Peptide sequence of Aβ40 with the oxidized amino acid residues detected (black circles), along with the nature of the oxidation and the change of mass (in Da), and chemical structures of the oxidized amino acid residues. 3-Hydroxyphenylalanine is shown as an example of a phenylalanine oxidation product, but the 2- and 4-hydroxyphenylalanines could also be formed.
Figure 3: Oxidation of the Aβ40 tryptic peptides as a function of time. Area ratio between the oxidized tryptic peptide of Aβ40 and its non-oxidized equivalent (control, t=0) as a function of the reaction time. (a) Decarboxylation and deamination of Asp1 (DAEFRdd) and oxidative cleavage of Asp1 (DAEFRox); (b) oxidation of His13/His14 (HDSGYEVHHQK+16 and +32); (c) oxidation of Phe19/20 (LVFFAEDVGSNK+16); (d) oxidation of Met35 (GAIIGLMVGGVV+16).
Figure 4: Coordination modes of the Aβ peptide with the Cu(II) (panel a) and Cu(I) (panel b) ions at physiological pH.
Figure 5: Cu K-edge XANES spectra of Aβ-Cu(I) (black curve), Aβox-Cu(I) (blue curve) and Aβ7-Cu(I) (grey curve). The arrows indicate the evolution of the Aβox-Cu(I) signature compared with that of Aβ-Cu(I).
Figure 6: EPR spectra of Cu(II)-Aβox (blue curve), Cu(II)-Aβ (black curve) and Cu(II)-AcAβ (grey curve) at pH 12.5.
Figure 7: Schematic view of the oxidative attack of the hydroxyl radical on the Aβ peptide during the production of reactive oxygen species.
Figure 9: Schematic view of the equilibrium between the "resting" and "in-between" states of the Cu-Aβ complex during ROS production.
Figure 10: Schematic representation of the coordination modes of Cu(II)-Aβ and Cu(I)-Aβ at rest (resting states) and of the proposed coordination mode of Cu(I/II)-Aβ in the "in-between" state.
Figure 11: Schematic representation of the proposed coordination mode of Cu(I/II)-Aβ in the "in-between" state during ROS production in the presence of a substrate.
Figure 12: Proposed mechanism of the electron transfers between ascorbate (Asc-) and the Cu(II)-Aβ complex and between dioxygen and the Cu(I)-Aβ complex, achieved through transient coordination modes of the Cu-Aβ complex called "in-between" states. The terminal amine and the carboxylate group of Asp1 are shown in blue and the imidazole moiety of the His involved in the in-between state is shown in purple. Asc• stands for the ascorbyl radical.
Figure 14: Chemical structures of coumarin-3-carboxylic acid (named CCA, panel a) and 7-hydroxycoumarin-3-carboxylic acid (named 7-OH-CCA, panel b).
Figure 15: Schematic representation of the pro- and antioxidant effects of ascorbate and consequences on the surrounding biological targets and on the Aβ peptide. AscH- stands for ascorbate.
d. Phenylalanines
e. Methionine
f. Other cleavages
References

CHAPTER II: METHODOLOGIES
II.A. PREPARATION OF THE Aβ PEPTIDE
II.A.1. Solubilisation and monomerization
II.A.2. Dosage [1]
II.A.3. Oxidation and purification of Aβ
II.A.4. Preparation for aggregation
II.B. MASS SPECTROMETRY
II.B.1. General principles
II.B.2. Electrospray Ionization (ESI)
II.B.3. Ion trap
a. Principle
II.C. UV-VISIBLE SPECTROSCOPY
II.D.2. Fluorescence of 7-hydroxycoumarin-3-carboxylic acid
II.D.3. Thioflavin T fluorescence
II.E. PROTON NUCLEAR MAGNETIC RESONANCE
II.E.1. General principles
II.E.2. Application to Aβ chemical structure study
a. Samples preparation
b. NMR conditions
II.F. ELECTRON PARAMAGNETIC RESONANCE
II.F.1. General principles
II.F.2. Application to Cu(II) coordination study
II.G. X-RAY ABSORPTION NEAR EDGE STRUCTURE (XANES)
II.G.2. Application to Cu(I) and Zn(II) coordination
a. Cu K-edge XANES conditions
b. Zn K-edge XANES conditions
References

CHAPTER III: OXIDATION OF THE Aβ PEPTIDE
III.A. CHARACTERIZATION OF THE OXIDATION SITES
III.A.1. Experimental section
III.A.2. Results
a. Detection of the oxidized tryptic peptides by LC-HRMS
b. Characterization of the oxidation sites by LC-MS/MS
III.B. KINETICS OF Aβ40 OXIDATION
III.B.1. Experimental section
III.B.2. Results
III.C. NMR STUDY OF AβOX
III.C.1. Experimental section
III.C.2. Results
III.C.3. Summary

CHAPTER IV
IV.A. CU COORDINATION AND ROS PRODUCTION WITH AβOX
IV.A.1. Article
IV.A.2. French summary
IV.B. ZN COORDINATION WITH AβOX
IV.B.1. Experimental section
IV.B.2. Results
IV.C. AGGREGATION OF AβOX
IV.C.1. Experimental section
IV.C.2. Results
a. Aggregation of Aβ and Aβox
b. Morphology of Aβ aggregates
IV.C.3. Outlook
IV.D. CONCLUSION
References
Supporting Information

CHAPTER V: CHARACTERIZATION OF A REDOX COMPETENT CU BINDING MODE IN THE « IN-BETWEEN » STATE
V.A. ARTICLE
V.B. FRENCH SUMMARY
V.C. SUPPORTING INFORMATION
References

CHAPTER VI: PRO VERSUS ANTIOXIDANT PROPERTIES OF ASCORBATE
VI.A. COMMUNICATION
VI.B. FRENCH SUMMARY
VI.C. SUPPORTING INFORMATION
References

GENERAL CONCLUSION
ANNEX I
ANNEX II
ANNEX III
RESUME
The human brain is a very surprising organ. Although it weighs only 2% of the total body weight and is mostly made of water (75%), it supports the cognitive and motor functions of the body and processes the information coming from sight, hearing, smell, touch and taste. It is one of the parts of the body with the most intense metabolic activity and, at rest, uses by itself 20% of the oxygen consumed by the whole organism. It is made of 100 billion neurons that communicate with each other through neurotransmitters, chemical messengers that cross the synapses. As the brain is an extremely complex and multitasking organ, the biological and chemical origins of its numerous functions are not yet all perfectly known.

Many pathologies are linked to a metabolic dysfunction of the brain but, for most of them, their etiology is unknown. This is the case of neurodegenerative diseases, or dementias, which currently affect more than 45 million people throughout the world. Because of the increase in life expectancy linked to the advances of medicine, the number of people suffering from dementia keeps growing. Among the different neurodegenerative diseases, Alzheimer's Disease, discovered more than a century ago, is the most widespread. Although the cause of its development is still unknown, two types of brain lesions are observed in patients: (i) neurofibrillary tangles, originating from the hyperphosphorylation of the Tau protein, and (ii) the formation of extracellular amyloid plaques. Scientific research is thus mainly directed towards the study of these two hallmarks of the disease, either from a mechanistic point of view, to understand the etiology of the disease, or from a therapeutic point of view, to try to find an efficient drug.

Within this global framework, the work presented here focused on the issue of amyloid plaques (also called senile plaques) and, more precisely, on their main component, the amyloid-beta peptide (Aβ). Aβ is a peptide composed of 40 to 42 amino acid residues, naturally present in monomeric form in the brain. In Alzheimer's Disease, it is found in aggregated form in the amyloid plaques. The latter can be formed in the inter-synaptic space and prevent the proper functioning of neurons by obstructing the passage of neurotransmitters between the synapses of two nerve cells.

A link between Alzheimer's Disease and oxidative stress has also been demonstrated. In addition to its ability to aggregate under conditions specific to Alzheimer's Disease, Aβ is also at the origin of the production of Reactive Oxygen Species (ROS), because it is able to chelate metal ions with redox properties, such as copper or iron ions. In the presence of a reducing agent such as ascorbate, naturally present in the brain at concentrations that can be locally high, the Cu-Aβ complex formed can catalyze the production of the superoxide anion (O2•-), hydrogen peroxide (H2O2) and the hydroxyl radical (HO•) from dioxygen. These ROS, and more specifically the hydroxyl radical, are reactive oxidizing species that can damage the surrounding biomolecules (lipids, proteins, DNA). During ROS production, the copper-bound Aβ peptide also undergoes oxidative attacks.

In this general context, the thesis project consisted in characterizing the oxidation of the Aβ peptide during copper-catalyzed ROS production, and in studying the consequences of the oxidative damages undergone by Aβ on metal ion coordination, ROS production and aggregation of the oxidized peptide.

The first chapter sets the context of the thesis project. Alzheimer's Disease is described, with the different known steps of its development as well as some diagnostic and therapeutic approaches. The brain lesions associated with the disease are presented, and particular attention is paid to the amyloid plaques and to the characterization of the Aβ peptide, which is the subject of our study. The different events involving the Aβ peptide are described: the formation of amyloid plaques and the production of Reactive Oxygen Species (ROS) in the presence of metal ions, which can lead to the oxidation of biomolecules and of the Aβ peptide. Finally, the oxidation sites of the Aβ peptide described in the literature are presented.

The second chapter presents the methodology of the thesis project. Sample preparation, an essential step for the study of peptides, is presented. Several spectroscopic techniques were used: Mass Spectrometry (MS), UV-Visible and fluorescence spectroscopies, proton Nuclear Magnetic Resonance (1H NMR), as well as Electron Paramagnetic Resonance (EPR) and X-ray absorption (XANES). The general principles of each technique are recalled, and the conditions of use (material and methods) for our subject of study are presented.

The copper-catalyzed oxidation of the Aβ40 peptide is studied in Chapter III by high-resolution MS and tandem MS. The oxidized amino acid residues are identified and the oxidation kinetics of each of them is presented. A study of the oxidized peptide is also carried out by 1H NMR.

Chapter IV focuses on the study of the consequences of Aβ oxidation. The coordination mode of the oxidized peptide (Aβox) with the Cu(I) and Cu(II) metal ions, catalysts of ROS production, is studied by XANES and EPR respectively, and the effect of Aβ oxidation on ROS production is studied by fluorescence spectroscopy. The coordination of the Aβox peptide with the Zn(II) ion is also studied by XANES, and the consequences of Aβ oxidation on the aggregation phenomenon are investigated by fluorescence spectroscopy and Transmission Electron Microscopy (TEM).

In Chapter V, the copper-catalyzed ROS production is studied by fluorescence spectroscopy with a series of modified Aβ peptides (truncation, mutation of one amino acid residue, blocking of the N-terminal amine), in order to determine which amino acid residues are bound to copper during ROS production.

Finally, Chapter VI focuses on ascorbate, a molecule known for its antioxidant properties but which also takes part in metal-catalyzed ROS production. In order to evaluate its pro- and antioxidant effects in the context of Alzheimer's Disease, the oxidation of surrounding molecules and of the Aβ peptide during copper-catalyzed ROS production is evaluated by fluorescence spectroscopy and MS respectively, for different ascorbate concentrations.
Chapter I: Context of the project

I.A. Alzheimer's Disease

In 1907, Aloïs Alzheimer related in the article "Über eine eigenartige Erkrankung der Hirnrinde"
High Performance Liquid Chromatography / High Resolution Mass Spectrometry (LC/HRMS) analysis was performed on an LTQ-Orbitrap XL mass spectrometer (ThermoFisher Scientific, Les Ulis, France) coupled to an Ultimate 3000 LC System (Dionex, Voisins-le-Bretonneux, France). The sample (10 µL of Aβ tryptic digest) was injected onto the column (Phenomenex, Synergi Fusion RP-C18, 250 × 1 mm, 4 µm) at room temperature. Gradient elution was carried out with formic acid 0.1% (mobile phase A) and acetonitrile/water (80/20 v/v) with formic acid 0.1% (mobile phase B) at a flow rate of 50 µL min-1. The mobile phase gradient was programmed with the following time course: 12% mobile phase B at 0 min, held 3 min, linear increase to 100% B at 15 min, held 4 min, linear decrease to 12% B at 20 min and held 5 min. The mass spectrometer was used as a detector, the Orbitrap cell operating in full-scan mode at a resolving power of 60,000.
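For reference, the mobile-phase gradient above can be written as a simple time table; the sketch below (our own illustration, not vendor software) linearly interpolates %B at any time of the run.

# Mobile phase B (%) time program of the LC gradient described above,
# with linear interpolation between breakpoints (illustrative sketch).
GRADIENT = [(0, 12), (3, 12), (15, 100), (19, 100), (20, 12), (25, 12)]

def percent_b(t_min: float) -> float:
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 25 min program")

print(percent_b(9.0))  # mid-ramp, between 12% and 100% B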
II.C. UV-Visible spectroscopy
II.C.1. General principles
Table III.A-1: Theoretical monoisotopic masses (m/z) of the mono- and doubly-protonated ions of the tryptic peptides of non-oxidized and oxidized Aβ40 detected. +16 and +32 account for the formal addition of one and two oxygen atoms during oxidation.

Position Peptide [M+H]+ [M+2H]2+
1-5 DAEFR 637.2945 319.1512
1-5 DAEFR+16 653.2895 327.1486
1-5 DAEFRdd* 592.2731 296.6405
1-5 DAEFRox** 548.2469 274.6273
6-16 HDSGYEVHHQK 1336.6034 668.8056
6-16 HDSGYEVHHQK+16 1352.5983 676.8031
6-16 HDSGYEVHHQK+32 1368.5932 684.8005
17-28 LVFFAEDVGSNK 1325.6741 663.3410
17-28 LVFFAEDVGSNK+16 1341.6690 671.3384
29-40 GAIIGLMVGGVV 1085.6392 543.3235
29-40 GAIIGLMVGGVV+16 1101.6342 551.3210
* DAEFRdd: oxidative decarboxylation and deamination of Asp1
** DAEFRox: oxidative cleavage of Asp1
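The m/z values in Table III.A-1 follow directly from the monoisotopic neutral mass M and the proton mass: m/z = (M + n × 1.007276) / n for the [M+nH]n+ ion. A minimal check is sketched below; it is an illustration using a peptide mass from the table (small last-digit differences can arise from rounding).

# Sketch: m/z of a multiply protonated ion from the neutral monoisotopic mass.
PROTON = 1.007276  # Da

def mz(neutral_mass: float, charge: int) -> float:
    return (neutral_mass + charge * PROTON) / charge

# DAEFR: [M+H]+ = 637.2945 -> neutral M = 636.2872 Da
m_daefr = 637.2945 - PROTON
print(round(mz(m_daefr, 2), 4))  # ~319.1509, cf. 319.1512 in Table III.A-1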
b. Characterization of the oxidation sites by LC-MS/MS
Table S2: Identification of the oxidized residues of Aβ when oxidized in the presence of H2O2.

Position Peptide [M+H]+ [M+2H]2+ [M+3H]3+
1-5 DAEFRdd* 592.2731 296.6405 198.0963
DAEFRox** 548.2469 274.6274 183.4208
6-16 HDSGYEVHHQK+16 1352.5983 676.8031 451.5380
* DAEFRdd: oxidative decarboxylation and deamination of Asp1
** DAEFRox: oxidative cleavage of Asp1
It was thus deduced that both the terminal amine and the carboxylate oxygen of Asp1 are engaged in copper coordination in the IB state. It was also shown that the number of histidine residues present in the peptide sequence has an impact on the rate of hydroxyl radical production: the fewer the histidines, the faster HO• is produced. Moreover, this effect is enhanced when the peptide sequence contains only one histidine. The Cu(I) coordination mode with peptides bearing only one histidine was studied by proton NMR. It was shown that Cu(I) is coordinated by the N-terminal amine of Asp1, the imidazole ring of the His and a carboxylate group. All this information allowed proposing a single coordination mode of the Cu(I) and Cu(II) ions with the Aβ peptide in the "in-between" state, responsible for ROS formation. The metal ion would be bound to the N-terminal amine and the carboxylate group of Asp1, as well as to one histidine (see Figure V.B-3). As the Aβ sequence contains 3 histidines, these are assumed to be in dynamic exchange, as is the case in the resting states.
At high concentrations, ascorbate also reacts with O2•- and HO• (Figure VI.B-1, green pathway), thus limiting the possible damage that would result from the oxidation of the surrounding biomolecules.
Table S1: Monoisotopic apparent masses (m/z) of mono- and multi-protonated ions of non-oxidized Aβ28. m/z values in blue are used for Aβ28 detection in HRMS.
Table S2: Remaining non-oxidized Aβ28 peptide (%) after the addition of Cu(II) (50 µM) and ascorbate (0 mM, 1 mM or 10 mM with only 1 mM consumed). Percentages are calculated from the peak areas of the chromatogram traces (see Figure S11).

Starting ascorbate (mM)  Consumed ascorbate (mM)  Remaining Aβ28 (%)
0                        0                        100%
1                        1                        1%
10                       1                        4%

General conclusion

The work presented in this manuscript concerns the oxidation of the amyloid-beta (Aβ) peptide during the production of Reactive Oxygen Species (ROS) catalyzed by copper ions. Different aspects of Aβ oxidation were studied: (i) the identification of the oxidized residues and the consequences this oxidation can have on metal ion coordination, on the aggregation of the oxidized peptide and on ROS production by the oxidized peptide in the presence of copper; (ii) the characterization of the coordination mode of the Cu-Aβ complex in the transient state, called the "in-between" state, responsible for ROS production and hence for Aβ oxidation; and (iii) the pro- and antioxidant effects of ascorbate towards biomolecules and the Aβ peptide during copper-catalyzed ROS production.

The study of the peptide after copper-catalyzed oxidation allowed identifying the main oxidized amino acid residues: aspartate 1 as well as histidines 13 and 14. As these amino acid residues are involved in the coordination of Cu(I) and/or Cu(II), it was proposed that they are also engaged in copper coordination during ROS production; being close to the production site would explain their targeted oxidation. Collateral oxidations were also detected on phenylalanines 19 and 20 and on methionine 35. These amino acid residues are not engaged in the metal coordination sphere (N-terminal part) but are very sensitive to hydroxyl radical attacks. It was proposed that they are targeted by the ROS leaving the Cu-Aβ system once the first targets of the radicals (Asp1, His13 and His14) are sufficiently damaged. They are thus rather the victims of a leakage during HO• production by the Aβ-Cu system, the radicals targeting first and foremost the copper ligands.

We were able to show that Aβ oxidation induces a change in the coordination of the two metal ions Cu(I) and Cu(II), as well as of Zn(II). When the peptide is sufficiently oxidized, this change is at the origin of a faster ROS production and of a larger amount of HO• leaving the Cu-Aβ system. Aβ oxidation can therefore be considered a deleterious event for the peptide itself, but also for the surrounding biomolecules, for which an increase in oxidation is observed.

Regarding the influence of oxidation on Aβ aggregation, the preliminary results showed that the oxidized peptide has a low tendency to aggregate and to favor fibril formation. This marks an important difference with the non-oxidized peptide, which forms fibrils in which it arranges into β-sheets. Other experiments are underway in the laboratory; they aim at confirming the first results obtained and at better understanding how the oxidized peptide behaves in terms of aggregation.

By studying the copper-catalyzed ROS production with different modified Aβ peptides, it was possible to propose a copper coordination sphere with Aβ in the transient "in-between" state, responsible for ROS production. Thus, one histidine, the N-terminal amine and the carboxylate group of aspartate 1 would be bound to copper in the "in-between" state. As the unmodified Aβ peptide possesses 3 histidine residues, it is likely that all three are involved in the copper coordination sphere in dynamic exchange, as is already the case for the Cu(I)-Aβ and Cu(II)-Aβ complexes in the so-called resting states.

Finally, the pro- and antioxidant properties of ascorbate were studied. A well-known antioxidant for the general public (notably under the name of Vitamin C), ascorbate shows two faces, since it also takes part in ROS production by transferring one electron to the metal center, which can in turn reduce molecular oxygen or the intermediate ROS species. The study allowed determining that its effect depends on its concentration. Whatever its concentration, ascorbate takes part in ROS production, but at high concentration it also reacts with the hydroxyl radicals formed and plays its antioxidant role towards the surrounding biomolecules. However, it is not efficient in protecting the Aβ peptide: even at high concentration in the medium, the peptide is not protected against hydroxyl radical attacks.

Thus, it emerges from this study that Aspartate 1 and the Histidines of the Aβ peptide are key amino acid residues in ROS production. Involved in the coordination of the Cu(I) and Cu(II) ions in the resting states, they are also involved in the transient state, the only redox state actively implicated in ROS production. They are the first targeted, and their degradation induces important changes regarding metal coordination, aggregation and ROS production. Ascorbate, even present at high concentration, cannot make Aβ benefit from its antioxidant properties. The Aβ peptide, by binding a copper ion, thus takes part in its own destruction by contributing to the production of the ROS that irreversibly damage it.

The interest of this study and of the results it provided lies in a better knowledge of the mechanisms and consequences of the redox activity of the Cu-Aβ system, to be related to the existing link between oxidative stress and Alzheimer's Disease. Direct perspectives can be envisaged in the more or less long term. A better knowledge of the redox state responsible for ROS production may allow the development of targeted and efficient therapeutic strategies, which could lead to a reduction of the oxidative damages that would be involved in the disease-related neurodegeneration. On the other hand, given its specificity, the oxidation of the Aβ peptide may represent an interesting lead in the search for biomarkers for the early diagnosis of Alzheimer's Disease. It would thus be interesting to be able to purify the Aβ peptide from biological material (serum, cerebrospinal fluid, …) in order to specifically study possible oxidation damages. The production of antibodies specifically directed against the oxidized peptide would then represent an interesting advance for such a study. Finally, a better knowledge of the Cu-Aβ system should also allow a better understanding of other systems involving amyloidogenic peptides implicated in other pathologies, such as amylin, in type II diabetes, and alpha-synuclein, in Parkinson's Disease.

Annexes

Annex I: High-Resolution Mass Spectrometry data
Table 1: Monoisotopic apparent masses (m/z) of mono- and multi-protonated ions of Aβ40, Aβ28, Aβ16 and the tryptic peptides (Aβ1-5, Aβ6-16, Aβ17-28 and Aβ29-40), as well as their oxidized counterparts. D(dd): decarboxylation/deamination of Asp1; D(ox): oxidative cleavage of Asp1; (+16)/(+32): formal addition of one/two oxygen atoms.

Name  Sequence  [M+H]+  [M+2H]2+  [M+3H]3+  [M+4H]4+  [M+5H]5+
Aβ16  DAEFRHDSGYEVHHQK  1954.8796  977.9437  652.2984  489.4758  391.7822
      D(dd)AEFRHDSGYEVHHQK  1909.8581  955.4330  637.2913  478.2204  382.7779
      DAEFRHDSGYEVH(+16)HQK  1970.8745  985.9412  657.6300  493.4745  394.9812
      DAEFRHDSGYEVH(+16)H(+16)QK  1986.8694  993.9386  662.9617  497.4732  398.1801
      D(dd)AEFRHDSGYEVH(+16)HQK  1925.8530  963.4304  642.6229  482.2191  385.9769
      D(ox)AEFRHDSGYEVHHQK  1865.8319  933.4199  622.6159  467.2138  373.9726
      D(ox)AEFRHDSGYEVH(+16)HQK  1881.8268  941.4173  627.9475  471.2126  377.1716
Aβ28  DAEFRHDSGYEVHHQKLVFFAEDVGSNK  3261.5353  1631.2716  1087.8503  816.1397  653.1133
      D(dd)AEFRHDSGYEVHHQKLVFFAEDVGSNK  3216.5138  1608.7608  1072.8432  804.8843  644.1090
      DAEFRHDSGYEVH(+16)HQKLVFFAEDVGSNK  3277.5302  1639.2690  1093.1820  820.1384  656.3123
      DAEFRHDSGYEVHHQKLVFFAEDVGSNK(+32)  3293.5251  1647.2665  1098.5136  824.1372  659.5113
      D(dd)AEFRHDSGYEVH(+16)HQKLVFFAEDVGSNK  3232.5088  1616.7583  1078.1748  808.8831  647.3080
      D(ox)AEFRHDSGYEVHHQKLVFFAEDVGSNK  3172.4876  1586.7477  1058.1678  793.8778  635.3038
      D(ox)AEFRHDSGYEVH(+16)HQKLVFFAEDVGSNK  3188.4825  1594.7452  1063.4994  797.8765  638.5028
Aβ40  DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVV  4328.1561  2164.5820  1443.3906  1082.7949  866.4375
      D(dd)AEFRHDSGYEVHHQKLVFF…  4283.1347  2142.0713  1428.3834  1071.5395  857.4332
      DAEFRHDSGYEVH(+16)HQKLVFF…  4344.1511  2172.5794  1448.7222  1086.7936  869.6365
      DAEFRHDSGYEVH(+16)H(+16)QKLVFF…  4360.1460  2180.5769  1454.0539  1090.7924  872.8355
      D(dd)AEFRHDSGYEVH(+16)HQKLVFF…  4299.1296  2150.0687  1433.7151  1075.5383  860.6322
      D(ox)AEFRHDSGYEVHHQKLVFF…  4239.1085  2120.0581  1413.7080  1060.5330  848.6280
      D(ox)AEFRHDSGYEVH(+16)HQKLVFF…  4255.1034  2128.0556  1419.0397  1064.5317  851.8269
Annex II: 1H NMR data
Table 2: 1H NMR chemical shifts of Aβ28 at pH 7.4. [1]
Remerciements
Supporting information
Metal-catalyzed oxidation of Aβ and the resulting reorganization of the Cu binding sites promote ROS production
Clémence Cheignon*, Peter Faller, Denis Testemale, Christelle Hureau and Fabrice Collin*
Supporting Information
Table S3: Linear fitting of HO production curves. Gradient and determination coefficients of fits for each curve of Figures 7 and8, for the two linear parts.
First linear part
Chapter VI: Pro versus antioxidant properties of Ascorbate
The chapter focuses on the study of the prooxidant and antioxidant properties of the ascorbate anion, in the context of Alzheimer's Disease (AD). The chapter is composed of a communication published in the journal Dalton Transactions in August 2016 [1] along with a summary of the article, written in French (requirement of the doctoral school). The supporting information related to the communication is situated at the end of the chapter (Section VI.C).
VI.A. Communication
Material and methods
Chemicals
Cu(II) used was from CuSO4. dissolving the powder in NaOH (50mM) and passing the solution through FPLC to obtain the monomeric fraction. The peptide concentration was then determined in NaOH (50mM) by UVvisible absorption of Tyr10, considered as free tyrosine ((ε293-ε360)=2400 M -1 cm -1 ).
Mass spectrometry
High Performance Liquid Chromatography / High Resolution Mass Spectrometry (HPLC/HRMS) analysis was performed on a LTQ-Orbitrap XL mass spectrometer (ThermoFisher Scientific, Les Ulis, France) coupled to an Ultimate 3000 LC System (Dionex, Voisins-le-Bretonneux, France). The Orbitrap cell was operated in the full-scan mode at a resolution power of 60 000. Samples were washed three times with water prior analysis,by using Amicon 3 kDa centrifugal device (Millipore). Samples (10 µL) were then injected onto
Abstract
Alzheimer's Disease (AD) is the most frequent for of dementia in the elderly. A hallmark of AD is the extracellular formation of senile plaques in the brain of AD subjects, composed of the Amyloid-β peptide (Aβ) under aggregated form with metal ions such as copper ions. Aβ can form a complex with copper ions, able to catalyze reactive oxygen species (ROS) formation in the presence of a reducing agent such as ascorbate. These oxidative species can oxidize the surrounding molecules and the Aβ peptide itself. Being close to the production site of ROS, Aβ is the preferential target, especially for the hydroxyl radical HO • . The aim of this work was to study the ROS production by the Aβ/Cu/ascorbate system, to characterize the oxidation undergone by Aβ and to evaluate the consequences of Aβ oxidation on ROS production, metal ions coordination and aggregation. Several spectroscopic techniques have been used, in particular mass spectrometry (MS), fluorescence spectroscopy, electron paramagnetic resonance (EPR) and X-Ray absorption spectroscopy (XANES).
The oxidation sites of Aβ have been studied by mass spectrometry (MS and MS/MS). Thanks to the use of proteomic tools and high-resolution mass spectrometry (HRMS), the oxidized amino acid residues have been identified. Asp1, His 13 and His14 have been found to be the preferential targets for HO • on Aβ. This result was expected as these residues are involved in copper coordination, from which the ROS are generated.
The impact of Aβ oxidation on Cu(II), Cu(I) and Zn(II) on metal ions coordination, on ROS production and on Aβ aggregation has been studied. Results have shown that Aβ oxidation induces a change of coordination of Zn(II) as well as Cu(II) and Cu(I), leading to an increase of ROS production. Moreover, Aβ oxidation has also an impact on aggregation, as it does not favor fibrils formation.
The Cu-Aβ binding mode during ROS production has been deduced from the study of a series of mutated Aβ peptides. The hypothesis, in which the amino acid residues bound to Cu during the ROS production are the oxidized one (Asp1, His 13 and His14) has been corroborated by the results of this study, the mutation of Asp1 or the two His having an impact on ROS production.
Finally, the pro-and antioxidants effects of ascorbate have been investigated, showing that, on the Cu-Aβ system, ascorbate only has antioxidant properties at high concentration for surrounding molecules, but does not exhibit any protecting effect on Aβ itself. |
01347436 | en | [
"phys.mphy",
"math.math-fa",
"math.math-ap",
"math.math-ca",
"math.math-sp"
] | 2024/03/05 22:32:15 | 2017 | https://hal.science/hal-01347436/file/Heat_trace3.2.pdf | B Iochum
email: [email protected]
T Masson
Heat trace for Laplace type operators with non-scalar symbols
Keywords: Heat kernel, non minimal operator, asymptotic heat trace, Laplace type operator PACS: 11.15.-q, 04.62.+v 2000 MSC: 58J35, 35J47, 81T13, 46L87
For an elliptic selfadjoint operator
acting on a fiber bundle over a compact Riemannian manifold, where u µν , v µ , w are N × N -matrices, we develop a method to compute the heat-trace coefficients a r which allows to get them by a pure computational machinery. It is exemplified in any even dimension by the value of a 1 written both in terms of u µν = g µν u, v µ , w or diffeomorphic and gauge invariants. We also address the question: when is it possible to get explicit formulae for a r ?
Introduction
We consider a compact Riemannian manifold (M, g) without boundary and of dimension d together with the nonminimal differential operator
P := -[u µν (x)∂ µ ∂ ν + v ν (x)∂ ν + w(x)]. (1.1)
which is a differential operator on a smooth vector bundle V over M of fiber C N where u µν , v ν , w are N × N -matrices valued functions. This bundle is endowed with a hermitean metric. We work in a local trivialization of V over an open subset of M which is also a chart on M with coordinates (x µ ). In this trivialization, the adjoint for the hermitean metric corresponds to the adjoint of matrices and the trace on endomorphisms on V becomes the usual trace tr on matrices. Since we want P to be a selfadjoint and elliptic operator on L 2 (M, V ), we first assume that u µν (x) ξ µ ξ ν is a positive definite matrix in M N :
u µν (x) ξ µ ξ ν has only strictly positive eigenvalues for any ξ = 0.
(1.2)
We may assume without loss of generality that u µν = u νµ . In particular u µµ is a positive matrix for each µ and each u µν is selfadjoint.
The asymptotics of the heat-trace
Tr e -tP ∼ t↓0 + ∞ r=0 a r (P ) t r-d/ 2 (1.3) exists by standard techniques (see [START_REF] Gilkey | Invariance theory, the heat equation and the Atiyah-Singer index theorem[END_REF]Section 1.8.1]), so we want to compute these coefficients a r (P ).
While the spectrum of P is a priori inaccessible, the computation of few coefficients of this asymptotics is eventually possible. The related physical context is quite large: the operators P appeared in gauge field theories, string theory or the so-called non-commutative gravity theory (see for instance the references quoted in [START_REF] Avramidi | Gauged gravity via spectral asymptotics of non-Laplace type operators[END_REF][START_REF] Avramidi | Non-Laplace type operators on manifolds with boundary, in: Analysis, geometry and topology of elliptic operators[END_REF][START_REF] Avramidi | Heat kernel method and its applications[END_REF]). The knowledge of the coefficients a r are important in physics. For instance, the one-loop renormalization in dimension four requires a 1 and a 2 . When the principal symbol of P is scalar (u µν = g µν 1 N ), there are essentially two main roads towards the calculation of heat coefficients (with numerous variants): the first is analytical and based on elliptic pseudodifferential operators while the second is more concerned by the geometry of the Riemannian manifold M itself with the search for invariants or conformal covariance. Compared with the flourishing literature existing when the principal symbol is scalar, there are only few works when it is not. One can quote for instance the case of operators acting on differential forms [START_REF] Gilkey | Heat equation asymptotics of "nonminimal" operators on differential forms[END_REF][START_REF] Branson | Heat Equation Asymptotics of Elliptic Operators with Non-scalar Leading Symbol[END_REF][START_REF] Alexandrov | Heat kernel for nonminimal operators on a Kähler manifold[END_REF][START_REF] Wang | Nonminimal operators and non-commutative residue[END_REF]. The first general results are in [START_REF] Avramidi | Heat kernel asymptotics of operators with non-Laplace principal part[END_REF] or in the context of spin geometry using the Dirac operators or Stein-Weiss operators [START_REF] Avramidi | A discrete leading symbol and spectral asymptotics for natural differential operators[END_REF] also motivated by physics [START_REF] Avramidi | Gauged gravity via spectral asymptotics of non-Laplace type operators[END_REF]. See also the approach in [START_REF] Fulling | Kernel asymptotics of exotic second-order operators[END_REF][START_REF] Gusynin | Local heat kernel asymptotics for nonminimal differential operators[END_REF][START_REF] Gusynin | Complete Computation of DeWitt-Seeley-Gilkey Coefficient E4 for Nonminimal Operator on Curved Manifolds[END_REF][START_REF] Gusynin | Heat kernel expansion for nonminimal differential operations and manifolds with torsion[END_REF][START_REF] Kornyak | Heat Invariant E2 for Nonminimal Operator on Manifolds with Torsion[END_REF][START_REF] Ananthanarayan | A note on the heat kernel coefficients for nonminimal operators[END_REF][START_REF] Guendelman | On the heat kernel in covariant background gauge[END_REF][START_REF] Moss | Invariants of the heat equation for non-minimal operators[END_REF][START_REF] Toms | Local momentum space and the vector field[END_REF]. The present work has a natural algebraic flavor inherited from the framework of operators on Hilbert space comprising its own standard analytical part, so is related with the first road. In particular, it gives all ingredients to produce mechanically the heat coefficients. 
It is also inspired by the geometry à la Connes where P = D 2 for a commutative spectral triple (A, H, D), thus has a deep motivation for noncommutative geometry.
Let us now enter into few technical difficulties. While the formula for a 0 (P ) is easily obtained, the computation of a 1 (P ) is much more involved. To locate some difficulties, we first recall the parametrix approach, namely the use of momentum space coordinates (x, ξ) ∈ T *
x M :
d 2 (x, ξ) = u µν (x) ξ µ ξ ν , d 1 (x, ξ) = -iv µ (x) ξ µ , d 0 (x) = -w(x).
Then we can try to use the generic formula (see [START_REF] Gilkey | Heat equation asymptotics of "nonminimal" operators on differential forms[END_REF])
(-i) |α| α! (∂ α ξ b j )(∂ α x d k ) b 0 .
The functions b 2r , even for r = 1, generate typically terms of the form
tr[A 1 (λ)B 1 A 2 (λ)B 2 A 3 (λ) • • • ]
where all matrices A i (λ) = (d 2 (x, ξ) -λ) -n i commute but do not commute a priori with B i , so that the integral in λ is quite difficult to evaluate in an efficient way. Of course, one can use the spectral decomposition d 2 = i λ i π i (depending on x and ξ) to get,
i 1 ,i 2 ,i 3 ,... λ∈C dλ e -λ (λ i 1 -λ) -n i 1 (λ i 2 -λ) -n i 2 (λ i 3 -λ) -n i 3 • • • tr(π i 1 B 1 π i 2 B 2 π i 3 • • • ). (1.5)
While the λ-integral is easy via residue calculus, the difficulty is then to recombine the sum. This approach is conceptually based on an approximation of the resolvent (P -λ) -1 .
Because of previous difficulties, we are going to follow another strategy, using a purely functional approach for the kernel of e -tP which is based on the Volterra series (see [20, p. 78], [START_REF] Avramidi | Heat kernel method and its applications[END_REF]Section 1.17.2]). This approach is not new and has been used for the same purpose in [START_REF] Avramidi | Heat kernel asymptotics of operators with non-Laplace principal part[END_REF][START_REF] Avramidi | Gauged gravity via spectral asymptotics of non-Laplace type operators[END_REF][START_REF] Avramidi | Non-Laplace type operators on manifolds with boundary, in: Analysis, geometry and topology of elliptic operators[END_REF]. However our strategy is more algebraic and more in the spirit of rearrangement lemmas worked out in [START_REF] Connes | Modular curvature for noncommutative two-tori[END_REF][START_REF] Lesch | Divided differences in noncommutative geometry: rearrangement lemma, functional calculus and expansional formula[END_REF]. In particular we do not go through the spectral decomposition of u µν crucially used in [START_REF] Avramidi | Heat kernel asymptotics of operators with non-Laplace principal part[END_REF] (although in a slightly more general case than the one of Section 4). To explain this strategy, we need first to fix a few notation points.
Let K(t, x, x ) be the kernel of e -tP where P is as in (1.1) and satisfies (1.2). Then
Tr[e -tP ] = dx tr[K(t, x, x)], K(t, x, x) = 1 (2π) d dξ e -ix.ξ (e -tP e ix.ξ ).
When f is a matrix-valued function on M , we get -P (e ix.ξ f )(x) = e ix.ξ [-u µν ξ µ ξ ν + 2iu µν ξ µ ∂ ν + iv µ ξ µ + w(x)]f (x)
= -e ix.ξ [H + K + P ]f (x)
where we used
H(x, ξ) := u µν (x) ξ µ ξ ν , (1.6) K(x, ξ) := -iξ µ [v µ (x) + 2u µν (x) ∂ ν ].
(1.7)
Thus H is the principal symbol of P and it is non-scalar for non-trivial matrices u µν . If 1(x) = 1 is the unit matrix valued function, we get e -tP e ix.ξ = e ix.ξ e -t(H+K+P ) 1, so that, after the change of variables ξ → t 1/2 ξ, the heat kernel can be rewritten as
K(t, x, x) = 1 (2π) d dξ e -t(H+K+P ) 1 = 1 t d/2 1 (2π) d dξ e -H- √ tK-tP 1.
(1.8)
A repetitive application of Duhamel formula (or Lagrange's variation of constant formula) gives the Volterra series (also known to physicists as Born series):
e A+B = e A + ∞ k=1 1 0 ds 1 s 1 0 ds 2 • • • s k-1 0
ds k e (1-s 1 )A B e (s 1 -s 2 )A • • • e (s k-1 -s k )A B e s k A .
Since this series does not necessarily converge for unbounded operators, we use only its first terms to generate the asymptotics (1.3) from (1.8). When A = -H and B = -√ tK -tP , it yields for the integrand of (1.8)
e -H- √ tK-tP 1 = e -H - √ t 1 0
ds 1 e (s 1 -1)H K e -s 1 H + t ds 2 e (s 1 -1)H K e (s 2 -s 1 )H K e -s 2 H -1 0 ds 1 e (s 1 -1)H P e -s 1 H
+ O(t 2 ) 1.
(1.9)
After integration in ξ, the term in √ t is zero since K is linear in ξ while H is quadratic in ξ, so that tr K(t, x, x)
t↓0 1 t d/2 [a 0 (x) + t a 1 (x) + O(t 2 )]
with the local coefficients
a 0 (x) = tr 1 (2π) d dξ e -H(x,ξ) ,
(1.10)
a 1 (x) = tr 1 (2π) d dξ 1 0 ds 1 s 1 0 ds 2 e (s 1 -1)H K e (s 2 -s 1 )H K e -s 2 H -tr 1 (2π) d dξ [ 1 0 ds 1 e (s 1 -1)H P e -s 1 H ] (1.11)
where the function 1 has been absorbed in the last e -s i H . The coefficients a 0 (P ) and a 1 (P ) are obtained after an integration in x. Since we will not perform that integration which converges when manifold M is compact, we restrict to a r (x).
We now briefly explain how we can compute a r (P ). Expanding K and P in a r (x), one shows in Section 2 that all difficulties reduce to compute algebraic expressions like (modulo the trace)
1 (2π) d dξ 1 0 ds 1 s 1 0 ds 2 • • • s k-1 0 ds k e (s 1 -1)H B 1 e (s 2 -s 1 )H B 2 • • • B k e -s k H (1.12)
where the B i are N × N -matrices equal to u µν , v µ , w or their derivatives of order two at most. Moreover, we see (1.12) as an M N -valued operator acting on the variables (B 1 , . . . , B k ) which precisely allows to focus on the integrations on ξ and s i independently of these variables.
Then we first compute the integration in ξ, followed by the iterated integrations in s i . The main result of this section is (2.18) which represents the most general operator used in the computation of a r . We end up Section 2 by a summary of the method. We show that the previously mentioned integrations are manageable in Section 3. Actually, we prove that we can reduce the computations to few universal integrals and count the exact number of them which are necessary to get a r (x) in arbitrary dimension. In Section 4, we reduce to the case u µν = g µν u where u is a positive matrix and explicitly compute the local coefficient a 1 in Theorem 4.3 in terms of (u, v µ , w). Looking after geometric invariants like for instance the scalar curvature of M , we swap the variables (u, v µ , w) with some others based on a given connection A on V . This allows to study the diffeomorphic invariance and covariance under the gauge group of the bundle V . The coefficient a 1 can then be simply written in any even dimension in terms of a covariant derivative (combining A and Christoffel symbols). In Section 5, we use our general results to address the following question: is it possible to get explicit formulae for a r avoiding the spectral decomposition like (1.5)? We show that the answer is negative when d is odd. Finally, the case u µν = g µν u + X µν is considered as an extension of u µν = g µν 1 + X µν which appeared in the literature.
Formal computation of a r (P )
This section is devoted to show that the computation of a r (x) as (1.11) reduces to the one of the terms as in (1.12). Since a point x ∈ M is fixed here, we forget to mention it, but many of the structures below are implicitly defined as functions of x.
For k ∈ N, let ∆ k be the k-simplex
∆ k := {s = (s 0 , • • • s k ) ∈ R k+1 + | 0 ≤ s k ≤ s k-1 ≤ • • • ≤ s 2 ≤ s 1 ≤ s 0 = 1}, ∆ 0 := ∅ by convention.
We use the algebra M N of N × N -complex matrices. Denote by M N [ξ, ∂] the complex vector space of polynomials both in ξ = (ξ µ ) ∈ R d and ∂ = (∂ µ ) which are M N -valued differential operators and polynomial in ξ; for instance, P, K, H ∈ M N [ξ, ∂] with P of order zero in ξ and two in ∂, K of order one in ξ and ∂, and H of order two in ξ and zero in ∂.
For
any k ∈ N, define a map f k (ξ) : M N [ξ, ∂] ⊗ k → M N [ξ, ∂],
evidently related to (1.12), by
f k (ξ)[B 1 ⊗ • • • ⊗ B k ] := ∆ k ds e (s 1 -1)H(ξ) B 1 e (s 2 -s 1 )H(ξ) B 2 • • • B k e -s k H(ξ) , ( 2.1)
f 0 (ξ)[a] := a e -H(ξ) , for a ∈ C =: M ⊗ 0 N . (2.2)
Here, by convention, each ∂ µ in B i ∈ M N [ξ, ∂] acts on all its right remaining terms. Remark that the map ξ → f k (ξ) is even. We first rewrite (1.9) in these notations (omitting the ξ-dependence):
e -H- √ tK-tP 1 = e -H + ∞ k=1 (-1) k f k [( √ tK + tP ) ⊗ • • • ⊗ ( √ tK + tP )] (2.3) = e -H -t 1/2 f 1 [K] + t(f 2 [K ⊗ K] -f 1 [P ]) + t 3/2 (f 2 [K ⊗ P ] + f 2 [P ⊗ K] -f 3 [K ⊗ K ⊗ K]) + t 2 (f 2 [P ⊗ P ] -f 3 [K ⊗ K ⊗ P ] -f 3 [K ⊗ P ⊗ K] -f 3 [P ⊗ K ⊗ K]) + O(t 2 ).
Since all powers of t in (2n + 1)/2 have odd powers of ξ µ 1 • • • ξ µp (with odd p), the ξ-integrals in (1.12) will be zero since f k is even in ξ, so only
a 0 (x) = tr 1 (2π) d dξ f 0 [1], (2.4
)
a 1 (x) = tr 1 (2π) d dξ (f 2 [K ⊗ K] -f 1 [P ]), (2.5
)
a 2 (x) = tr 1 (2π) d dξ (f 2 [P ⊗ P ] -f 3 [K ⊗ K ⊗ P ] -f 3 [K ⊗ P ⊗ K] -f 3 [P ⊗ K ⊗ K])
etc survive.
Our first (important) step is to erase the differential operator aspect of K and P as variables of f k to obtain variables in the space
M N [ξ] of M N -valued polynomials in ξ: because a ∂ contained in B i will apply on e (s i+1 -s i )H B i+1 • • • B k e -s k H
, by a direct use of Leibniz rule and the fact that
∂ e -sH = - s 0 ds 1 e (s 1 -s)H (∂H) e -s 1 H , ( 2.6)
we obtain the following
Lemma 2.1 When all B j are in M N [ξ, ∂], the functions f k for k ∈ N * satisfy f k (ξ)[B 1 ⊗ • • • ⊗ B i ∂ ⊗ • • • ⊗ B k ] = k j=i+1 f k (ξ)[B 1 ⊗ • • • ⊗ (∂B j ) ⊗ • • • ⊗ B k ] - k j=i f k+1 (ξ)[B 1 ⊗ • • • ⊗ B j ⊗ (∂H) ⊗ B j+1 ⊗ • • • ⊗ B k ]. (2.7)
Proof By definition (omitting the ξ-dependence)
f k [B 1 ⊗ • • • ⊗ B i ∂ ⊗ • • • ⊗ B k ] = ∆ k ds e (s 1 -1)H B 1 e (s 2 -s 1 )H B 2 • • • B i ∂(e (s i+1 -s i )H B i+1 • • • B k e -s k H ).
The derivation ∂ acts on each factor in the parenthesis: -On the argument B j , j ≥ i + 1, which gives the first term of (2.7).
-On a factor e (s j+1 -s j )H for i ≤ j ≤ k -1, we use (2.6) ∂ e (s j+1 -s j )H = -s j -s j+1 0 ds e s +s j+1 -s j )H (∂H) e -s H = -
s j s j+1
ds e (s-s j )H (∂H) e (s j+1 -s)H with s = s + s j+1 , so that in the integral, one obtains the term
- 1 0 ds 1 s 1 0 ds 2 • • • s k-1 0 s j s j+1 ds e (s 1 -1)H B 1 e (s 2 -s 1 )H B 2 • • • • • • B j e (s-s j )H (∂H) e (s j+1 -s)H B j+1 • • • B k e -s k H ).
Since, as directly checked,
1 0 ds 1 • • • s k-1 0 ds k s j s j +1 ds = ∆ k+1 ds with s j = s j for j ≤ i -1, s i = s and s j = s j-1 for j ≥ i + 1, this term is -f k+1 [B 1 ⊗ • • • ⊗ B j ⊗ (∂H) ⊗ B j+1 ⊗ • • • ⊗ B k ].
-Finally, on the factor e -s k H , one has ∂ e -s k H = -s k 0 e (s-s k )H (∂H) e -sH which gives the last term:
-f k+1 [B 1 ⊗ • • • ⊗ B k ⊗ (∂H)]. Thus (2.3) reduces to compute f k [B 1 ⊗ • • • ⊗ B k ] where B i ∈ M N [ξ].
Our second step is now to take care of the ξ-dependence: by hypothesis, each B i in the tensor product
B 1 ⊗ • • • ⊗ B k has the form B µ 1 ...µ i ξ µ 1 • • • ξ µ i with B µ 1 ...µ i ∈ M N , so that B 1 ⊗ • • • ⊗ B k is a sum of terms like B µ 1 ...µ k ξ µ 1 • • • ξ µ where B µ 1 ...µ k ∈ M ⊗ k N .
As a consequence, by linearity of f k in each variable, computation of a r requires only to evaluate terms like
1 (2π) d dξ ξ µ 1 • • • ξ µ f k (ξ)[ B µ 1 ...µ k ] ∈ M N with B µ 1 ...µ k ∈ M ⊗ k N , (2.8)
and we may assume that = 2p, p ∈ N. The next step in our strategy is now to rewrite the f k appearing in (2.8) in a way which is independent of the variables B µ 1 ...µ k , a rewriting obtained in (2.11). Then the driving idea is to show firstly that such f k can be computed and secondly that its repeated action on all variables which pop up by linearity (repeat that K has two terms while P has three terms augmented by the action of derivatives, see for instance (1.11)) is therefore a direct computational machinery. For such rewriting we need a few definitions justified on the way.
For k ∈ N, define the (finite-dimensional) Hilbert spaces
H k := M ⊗ k N , H 0 := C,
endowed with the scalar product
A 1 ⊗ • • • ⊗ A k , B 1 ⊗ • • • ⊗ B k H k := tr(A * 1 B 1 ) • • • tr(A * k B k ), a 0 , b 0 H 0 := a 0 b 0 ,
so each M N is seen with its Hilbert-Schmidt norm and
A 1 ⊗ • • • ⊗ A k 2 H k = k j=1 tr(A * j A j )
. We look at (2.8) as the action of the operator
1 (2π) d dξ ξ µ 1 • • • ξ µ l f k (ξ) acting on the finite dimensional Hilbert space H k .
Denote by B(E, F ) the set of bounded linear operators between the vector spaces E and F and let B(E) := B(E, E). For k ∈ N, let
H k := H k+1 , so H 0 = M N , m : H k → M N , m(B 0 ⊗ • • • ⊗ B k ) := B 0 • • • B k (multiplication of matrices), κ : H k → H k , κ(B 1 ⊗ • • • ⊗ B k ) := 1 ⊗ B 1 ⊗ • • • ⊗ B k , ι : M ⊗ k+1 N → B(H k , M N ), ι(A 0 ⊗ • • • ⊗ A k )[B 1 ⊗ • • • ⊗ B k ] := A 0 B 1 A 1 • • • B k A k , ι : M N → B(C, M N ), ι(A 0 )[a] := aA 0 , ρ : M ⊗ k+1 N → B( H k ), ρ(A 0 ⊗ • • • ⊗ A k )[B 0 ⊗ • • • ⊗ B k ] := B 0 A 0 ⊗ • • • ⊗ B k A k .
For A ∈ M N and k ∈ N, define the operators
R i (A) : H k → H k for i = 0, . . . , k R i (A)[B 0 ⊗ • • • ⊗ B k ] := B 0 ⊗ • • • ⊗ B i A ⊗ • • • ⊗ B k . Thus ρ(A 0 ⊗ • • • ⊗ A k ) = R 0 (A 0 ) • • • R k (A k ). (2.9)
As shown in Proposition A.1, ι is an isomorphism. The links between the three spaces
M ⊗ k+1 N , B( H k ) and B(H k , M N ) are summarized in the following commutative diagram where (m • κ * )(C)[B 1 ⊗ • • • ⊗ B k ] = m(C[1 ⊗ B 1 ⊗ • • • ⊗ B k ]): B( H k ) m • κ * M ⊗k+1 N ρ 4 4 ι * * B(H k , M N ) (2.10) For any matrix A ∈ M N and s ∈ ∆ k , define c k (s, A) := (1 -s 1 ) A ⊗ 1 ⊗ • • • ⊗ 1 + (s 1 -s 2 ) 1 ⊗ A ⊗ 1 ⊗ • • • ⊗ 1 + • • • + (s k-1 -s k ) 1 ⊗ • • • ⊗ A ⊗ 1 + s k 1 ⊗ • • • ⊗ 1 ⊗ A, c 0 (s, A) := A
where the tensor products have k + 1 terms, so that c k (s, A) ∈ M ⊗ k+1 N . This allows a compact notation since now
f k (ξ) = ∆ k ds ι[e -ξαξ β c k (s,u αβ ) ] ∈ B(H k , M N ), with f 0 (ξ) = ι(e -ξαξ β u αβ ), (2.11)
and these integrals converge because the integrand is continuous and the domain ∆ k is compact.
Since we want to use operator algebra techniques, with the help of
c k (s, A) ∈ M ⊗ k+1 N
, it is useful to lift the computation of (2.11) to the (finite dimensional C * -algebra) B( H k ) as viewed in diagram (2.10). Thus, we define
C k (s, A) := ρ(c k (s, A)) ∈ B( H k ),
and then, by (2.9)
C k (s, A) = (1 -s 1 ) R 0 (A) + (s 1 -s 2 ) R 1 (A) + • • • + (s k-1 -s k ) R k-1 (A) + s k R k (A).
Remark 2.2 All these distinctions between H k and H k or c k (s, A) and C k (s, A) seem innocent so tedious. But we will see later on that the distinctions between the different spaces in (2.10) play a conceptual role in the difficulty to compute the coefficients a r . Essentially, the computations and results take place in B( H k ) and not necessarily in the subspace M ⊗k+1 N ⊂ B( H k ) (see (2.18) for instance).
Given a diagonalizable matrix
A = C diag(λ 1 , . . . , λ n ) C -1 ∈ M N , let C ij := CE ij C -1 for i, j = 1, . . . , n
where the E ij form the elementary basis of M N defined by [E ij ] kl := δ ik δ jl . We have the easily proved result:
Lemma 2.3 We have i) R i (A 1 A 2 ) = R i (A 2 )R i (A 1 ) and [R i (A 1 ), R j (A 2 )] = 0 when i = j. ii) R i (A) * = R i (A * ). iii) When A is diagonalizable, AC ij = λ i C ij and C ij A = λ j C ij . Thus, all operators R i (A) on H k for any k ∈ N have common eigenvectors R i (A)[C i 0 j 0 ⊗ • • • ⊗ C i k j k ] = λ j i C i 0 j 0 ⊗ • • • ⊗ C i k j k ,
and same spectra as A.
In particular, there are strictly positive operators if A is a strictly positive matrix. This means that C k (s, A) ≥ 0 if A ≥ 0 and s ∈ ∆ k , and this justifies the previous lift. Now, evaluating (2.11) amounts to compute the following operators in B( H k ):
T k,p (x) := 1 (2π) d ∆ k ds dξ ξ µ 1 • • • ξ µ 2p e -ξαξ β C k (s,u αβ (x)) : H k → H k , p ∈ N, k ∈ N. (2.12) T 0,0 (x) := 1 (2π) d dξ e -ξαξ β u αβ (x) ∈ M N ρ(M N ) ⊂ B( H 0 ), (2.13)
where T k,p depends on x through u αβ only. Their interest stems from the fact they are independent of arguments B 0 ⊗ • • • ⊗ B k ∈ H k on which they are applied, so are the corner stones of this work. Using (2.10), the precise link between T k,p and f k (ξ) is
m • κ * • T k,p = 1 (2π) d dξ ξ µ 1 • • • ξ µ 2p f k (ξ). (2.14)
The fact that T k,p is a bounded operator is justified by the following Lemma 2. [START_REF] Avramidi | Heat kernel method and its applications[END_REF] The above integrals (2.12) and (2.13) converge and T k,p ∈ B( H k ).
Proof We may assume k ∈ N * since for k = 0, same arguments apply.
For any strictly positive matrix A with minimal eigenvalues λ min (A) > 0, Lemma 2.3 shows that, for any s ∈ ∆ k ,
C k (s, A) ≥ [(1 -s 1 )λ min (A) + (s 1 -s 2 )λ min (A) + • • • + s k λ min (A)] 1 H k = λ min (A) 1 H k .
We claim that the map
ξ ∈ R d → λ min (ξ α ξ β u αβ ) is continuous: the maps ξ ∈ R d → ξ α ξ β u αβ and 0 < a ∈ B( H k ) → inf(spectrum(a)) = a -1 are continuous (the set of invertible matrices is a Lie group). We use spherical coordinates ξ ∈ R d → (|ξ|, σ) ∈ R + × S d-1
, where σ := |ξ| -1 ξ is in the Euclidean sphere S d-1 endowed with its volume form dΩ.
Then λ min (ξ α ξ β u αβ ) = |ξ| 2 λ min (σ α σ β u αβ ) > 0 (remark that σ α σ β u αβ is a strictly positive matrix). Thus c := inf{λ min (σ α σ β u αβ ) | σ ∈ S d-1
} > 0 by compactness of the sphere. The usual operator-norm of H k applied on the above integral T k,p , satisfies
T k,p ≤ ∆ k ds σ∈S d-1 dΩ g (σ) ∞ 0 dr r d-1 r 2p σ µ 1 • • • σ µ 2p e -r 2 c 1 H k ≤ ∆ k ds σ∈S d-1 dΩ g (σ) ∞ 0 dr r d-1+2p e -r 2 c = vol(∆ k ) vol(S d-1 g ) Γ(d/2+p) 2 c -d/2-p .
For the ξ-integration of (2.12), we use again spherical coordinates, but now ξ = r σ with r = (g µν ξ µ ξ ν ) 1/2 , σ = r -1 ξ ∈ S d-1 g (this sphere depends on x ∈ M through g(x)) and define u[σ] := u µν σ µ σ ν which is a positive definite matrix for any σ ∈ S d-1 g . Thus we get
T k,p = 1 (2π) d ∆ k ds S d-1 g dΩ g (σ) σ µ 1 • • • σ µ 2p ∞ 0 dr r d-1+2p e -r 2 C k (s,u[σ]) (2.15) = Γ(d/2+p) 2(2π) d S d-1 g dΩ g (σ) σ µ 1 • • • σ µ 2p ∆ k ds C k (s, u[σ]) -(d/2+p) .
Thus, we have to compute the s-integration
∆ k ds C k (s, u[σ]) -α for α ∈ 1 2 N * .
We do that via functional calculus, using Lemma 2.3 iii), by considering the following integrals
I α,k (r 0 , r 1 , . . . , r k ) := ∆ k ds [(1 -s 1 )r 0 + (s 1 -s 2 )r 1 + • • • + s k r k ] -α = ∆ k ds [r 0 + s 1 (r 1 -r 0 ) + • • • + s k (r k -r k-1 )] -α
(2.16)
I α,0 (r 0 ) := r -α 0 , for α = 0, (2.17)
where 0 = r i ∈ R + corresponds, in the functional calculus, to positive operator R i (u[σ]). Such integrals converge for any α ∈ R and any k ∈ N * , even if it is applied above only to
α = d/2 + k -r ∈ 1 2 N.
Nevertheless for technical reasons explained below, it is best to define I α,k for an arbitrary α ∈ R. In short, the operator T k,p is nothing else than the operator in B( H k )
T k,p = Γ(d/2+p) 2(2π) d S d-1 g dΩ g (σ) σ µ 1 • • • σ µ 2p I d/2+p,k R 0 (u[σ]), R 1 (u[σ]), . . . , R k (u[σ]) . (2.18)
Remark that T k,p depends on x via u[σ] and the metric g.
Remark 2.5
We pause for a while to make a connection with the previous work [START_REF] Avramidi | Heat kernel asymptotics of operators with non-Laplace principal part[END_REF]. There, the main hypothesis on the matrix u µν ξ µ ξ ν is that all its eigenvalues are positive multiples of g µν ξ µ ξ ν for any ξ = 0. Under this hypothesis, we can decompose spectrally
u[σ] = i λ i π i [σ]
where the eigenprojections π i depends on σ but the associated eigenvalues λ i do not. Then, operator functional calculus gives
I d/2+p,k R 0 (u[σ]), . . . , R k (u[σ]) = i 0 ,...,i k I d/2+p,k (λ i 0 , . . . , λ i k ) R 0 (π i 0 [σ]) • • • R k (π i k [σ]) (2.19)
and
T k,p = Γ(d/2+p) 2(2π) d i 1 ,...,i k I d/2+p,k (λ i 0 , . . . , λ i k ) S d-1 g dΩ g (σ) σ µ 1 • • • σ µ 2p R 0 (π i 0 [σ]) • • • R k (π i k [σ])
where all
π i 0 [σ], . . . , π i k [σ] commute as operators in B( H k )
. However, we do not try to pursue in that direction since it is not very explicit due to the difficult last integral on the sphere; also we remind that we already gave up in the introduction the use of the eigenprojections for the same reason. Instead, we give for instance a complete computation of a 1 in Section 4 directly in terms of matrices u µν , v µ , w, avoiding this spectral splitting in the particular case where u µν = g µν u for a positive matrix u.
In conclusion, as claimed in the introduction, the above approach really reduces the computation of all a r (x) for an arbitrary integer r to the control of operators T k,p which encode all difficulties since, once known, their actions on an arbitrary variable
B 1 ⊗ • • • ⊗ B k with B i ∈ M n
are purely mechanical. For instance in any dimension of M , the calculus of a 1 (x) needs only to know, T 1,0 , T 2,1 , T 3,2 , T 4,3 . More generally, we have the following Proof As seen in (2.3), using the linearity of f k in each argument, we may assume that in
f k [B 1 ⊗• • •⊗B k ] each argument B i
is equal to K or P , so generates t 1/2 or t in the asymptotic expansion. Let n K and n P the number of K and P for such f k involved in a r (P ). Since a r (P ) is the coefficient of t r , we have 1 2 n K + n P = r and k ≥ r. In particular, n K much be even.
When B i = K = -iξ µ [v µ (x) + 2u µν (x) ∂ ν ],
again by linearity, we may assume that the argument in f (k, p)
:= f k (ξ)[B 1 (ξ) ⊗ • • • ⊗ B k (ξ)
] is a polynomial of order 2p since odd order are cancelled out after the ξ-integration. In such f (k, p), the number of ξ (in the argument) is equal to n K , so that p = 1 2 n K , and the number of derivations ∂ is n K + 2n P .
We count now all f (k, p) involved in the computation of a r (P ). We initiate the process with (k, p) = (n K + n P , 1 2 n K ), so k -p = r and after the successive propagation of ∂ as in Lemma 2.1, we end up with (k , p ) where k -p = r: in (2.7), k → k + 1 while p → p + 1 since ∂H appears as a new argument. So (k , p ) = (k , k -r) and the maximum of k is 2n K + 3n P . Here, n P = 0, . . . , r and n K = 0, . . . , 2r, so that the maximum is for k = 4r. All f (k, k -r) with r ≤ k ≤ 4r will be necessary to compute a r (P ): Let k be such that r ≤ k ≤ 3r. Then a term f (k, k -r) will be obtained by the use of Lemma 2.1 applied
on f r [u µ 1 ν 1 ∂ µ 1 ∂ ν 1 ⊗ • • • ⊗ u µrνr ∂ µr ∂ νr ]
with an action of k -r derivatives on the e sH and the reminder on the B i . The same argument, applied to
f 2r [ξ µ 1 u µ 1 ν 1 ∂ ν 1 ⊗ • • • ⊗ ξ µ 2r u µ 2r ν 2r ∂ ν 2r ], also generates a term f (k, k -r) when 2r ≤ k ≤ 4r.
Finally, remark that we can swap the ξ-dependence of f (k, p) into definition (2.12) of T k,p to end up with integrals which are advantageously independent of arguments B i .
The case r = 0 is peculiar: since k = 0 automatically, we have only to compute T 0,0 in (2.13) which gives a 0 (x) by (1.10). The link between the T 's and the I is given in (2.18). The preceding reasoning is independent of the dimension d.
Of course, in an explicit computation of a r (x), each of these 3r + 1 operators T k,k-r can be used several times since applied on different arguments [START_REF] Moss | Invariants of the heat equation for non-minimal operators[END_REF]) will be explicitly computed in Section 3.
B 1 ⊗ • • • ⊗ B k . The integral I d/2+p,k giving T k , p in (2.
We now list the terms of a 1 (x). Using the shortcuts
v := ξ µ v µ , ūν := ξ µ u µν ,
starting from (2.5) and applying Lemma 2.1, we get
-f 1 [P ] = -f 2 [u µν ⊗ ∂ µ ∂ ν H] + 2f 3 [u µν ⊗ ∂ µ H ⊗ ∂ ν H] -f 2 [v µ ⊗ ∂ µ H] + f 1 [w], (2.20) f 2 [K ⊗ K] = -f 2 [v ⊗ v] + 2f 3 [v ⊗ ūν ⊗ ∂ ν H] -2f 2 [ū µ ⊗ ∂ µ v] + 2f 3 [ū µ ⊗ ∂ µ H ⊗ v] + 2f 3 [ū µ ⊗ v ⊗ ∂ µ H] + 4f 3 [ū µ ⊗ ∂ µ ūν ⊗ ∂ ν H] + 4f 3 [ū µ ⊗ ūν ⊗ ∂ µ ∂ ν H] -4f 4 [ū µ ⊗ ∂ µ H ⊗ ūν ⊗ ∂ ν H] -4f 4 [ū µ ⊗ ūν ⊗ ∂ µ H ⊗ ∂ ν H] -4f 4 [ū µ ⊗ ūν ⊗ ∂ ν H ⊗ ∂ ν H].
(2.21)
This represents 14 terms to compute for getting a 1 (x).
Summary of the method:
We pause here to summarize the chosen method. To compute a r (x), we first expand K and P in (2.3) in terms of matrix valued differential operators which are arguments of M N -valued operators f k (ξ), and then we remove all derivative operators from the arguments using the generalized Leibniz rule (2.7). This generates a sum of terms like (2.8). Then, the method splits along two independent computational axes: the first one is to collect all the arguments B µ 1 ...µ k produced by (2.7); the second one is to compute the operators obtained by integration of f k (ξ) with respect to ξ, which, thanks to (2.10) and (2.14), requires to compute some operators T k,p . The latter operators are written in (2.18) using spherical coordinates for that ξ-integral, in terms of universal integrals I d/2+p,k defining operators depending only on u µν and the metric g. In the generic situation, the links between T k,p , I d/2+p,k , and f k are given in (2.18) and (2.14), but another link between f k and I d/2+p,k will be given in (4.4) in a particular case where the integrals (2.18) can be fully computed. The last step of the method is to collect the (matrix) traces of evaluations of operators (second axe) on the arguments (first axe): a r (x) is just a sum of such contributions. Moreover, Lemma 2.6 determines the number of integrals I d/2+p,k to compute to get a r (x).
Integral computations of I α,k
We begin with few interesting remarks on I α,k defined in (2.16) and (2.17
I α,k (r 0 , . . . , r k ) = 1 (α-1) (r k-1 -r k ) -1 [I α-1,k-1 (r 0 , . . . , r k-2 , r k ) -I α-1,k-1 (r 0 , . . . , r k-1 )]. (3.1)
(The abandoned case I 1,k is computed in Proposition 3.3.) ii) Symmetry with respect to last two variables:
I α,k (r 0 , . . . , r k-1 , r k ) = I α,k (r 0 , . . . , r k , r k-1 ).
iii) Continuities:
The functions I α,k : (R * + ) k+1 → R * + are continuous for all α ∈ R. For any (r 0 , . . . , r k-1 , r k ) ∈ (R * + ) k+1 , the map α ∈ R + → I α,k (r 0 , . . . , r k-1 , r k ) is continuous. iv) Special values: for any α ∈ R and k ∈ N, I α,k (r 0 , • • • , r 0 k+1 ) = 1 k! r -α 0 . (3.2) Proof i) In I α,k (r 0 , . . . , r k ) = 1 0 ds 1 s 1 0 ds 2 • • • s k-1 0 ds k [r 0 +s 1 (r 1 -r 0 )+• • •+s k (r k -r k-1 )] -α
, the last integral is equal to which gives the claimed relation. One checks directly from the definition (2.16) of I α+1,1 (r 0 , r 1 ) that (3.1) is satisfied for the given I α,0 .
ii
) I α,1 (r 0 , r 1 ) = 1 0 ds r 0 + s(r 1 -r 0 )] -α = 1 0 ds [r 1 + s (r 0 -r 1 )] -α = I α,1 (r 1 , r 0 )
after the change of variable s → s = 1 -s. The symmetry follows now using (3.1) by a recurrence process.
iii) The map g(s, r
) := [(1 -s 1 )r 0 + (s 1 -s 2 )r 1 + • • • + s k r k ] -α > 0 is continuous at the point r ∈ (R * + ) k+1
and uniformly bounded in a small ball B around r since then we have g(s, r) ≤ max(r -|α| min , r |α| max ) where r min or max := min or max{r i | r ∈ B} > 0. Thus, since the integration domain ∆ k is compact, by Lebesgue's dominated convergence theorem, we get the continuity of I α,k . It remains to prove the continuity of
α → I α,k (r 0 , . . . , r k-1 , r k ): If r min := min r i > 0, then (1-s 1 )r 0 +(s 1 -s 2 )r 1 +• • •+s k r k ≥ r min , so that ((1-s 1 )r 0 +(s 1 -s 2 )r 1 +• • •+s k r k ) -α ≤ r -α min and choosing a = min(1/2, r min ), r -α min ≤ a -α ≤ a -β 1 for α ∈ (β 0 , β 1 )
, we can apply again Lebesgue's dominated convergence theorem.
(iv) Using the very definition (2.16),
I α,k (r 0 , . . . , r 0 ) = ∆ k ds r -α 0 = vol(∆ k )r -α 0 = 1 k! r -α 0 .
From Lemma 2.6 and (2.18), the computation of a r (P ), for r ≥ 1 (since a 0 is known already by (4.5)) in spatial dimension d ≥ 1, requires all I d/2+k-r,k for r ≤ k ≤ 4r. The sequence I d/2,r , I d/2+1,r+1 , . . . , I d/2+3r,4r belongs to the same recursive relation (3.1), except if there is a s ∈ N such that d/2 + s = 1, which can only happen with d = 2 and s = 0 (see Case 1 below). The computation of this sequence requires then to compute I d/2,r as the root of (3.1).
The function I d/2,r can itself be computed using (3.1) when this relation is relevant. Case 1: d is even and d/2 ≤ r, recursive sequence (3.1) fails at I 1,r-d/2+1 :
I 1,r-d/2+1 → I 2,r-d/2+2 → • • • → I d/2,r → I d/2+1,r+1 → • • • → I d/2+3r,4r used to compute ar(x) . (3.3)
Case 2: d is even and r < d/2, relation (3.1) never fails and
I d/2-r,0 → I d/2-r+1,1 → I d/2-r+2,2 → • • • → I d/2,r → I d/2+1,r+1 → • • • → I d/
I d/2-r,0 → I d/2-r+1,1 → I d/2-r+2,2 → • • • → I d/2,r → I d/2+1,r+1 → • • • → I d/2+3r,4r
used to compute ar(x) .
(3.5)
In the latter case, the root is I α,0 with α = d/2 -r half-integer, positive or negative and both situation have to be considered separately. The recursive relation (3.1), which for I α,k follows from the integration on the k-simplex ∆ k , has a generic solution:
Proposition 3.2 Given α 0 ∈ R, k 0 ∈ N and a function F : R * + → R * + , let the function J α 0 +k 0 ,k 0 : (R * + ) k+1 → R * + be defined by J α 0 +k 0 ,k 0 (r 0 , . . . , r k ) := c α 0 +k 0 ,k 0 k 0 i=0 k 0 j=0 j =i (r i -r j ) -1 F (r i ).
i) Ascending chain: Then, all functions J α 0 +k,k obtained by applying the recurrence formula (3.1) for any k ∈ N, k ≥ k 0 have the same form:
J α 0 +k,k (r 0 , . . . , r k ) := c α 0 +k,k k i=0 k j=0 j =i (r i -r j ) -1 F (r i ) (3.6) with c α 0 +k,k = (-1) k-k 0 (α 0 +k 0 )•••(α 0 +k-1) c α 0 +k 0 ,k 0 for k > k 0 . (3.7)
ii) Descending chain: when α 0 ∈ R\{-N}, the functions J α 0 +k,k defined by (3.6) for k ∈ N * starting with k 0 = 0, the root F (r 0 ) = J α 0 ,0 (r 0 ) and
c α 0 +k,k = (-1) k α 0 (α 0 +1)•••(α 0 +k-1) (3.8) satisfy (3.1). Proof i) It is sufficient to show that X := 1 α-1 (r -1 -r ) -1 [J α-1, -1 (r 0 , . . . , r -2 , r ) -J α-1, -1 (r 0 , . . . , r -1 )].
has precisely the form (3.6) for = k 0 + 1 and α = α 0 + k 0 + 1. We have
X = c α-1, -1 α-1 (r -1 -r ) -1 -2 i=0 -2 j=0 j =i (r i -r j ) -1 (r i -r ) -1 F (r i ) + -2 j=0 (r -r i ) -1 F (r ) - -2 i=0 -2 j=0 j =i (r i -r j ) -1 (r i -r -1 ) -1 F (r i ) - -2 j=0 (r -1 -r i ) -1 F (r -1 ) . (3.9)
We can combine the two sums on i = 0, . . . , -2 as:
-2 j=0 -2 j=0 j =i (r i -r j ) -1 [(r i -r ) -1 -(r i -r -1 ) -1 ] F (r i ) = (r -r -1 ) -2 j=0 j=0 j =i (r i -r j ) -1 F (r i ).
Including (r -1 -r ) -1 , the others terms in (3.9) correspond to [ j=0 j =i
(r i -r j ) -1 ] F (r i ) for i = -1 and i = (up to a sign), so that X = - c α-1, -1 α-1 j=0 j=0 j =i (r i -r j ) -1 F (r i )
which yields c α 0 +k 0 +1,k 0 +1 = -1 α 0 +k 0 c α 0 +k 0 ,k 0 and so the claim (3.7). ii) It is the same argument with k 0 = 0 and the hypothesis α 0 / ∈ -N guarantees the existence of (3.8) and moreover c α 0 ,0 = 1. Proposition 3.2 exhibits the general solution of Cases 2 and 3 in (3.4) and (3.5) (with α 0 = d/2-r, for d even and α 0 > 0, or for d odd), with, for both, F (r 0 ) = I d/2-r,0 (r 0 ) = r -α 0 0 , so that
I d/2-r+k,k (r 0 , . . . , r k ) = (-1) k (d/2-r)(d/2-r+1)•••(d/2-r+k-1) k i=0 k j=0 j =i (r i -r j ) -1 r -(d/2-r) i . (3.10)
To control the reminder Case 1 of chain (3.3) where α 0 ∈ -N (so for d even and α 0 = d/2 -r ≤ 0), we need to compute the functions
I k-α 0 ,k for α 0 ∈ {0, 1, • • • , k -1}
. This is done below and shows surprisingly enough that the generic solution of Proposition 3.2 holds true also for a different function F (r 0 ). Actually, this is simple consequence of the fact that, despite its presentation in (3.10), the RHS has no poles as function of r.
Corollary 3.3 Case 1: d even and d/2 ≤ r (so α 0 = d/2 -r ≤ 0). For any k ∈ N * , and = r -d/2 ∈ {0, 1, • • • , k -1} I k-,k (r 0 , . . . , r k ) = (-1) k--1 (k--1)! ! k i=0 k j=0 j =i (r i -r j ) -1 r i log r i . ( 3
I k-,k (r 0 , . . . , r k ) = lim r→m+ I m-r+k,k (r 0 , . . . , r n+k ) = lim r→m+ (-1) k (m-r)•••(m-r+k-1) k i=0 k j=0 j =i (r i -r j ) -1 r -(m-r) i = [ lim r→m+ (-1) k-1 (m-r)•••(m-r+ -1) 1 (m-r+ +1)•••(m-r+k-1) ] lim r→m+ k i=0 k j=0 j =i (r i -r j ) -1 r i 1 r-(m+ ) r r-(m+ ) i = (-1) k--1 (k--1)! ! k i=0 k j=0 j =i (r i -r j ) -1 r i log r i
where we used (A.1) for the second limit in last equality.
The next propositions compute explicitly Case 3 and then Case 2, and the result is not written as in (3.10) where denominators in r i -r j appear. This allows to deduce algorithmically (i.e. without any integration) the sequence
I d/2,
I +1/2,0 (r 0 ) = r --1/2 0 , I +3/2,1 (r 0 , r 1 ) = 2 2 +1 ( √ r 0 √ r 1 ) -2 -1 ( √ r 0 + √ r 1 ) -1 0≤l 1 ≤2 √ r 0 l 1 √ r 1 2 -l 1 ,
while if d/2 -r = --1/2 with ∈ N, the root and its follower are
I --1/2,0 (r 0 ) = r +1/2 0 , I -+1/2,1 (r 0 , r 1 ) = 2 2 +1 ( √ r 0 + √ r 1 ) -1 0≤l 1 ≤2 √ r 0 l 1 √ r 1 2 -l 1 .
Proof Using (3.1) and (2.17), we get when ≥ 0
I +3/2,1 (r 0 , r 1 ) = 1 +1/2 (r 0 -r 1 ) -1 [I +1/2,0 (r 1 ) -I +1/2,0 (r 0 )] = 2 2 +1 ( √ r 0 - √ r 1 ) -1 ( √ r 0 + √ r 1 ) -1 [r --1/2 1 -r --1/2 0 ]
where the term in bracket is r
--1/2 1 -r --1/2 0 = ( √ r 0 √ r 1 ) -2 -1 [r 2 +1 0 -r 2 +1 1 ] = ( √ r 0 √ r 1 ) -2 -1 ( √ r 0 - √ r 1 ) 0≤l 1 ≤2 √ r 0 l 1 √ r 1 2 -l 1 ,
which gives the result. Similar proof for the other equality.
This proposition exhibits only the two first terms of the recurrence chain in Case 3: similar formulae can be obtained at any level in which no (r i -r j ) -1 factors appear. Unfortunately, they are far more involved.
I n,k (r 0 , . . . , r k ) = (r 0 •••r k ) -1 (n-1)•••(n-k) 0≤l k ≤l k-1 ≤••• •••≤l 1 ≤n-(k+1) r l 1 -(n-(k+1)) 0 r l 2 -l 1 1 • • • r l k -l k-1 k-1 r -l k k (3.12) = (r 0 •••r k ) -(n-k) (n-1)•••(n-k) 0≤l k ≤l k-1 ≤••• •••≤l 1 ≤n-(k+1) r l 1 0 r l 2 +(n-(k+1))-l 1 1 • • • r l k +(n-(k+1))-l k-1 k-1 r (n-(k+1))-l k k . (3.13)
In (3.13), all exponents in the sum are positive while they are negative in (3.12). In particular
I n+k,k (r 0 , . . . , r k ) = (r 0 •••r k ) -n (n+k-1)•••(n+1)n 0≤l k ≤l k-1 ≤••• •••≤l 1 ≤n-1 r l 1 0 r l 2 +(n-1)-l 1 1 • • • r l k +(n-1)-l k-1 k-1 r (n-1)-l k k . ( 3.14)
Proof The first and second equalities follow directly from the third that we prove now. Equality (3.14) is true for k = 1 (the case k = 0 is just the convention (2.17)) since
I n,1 (r 0 , r 1 ) = 1 0 ds 1 (r 0 + s 1 (r 1 -r 0 ) -n = 1 n-1 (r 0 -r 1 ) -1 [r -n+1 1 -r -n+1 0 ] = 1 n-1 (r 0 r 1 ) -n+1 n-2 l 1 =0 r l 1 0 r n-2-l 1 1 .
Assuming (3.14) holds true for l = 0, . . . , k -1, formula (3.1) gives
I n+k,k (r 0 , . . . , r k ) = 1 n+k-1 (r k-1 -r k ) -1 I n+k-1,k-1 (r 0 , . . . , r k-2 , r k ) -I n+k-1,k-1 (r 0 , . . . , r k-2 , r k-1 ) .
The term in bracket is
(r 0 •••r k-2 r k ) -n (n+k-2)•••n 0≤l k-1 ≤l k-2 ≤••• •••≤l 1 ≤n-1 r l 1 0 r l 2 +(n-1)-l 1 1 • • • r l k-1 +(n-1)-l k-2 k-2 r n-1-l k-1 k -(r 0 •••r k-2 r k-1 ) -n (n+k-2)•••n 0≤l k-1 ≤l k-2 ≤••• •••≤l 1 ≤n-1 r l 1 0 r l 2 +(n-1)-l 1 1 • • • r l k-1 +(n-1)-l k-2 k-2 r n-1-l k-1 k-1
.
Thus
I n+k,k (r 0 , . . . , r k+1 ) = (r 0 •••r k-2 ) -n (n+k-1)•••n 0≤l k-1 ≤l k-2 ≤••• •••≤l 1 ≤n-1 r l 1 0 r l 2 +(n-1)-l 1 1 • • • r l k-1 +(n-1)-l k-2 k-2 (r k-1 -r k ) -1 r -n k r n-1-l k-1 k -r -n k-1 r (n-1)-l k-1 k-1 .
Since the last line is equal to
(r k-1 r k ) -1-l k-1 0≤l k ≤l k-1 r l k k-1 r l k-1 -l k k = (r k-1 r k ) -n 0≤l k ≤l k-1 r (n-1)+l k -l k-1 k-1 r (n-1)-l k k
we have proved (3.14).
The interest of (3.14) is the fact that in (2.18) we have the following: for
B 0 ⊗ • • • ⊗ B k ∈ H k , I n+k,k R 0 (u[σ]), . . . , R k (u[σ]) [B 0 ⊗ • • • ⊗ B k ] = 1 (n+k-1)•••(n+1)n 0≤l k ≤l k-1 ≤••• •••≤l 1 ≤n-1 B 0 u[σ] l 1 -n ⊗ B 1 u[σ] l 2 -l 1 -1 ⊗ • • • • • • ⊗ B k-1 u[σ] l k -l k-1 -1 ⊗ B k u[σ] -l k -1 ; (3.15)
or viewed as an operator in
B(H k , M N ) (see diagram (2.10)): m • κ * • I n+k,k R 0 (u[σ]), . . . , R k (u[σ]) [B 1 ⊗ • • • ⊗ B k ] = 1 (n+k-1)•••(n+1)n 0≤l k ≤l k-1 ≤••• •••≤l 1 ≤n-1 u[σ] l 1 -n B 1 u[σ] l 2 -l 1 -1 B 2 • • • B k-1 u[σ] l k -l k-1 -1 B k u[σ] -l k -1 .
While, if one wants to use directly (3.10) on
B 0 ⊗ • • • ⊗ B k , we face the difficulty to evaluate [R i (u) -R j (u)] -1 [B 0 ⊗ • • • ⊗ B k ] in H k .
Another defect of (3.10) shared by (3.11) is that it suggests an improper behavior of integrals I n+1,k+n when two variables r i are equal. But the continuity proved in Proposition 3.1 shows that this is just an artifact.
An example for u µν = g µν u
Here, we explicitly compute a 1 (x) assuming P satisfies (1.1) and (4.1). Given a strictly positive matrix u(x) ∈ M N where x ∈ (M, g), we satisfy Hypothesis 1.2 with
u µν (x) := g µν (x) u(x). ( 4.1)
This implies that
H(x, ξ) = |ξ| 2 g(x) u(x) where |ξ| 2 g(x) := g µν (x) ξ µ ξ ν .
Of course the fact that u[σ] = u is then independent of σ, simplifies considerably (2.18) since the integral in ξ can be performed. Thus we assume (4.1) from now on and (2.18) becomes
T k,p = g d G(g) µ 1 ...µ 2p I d/2+p,k R 0 (u), R 1 (u), . . . , R k (u) ∈ B( H k ) (4.2)
with (see [23, Section 1.1])
g d := 1 (2π) d R d dξ e -|ξ| 2 g(x) = √ |g| 2 d π d/2 , G(g) µ 1 ...µ 2p := 1 (2π) d g d dξ ξ µ 1 • • • ξ µ 2p e -g αβ ξαξ β = 1 2 2p p! ρ∈S 2p g µ ρ(1) µ ρ(2) • • • g µ ρ(2p-1) µ ρ(2p) = (2p)! 2 2p p! g (µ 1 µ 2 ...µ 2p ) (4.3)
where |g| := det(g µν ), S 2p is the symmetric group of permutations on 2p elements and the parenthesis in the index of g is the complete symmetrization over all indices. Recall from Lemma 2.3 that if u has a spectral decomposition
u = N -1 i=0 r i E i ,
each R j (u) has the same spectrum as u.
Using the shortcuts
I d/2+p,k := I d/2+p,k R 0 (u), . . . , R k (u) ,
the formula (2.8) becomes simply
1 (2π) d dξ ξ µ 1 • • • ξ µ 2p f k (ξ)[ B µ 1 ...µ 2p k ] = g d (m • κ * • I d/2+p,k )[G(g) µ 1 ...µ 2p B µ 1 ...µ 2p k ] = g d (m • I d/2+p,k )[1 ⊗ G(g) µ 1 ...µ 2p B µ 1 ...µ 2p k ]. (4.4)
In particular, it is possible to compute the dimension-free contractions G(g)
µ 1 ...µ 2p B µ 1 ...µ 2p k
before evaluating the result in the I d/2+p,k 's. For a 0 (x), we get
a 0 (x) = tr 1 (2π) d dξ e -u(x)|ξ| 2 g(x) = g d (x) tr[u(x) -d/2 ]. (4.5)
In the sequel, we use frequently the following
Lemma 4.1 For any n 1 , n 2 , n 3 ∈ N, A 1 , A 2 ∈ M N , α ∈ R + , and k = n 1 + n 2 + n 3 + 2, define X(A 1 , A 2 ) := tr(m • I α,k [1 ⊗ u ⊗ • • • ⊗ u n 1 ⊗A 1 u ⊗ • • • ⊗ u n 2 ⊗A 2 ⊗ u ⊗ • • • ⊗ u n 3 ]).
we have
X(A 1 , A 2 ) = r 0 ,r 1 r n 1 +n 3 0 r n 2 1 I α,k (r 0 , • • • , r 0 n 1 +1 , r 1 , • • • , r 1 n 2 +1 , r 0 , • • • , r 0 n 3 +1 ) tr (E 0 A 1 E 1 A 2 ) (4.6)
where E i is the eigenprojection associated to eigenvalues r i of u.
In particular,
if [A 1 , u] = 0, X(A 1 , A 2 ) = 1 k! tr(u k-α-2 A 1 A 2 ). (4.7)
Proof The number X(A 1 , A 2 ) is equal to
r i I α,k (r 0 , . . . , r k ) tr E 0 uE 1 u • • • E n 1 n 1 +1 A 1 E n 1 +1 uE 1 u • • • E n 1 +n 2 +1 n 2 +1 A 2 E n 1 +n 2 +2 u • • • uE n 1 +n 2 +n 3 +2 n 3 +1 = r i I α,k (r 0 , . . . , r k ) tr E n 1 +n 2 +2 u • • • uE n 1 +n 2 +n 3 +2 E 0 uE 1 u • • • E n 1 A 1 E n 1 +1 uE 1 u • • • E n 1 +n 2 +1 A 2 = r i r n 1 +n 3 0 r n 2 1 I α,k (r 0 , • • • , r 0 n 1 +1 , r 1 , • • • , r 1 n 2 +1 , r 0 , • • • , r 0 n 3 +1
) tr(u
n 1 +n 3 E 0 A 1 u n 2 E 1 A 2 )
yielding (4.6).
The particular case follows from tr(E
0 A 1 E 1 A 2 ) = δ 0,1 tr(E 0 A 1 A 2 ) and I α,k (r 0 , • • • , r 0 k+1 ) = 1 k! r -α 0 . (4.8)
We also quote for further references the elementary Corollary 4.2 For any symmetric tensor S ab = S ba , any A a , A b ∈ M N and any function g : (r 0 , r 1 )
∈ (R * + ) 2 → R, r i g(r 0 , r 1 )S ab tr(π r 0 A a π r 1 A b ) = r i 1 2 [g(r 0 , r 1 ) + g(r 1 , r 0 )] S ab tr(π r 0 A a π r 1 A b ). (4.9)
We now divide the computation of a 1 (x) into several steps.
Collecting all the arguments
As a first step, we begin to collect all terms B µ 1 ...µ 2p k of (4.4) due to the different variables appearing in (2.20) and (2.21), including their signs.
Variable in f 1 : w.
Variables in f 2 without the common factor ξ µ 1 ξ µ 2 and summation over µ 1 , µ 2 :
-u µν ⊗ ∂ µ ∂ ν H → -g µν (∂ µ ∂ ν g µ 1 µ 2 ) u ⊗ u -2g µν (∂ µ g µ 1 µ 2 ) u ⊗ ∂ ν u -g µν g µ 1 µ 2 u ⊗ ∂ µ ∂ ν u -v µ ⊗ ∂ µ H → -(∂ µ g µ 1 µ 2 ) v µ ⊗ u -g µ 1 µ 2 v µ ⊗ ∂ µ u -v ⊗ v → -v µ 1 ⊗ v µ 2 -2ū µ ⊗ ∂ µ v → -2g µµ 1 u ⊗ ∂ µ v µ 2 .
Variables in f 3 without the commun factor Π 4 i=1 ξ µ i and summation over the µ i :
2u µν ⊗ ∂ µ H ⊗ ∂ ν H → +2g µν (∂ µ g µ 1 µ 2 )(∂ ν g µ 3 µ 4 ) u ⊗ u ⊗ u +2g µν (∂ µ g µ 1 µ 2 )g µ 3 µ 4 u ⊗ u ⊗ ∂ ν u +2g µν g µ 1 µ 2 (∂ ν g µ 3 µ 4 ) u ⊗ ∂ µ u ⊗ u +2g µν g µ 1 µ 2 g µ 3 µ 4 u ⊗ ∂ µ u ⊗ ∂ ν u 2v ⊗ ūµ ⊗ ∂ µ H → +2g µµ 2 (∂ µ g µ 3 µ 4 ) v µ 1 ⊗ u ⊗ u + 2g µµ 2 g µ 3 µ 4 v µ 1 ⊗ u ⊗ ∂ µ u 2ū µ ⊗ ∂ µ H ⊗ v → +2g µµ 1 (∂ µ g µ 2 µ 3 ) u ⊗ u ⊗ v µ 4 + 2g µµ 1 g µ 2 µ 3 u ⊗ ∂ µ u ⊗ v µ 4 2ū µ ⊗ v ⊗ ∂ µ H → +2g µµ 1 (∂ µ g µ 3 µ 4 ) u ⊗ v µ 2 ⊗ u + 2g µµ 1 g µ 3 µ 4 u ⊗ v µ 2 ⊗ ∂ µ u 4ū µ ⊗ ∂ µ ūν ⊗ ∂ ν H → +4g µµ 1 (∂ µ g νµ 2 )(∂ ν g µ 3 µ 4 ) u ⊗ u ⊗ u +4g µµ 1 (∂ µ g νµ 2 )g µ 3 µ 4 u ⊗ u ⊗ ∂ ν u +4g µµ 1 g νµ 2 (∂ ν g µ 3 µ 4 ) u ⊗ ∂ µ u ⊗ u +4g µµ 1 g νµ 2 g µ 3 µ 4 u ⊗ ∂ µ u ⊗ ∂ ν u 4ū µ ⊗ ūν ⊗ ∂ µ ∂ ν H → +4g µµ 1 g νµ 2 (∂ µ ∂ ν g µ 3 µ 4 ) u ⊗ u ⊗ u +4g µµ 1 g νµ 2 (∂ µ g µ 3 µ 4 ) u ⊗ u ⊗ ∂ ν u +4g µµ 1 g νµ 2 (∂ ν g µ 3 µ 4 ) u ⊗ u ⊗ ∂ µ u +4g µµ 1 g νµ 2 g µ 3 µ 4 u ⊗ u ⊗ ∂ µ ∂ ν u
Variables in f 4 without the commun factor Π 6 i=1 ξ µ i and summation over the µ i :
-4ū µ ⊗ ∂ µ H ⊗ ūν ⊗ ∂ ν H → -4g µµ 1 (∂ µ g µ 2 µ 3 )g νµ 4 (∂ ν g µ 5 µ 6 ) u ⊗ u ⊗ u ⊗ u -4g µµ 1 (∂ µ g µ 2 µ 3 )g νµ 4 g µ 5 µ 6 u ⊗ u ⊗ u ⊗ ∂ ν u -4g µµ 1 g µ 2 µ 3 g νµ 4 (∂ ν g µ 5 µ 6 ) u ⊗ ∂ µ u ⊗ u ⊗ u -4g µµ 1 g µ 2 µ 3 g νµ 4 g µ 5 µ 6 u ⊗ ∂ µ u ⊗ u ⊗ ∂ ν u -4ū µ ⊗ ūν ⊗ ∂ µ H ⊗ ∂ ν H → -4g µµ 1 g νµ 2 (∂ µ g µ 3 µ 4 )(∂ ν g µ 5 µ 6 ) u ⊗ u ⊗ u ⊗ u -4g µµ 1 g νµ 2 (∂ µ g µ 3 µ 4 )g µ 5 µ 6 u ⊗ u ⊗ u ⊗ ∂ ν u -4g µµ 1 g νµ 2 g µ 3 µ 4 (∂ ν g µ 5 µ 6 ) u ⊗ u ⊗ ∂ µ u ⊗ u -4g µµ 1 g νµ 2 g µ 3 µ 4 g µ 5 µ 6 u ⊗ u ⊗ ∂ µ u ⊗ ∂ ν u -4ū µ ⊗ ūν ⊗ ∂ ν H ⊗ ∂ µ H → -4g µµ 1 g νµ 2 (∂ ν g µ 3 µ 4 )(∂ µ g µ 5 µ 6 ) u ⊗ u ⊗ u ⊗ u -4g µµ 1 g νµ 2 (∂ ν g µ 3 µ 4 )g µ 5 µ 6 u ⊗ u ⊗ u ⊗ ∂ µ u -4g µµ 1 g νµ 2 g µ 3 µ 4 (∂ µ g µ 5 µ 6 ) u ⊗ u ⊗ ∂ ν u ⊗ u -4g µµ 1 g νµ 2 g µ 3 µ 4 g µ 5 µ 6 u ⊗ u ⊗ ∂ ν u ⊗ ∂ µ u.
A second and tedious step is now to do in (4.4) the metric contractions G(g)
µ 1 ...µ 2p B µ 1 ...µ 2p k
for previous terms where the G(g) µ 1 ...µ 2p are given by:
G(g) µ 1 µ 2 = 1 2 g µ 1 µ 2 , (4.10) G(g) µ 1 µ 2 µ 3 µ 4 = 1 4 (g µ 1 µ 2 g µ 3 µ 4 + g µ 1 µ 3 g µ 2 µ 4 + g µ 1 µ 4 g µ 2 µ 3 ) , ( 4.11)
G(g) µ 1 µ 2 µ 3 µ 4 µ 5 µ 6 = 1 8 + g µ 1 µ 2 g µ 3 µ 4 g µ 5 µ 6 + g µ 1 µ 2 g µ 3 µ 5 g µ 4 µ 6 + g µ 1 µ 2 g µ 3 µ 6 g µ 4 µ 5 + g µ 1 µ 3 g µ 2 µ 4 g µ 5 µ 6 + g µ 1 µ 3 g µ 2 µ 5 g µ 4 µ 6 + g µ 1 µ 3 g µ 2 µ 6 g µ 4 µ 5 + g µ 1 µ 4 g µ 2 µ 3 g µ 5 µ 6 + g µ 1 µ 4 g µ 2 µ 5 g µ 3 µ 6 + g µ 1 µ 4 g µ 2 µ 6 g µ 3 µ 5 + g µ 1 µ 5 g µ 2 µ 3 g µ 4 µ 6 + g µ 1 µ 5 g µ 2 µ 4 g µ 3 µ 6 + g µ 1 µ 5 g µ 2 µ 6 g µ 3 µ 4 + g µ 1 µ 6 g µ 2 µ 3 g µ 4 µ 5 + g µ 1 µ 6 g µ 2 µ 4 g µ 3 µ 5 + g µ 1 µ 6 g µ 2 µ 5 g µ 3 µ 4 . (4.12)
Keeping the same order already obtained in the first step, we get after the contactions: Contribution of f 1 variable: w (no contraction since p = 0). Contribution of f 2 variables:
-1 2 g µν g ρσ (∂ µ ∂ ν g ρσ ) u ⊗ u -g µν g ρσ (∂ ν g ρσ ) u ⊗ ∂ µ u -d 2 g µν u ⊗ ∂ µ ∂ ν u -1 2 g ρσ (∂ µ g ρσ ) v µ ⊗ u -d 2 v µ ⊗ ∂ µ u -1 2 g µν v µ ⊗ v ν -u ⊗ ∂ µ v µ .
Contribution of f 3 variables:
1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) + g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) u ⊗ u ⊗ u + d+2 2 g µν g ρσ (∂ ν g ρσ ) u ⊗ u ⊗ ∂ µ u + d+2 2 g µν g ρσ (∂ ν g ρσ ) u ⊗ ∂ µ u ⊗ u + d(d+2) 2 g µν u ⊗ ∂ µ u ⊗ ∂ ν u + 1 2 g ρσ (∂ µ g ρσ ) v µ ⊗ u ⊗ u + g µν (∂ ρ g ρν ) v µ ⊗ u ⊗ u + d+2 2 v µ ⊗ u ⊗ ∂ µ u + 1 2 g ρσ (∂ µ g ρσ ) u ⊗ u ⊗ v µ + g µν (∂ ρ g ρν ) u ⊗ u ⊗ v µ + d+2 2 u ⊗ ∂ µ u ⊗ v µ + 1 2 g ρσ (∂ µ g ρσ ) u ⊗ v µ ⊗ u + g µν (∂ ρ g ρν ) u ⊗ v µ ⊗ u + d+2 2 u ⊗ v µ ⊗ ∂ µ u + g ρσ (∂ µ g µν )(∂ ν g ρσ ) + 2g ρσ (∂ µ g νρ )(∂ ν g µσ ) u ⊗ u ⊗ u + (d + 2)(∂ ν g µν ) u ⊗ u ⊗ ∂ µ u + [g µν g ρσ (∂ ν g ρσ ) + 2(∂ ν g µν )] u ⊗ ∂ µ u ⊗ u + (d + 2)g µν u ⊗ ∂ µ u ⊗ ∂ ν u + [g µν g ρσ (∂ µ ∂ ν g ρσ ) + 2(∂ µ ∂ ν g µν )] u ⊗ u ⊗ u + [g µν g ρσ (∂ ν g ρσ ) + 2(∂ ν g µν )] u ⊗ u ⊗ ∂ µ u + [g µν g ρσ (∂ ν g ρσ ) + 2(∂ ν g µν )] u ⊗ u ⊗ ∂ µ u + (d + 2)g µν u ⊗ u ⊗ ∂ µ ∂ ν u.
which, once collected, gives
g µν g ρσ (∂ µ ∂ ν g ρσ ) + 2(∂ µ ∂ ν g µν ) + g ρσ (∂ µ g µν )(∂ ν g ρσ ) + 2g ρσ (∂ µ g νρ )(∂ ν g µσ ) 1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) + g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) u ⊗ u ⊗ u + (d + 6) 1 2 g µν g ρσ (∂ ν g ρσ ) + (∂ ν g µν ) u ⊗ u ⊗ ∂ µ u + d+4 2 g µν g ρσ (∂ ν g ρσ ) + 2(∂ ν g µν ) u ⊗ ∂ µ u ⊗ u + (d+2) 2 2 g µν u ⊗ ∂ µ u ⊗ ∂ ν u + (d + 2)g µν u ⊗ u ⊗ ∂ µ ∂ ν u + 1 2 g ρσ (∂ µ g ρσ ) + g µν (∂ ρ g ρν ) (v µ ⊗ u ⊗ u + u ⊗ v µ ⊗ u + u ⊗ u ⊗ v µ ) + d+2 2 (v µ ⊗ u ⊗ ∂ µ u + u ⊗ ∂ µ u ⊗ v µ + u ⊗ v µ ⊗ ∂ µ u)
Contribution of f 4 variables: We use the following symmetry: in previous three terms of f 4 , one goes from the first to the second right terms by the change (µ 2 , µ 3 , µ 4 ) → (µ 3 , µ 4 , µ 2 ) and from the second to the third terms via (µ, ν) → (ν, µ) and (µ 1 , µ 2 ) → (µ 2 , µ 1 ). So after the contraction of the first term and using that symmetry (which explains the factors 3 and 2), we get
3 -1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) -g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) -2g ρσ (∂ µ g µν )(∂ ν g ρσ ) -2g ρσ (∂ µ g µρ )(∂ ν g νσ ) -2g ρσ (∂ µ g νρ )(∂ ν g µσ ) u ⊗ u ⊗ u ⊗ u -(d + 4) (∂ µ g µν ) + 1 2 g µν g ρσ (∂ µ g ρσ ) 3 u ⊗ u ⊗ u ⊗ ∂ ν u + 2 u ⊗ u ⊗ ∂ ν u ⊗ u + u ⊗ ∂ ν u ⊗ u ⊗ u -1 2 (d + 4)(d + 2)g µν 2 u ⊗ u ⊗ ∂ µ u ⊗ ∂ ν u + u ⊗ ∂ µ u ⊗ u ⊗ ∂ ν u .
It worth to mention that all results of this section 4.1 are valid in arbitrary dimension d of the manifold.
Application of operators I d/2+p,k
We can now compute in (4.4) the application of
I d/2+p,k on each previous G(g) µ 1 ...µ 2p B µ 1 ...µ 2p k
. We restrict to even dimension d = 2m since we will prove in Section 5.2 that it is impossible to get explicit formulae when d is odd (explicit in the sense that we do want to avoid ending with formulae like (1.5)).
Lemma 2.6 tells us that we have only to apply the four operators given by (3.10) for k = 1, 2, 3, 4:
I m+k-1,k (r 0 , . . . , r k ) = (-1) k (m-1)(m)•••(m+k-2) k i=0 k j=0 j =i (r i -r j ) -1 r -(m-1) i , when m = 1, I k,k (r 0 , . . . , r k ) = (-1) k+1 (k-1)! k i=0 k j=0 j =i (r i -r j ) -1 r -1 i log r i , when m = 1 = lim m→1 I m+k-1,k (r 0 , . . . , r k ) (see Proposition 3.1).
They are applied on the above f k variables which have the form
A 1 ⊗ • • • ⊗ A k where each A i is equal to u, v µ ,
B 1 = B 2 = u. Case 2: (B 1 = ∂ µ u, B 2 = u), (B 1 = ∂ µ ∂ ν u, B 2 = u), (B 1 = v µ , B 2 = u), (B 1 = ∂ µ v µ , B 2 = u). Case 3: (B 1 = v µ , B 2 = v ν ), (B 1 = v µ , B 2 = ∂ µ u), (B 1 = ∂ µ u, B 2 = ∂ ν u).
We use the shortcut
J m+k-1,k [A 1 ⊗ • • • ⊗ A k ] := tr(m • I m+k-1,k [1 ⊗ A 1 ⊗ A 2 ⊗ • • • ⊗ A k ]).
We first give two examples of such computations of J. The first one corresponds to Case 0 and is given by the variable w in f 1 . Its contribution to a 1 is:
J m,1 [w] = tr(m • I m,1 (R 0 (u), R 1 (u))[1 ⊗ w]) = r 0 ,r 1 I m,1 (r 0 , r 1 ) tr(E 0 wE 1 ) = r 0 I m,1 (r 0 , r 0 ) tr(E 0 w) = tr(u -m w) since I m,1 (r 0 , r 0 ) = r -m 0 by (3.
2). Our second example comes from Case 3 and is given by the variable -1 2 g µν v µ ⊗ v ν in f 2 : we have
J m+1,2 [v µ ⊗ v ν ] = tr(m • I m+1,2 (R 0 (u), R 1 (u), R 2 (u))[1 ⊗ v µ ⊗ v ν ]) = r 0 ,r 1 ,r 2 I m+1,2 (r 0 , r 1 , r 2 ) tr(E 0 v µ E 1 v ν E 2 ) = r 0 ,r 1 I m+1,2 (r 0 , r 1 , r 0 ) tr(E 0 v µ E 1 v ν ).
Thus, using Corollary 4.2 and (A.3), its contribution to a 1 is
-1 2 g µν J m+1,2 [v µ ⊗ v ν ] = -gµν 2 r 0 ,r 1 1 2 [I m+1,2 (r 0 , r 1 , r 0 ) + I m+1,2 (r 1 , r 0 , r 1 )] tr(E 0 v µ E 1 v ν ) (4.13) = -1 2 g µν r 0 ,r 1 1 2m m-1 =0 r --1 0 r -m 1 tr(E 0 v µ E 1 v ν ) = -1 4m g µν m-1 =0 tr(u --1 v µ u -m v ν ).
We now consider all contributions. Case 1: Thanks to (3.2)
J m+k-1,k [u ⊗ • • • ⊗ u k ] = 1 k! tr(u -m+1 )
and we get
-1 2 g µν g ρσ (∂ µ ∂ ν g ρσ ) J m+1,2 [u ⊗ u] + g µν g ρσ (∂ µ ∂ ν g ρσ ) + 2(∂ µ ∂ ν g µν ) + g ρσ (∂ µ g µν )(∂ ν g ρσ ) + 2g ρσ (∂ µ g νρ )(∂ ν g µσ ) 1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) + g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) J m+2,3 [u ⊗ u ⊗ u] + 3 -1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) -g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) -2g ρσ (∂ µ g µν )(∂ ν g ρσ ) -2g ρσ (∂ µ g µρ )(∂ ν g νσ ) -2g ρσ (∂ µ g νρ )(∂ ν g µσ ) J m+3,4 [u ⊗ u ⊗ u ⊗ u] = α tr(u -m+1 )
where .4). So this contribution is
α := 1 3 (∂ µ ∂ ν g µν ) -1 12 g µν g ρσ (∂ µ ∂ ν g ρσ ) + 1 48 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) + 1 24 g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) -1 12 g ρσ (∂ µ g µν )(∂ ν g ρσ ) + 1 12 g ρσ (∂ µ g νρ )(∂ ν g µσ ) -1 4 g ρσ (∂ µ g µρ )(∂ ν g νσ ). (4.14) Case 2: (B 1 = ∂ µ u, B 2 = u) generating terms in tr(u -m ∂ µ u) with coefficient -1 2 g µν g ρσ (∂ ν g ρσ ) + (2m+6) 3! 1 2 g µν g ρσ (∂ ν g ρσ ) + (∂ ν g µν ) + 1 3! 2m+4 2 g µν g ρσ (∂ ν g ρσ ) + 2(∂ ν g µν ) -6(2m+4) 4! (∂ ν g µν ) + 1 2 g µν g ρσ (∂ ν g ρσ ) = m-2 6 1 2 g µν g ρσ (∂ ν g ρσ ) -(∂ ν g µν ) (B 1 = ∂ µ ∂ ν u, B 2 = u) generating terms in tr(u -m ∂ µ ∂ ν u) with coefficient -2m 2 g µν 1 2! + (2m + 2)g µν 1 3! = -m-2 6 g µν (B 1 = v µ , B 2 = u) generating terms in tr(u -m v µ ) with coefficient -1 2 1 2! g ρσ (∂ µ g ρσ ) + 3 3! 1 2 g ρσ (∂ µ g ρσ ) + g µν (∂ ρ g ρν ) = 1 2 g µν (∂ ρ g ρν ). (B 1 = ∂ µ v µ , B 2 = u) generating a term in tr(u -m ∂ µ v µ ) with coefficient -1 2! = -1 2 . Case 3: (B 1 = v µ , B 2 = ∂ µ u) generating terms in tr[E 0 v µ E 1 (∂ µ u)] with coefficient r 0 ,r 1 -mI m+1,2 (r 0 , r 1 , r 0 ) + 2m+2 2 [r 1 I m+2,3 (r 0 , r 1 , r 1 , r 0 ) + r 1 I m+2,3 (r 1 , r 1 , r 0 , r 1 ) + r 0 I m+2,3 (r 0 , r 0 , r 1 , r 0 )] = 1 2m m-1 =0 (m -2 ) r --1 0 r -m 1 thanks to (A
1 2m m-1 =0 (m -2 ) tr[u --1 v µ u -m (∂ µ u)] (B 1 = ∂ µ u, B 2 = ∂ ν u) generating terms in tr[E 0 (∂ µ u)E 1 (∂ ν u)] with coefficient 2(m + 1) 2 g µν r 0 I m+2,3 (r 0 , r 0 , r 1 , r 0 ) -2(m + 2)(m + 1)g µν [2r 2 0 I m+3,4 (r 0 , r 0 , r 0 , r 1 , r 0 ) + r 0 r 1 I m+3,4 (r 0 , r 0 , r 1 , r 1 , r 0 )] = g µν g 3 (r 0 , r 1 )
with the definition of g 3 in (A.5). Thanks to (A.6), this contribution is
1 6m g µν m-1 =0 (m 2 -2m -3 (m -1 -)) tr[u --1 (∂ µ u)u -m (∂ ν u)].
Main results
The recollection of all contributions (4.4) for a 1 (x) from Cases 0-3 is now ready for the first interesting result:
Theorem 4.3 Assume that P = -(u g µν ∂ µ ∂ ν + v ν ∂ ν + w) is a selfadjoint elliptic operator acting on L 2 (M, V
) for a 2m-dimensional boundaryless Riemannian compact manifold (M, g) and a vector bundle V over M where u, v µ , w are local maps on M with values in M N , with u positive and invertible. Then, its local
a 1 (x) heat-coefficient in (1.3) for x ∈ M is a 1 =g 2m tr(u -m w) + α tr(u -m+1 ) + m-2 6 1 2 g µν g ρσ (∂ ν g ρσ ) -(∂ ν g µν ) tr(u -m ∂ µ u) -m-2 6 g µν tr(u -m ∂ µ ∂ ν u) + 1 2 g µν (∂ ρ g ρν ) tr(u -m v µ ) -1 2 tr(u -m ∂ µ v µ ) -1 4m m-1 =0 g µν tr(u --1 v µ u -m v ν ) + 1 2m m-1 =0 (m -2 ) tr[u --1 v µ u -m (∂ µ u)] + m-1 =0 m-2 6 -(m--1) 2m g µν tr[u --1 (∂ µ u)u -m (∂ ν u)] (4. 15
)
where α is given in (4.14).
Since the operator P is not written in terms of objects which have simple (homogeneous) transformations by change of coordinates and gauge transformation, this result does not make apparent any explicit Riemannian or gauge invariant expressions. This is why we have not used normal coordinates until now. Nevertheless, from Lemma A.5, one can deduce after a long computation: Lemma 4.4 Under a gauge transformation, a 1 (x) given by (4.15) is gauge invariant.
As shown in A.4, with the help of a gauge connection A µ , one can change the variables (u, v µ , w) to variables (u, p µ , q) well adapted to changes of coordinates and gauge transformations (see (A.12) and (A.13)). For u µν = g µν u, (A.14) and (A.15) becomes
v µ = -1 2 g µν g ρσ (∂ ν g ρσ ) + ∂ ν g µν u + g µν (∂ ν u) + g µν (uA ν + A ν u) + p µ (4.16) w = -1 2 g µν g ρσ (∂ ν g ρσ ) + ∂ ν g µν uA µ + g µν (∂ ν u)A µ + g µν u(∂ µ A ν ) + g µν A µ uA ν + p µ A µ + q. ( 4.17)
Relations (4.16) and (4.17) can be injected into (4.15) to get an explicitly diffeomorphism and gauge invariant expression. In order to present the result of this straightforward computation, let us introduce the following notations. Given the Christoffel symbols Γ ρ µν :=
1 2 g ρσ (∂ µ g σν + ∂ ν g σµ -∂ σ g µν ), the Riemann curvature tensor R α βµν := ∂ µ Γ α βν -∂ ν Γ α βµ + Γ α µρ Γ ρ βν -Γ α νρ Γ ρ βµ
, and the Ricci tensor R µν := R ρ µρν , the scalar curvature R := g µν R µν computed in terms of the derivatives of the inverse metric is
R = g µν g ρσ (∂ µ ∂ ν g ρσ ) -(∂ µ ∂ ν g µν ) + g ρσ (∂ µ g µν )(∂ ν g ρσ ) + 1 2 g ρσ (∂ µ g νρ )(∂ ν g µσ ) -1 4 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) -5 4 g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ), (4.18
) and one has
g µν Γ ρ µν = 1 2 g ρσ g αβ (∂ σ g αβ ) -∂ σ g ρσ , Γ σ σρ = -1 2 g αβ (∂ ρ g αβ ).
Let ∇ µ be the (gauge) covariant derivative on V (and its related bundles):
∇ µ s := ∂ µ s + A µ s for any section s of V.
From (A.13), u, p µ and q are sections of the endomorphism vector bundle End(V ) = V * ⊗ V (while v µ and w are not), so that ∇ µ u = ∂ µ u + [A µ , u] (and the same for p µ and q). We now define ∇ µ , which combines ∇ µ and the linear connection induced by the metric g:
∇ µ u := ∂ µ u + [A µ , u] = ∇ µ u, ∇ µ p ρ := ∂ µ p ρ + [A µ , p ρ ] + Γ ρ µν p ν = ∇ µ p ρ + Γ ρ µν p ν ∇ µ ∇ ν u := ∂ µ ∇ ν u + [A µ , ∇ ν u] -Γ ρ µν ∇ ρ u = ∂ µ ∇ ν u + [A µ , ∇ ν u] -Γ ρ µν ∇ ρ u , so that, if ∆ ∇ := g µν ∇ µ ∇ ν is the connection Laplacian ∆ ∇ u = g µν ∇ µ ∇ ν u -1 2 g µν g αβ (∂ µ g αβ ) -∂ µ g µν ∇ ν u, ∇ µ p µ = ∇ µ p µ -1 2 g αβ (∂ µ g αβ )p µ .
Any relation involving u, p µ , q, g and ∇ µ inherits the homogeneous transformations by changes of coordinates and gauge transformations of these objects. Let us state now the result of the computation of a 1 (x) in terms of (u, p µ , q): Theorem 4.5 Assume that P = -(|g| -1/2 ∇ µ |g| 1/2 g µν u∇ ν + p µ ∇ µ + q) is a selfadjoint elliptic operator acting on L 2 (M, V ) for a 2m-dimensional boundaryless Riemannian compact manifold (M, g) and a vector bundle V over M where u, p µ , q are sections of endomorphisms on V with u positive and invertible. Then, its local
a 1 (x) heat-coefficient in (1.3) for x ∈ M is a 1 =g 2m 1 6 R tr[u 1-m ] + tr[u -m q] -m+1 6 tr[u -m ∆ ∇ u] -1 2 tr[u -m ∇ µ p µ ] + m-1 =0 2m 2 -4m+3 12m
-(m--1) 2m g µν tr[u --1 ( ∇ µ u)u -m ( ∇ ν u)] + 1 2m m-1 =0 (m -2 -1) tr[u --1 p µ u -m ( ∇ µ u)] -1 4m m-1 =0 g µν tr[u --1 p µ u -m p ν ] (4.19)
where
g 2m = √ |g| 4 m π m .
Proof This can be checked by an expansion of the RHS of (4.19). A more subtle method goes using normal coordinates in (4.15), (4.16), (4.17), knowing that a 1 (x) is a scalar and (u, p µ , q) are well adapted to change of coordinates. The coefficients in the sum of the second line of this expression have been symmetrized ↔ (m --1) using the trace property and the change of variable → m --1.
Corollary 4.6
The previous formula can be written in a more compact way as
a 1 =g 2m 1 6 R tr[u 1-m ] + tr[u -m q] -m+1 6 tr[u -m ∆ ∇ u] -1 2 tr[u -m ∇ µ p µ ] + 1 4m m-1 =0 g µν tr[u --1 [(2 -1) ∇ µ u -p µ ]u -m [(2 -1) ∇ ν u + p ν ]] + 1 2m m-1 =0 g µν tr[u( ∇ µ u --1 )u( ∇ ν u -m )] -m 2 -2m+3 3m m-1 =0 g µν tr[u --1 ( ∇ µ u)u -m ( ∇ ν u)] (4.20)
where p µ = g µν p ν .
Proof This relation can be obtained by expanding the second and third lines of (4.20) using the combinatorial equality
m-1 =0 g µν tr[u(∂ µ u --1 )u(∂ ν u -m )] = m-1 =0 m + (m --1) g µν tr[u --1 (∂ µ u)u -m (∂ ν u)]
that is a tedious computation.
The above formula (4.20) is not the unique way to write (4.19). Some variations are possible using for instance the relations
g µν tr[u(∂ µ ∂ ν u -m )] = -m g µν tr[u -m (∂ µ ∂ ν u)] + (m + 1) m-1 =0 g µν tr[u --1 (∂ µ u)u -m (∂ ν u)], m-1 =0 g µν tr[u(∂ µ u -)(∂ ν u -m )] = m-1 =0 m-1 2 + (m --1) g µν tr[u --1 (∂ µ u)u -m (∂ ν u)], g µν tr[∂ µ (u -m ∂ ν u)] = g µν tr[u -m (∂ µ ∂ ν u)] - m-1 =0 g µν tr[u --1 (∂ µ u)u -m (∂ ν u)],
which make all appear the expression
g µν tr[u --1 (∂ µ u)u -m (∂ ν u)] in (4.19).
Corollary 4.7 When M has dimension 4, the last two terms of (4.20) compensate and the formula simplifies as
a 1 = |g| 1/2 16 π 2 1 6 R tr[u -1 ] + tr[u -2 q] -1 2 g µν tr[u -2 ∇ µ ∇ ν u] -1 2 tr[u -2 ∇ µ p µ ] + 1 4 g µν tr[u -2 ( ∇ µ u -p µ )u -1 ( ∇ ν u + p ν )] . ( 4.21)
Remark 4.8 For the computation of a r (x) with r ≥ 2, directly in terms of variables (u, p µ , q), the strategy is to use normal coordinates from the very beginning, which simplifies the computation of terms B µ 1 ...µ 2p k of (4.4). Then an equivalent result to Theorem 4.3 would be obtained, but only valid in normal coordinates. Thus, by the change of variables (4.16), (4.17), a final result as Theorem 4.5 could be calculated. Remark 4.9 In the present method, the factor 1 6 R is explicitly and straightforwardly computed from the metric entering u µν = g µν u, as in [START_REF] Avramidi | Non-Laplace type operators on manifolds with boundary, in: Analysis, geometry and topology of elliptic operators[END_REF] for instance. Many methods introduce R using diffeomorphism invariance and compute the coefficient 1 6 using some "conformal perturbation" of P (see [START_REF] Gilkey | Asymptotic formulae in spectral geometry[END_REF]Section 3.3]).
Case of scalar symbol
: u(x) = f (x) 1 N
Let us consider now the specific case u(x) = f (x) 1 N , where f is a nowhere vanishing positive function. Then (4.16) simplifies to
v µ = [-1 2 g µν g ρσ (∂ ν g ρσ ) + ∂ ν g µν ]f 1 N + g µν (∂ ν f ) 1 N + 2f g µν A ν
+ p µ and we can always find A µ such that p µ = 0:
A µ = 1 2 g µν f -1 v ν + [ 1 2 g ρσ (∂ µ g ρσ ) -g µν (∂ ρ g ρν ) -f -1 (∂ µ f )] 1 N .
One can check, using (A.8), that A µ satisfies the correct gauge transformations. This means that P can be written as
P = -(|g| -1/2 ∇ µ |g| 1/2 g µν f ∇ ν + q)
where the only matrix-dependencies are in q and A µ . Since u is in the center, ∇ µ u = (∂ µ f ) 1 N and (4.19) simplifies as
a 1 = g 2m f -m [ N 6 f R + tr[q] -m+1 6 N g µν (∂ µ ∂ ν f ) + m+1 6 N g µν Γ ρ µν (∂ ρ f ) + m 2 -m+1 12 N g µν f -1 (∂ µ f )(∂ ν f )]
in which A µ does not appears. Now, if f is constant, we get the well-known result (see [START_REF] Gilkey | Asymptotic formulae in spectral geometry[END_REF]Section 3.3]):
a 1 = g 2m f -m N 6 R + tr[q] .
About the method
Existence
For the operator P given in (1.1), the method used here assumes only the existence of asymptotics (1.3). This is the case when P is elliptic and selfadjoint. Selfadjointness of P on L 2 (M, V ) is not really a restriction since we remark that given an arbitrary family of u µν satisfying (1.2), skewadjoint matrices ṽµ and a selfadjoint matrix w, we get a formal selfadjoint elliptic operator P defined by (1.1) where
v µ = ṽµ + (∂ ν log|g| 1/2 ) u µν + ∂ ν u µν , w = w + 1 2 [-∂ µ ṽµ + (∂ µ log|g| 1/2 ) ṽµ ].
A crucial step in our method is to be able to compute the integral (2.18) for a general u µν . The case u µν = g µν u considered in Section 4 makes that integral manageable.
On explicit formulae for u µν = g µν u
For u µν = g µν u, the proposed method is a direct computational machinery. Since the method can be computerized, this could help to get a r in Case 2 (d even and r < d/2). Recall the steps: 1) to expand the arguments of the initial f k 's, 2) to contract with the tensor G(g), 3) to apply to the corresponding operators I α,k , 4) to collect all similar terms. Further eventual steps: 5) to change variables to (u, p µ , q), 6) to identify (usual) Riemannian tensors and covariant derivatives (in terms of A µ and Christoffel symbols).
Is it possible to get explicit formulae for a r from the original ingredients (u, v µ , w) of P ? An explicit formula should look like (4.15) or (4.19). This excludes the use of spectral decomposition of u as in (1.5) which could not be recombined as
finite sum tr[h (0) (u)B 1 h (1) (u)B 2 • • • B k h (k) (u)]
where the h i are continuous functions and the B i are equal to u, v µ , w or their derivatives. The obstruction to get such formula could only come from the operators I α,k and not from the arguments B i . Thus the matter is to understand the u-dependence of I α,k .
Let us consider I α,k as a map u → I α,k (u). An operator map u → A(u) ∈ B( H k ) is called u-factorizable (w.r.t. the tensor product) if it can be written as
A(u) = finite sum R 0 (h (0) (u)) R 1 (h (1) (u)) • • • R k (h (k) (u))
where the h (i) are continuous functions on R * + . Lemma 5.1 Let u → A(u) = F (R 0 (u), . . . , R k (u)) the operator map defined by a continuous function F : (R * + ) k+1 → R * + . Then A is u-factorizable iff F is decomposed as
F (r 0 , . . . , r k ) = finite sum h (0) (r 0 )h (1) (r 1 ) • • • h (k) (r k ) (5.1)
for continuous functions h (i) .
Proof Let λ i be the eigenvalues of u and π i the associated eigenprojections. If F is decomposed, then by functional calculus, one gets
A(u) = i 0 ,...,i k F (λ i 0 , . . . , λ i k ) R 0 (π i 0 ) • • • R k (π i k ) = i 0 ,...,i k finite sum h (0) (λ i 0 ) • • • h (k) (λ i k ) R 0 (π i 0 ) • • • R k (π i k ) = finite sum R 0 (h (0) (u)) • • • R k (h (k) (u)).
If A is u-factorizable, then this computation can be seen in the other way around to show that F is decomposed.
The general solutions (3.10) and (3.11) for the operators I α,k are not manifestly u-factorizable because of the factors (r i -r j ) -1 . For Case 2, Proposition 3.5 shows that I α,k is indeed a u-factorizable operator (see also (3.15)).
The explicit expressions of the operators I α,k ∈ B( H k ) don't give a definitive answer about the final formula: for instance, when applied to an argument containing some u's, the expression can simplify a lot (see for example Case 1 of Section 4.2). Moreover, the trace and the multiplication introduce some degrees of freedom in the writing of the final expression. This leads us to consider two operators A and A as equivalent when
tr • m • κ * • A[B 1 ⊗ • • • ⊗ B k ] = tr • m • κ * • A [B 1 ⊗ • • • ⊗ B k ] for any B 1 ⊗ • • • ⊗ B k ∈ H k . The equivalence is reminiscent of the lift from B(H k , M N ) to B( H k ) (see Remark 2.
2) combined with the trace.
Lemma 5.2 With
I α,k (r 0 , . . . , r k-1 ) := I α,k (r 0 , . . . , r k-1 , r 0 ) (= lim r k →r 0 I α,k (r 0 , . . . , r k-1 , r k )), the operator I α,k := I α,k (R 0 (u), . . . , R k-1 (u)) = I α,k (R 0 (u), . . . , R k-1 (u), R 0 (u)) ∈ B( H k ) is equivalent with the original I α,k . Proof For any B 1 ⊗ • • • ⊗ B k ∈ H k , using previous notations, one has tr(• m • κ * • I α,k [B 1 ⊗ • • • ⊗ B k ]) = i 0 ,...,i k I α,k (λ i 0 , . . . , λ i k ) tr π i 0 B 1 π i 1 • • • B k π i k = i 0 ,...,i k I α,k (λ i 0 , . . . , λ i k ) tr π i k π i 0 B 1 π i 1 • • • B k = i 0 ,...,i k I α,k (λ i 0 , . . . , λ i k ) δ i 0 ,i k tr π i 0 B 1 π i 1 • • • B k = i 0 ,...,i k-1 I α,k (λ i 0 , . . . , λ i k-1 ) tr π i 0 B 1 π i 1 • • • B k = tr • m • κ * • I α,k [B 1 ⊗ • • • ⊗ B k ].
The equivalence between I α,k and I α,k seems to be the only generic one we can consider.
We have doubts on the fact that the operators I α,k can be always equivalent to some u-factorizable operators. For instance, Proposition 5. [START_REF] Avramidi | Non-Laplace type operators on manifolds with boundary, in: Analysis, geometry and topology of elliptic operators[END_REF] The contribution to a 1 of (4.13) generates always a non-explicit formula when the dimension d is odd, unless u and v µ have commutation relations.
In fact,
h d (r 1/2 0 , r 1/2 1 ) := 1 2 [I d/2+1,2 (r 0 , r 1 , r 0 ) + I d/2+1,2 (r 1 , r 0 , r 1 )] = 1 d (r 1 r 0 ) -d/2 r d/2 0 -r d/2 1 r 0 -
d (xy) d h d (x, y) = x 2m -y 2m x 2 -y 2 = m-1 =0 x 2m-2 -2 y 2 . Assume now d = 2m + 1. Then, d (xy) d h d (x, y) = x 2m+1 -y 2m+1 x 2 -y 2 = y 2m 1 x+y + m-1 =0 x 2m-1-2 y 2 ,
(for m = 0 there is no sum). This expression is not decomposable since the map (x + y) -1 is not decomposable: Suppose we have such decomposition (x + y) -1 = N =1 h 1, (x) h 2, (y) for N ∈ N * . Let (x i , y i ) 1≤i≤N be N points in (R * + ) 2 and consider the N × N -matrix c i,j := (x i + y j ) -1 . Then
det(c) = N i,j=1 (x i + y j ) -1 1≤i<j≤N (x i -x j )(y i -y j ).
This expression shows that we can choose a family (x i , y i ) 1≤i≤N such that det(c) = 0. With such a family, define the two matrices a i, := h 1, (x i ) and b i, := h 2, (y i ). Then
c i,j = (x i + y j ) -1 = N =1 h 1, (x) h 2, (y j ) = N =1 a i, b j, so that, in matrix notation, c = a • t b, which implies that det(a) = 0 and det(b) = 0. From (x + y j ) = N =1 h 1, (x) b j, , we deduce h 1, (x) = j b -1 j, (x + y j ) -1 and, similarly, h 2, (y) = i a -1 i, (x i + y) -1 . This gives (x + y) -1 = i,j, a -1 i, b -1 j, (x + y j ) -1 (x i + y) -1 .
This expression must hold true on (R * + ) 2 and when x, y → 0 + , the LHS goes to +∞ while the RHS remains bounded. This is a contradiction.
Explicit formulae of a r for scalar symbols (u
(x) = f (x) 1 N )
When u is central, the operator defined by I α,k (r 0 , . . . , r k ) is equivalent to the operator defined by
I α,k (r 0 ) := lim r j →r 0 I α,k (r 0 , . . . , r k ) = 1 k! r -α 0 .
Thus (4.4) reduces to
1 (2π) d dξ ξ µ 1 • • • ξ µ 2p f k (ξ)[ B µ 1 ...µ 2p k ] = g d m[u -d/2-p ⊗ G(g) µ 1 ...µ 2p B µ 1 ...µ 2p k ] = g d G(g) µ 1 ...µ 2p u -d/2-p m[B µ 1 ...µ 2p k
] so the tedious part of the computation of a r is to list all arguments B µ 1 ...µ 2p k and to contract them with G(g) µ 1 ...µ 2p . This can be done with the help of a computer in any dimension for an arbitrary r. All formulae are obviously explicit. An eventual other step is to translate the results in terms of diffeomorphic and gauge invariants.
Application to quantum field theory
Second-order differential operators which are on-minimal have a great importance in physics and have justified their related heat-trace coefficients computation (to quote but a few see almost all references given here). For instance in the interesting work [START_REF] Moss | Invariants of the heat equation for non-minimal operators[END_REF][START_REF] Toms | Local momentum space and the vector field[END_REF], the operators P given in (1.1) are investigated under the restriction
u µν = g µν 1 + ζ X µν ,
where ζ is a parameter (describing for ζ = 0 the minimal theory), under the constraints for the normalized symbol X(σ) := X µν σ µ σ ν with |σ| 2 g = g µν σ µ σ ν = 1 given by
X(σ) 2 = X(σ), for any σ ∈ S 1 g , ( 5.2)
∇ ρ X µν = 0.
(5.3)
Here, ∇ ρ is a covariant derivative involving gauge and Christoffel connections. In covariant form, the operators are
P = -(g µν ∇ µ ∇ ν + ζ X µν ∇ µ ∇ ν + Y ).
Despite the restrictions (5.2)-( 5.3) meaning that the operator X is a projector and the tensorendomorphism u is parallel, this covers the case of operators describing a quantized spin-1 vector fields like
P µ ν = -(δ µ ν ∇ 2 + ζ ∇ µ ∇ ν + Y µ ν )
, or a Yang-Mills fields like
P µ ν = -(δ µ ν D 2 -ζ D µ D ν + R µ ν -2F µ ν )
where D µ := ∇ ν + A µ and A µ , F µν are respectively the gauge and strength fields, or a perturbative gravity (see [START_REF] Moss | Invariants of the heat equation for non-minimal operators[END_REF] for details).
Remark first that
H(x, ξ) = u µν ξ µ ξ ν = |ξ| 2 g [1 + ζ X (ξ/|ξ| g )], so that e -H(x,ξ) = [e -(1+ζ)|ξ| 2 g -e -|ξ| 2 g ] X + e -|ξ| 2 g 1 V .
Thus (1.10) becomes
a 0 (x) = 1 (2π) d dξ [e -(1+ζ)|ξ| 2 g -e -|ξ| 2 g ] tr X + 1 (2π) d e -|ξ| 2 g tr 1 V = [ σ∈S 1 g dΩ(σ) tr X(σ)] [ 1 (2π) d ∞ 0 dr r d-1 (e -(1+ζ)r 2 -e -r 2 )] + g d N = Γ(d/2) 2(2π) d [(1 + ζ) -d/2 -1] [ σ∈S 1 g dΩ(σ) tr X(σ)] + g d N.
One has
g d := 1 (2π) d R d dξ e -|ξ| 2 g(x) = |g| 1/2 2 d π d/2 and σ∈S 1 g dΩ(σ) tr X(σ) = tr(X µν ) σ∈S 1 g dΩ(σ) σ µ σ ν .
Using (4.3), we can get
σ∈S 1 g dΩ(σ) σ µ 1 • • • σ µ 2p = 2 (2π) d g d Γ(d/2+p) G(g) µ 1 ...µ 2p ,
so that we recover [18, (2.34)]:
a 0 (x) = g d N + d -1 g µν tr(X µν ) (1 + ζ) -d/2 -1 .
Now, let us consider the more general case
u µν = g µν u + ζ X µν (5.4)
where u is a strictly positive matrix u(x) ∈ M N as in Section 4, X µν as before and assume [u(x), X µν (x)] = 0 for any x ∈ (M, g) and µ, ν. Previous situation was u = 1 V , the unit matrix in M N . Once Lemma 2.1 has been applied, the difficulty to compute a r (x) is to evaluate the operators T k,p defined by (2.15). Here we have u
[σ] = u + ζ X(σ)
, where the two terms commute. With the notation
X i := R i [ X(σ)],
we have
C k (s, u[σ]) = (1 -s 1 )R 0 [u + ζ X(σ)] + (s 1 -s 2 )R 1 [u + ζ X(σ)] + . . . • • • + (s k-1 -s k )R k-1 [u + ζ X(σ)] + s k R k [u + ζ X(σ)] = C k (s, u) + ζ (1 -s 1 ) X 0 + (s 1 -s 2 ) X 1 + • • • + (s k-1 -s k ) X k-1 + s k X k
so that, using the fact that each X i is a projection with eigenvalues i = 0, 1:
e -r 2 C k (s,u[σ]) = e -r 2 C k (s,u) ( i )∈{0,1} k+1 e -ζr 2 [(1-s 1 ) 0 +(s 1 -s 2 ) 1 +•••+(s k-1 -s k ) k-1 +s k k ] × × X 0 0 (1 -X 0 ) 1-0 • • • X k k (1 -X k ) 1-k . Notice that X i i (1 -X i ) 1-i = [(1 -i )g µν + (2 i -1)R i (X µν )]σ µ σ ν . With the definition I ( i ), ζ α,k (r 0 , r 1 , . . . , r k ) := I α,k (r 0 + ζ 0 , r 1 + ζ 1 , . . . , r k + ζ k ), the operators T k,p of (2.18) become T k,p = Γ(d/2+p) 2(2π) d S d-1 g dΩ g (σ) σ µ 1 • • • σ µ 2p σ α 0 σ β 0 • • • σ α k σ β k × ( i )∈{0,1} k+1
I ( i ), ζ d/2+p,k R 0 (u), R 1 (u), . . . , R k (u) × × [(1 -0 )g α 0 β 0 + (2 0 -1)R 0 (X α 0 β 0 )] • • • [(1 -k )g α k β k + (2 k -1)R k (X α k β k )]. (5.5)
The computations of these operators are attainable using the method given in Section 4, but with some more complicated combinatorial expressions requiring a computer. We will still get explicit formulae for a 1 (x) in any even dimension as in Theorem 4.5. The main combinatorial computation is to make the contractions between G(g) µ 1 ...µ 2p α 0 β 0 ...α k β k from the first line of (5.5) with the operators of the last line applied on variables B µ 1 ...µ 2p k . When u = 1 V in (5.4), the operators in the second line of (5.5) are just the multiplication by the numbers
I d/2+p,k (1 + ζ 0 , 1 + ζ 1 , . . . , 1 + ζ k ).
Thus one gets explicit formulae for all coefficients a r (x) in any dimension.
Conclusion
On the search of heat-trace coefficients for Laplace type operators P with non-scalar symbol, we develop, using functional calculus, a method where we compute some operators T k,p acting on some (finite dimensional) Hilbert space and the arguments on which there are applied. This splitting allows to get general formulae for these operators and so, after a pure computational machinery will yield all coefficients a r since there is no obstructions other than the length of calculations. The method is exemplified when the principal symbol of P has the form g µν u where u is a positive invertible matrix. It gives a 1 in any even dimension which is written both in terms of ingredients of P (analytic approach) or of diffeomorphic and gauge invariants (geometric approach). As just said, the method is yet ready for a computation of a r with r ≥ 2 for calculators patient enough, as well for the case g µν u + ζ X µν as in Section 5.4. Finally, the method answers a natural question about explicit expressions for all coefficients a r : we proved that u-factorizability is always violated when the dimension is odd and it is preserved in even dimension d when d/2 -r > 0. We conjecture this always holds true in all even dimension.
A. Appendix
A.1. Some algebraic results
Let A be a unital associative algebra over C, with unit 1. Denote by (C * (A, A) = ⊕ k≥0 C k (A, A), δ) the Hochschild complex where C k (A, A) is the space of linear maps ω :
A ⊗ k → A and (δω)[b 0 ⊗ • • • ⊗ b k ] = b 0 ω[b 1 ⊗ • • • ⊗ b k ] + k i=0 (-1) i ω[b 0 ⊗ • • • ⊗ b i-1 ⊗ b i+1 ⊗ • • • ⊗ b k ] + (-1) k+1 ω[b 0 ⊗ • • • ⊗ b k-1 ]b k for any ω ∈ C k (A, A) and b 0 ⊗ • • • ⊗ b k ∈ A ⊗ k .
Define the differential complex (T * A = ⊕ k≥0 A ⊗ k+1 , d) with
d(a 0 ⊗ • • • ⊗ a k ) = 1 ⊗ a 0 ⊗ • • • ⊗ a k + k i=0 (-1) i a 0 ⊗ • • • ⊗ a i-1 ⊗ 1 ⊗ a i+1 ⊗ • • • ⊗ a k + (-1) k+1 a 0 ⊗ • • • ⊗ a k ⊗ 1
for any a 0 ⊗ • • • ⊗ a k ∈ A ⊗ k+1 . Both C * (A, A) and T * A are graded differential algebras, the first one for the product
(ω ω )[b 1 ⊗ • • • ⊗ b k+k ] = ω[b 1 ⊗ • • • ⊗ b k ] ω [b k+1 ⊗ • • • ⊗ b k+k ]
and the second one for
(a 0 ⊗ • • • ⊗ a k )(a 0 ⊗ • • • ⊗ a k ) = a 0 ⊗ • • • ⊗ a k a 0 ⊗ • • • ⊗ a k ∈ A ⊗ k+k +1 = T k+k A.
The following result was proved in [START_REF] Masson | Géométrie non commutative et applications à la théorie des champs[END_REF]: Recall that an associative algebra A is central simple if it is simple and its center is C. Central simple algebras have the following properties, proved for instance in [START_REF] Lam | A first course in noncommutative rings[END_REF]: [START_REF] Avramidi | Gauged gravity via spectral asymptotics of non-Laplace type operators[END_REF] If B is a central simple algebra and C is a simple algebra, then B ⊗ C is a simple algebra. If moreover C is central simple, then B ⊗ C is also central simple.
Lemma A.
Proof (of Proposition A.1) A pure combinatorial argument shows that ι is a morphism of graded differential algebras for the structures given above.
Assume that A is central simple. The space A ⊗ k+1 is an associative algebra for the product
(a 0 ⊗ • • • ⊗ a k ) • (a 0 ⊗ • • • ⊗ a k ) = a 0 a 0 ⊗ • • • ⊗ a k a k , which is central simple by Lemma A.2. Let J k = Ker ι ∩ A ⊗ k+1 . Then, for any α = i a 0,i ⊗ • • • ⊗ a k,i ∈ J k , any β = b 0 ⊗ • • • ⊗ b k ∈ A ⊗ k+1 , and any c 1 ⊗ • • • ⊗ c k ∈ A ⊗ k , one has ι(α • β)[c 1 ⊗ • • • ⊗ c k ] = i a 0,i b 0 c 1 a 1,i b 1 c 2 • • • b k-1 c k a k,i b k = ι(α)[b 0 c 1 ⊗ • • • ⊗ b k-1 c k ]b k = 0, so that α • β ∈ J k .
The same argument on the left shows that J k is a two-sided ideal of the algebra A ⊗ k+1 , which is simple. Since ι is non zero (ι(1 ⊗ • • • ⊗ 1) = 0), one must have J k = 0: this proves that ι is injective.
The algebra A = M N is central simple [START_REF] Lam | A first course in noncommutative rings[END_REF], so that ι is injective, and moreover the spaces C k (A, A) and A ⊗ k+1 have the same dimensions: this shows that ι is an isomorphism.
Remark A. [START_REF] Avramidi | Non-Laplace type operators on manifolds with boundary, in: Analysis, geometry and topology of elliptic operators[END_REF] In [START_REF] Masson | Géométrie non commutative et applications à la théorie des champs[END_REF], the graded differential algebras C * (A, A) and T * A are equipped with a natural Cartan operations of the Lie algebra A (where the bracket is the commutator) and it is shown that ι intertwines these Cartan operations.
A.2. Some combinatorial results
Lemma A. [START_REF] Avramidi | Heat kernel method and its applications[END_REF] Given a family a 0 , . . . , a r of different complex numbers, we have Thus det
1 1 • • • 1 a 0 a 1 • • • a r a 2 0 a 2 1 • • • a 2 r . . . . . . • • • . . . a r-1 0 a r-1 1 • • • a r-1 r a s 0 a s 1 • • • a s r = β(s)
after an expansion of the determinant with respect to the last line. But this is zero since the last line coincides with the line s + 1 of the matrix when s = 0, . . . , r -1.
ii) The irreducible fraction expansion of f (z) = 1 (z-a 0 )(z-a 1 )•••(z-ar) is r n=0 Res(f )(an) z-an yielding (A.2).
A.3. Few properties of functions I d/2+k,k
We collect here some special combination of function I d/2+k-r,k . Let g 1 (r 0 , r 1 ) := I d/2+1,2 (r 0 , r 1 , r 0 ). Then
e -λ tr [b 2r (x, ξ, λ)] (1.4) where λ belongs to a anticlockwise curve C around R + and (x, ξ) ∈ T * (M ). Here the functions b 2r are defined recursively by b 0 (x, ξ, λ) := (d 2 (x, ξ) -λ) -1 , b r (x, ξ, λ) := -r=j+|α|+2-k j<r
Lemma 2 . 6
26 For any dimension d of the manifold M and x ∈ M , given r ∈ N, the computation of a r (P ) needs exactly to know each of the 3r + 1 operators T k,k-r where r ≤ k ≤ 4r or equivalently to know I d/2,r , I d/2+1,r+1 , . . . , I d/2+3r,4r .
): Proposition 3 . 1
31 Properties of functions I α,k : i) Recursive formula valid for 1 = α ∈ R and k ∈ N * :
2+3r,4r used to compute ar(x) . (3.4) Case 3: d is odd, relation (3.1) never fails and
. 11 )
11 Proof Let d/2 = m ∈ N * . Then using the continuity of Proposition 3.1 with(3.10)
Proposition 3 . 4
34 r , I d/2+1,r+1 , . . . , I d/2+3r,4r . Case 3: d is odd. If d/2 -r = + 1/2 with ∈ N, the root and its follower are
Proposition 3 . 5
35 Case 2: d even and r < d/2. For k ∈ N * and N n = d/2 -r + k ≥ k + 1,
Proposition A. 1
1 The map ι :T * A → C * (A, A) defined by ι(a 0 ⊗ • • • ⊗ a k )[b 1 ⊗ • • • ⊗ b k ] := a 0 b 1 a 1 b 2 • • • b k a k is a morphism of graded differential algebras.If A is central simple, then ι is injective, and if A = M N (algebra of N × N matrices) then ι is an isomorphism.
2 )
2 =n (a n -a m ) -1 = 0, for any s ∈ {0, 1, . . . , r -=n (a n -a m ) -1 , ∀z ∈ C\{a 0 , . . . , a r }. (A.Proof i) If α(s) := r n=0 a s n r m=0, m =n (a n -a m ) -1 and β(s) := α(s) 0≤l<k≤r (a k -a l ) = r n=0 (-1) r-n a s n r 0≤l<k≤r k =n, l =n (a k -a l )then it is sufficient to show that β(s) = 0 for s = 0, . . . , r-1. Recall first that the determinant of a Vandermonde p × p-matrix is det -b j ).
1 2 [g 1 1 , 1 2d(r 0 r 1 ) 3 2 r
1111132 (r 0 , r 1 ) + g 1 (r 1 , r0 )] = 1 d (r 0 r 1 ) -d/2 r d/2 if d = 2m. (A.3) Let g 2 (r 0 , r 1 ) := -d 2 I d/2+1,2 (r 0 , r 1 , r 0 ) + d+2 2 [r 1 I d/2+2,3 (r 0 , r 1 , r 1 , r 0 ) + r 1 I d/2+2,3 (r 1 , r 1 , r 0 , r 1 ) + r 0 I d/2+2,3 (r 0 , r 0 , r 1 , r 0 )]. Then g 2 (r 0 , r 1 ) = -d/2 (d r (r 0 , r 1 ) := (d+2) 2 0 I d/2+2,3 (r 0 , r 0 , r 1 , r 0 ) + (d + 2)(d + 4)[-r 0 2 I d/2+3,4 (r 0 , r 0 , r 0 , r 1 , r 0 ) -1 2 r 0 r 1 I d/2+3,4 (r 0 , r 0 , r 1 , r 1 , r 0 )]. (A.5)
w or their (at most second order) derivatives. Thus we can apply Lemma 4.1 since, at most, only two A i (relabeled indistinctly B 1 , B 2 ) are different from u. This method generates only four cases modulo B 1 ↔ B 2 : Case 0: only one variable, namely w. Case 1:
r 1 and we prove below that the map h d (x, y) for (x, y) ∈ (R * + ) 2 is explicit only when d is even: Lemma 5.4 The function h d (x, y) = finite h (1) (x) h (2) (y) with continuous functions h (i) if and only if d is even. Proof It is equivalent to show that function d (xy) d h d (x, y) has or not such decomposition. Assume that d = 2m. Then we have the decomposition
(α-1) (r k-1 -r k ) -1 [(r 0 + s 1 (r 1 -r 0 ) + • • • + s k-1 (r k-1 -r k-2 ) + s k-1 (r k -r k-1 )) -(α-1) -(r 0 + s 1 (r 1 -r 0 ) + • • • + s k-1 (r k-1 -r k-2 )) -(α-1) ]
Acknowledgments
We would like to thank Andrzej Sitarz and Dimitri Vassilevich for discussions during the early stage of this work and Thomas Krajewski for his help with Lemma A.4.
Then, --→ x , let J ν µ := ∂ µ x µ and |J| := det(J ν µ ), so that
- --→ γs (where "g.t." stands for gauge transformation). The proof of the following lemma is a straightforward computation: Lemma A. [START_REF] Gilkey | Heat equation asymptotics of "nonminimal" operators on differential forms[END_REF] The differential operator P is well defined on sections of V if and only if u µν , v µ and w have the following transformations:
--→ γu µν γ -1 , (A.7)
--→ w, w
--→ γwγ -1 + γv µ (∂ µ γ -1 ) + γu µν (∂ µ ∂ ν γ -1 ). (A.9)
As these relations show, neither v µ nor w have simple transformations under changes of coordinates or gauge transformations. It is possible to parametrize P using structures adapted to changes of coordinates and gauge transformations. Let us fix a (gauge) connection A µ on V and denote by ∇ µ := ∂ µ +A µ its associated covariant derivative on sections of V . For any section s of V , one then has:
--→ γ∇ µ s.
(A.10)
Lemma A. [START_REF] Branson | Heat Equation Asymptotics of Elliptic Operators with Non-scalar Leading Symbol[END_REF] The differential operator
is well defined on sections of V if and only if u µν , p µ and q have the following transformations:
--→ q, (A.12)
--→ γu µν γ -1 , p µ g.t.
--→ γp µ γ -1 , q g.t.
--→ γqγ -1 . (A.13)
It is equal to P when (the u µν are the same in P and Q)
Proof Combining (A.10), (A.12) and (A.13), the operator s → -(p µ ∇ µ +q)s is well behaved under changes of coordinates and gauge transformations. Let X := |g| -1/2 ∇ µ |g| 1/2 u µν ∇ ν (a matrix valued "Laplace-Beltrami operator"). Then, using (A.10) and (A.12), one gets --→ γ|g| 1/2 u µν ∇ ν s (it behaves like a section of V ), so that Xs g.t.
--→ γXs. This proves that Q = -X -(p µ ∇ µ + q) is well defined. The expansion of X gives
2 (∂ µ log|g|)u µν A ν + (∂ µ u µν )A ν + u µν (∂ µ A ν ) + A µ u µν A ν which, combined with the contributions of -(p µ ∇ µ + q), gives (A.14) and (A.15). Contrary to the situation in [1, Section 1.2.1], one cannot take directly p = 0 in (A.11) since we cannot always solve A µ in (A.14) to write it in terms of u µν , v µ , w. |
01573594 | en | [
"math.math-dg",
"phys.mphy",
"math.math-oa"
] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01573594/file/Heat-trace-2-v2.3.pdf | B Iochum
email: [email protected]
T Masson
Heat asymptotics for nonminimal Laplace type operators and application to noncommutative tori
Keywords: Heat kernel, nonminimal operator, asymptotic heat trace, Laplace type operator, scalar curvature, noncommutative torus PACS: 11.15.-q, 04.62.+v 2000 MSC: 58J35, 35J47, 81T13, 46L87
Let P be a Laplace type operator acting on a smooth hermitean vector bundle V of fiber C N over a compact Riemannian manifold given locally by
where u, v ν , w are M N (C)-valued functions with u(x) positive and invertible. For any a ∈ Γ(End(V )), we consider the asymptotics Tr(a e -tP ) ∼ t↓0 + ∞ r=0 a r (a, P ) t (r-d)/2 where the coefficients a r (a, P ) can be written as an integral of the functions a r (a, P )(x) = tr[a(x)R r (x)]. The computation of R 2 is performed opening the opportunity to calculate the modular scalar curvature for noncommutative tori.
Introduction
As in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF], we consider a d-dimensional compact Riemannian manifold (M, g) without boundary, together with a nonminimal Laplace type operator P on a smooth hermitean vector bundle V over M of fiber C N written locally as
P := -[ g µν u(x)∂ µ ∂ ν + v ν (x)∂ ν + w(x) ].
(1.1)
Here u(x) ∈ M N (C) is a positive and invertible matrix valued function and v ν , w are M N (C) matrices valued functions. The operator is expressed in a local trivialization of V over an open subset of M which is also a chart on M with coordinates (x µ ). This trivialization is such that the adjoint for the hermitean metric corresponds to the adjoint of matrices and the trace on endomorphisms on V becomes the usual trace tr on matrices.
For any a ∈ Γ(End(V )), we consider the asymptotics of the heat-trace
Tr(a e -tP ) ∼ where Tr is the operator trace. Each coefficient a r (a, P ) can be written as a r (a, P ) = M a r (a, P )(x) dvol g (x) (1.3) where dvol g (x) := |g| 1/2 dx with |g| := det(g µν ). The functions a r (a, P )(x) can be evaluated (various techniques exist for that) and give expressions of the form a r (a, P )(x) = tr[a(x)R r (x)],
where tr is the trace on matrices and R r is a (local) section of End(V ). The local section R r of End(V ) is uniquely defined by a r (a, P ) = ϕ(aR r ), (1.4) where ϕ(a) := M tr[a(x)] dvol g (x) (1.5) is the natural combined trace on the algebra of sections of End(V ) associated to (M, g) (the integral) and V (the matrix trace). The choice of this trace is not unique, and changing ϕ changes R r . For instance, since M is compact, one can normalize the integral so that the total volume of M is 1, and also the matrix trace such that the trace of the identity matrix is 1. In that case, denoted by 1 the identity operator in Γ(End(V )), the new combined trace ϕ 0 satisfies ϕ 0 (1) = 1. In Section 5 ϕ 0 plays an important role since it corresponds to the unique normalized trace on the noncommutative torus algebra. Another possibility for the choice of ϕ is to use a Riemannian metric on M which is not the tensor g in P , see Remark 2.6. The aim of this paper is to present a way to compute R r by adapting the techniques developed in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF]. These techniques were strongly motivated by a need in physics for explicit computations of a r (1, P ), see for instance [START_REF] Avramidi | Gauged gravity via spectral asymptotics of non-Laplace type operators[END_REF][START_REF] Avramidi | Heat kernel asymptotics of operators with non-Laplace principal part[END_REF] and the reference in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF] for the existing results on the mathematical side. The idea behind the computation of R 2 is to extract the real matrix content of the coefficient a 2 which is related to the scalar curvature of the manifold M .
In Section 2, two formulas are provided for R 2 (x), both in local coordinates (Theorem 2.3) and in a covariant way (Theorem 2.4) in arbitrary dimension and detailed in low dimensions. In Section 3, some direct applications are also provided, for instance to a conformal like transformed Laplacian. Section 4 is devoted to the details of the computations (see also the ancillary Mathematica [4] notebook file [START_REF] Iochum | Heat asymptotics for nonminimal Laplace type operators and application to noncommutative tori[END_REF]).
In Section 5, another applications is given in noncommutative geometry. Namely, we compute the conformally perturbed scalar curvature of rational noncommutative tori (NCT). Since at rational values θ = p/q of the deformation parameter, the algebras of the NCT are isomorphic to the continuous sections of a bundle over the ordinary tori with fiber in M q (C), they fit perfectly with our previous framework. The irrational case has been widely studied in [START_REF] Connes | The Gauss-Bonnet theorem for the noncommutative two torus[END_REF][START_REF] Connes | Modular curvature for noncommutative two-tori[END_REF][START_REF] Fathizadeh | Scalar curvature for the noncommutative two torus[END_REF][START_REF] Fathizadeh | The Gauss-Bonnet theorem for noncommutative two tori with a general conformal structure[END_REF][START_REF] Dabrowski | Curved noncommutative torus and Gauss-Bonnet[END_REF][START_REF] Fathizadeh | Scalar curvature for noncommutative four-tori[END_REF][START_REF] Azzali | Traces of holomorphic families of operators on the noncommutative torus and on Hilbert Modules[END_REF][START_REF] Sitarz | Wodzicki residue and minimal operators on a noncommutative 4-dimensional torus[END_REF][START_REF] Fathizadeh | On the scalar curvature for the noncommutative four torus[END_REF][START_REF] Dabrowski | An asymmetric noncommutative torus[END_REF][START_REF] Liu | Modular curvature for toric noncommutative manifolds[END_REF][START_REF] Sadeghi | On logarithmic Sobolev inequality and a scalar curvature formula for noncommutative tori[END_REF][START_REF] Connes | The term a 4 in the heat kernel expansion of noncommutative tori[END_REF]. The results presented in these papers can be written without explicit reference to the parameter θ. In the rational case, our results confirm this property. Moreover, our method gives an alternative which avoids the theory of pseudodifferential calculus on the noncommutative tori introduced by Connes [START_REF] Connes | C * -algèbres et géométrie différentielle[END_REF] and detailed in [START_REF] Connes | The Gauss-Bonnet theorem for the noncommutative two torus[END_REF][START_REF] Lesch | Modular curvature and Morita equivalence[END_REF]. In Appendix B, in order to compare to the results in [8, Theorem 5.2] and [START_REF] Fathizadeh | Scalar curvature for noncommutative four-tori[END_REF]Theorem 5.4], we perform the change of variables from u to ln(u) and the change of operators from the left multiplication by u to the conjugation by u, formalized as a substitution lemma (Lemma B.1).
The method and the results
In [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF], the computation was done for the special case a = 1, for which a lot of simplifications can be used under the trace. We now show that the method described there can be adapted almost without any change to compute the quantities R r . Moreover, we present the method in a way that reduces the number of steps in the computations, using from the beginning covariant derivatives on the vector bundle V .
Notations and preliminary results
In order to start with the covariant form of P (see [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF]Section A.4]), let us introduce the following notations. We consider a covariant derivative ∇ µ := ∂ µ + η(A µ ), where η is the representation of the Lie algebra of the gauge group of V on any associated vector bundles (mainly V and End(V ) in the following). Let α µ := g ρσ (∂ µ g ρσ ), α µ := g µν α ν = g µν g ρσ (∂ ν g ρσ ),
β µ := g µσ (∂ ρ g ρσ ), β µ := g µν β ν = ∂ ν g µν .
The covariant form of P associated to ∇ (see [1, eq. (A.11)]) is given by
P = -(|g| -1/2 ∇ µ |g| 1/2 g µν u∇ ν + p µ ∇ µ + q) = -g µν u∇ µ ∇ ν -p ν + g µν (∇ µ u) -[ 1 2 α ν -β ν ]u ∇ ν -q, (2.1)
where the last equality is obtained using
g µν 1 2 ∂ µ ln|g|+∂ µ g µν = -1 2 α ν -β ν .
Here, u is as before, and p µ , q are as v µ , w from (1.1), except that they transform homogeneously in a change of trivialization of V . All these (local) functions are M N (C)-valued (as local sections of End(V )), so that η is the adjoint representation:
∇ µ u = ∂ µ u + [A µ , u], ∇ µ p ν = ∂ µ u + [A µ , p ν ], ∇ µ q = ∂ µ u + [A µ , q].
Let us introduce the total covariant derivative ∇ µ , which combines ∇ µ with the Levi-Civita covariant derivative induced by the metric g. It satisfies
∇ µ a ν = ∇ µ a ν + Γ ν µρ a ρ = ∂ µ a ν + [A µ , a ν ] + Γ ν µρ a ρ , ∇ µ g αβ = 0, ∇ µ b ν = ∇ µ b ν -Γ ρ µν b ρ = ∂ µ b ν + [A µ , b ν ] -Γ ρ µν b ρ , ∇ µ g αβ = 0,
for any End(V )-valued tensors a ν and b ν , where Γ ν µρ are the Christoffel symbols of g. Let us store the following relations:
∇ µ u = ∇ µ u, (2.2)
g µν ∇ µ ∇ ν u = g µν (∇ µ ∇ ν u -Γ ρ µν ∇ ρ u) = g µν ∇ µ ∇ ν u -[ 1 2 α µ -β µ ]∇ µ u, ( 2.3)
∇ µ p µ = ∇ µ p µ -1 2 g αβ (∂ µ g αβ )p µ = ∇ µ p µ -1 2 α µ p µ .
Using 1 2 α ρ -β ρ = g µν Γ ρ µν , one then has
P = -(g µν ∇ µ u ∇ ν + p ν ∇ ν + q) = -g µν u ∇ µ ∇ ν -[p ν + g µν ( ∇ µ u)] ∇ ν -q. (2.4)
Notice that in these expressions the total covariant derivative ∇ ν (which is the first to act) will never apply to a tensor valued section of V , so that it could be reduced to the covariant derivative ∇ ν . The writing of P in terms of a covariant derivative ∇ is of course not unique:
Proposition 2.1 Let ∇ µ = ∇ µ + η(φ µ ) be another covariant derivative on V . Then P = -(g µν ∇ µ u ∇ ν + p ν ∇ ν + q ), (2.5)
with
p ν = p ν -g µν (uφ µ + φ µ u), q = q -g µν ( ∇ µ uφ ν ) + g µν uφ µ φ ν -p µ φ µ . (2.6)
In this proposition, φ µ is as p µ : it transforms homogeneously in a change of trivializations of V .
Proof This is a direct computation using relations like
∇ µ u = ∇ µ u + [φ µ , u] and ∇ µ φ ν = ∇ µ φ ν + [φ µ , φ ν ] in (2.5
) and comparing with (2.4).
Corollary 2.2
There is a unique covariant derivative ∇ such that p µ = 0. This implies that we can always write P in the reduced form
P = -(g µν ∇ µ u ∇ ν + q). (2.7)
Proof The first part of (2.6) can be solved in φ µ for the condition p ν = 0. Indeed, using results in [START_REF] Pedersen | On the Operator Equation HT + T H = 2K[END_REF], the positivity and invertibility of u implies that for any ν, the equation u(g µν φ µ )+(g µν φ µ )u = p ν has a unique solution given by
g µν φ µ = 1 2 +∞ -∞ u it-1/2 p ν u -it-1/2 cosh(πt) dt.
So, given any covariant derivative to which p ν is associated as in (2.1), we can shift this covariant derivative with the above solution φ µ to impose p ν = 0.
This result extends the one in [22, Section 1.2.1], which is a key ingredient of the method used there. In the following, we could have started with P written as in (2.7). But, on one hand, we will see that this is not necessary to get R r in terms on u, p µ , q (at least for r = 2). On the other hand, we will see in Section 5 that the covariant derivative which is naturally given by the geometric framework of the rational noncommutative torus does not imply p µ = 0, and we will then apply directly the most general result. Obviously, it could be possible to first establish our result for the reduced expression (2.7) and then to go to the general result using Prop. 2.1. But this would complicate unnecessarily the presentation of the method and our results.
The method
The method described in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF] starts with P written as
P = -g µν u∂ µ ∂ ν -v µ ∂ µ -w and leads to -P (e ixξ f ) = -e ixξ H + K + P ]f where H = g µν uξ µ ξ ν and K = -iξ µ v µ + 2g µν u∂ ν .
This can be generalized for a covariant writing of P . Using (2.1), one gets
-P (e ixξ f ) = e ixξ -g µν uξ µ ξ ν + iξ µ p µ + g µν (∇ ν u) -[ 1 2 α ν -β ν ]u + 2g µν u∇ ν + g µν u∇ µ ∇ ν + p ν + g µν (∇ µ u) -[ 1 2 α ν -β ν ]u ∇ ν + q f = -e ixξ [H + K + P ]f, (2.8)
with
H := g µν uξ µ ξ ν , K := -iξ µ p µ + g µν (∇ ν u) -[ 1 2 α µ -β µ ]u + 2g µν u∇ ν .
(2.9)
These relations look like the expressions of H and K given above (see [1, eq. (1.6), (1.7)]) with the replacements
∂ µ → ∇ µ , v µ → p µ + g µν (∇ ν u) -[ 1 2 α µ -β µ ]u, w → q.
(2.10)
As in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF], we have Tr[ae -tP ] = dx tr[a(x)K(t, x, x)] with
K(t, x, x) = 1 (2π) d dξ e -ix.ξ (e -tP e ix.ξ ) = 1 (2π) d dξ e -t(H+K+P ) 1 = 1 t d/2 1 (2π) d dξ e -H- √ tK-tP 1.
Here 1 is the constant 1-valued function. Notice that K(t, x, x) is a density, and that |g| -1/2 K(t, x, x) is a true function on M . Using the Lebesgue measure dx instead of dvol g (x) is convenient to establish the previous relation which uses Fourier transforms (this point has not been emphasized in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF]).
The asymptotics expansion is obtained by the Volterra series
e A+B = e A + ∞ k=1 ∆ k ds e (1-s 1 )A B e (s 1 -s 2 )A • • • e (s k-1 -s k )A B e s k A .
where
∆ k := {s = (s 1 , . . . , s k ) ∈ R k + | 0 ≤ s k ≤ s k-1 ≤ • • • ≤ s 2 ≤ s 1 ≤ 1} and ∆ 0 := ∅ by convention. For A = -H and B = - √ tK -tP , one gets e -H- √ tK-tP 1 = e -H + ∞ k=1 (-1) k f k [( √ tK + tP ) ⊗ • • • ⊗ ( √ tK + tP )] (2.11)
with
f k (ξ)[B 1 ⊗ • • • ⊗ B k ] := ∆ k ds e (s 1 -1)H(ξ) B 1 e (s 2 -s 1 )H(ξ) B 2 • • • B k e -s k H(ξ) ,
(2.12)
f 0 (ξ)[z] := z e -H(ξ) ,
where B i are matrix-valued differential operators in ∇ µ depending on x and (linearly in) ξ, and z ∈ C. Collecting the powers of t 1/2 , one gets
Tr (a e -tP ) t↓0 t -d/2 ∞ r=0 a r (a, P ) t r/2
Each a r (a, P ) contains an integration along ξ, which kills all the terms in odd power in √ t since K is linear in ξ while H is quadratic in ξ: a 2n+1 (a, P ) = 0 for any n ∈ N. For instance, the first two non-zero local coefficients are 1 a 0 (a, P )
(x) = |g| -1/2 (2π) d tr[a(x) dξ e -H(x,ξ) ], a 2 (a, P )(x) = |g| -1/2 (2π) d tr [a(x) dξ ∆ 2 ds e (s 1 -1)H K e (s 2 -s 1 )H K e -s 2 H ] -1 (2π) d tr [a(x) dξ ∆ 1 ds e (s 1 -1)H P e -s 1 H ]
1 Notice the change with convention in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF] : a 2r here corresponds to a r in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF].
(remark the coefficient |g| -1/2 added here to be compatible with (1.3)).
The strategy to compute these coefficient is twofold. First, we get rid of the ∇ µ 's in the arguments B i . This is done using [1, Lemma 2.1], which can be applied here since ∇ µ is a derivation: by iteration of the relation (2.13) we transform each original term into a sum of operators acting on arguments of the form
f k (ξ)[B 1 ⊗ • • • ⊗ B i ∇ µ ⊗ • • • ⊗ B k ] = k j=i+1 f k (ξ)[B 1 ⊗ • • • ⊗ (∇ µ B j ) ⊗ • • • ⊗ B k ] - k j=i f k+1 (ξ)[B 1 ⊗ • • • ⊗ B j ⊗ (∇ µ H) ⊗ B j+1 ⊗ • • • ⊗ B k ],
B 1 ⊗ • • • ⊗ B k = B µ 1 ...µ k ξ µ 1 • • • ξ µ (for different values of k)
where now all the B i are matrix-valued functions (of x and ξ), or, equivalently, the B µ 1 ...µ k are M N (C) ⊗ k -valued functions (of x only). The second step of the strategy is to compute the operators applied to the arguments B µ 1 ...µ k . They all look like
1 (2π) d dξ ξ µ 1 • • • ξ µ f k (ξ)[B µ 1 ...µ k ] ∈ M N (C),
where the f k (ξ) are defined by (2.12) and depend only on u through H. As shown in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF], these operators are related to operators T k,p (x) :
M N (C) ⊗ k+1 → M N (C) ⊗ k+1 defined by T k,p (x) := 1 (2π) d ∆ k ds dξ ξ µ 1 • • • ξ µ 2p e -ξ 2 C k (s,u(x)) , (2.14)
T 0,0 (x) := 1 (2π) d dξ e -ξ 2 u(x) ∈ M N (C),
where ξ 2 := g µν ξ µ ξ ν and the C k (s, A) :
M N (C) ⊗ k+1 → M N (C) ⊗ k+1 are the operators C k (s, A)[B 0 ⊗ B 1 ⊗ • • • ⊗ B k ] = (1 -s 1 ) B 0 A ⊗ B 1 ⊗ • • • ⊗ B k + (s 1 -s 2 ) B 0 ⊗ B 1 A ⊗ • • • ⊗ B k + • • • + s k B 0 ⊗ B 1 ⊗ • • • ⊗ B k A.
Denote by m :
M N (C) ⊗ k+1 → M N (C), B 0 ⊗ B 1 ⊗ • • • ⊗ B k → B 0 B 1 • • • B k the matrix multiplication, then 1 (2π) d B 0 dξ ξ µ 1 • • • ξ µ f k (ξ)[B 1 ⊗ • • • ⊗ B k ] = m • T k,p (x)[B 0 ⊗ B 1 ⊗ • • • ⊗ B k ],
so that each function a r (a, P )(x) is expressed formally as a sum
a r (a, P )(x) = |g| -1/2 tr m • T k,p (x)[a(x) ⊗ B 1 (x) ⊗ • • • ⊗ B k (x)] . (2.15)
This sum comes form the collection of the original terms in K and P producing the power t r/2 and the application of [1, Lemma 2.1] i.e. (2.13). This sum relates the r on the LHS to the possible couples (k, p) on the RHS. The B i are matrix-valued functions (of x) expressed in terms of the original constituents of H, K, and P and their covariant derivatives. Let us mention here how the procedure introduced in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF] is adapted to the situation where we have the left factor a(x): in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF], the relation between the T k,p (x) and the f k (ξ) used a trick which consist to add a B 0 = 1 argument in front of B 1 ⊗ • • • ⊗ B k (the purpose of the κ map defined in [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF]). Here, 1 is simply replaced by a(x). But, since
m • T k,p (x)[B 0 ⊗ B 1 (x) ⊗ • • • ⊗ B k (x)] = B 0 m • T k,p (x)[1 ⊗ B 1 (x) ⊗ • • • ⊗ B k (x)],
it is now easy to propose an expression for the factor R r as a sum
R r = |g| -1/2 m • T k,p (x)[1 ⊗ B 1 (x) ⊗ • • • ⊗ B k (x)]. (2.16)
One of the main results of [START_REF] Iochum | Heat trace for Laplace type operators with non-scalar symbols[END_REF] is to express the operators T k,p in terms of universal functions through a functional calculus relation involving the spectrum of u (these relations take place at any fixed value of x ∈ M , that we omit from now on). For r i > 0, α ∈ R, and k ∈ N, let
I α,k (r 0 , r 1 , . . . , r k ) := ∆ k ds [(1 -s 1 )r 0 + (s 1 -s 2 )r 1 + • • • + s k r k ] -α = ∆ k ds [r 0 + s 1 (r 1 -r 0 ) + • • • + s k (r k -r k-1 )] -α , so that I α,k (r 0 , . . . , r 0 ) = 1 k! r -α 0 .
In these functions, the arguments r i > 0 are in the spectrum of the positive matrix u.
Denote by R
i (A) : M N (C) ⊗ k+1 → M N (C) ⊗ k+1 the right multiplication on the i-th factor R i (A)[B 0 ⊗ B 1 ⊗ • • • ⊗ B k ] := B 0 ⊗ B 1 ⊗ • • • ⊗ B i A ⊗ • • • ⊗ B k , then T k,p = g d G(g) µ 1 ...µ 2p I d/2+p,k R 0 (u), R 1 (u), . . . , R k (u) ,
with
g d := 1 (2π) d R d dξ e -|ξ| 2 g(x) = |g| 1/2 2 d π d/2 , (2.17) G(g) µ 1 ...µ 2p := 1 (2π) d g d dξ ξ µ 1 • • • ξ µ 2p e -g αβ ξαξ β = 1 2 2p p! ( ρ∈S 2p g µ ρ(1) µ ρ(2) • • • g µ ρ(2p-1) µ ρ(2p) ) = (2p)! 2 2p p! g (µ 1 µ 2 • • • g µ 2p-1 µ 2p ) ,
where S 2p is the symmetric group of permutations on 2p elements and the parenthesis in the index of g is the complete symmetrization over all indices. Notice that the factor |g| 1/2 in g d simplifies with the factor |g| -1/2 in (2.16).
The universal functions I α,k have been studied in [1, Section 3]. They satisfy a recursive formula valid for 1 = α ∈ R and k ∈ N * :
I α,k (r 0 , . . . , r k ) = 1 (α-1) (r k-1 -r k ) -1 [I α-1,k-1 (r 0 , . . . , r k-2 , r k ) -I α-1,k-1 (r 0 , . . . , r k-1 )]. (2.18)
It is possible to give some expressions for the I α,k for any (α, k). They depend on the parity of d.
For d even, the main results are that I n,k are Laurent polynomials for
N n = (d -r)/2 + k ≥ k + 1 (d ≥ r + 2
) and k ∈ N * , while they exhibit a more complicated expression in terms of ln functions for
N n = (d -r)/2 + k ≤ k (d ≤ r).
For d odd, the I n,k can be expressed in terms of square roots of the r i , but without an a priori general expression.
The recursive formula (2.18) can be used to write any I α,k appearing in the computation of the operators T k,p in terms of I α-k+1,1 . The case α = 1 appears in dimension d = 2: the fundamental spectral function is I 1,1 , and a direct computation shows that
I 1,1 (r 0 , r 1 ) = ln r 0 -ln r 1 r 0 -r 1 Using x e x -1 = ∞ n=0 Bn n! x n
, where B n are the Bernoulli numbers, one gets, with x = ln r 0 -ln r 1 ,
r 1 I 1,1 (r 0 , r 1 ) = ∞ n=0 Bn n! [ln r 0 -ln r 1 ] n .
A relation between the Bernoulli numbers and a 2 (a, P ) has already been noticed in the computation of the modular curvature for the noncommutative two torus in [START_REF] Connes | Modular curvature for noncommutative two-tori[END_REF] (see Section 5).
The results for a 2 (a, P )
In the following, we restrict ourselves to the computation of a 2 (a, P ). This section gives the main results of the paper. The computations are detailed in Section 4.
Let us introduce the following notation. For any x ∈ M , denote by
r i = r i (x) > 0 an element in the (discrete) spectrum sp(u) of u = u(x) and by E r i = E r i (x) the associated projection of u. This implies that u = r 0 ∈sp(u) r 0 E r 0 = r 0 E r 0
where in the last expression we omit the summation over r 0 , as will be the case in many expressions given in the following. Notice that 1 = r 0 ∈sp(u) E r 0 and E r 0 E r 1 = δ r 0 ,r 1 E r 0 .
Theorem 2.3
For P given by (1.1), a 2 (a, P
)(x) = tr[a(x)R 2 (x)] with R 2 = 1 2 d π d/2 c r -d/2+1 0 E r 0 + F µ ∂u (r 0 , r 1 ) E r 0 (∂ µ u)E r 1 + g µν F ∂∂u (r 0 , r 1 ) E r 0 (∂ µ ∂ ν u)E r 1 + g µν F ∂u,∂u (r 0 , r 1 , r 2 ) E r 0 (∂ µ u)E r 1 (∂ ν u)E r 2 + F w (r 0 , r 1 ) E r 0 wE r 1 + F v,µ (r 0 , r 1 ) E r 0 v µ E r 1 + F v,∂u (r 0 , r 1 , r 2 ) E r 0 v µ E r 1 (∂ µ u)E r 2 + F ∂u,v (r 0 , r 1 , r 2 ) E r 0 (∂ µ u)E r 1 v µ E r 2 + g µν F v,v (r 0 , r 1 , r 2 ) E r 0 v µ E r 1 v ν E r 2 + F ∂v (r 0 , r 1 ) E r 0 (∂ µ v µ )E r 1 , (2.19)
where the sums over the r 0 , r 1 , r 2 in the spectrum of u are omitted, the spectral functions F are given below, and
c := 1 3 (∂ µ ∂ ν g µν ) -1 12 g µν g ρσ (∂ µ ∂ ν g ρσ ) + 1 48 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) + 1 24 g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) -1 12 g ρσ (∂ µ g µν )(∂ ν g ρσ ) + 1 12 g ρσ (∂ µ g νρ )(∂ ν g µσ ) -1 4 g ρσ (∂ µ g µρ )(∂ ν g νσ ).
The coefficient c is given here in an arbitrary coordinate system. Since (2.19) is not given in terms of (Riemannian) covariant quantities, c is not expected to have a good behavior under change of coordinates. In normal coordinates, c reduces to the first two terms and is equal to -R/6 where R is the scalar curvature. A covariant approach will be given in Theorem 2.4.
The spectral functions in (2.19) are given in terms of the universal function
I d/2,1 by F w (r 0 , r 1 ) = I d/2,1 (r 0 , r 1 ), F ∂v (r 0 , r 1 ) = 2r 0 I d/2,1 (r 0 , r 0 ) -I d/2,1 (r 0 , r 1 ) d(r 0 -r 1 ) , F ∂∂u (r 0 , r 1 ) = -r 0 4r 0 I d/2,1 (r 0 , r 0 ) + (d -4)r 0 -dr 1 I d/2,1 (r 0 , r 1 ) d(r 0 -r 1 ) 2 , F µ ∂u (r 0 , r 1 ) = [ 1 2 α µ -β µ ]r 0 4r 0 I d/2,1 (r 0 , r 0 ) + (d -4)r 0 -dr 1 I d/2,1 (r 0 , r 1 ) d(r 0 -r 1 ) 2 , F v,µ (r 0 , r 1 ) = -α µ r 0 I d/2,1 (r 0 , r 0 ) -I d/2,1 (r 0 , r 1 ) d(r 0 -r 1 ) -1 2 [ 1 2 α µ -β µ ]I d/2,1 (r 0 , r 1 ), F v,v (r 0 , r 1 , r 2 ) = I d/2,1 (r 0 , r 1 ) -I d/2,1 (r 0 , r 2 ) d(r 1 -r 2 ) , F ∂u,v (r 0 , r 1 , r 2 ) = 2r 0 d [ I d/2,1 (r 0 , r 0 ) (r 0 -r 1 )(r 0 -r 2 ) + I d/2,1 (r 0 , r 1 ) (r 1 -r 0 )(r 1 -r 2 ) + I d/2,1 (r 0 , r 2 ) (r 2 -r 0 )(r 2 -r 1 )
],
F v,∂u (r 0 , r 1 , r 2 ) = -2r 0 I d/2,1 (r 0 , r 0 ) d(r 0 -r 2 )(r 1 -r 2 ) -2r 1 I d/2,1 (r 0 , r 1 ) d(r 1 -r 2 ) 2 - (d -4)r 0 r 1 -(d -2)r 0 r 2 -(d -2)r 1 r 2 + dr 2 2 I d/2,1 (r 0 , r 2 ) d(r 0 -r 2 )(r 1 -r 2 ) 2 , F ∂u,∂u (r 0 , r 1 , r 2 ) = 4r 0 d(r 0 -r 1 )(r 0 -r 2 ) 2 (r 1 -r 2 ) 2 × r 0 (r 1 -r 2 )(r 0 -2r 1 + r 2 )I d/2,1 (r 0 , r 0 ) + r 1 (r 0 -r 2 ) 2 I d/2,1 (r 0 , r 1 ) + 1 2 (r 0 -r 1 ) (d -4)r 0 r 1 -(d -2)r 0 r 2 -dr 1 r 2 + (d + 2)r 2 2 I d/2,1 (r 0 , r 2 ) .
Theorem 2.4 For P given by (2.1), a 2 (a, P
)(x) = tr[a(x)R 2 (x)] with R 2 = 1 2 d π d/2 [ 1 6 R r -d/2+1 0 E r 0 + G q (r 0 , r 1 ) E r 0 qE r 1 + g µν G ∇ ∇u (r 0 , r 1 ) E r 0 ( ∇ µ ∇ ν u)E r 1 + G ∇p (r 0 , r 1 ) E r 0 ( ∇ µ p µ )E r 1 + g µν G ∇u, ∇u (r 0 , r 1 , r 2 ) E r 0 ( ∇ µ u)E r 1 ( ∇ ν u)E r 2 + G p, ∇u (r 0 , r 1 , r 2 ) E r 0 p µ E r 1 ( ∇ µ u)E r 2 + G ∇u,p (r 0 , r 1 , r 2 ) E r 0 ( ∇ µ u)E r 1 p µ E r 2 + G p,p (r 0 , r 1 , r 2 ) E r 0 p µ E r 1 p µ E r 2 ] (2.20)
where the sums over the r 0 , r 1 , r 2 in the spectrum of u are omitted, the spectral functions G are given below, and R is the scalar curvature of g.
The spectral functions in (2.20) are given in terms of the spectral functions F by
G q (r 0 , r 1 ) := F w (r 0 , r 1 ) = G q (r 1 , r 0 ), G ∇ ∇u (r 0 , r 1 ) := F ∂∂u (r 0 , r 1 ) + F ∂v (r 0 , r 1 ) = G ∇ ∇u (r 1 , r 0 ), G ∇p (r 0 , r 1 ) := F ∂v (r 0 , r 1 ), G ∇u, ∇u (r 0 , r 1 , r 2 ) := F ∂u,∂u (r 0 , r 1 , r 2 ) + F v,∂u (r 0 , r 1 , r 2 ) + F ∂u,v (r 0 , r 1 , r 2 ) + F v,v (r 0 , r 1 , r 2 ) = G ∇u, ∇u (r 2 , r 1 , r 0 ), G p, ∇u (r 0 , r 1 , r 2 ) := F v,∂u (r 0 , r 1 , r 2 ) + F v,v (r 0 , r 1 , r 2 ), G ∇u,p (r 0 , r 1 , r 2 ) := F ∂u,v (r 0 , r 1 , r 2 ) + F v,v (r 0 , r 1 , r 2 ) = -G p, ∇u (r 2 , r 1 , r 0 ), G p,p (r 0 , r 1 , r 2 ) := F v,v (r 0 , r 1 , r 2 ) = G p,p (r 2 , r 1 , r 0 ).
As shown in [1], the universal spectral functions I_{α,k} are continuous, so that all the spectral functions F and G are also continuous, as can be deduced from their original expressions in terms of the functions I_{α,k} given in the list (4.1) and the above relations between the F and the G.
Remark 2.5 (Homogeneity by dilation)
Using (1.2) and (1.4), we get R_r(λP) = λ^{(r-d)/2} R_r(P) for any λ ∈ R*_+. The dilation of P by λ is equivalent to the dilations of u, v^µ, w, p^µ, q by λ. Using I_{α,k}(λr_0, . . . , λr_k) = λ^{-α} I_{α,k}(r_0, . . . , r_k) and the explicit expressions of the spectral functions, all terms in (2.19) and (2.20) are λ-homogeneous of degree (2 - d)/2.
Remark 2.6
The metric g plays a double role here: it is the metric of the Riemannian manifold (M, g) and it is the non-degenerate tensor which multiplies u in P. If one has to consider two operators P_1 and P_2 with tensors g_1 and g_2 on the same manifold M, it may not be natural to take g_1 or g_2 as the Riemannian metric on M. It is possible to consider a metric h on M different from the tensor g associated to P. In that case, we have to replace dvol_g in (1.3) by dvol_h, and K(t, x, x) is then a density for h, so that the true function is now |h|^{-1/2} K(t, x, x), and |h|^{-1/2} appears in (2.15) and (2.16) in place of |g|^{-1/2}. Now, the computation of T_{k,p} makes apparent the coefficient g_d given by (2.17) where the metric g comes from P. Finally, in (2.19) and (2.20), the two determinants do not simplify anymore, and one gets an extra factor |g|^{1/2}|h|^{-1/2} in front of R_2, which is now relative to ϕ_h(a) := ∫_M tr[a(x)] dvol_h(x).
A change of connection as in Prop. 2.1 does not change the value of R 2 . This induces the following relations between the spectral functions G.
Proposition 2.7
The spectral functions G satisfy the relations:
G ∇p (r 0 , r 1 ) = - r 0 G q (r 0 , r 1 ) + (r 0 -r 1 )G ∇ ∇u (r 0 , r 1 ) r 0 + r 1 , G ∇u,p (r 0 , r 1 , r 2 ) = - r 2 G q (r 0 , r 2 ) + (r 0 + 3r 2 )G ∇ ∇u (r 0 , r 2 ) + (r 0 + r 2 )(r 1 -r 2 )G ∇u, ∇u (r 0 , r 1 , r 2 ) (r 0 + r 2 )(r 1 + r 2 ) , G p, ∇u (r 0 , r 1 , r 2 ) = r 0 G q (r 0 , r 2 ) + (3r 0 + r 2 )G ∇ ∇u (r 0 , r 2 ) + (r 0 + r 2 )(r 1 -r 0 )G ∇u, ∇u (r 0 , r 1 , r 2 ) (r 0 + r 2 )(r 1 + r 0 ) , G p,p (r 0 , r 1 , r 2 ) = - r 1 G q (r 0 , r 2 ) -(r 0 -2r 1 + r 2 )G ∇ ∇u (r 0 , r 2 ) + (r 0 -r 1 )(r 1 -r 2 )G ∇u, ∇u (r 0 , r 1 , r 2 ) (r 0 + r 1 )(r 1 + r 2 ) .
Proof Inserting the relations (2.6) into (2.20), all the terms involving φ µ must vanish. This induces the following relations between the G functions:
r 0 G q (r 0 , r 1 ) + (r 0 -r 1 )G ∇ ∇u (r 0 , r 1 ) + (r 0 + r 1 )G ∇p (r 0 , r 1 ) = 0, G ∇p (r 0 , r 2 ) -(r 0 -r 1 )G ∇u,p (r 0 , r 1 , r 2 ) -(r 0 + r 1 )G p,p (r 0 , r 1 , r 2 ) = 0, G q (r 0 , r 2 ) + G ∇p (r 0 , r 2 ) + (r 1 -r 2 )G p, ∇u (r 0 , r 1 , r 2 ) + (r 1 + r 2 )G p,p (r 0 , r 1 , r 2 ) = 0, 2G ∇ ∇u (r 0 , r 2 ) -G ∇p (r 0 , r 2 ) -(r 0 -r 1 )G ∇u, ∇u (r 0 , r 1 , r 2 ) -(r 0 + r 1 )G p, ∇u (r 0 , r 1 , r 2 ) = 0, G q (r 0 , r 2 ) + 2G ∇ ∇u (r 0 , r 2 ) + G ∇p (r 0 , r 2 ) + (r 1 -r 2 )G ∇u, ∇u (r 0 , r 1 , r 2 ) + (r 1 + r 2 )G ∇u,p (r 0 , r 1 , r 2 ) = 0, r 0 G q (r 0 , r 2 ) + (r 0 -2r 1 + r 2 )G ∇ ∇u (r 0 , r 2 ) + (r 0 -r 2 )G ∇p (r 0 , r 2 ) + (r 0 -r 1 )(r 1 -r 2 )G ∇u, ∇u (r 0 , r 1 , r 2 ) + (r 0 + r 1 )(r 1 -r 2 )G p, ∇u (r 0 , r 1 , r 2 ) + (r 0 -r 1 )(r 1 + r 2 )G ∇u,p (r 0 , r 1 , r 2 ) + (r 0 + r 1 )(r 1 + r 2 )G p,p (r 0 , r 1 , r 2 ) = 0.
One can check directly that these relations hold true. From them, one can solve G ∇p , G ∇u,p , G p, ∇u , and G p,p in terms of G q , G ∇ ∇u , and G ∇u, ∇u . This gives the relations of the proposition.
These relations show that the four spectral functions G involved in terms with p µ are deduced from the three spectral functions involving only u and q. This result is not a surprise: from Corollary 2.2 we know that we can start with p µ = 0, so that R 2 is written in terms of the three functions G q , G ∇ ∇u , G ∇u, ∇u only, and then we can change the connection in order to produce the most general expression for R 2 . In other words, among the seven spectral functions G, only three are fundamental.
The spectral functions G can be computed explicitly, and their expressions depend on the value of m. For d = 2 and d = 3, these spectral functions are written in terms of the following functions:
Q 1 (a, b, c) := -3a 3 + a 2 b -6a 2 c + 6abc + ac 2 + bc 2 2(a -b) 2 (a -c) 3 , Q 2 (a, b, c) := 1 2 2 (a -b)(a -c) + a + b (b -a) 2 (b -c) ln(b/a) + a + c (c -a) 2 (c -b) ln(c/a) , Q 3 (a, b, c) := 6 √ abc + a 3/2 + 2b 3/2 + c 3/2 + √ ac ( √ a + √ c) + 2 a( √ b + √ c) + 2b( √ a + √ c) + c( √ a + √ b) , Q 4 (a, b, c) := a + b + c + 2 √ ab + 2 √ ac + √ bc √ bc( √ a + √ b) 2 ( √ a + √ c) 2 ( √ b + √ c) .
Corollary 2.8 (Case d = 2)
In dimension two the spectral functions G can be written in terms of log functions:
R 2 = 1 4 π 1 6 R + ln(r 0 /r 1 ) r 0 -r 1 E r 0 qE r 1 + 1 r 0 -r 1 1 -r 0 ln(r 0 /r 1 ) r 0 -r 1 E r 0 ( ∇ µ p µ )E r 1 - 1 (r 0 -r 1 ) 2 r 0 + r 1 -2r 0 r 1 ln(r 0 /r 1 ) r 0 -r 1 g µν E r 0 ( ∇ µ ∇ ν u)E r 1 + (r 0 + r 2 )(r 0 -2r 1 + r 2 ) (r 0 -r 1 )(r 0 -r 2 ) 2 (r 1 -r 2 ) -Q 1 (r 0 , r 1 , r 2 ) ln(r 0 /r 1 ) -Q 1 (r 2 , r 1 , r 0 ) ln(r 2 /r 1 ) g µν E r 0 ( ∇ µ u)E r 1 ( ∇ ν u)E r 2 + Q 2 (r 0 , r 1 , r 2 ) E r 0 ( ∇ µ u)E r 1 p µ E r 2 -Q 2 (r 2 , r 1 , r 0 ) E r 0 p µ E r 1 ( ∇ µ u)E r 2 + (r 1 -r 2 ) ln(r 0 /r 1 ) + (r 0 -r 1 ) ln(r 2 /r 1 ) 2(r 0 -r 1 )(r 0 -r 2 )(r 1 -r 2 ) E r 0 p µ E r 1 p µ E r 2 .
When d = 2m ≥ 4 is even, all the involved functions are Laurent polynomials as a direct consequence of [1, Prop. 3.5]:
G_q(r_0, r_1) = Σ_{0≤ℓ≤m-2} (1/(m-1)) r_0^{ℓ+1-m} r_1^{-ℓ-1},
G_{∇∇u}(r_0, r_1) = -Σ_{0≤ℓ≤m-2} ((m-ℓ-1)(ℓ+1)/(m(m-1))) r_0^{ℓ+1-m} r_1^{-ℓ-1},
G_{∇p}(r_0, r_1) = -Σ_{0≤ℓ≤m-2} ((m-ℓ-1)/(m(m-1))) r_0^{ℓ+1-m} r_1^{-ℓ-1},
G_{∇u,∇u}(r_0, r_1, r_2) = -Σ_{0≤ℓ≤k≤m-2} ((2ℓ+1)(2k-2m+3)/(2m(m-1))) r_0^{k+1-m} r_1^{ℓ-k-1} r_2^{-ℓ-1},
G_{p,∇u}(r_0, r_1, r_2) = -Σ_{0≤ℓ≤k≤m-2} ((2ℓ+1)/(2m(m-1))) r_0^{k+1-m} r_1^{ℓ-k-1} r_2^{-ℓ-1},
G_{∇u,p}(r_0, r_1, r_2) = -Σ_{0≤ℓ≤k≤m-2} ((2k-2m+3)/(2m(m-1))) r_0^{k+1-m} r_1^{ℓ-k-1} r_2^{-ℓ-1},
G_{p,p}(r_0, r_1, r_2) = -Σ_{0≤ℓ≤k≤m-2} (1/(2m(m-1))) r_0^{k+1-m} r_1^{ℓ-k-1} r_2^{-ℓ-1}.
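These explicit formulas make the relations of Proposition 2.7 easy to test. As an illustration (a consistency check of ours, with our own helper names), the first relation can be verified numerically for several values of m:

```python
import random

def G_q(m, r0, r1):
    return sum(r0**(l + 1 - m) * r1**(-l - 1) / (m - 1) for l in range(m - 1))

def G_nabla_nabla_u(m, r0, r1):
    return -sum((m - l - 1) * (l + 1) / (m * (m - 1)) * r0**(l + 1 - m) * r1**(-l - 1)
                for l in range(m - 1))

def G_nabla_p(m, r0, r1):
    return -sum((m - l - 1) / (m * (m - 1)) * r0**(l + 1 - m) * r1**(-l - 1)
                for l in range(m - 1))

for m in (2, 3, 4):  # i.e. d = 4, 6, 8
    for _ in range(100):
        r0, r1 = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
        lhs = G_nabla_p(m, r0, r1)
        rhs = -(r0 * G_q(m, r0, r1) + (r0 - r1) * G_nabla_nabla_u(m, r0, r1)) / (r0 + r1)
        assert abs(lhs - rhs) < 1e-8 * (1.0 + abs(lhs))
```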
This implies the following expressions for R 2 :
Corollary 2.9 (Case d = 2m even and d ≥ 4) Using the expressions of the spectral functions G as Laurent polynomials, one has
R_2 = (1/(2^{2m}π^m)) [ (1/6)R u^{-m+1} + Σ_{0≤ℓ≤m-2} (1/(m-1)) u^{ℓ+1-m} q u^{-ℓ-1} - Σ_{0≤ℓ≤m-2} ((m-ℓ-1)/(m(m-1))) u^{ℓ+1-m} (∇_µp^µ) u^{-ℓ-1} - Σ_{0≤ℓ≤m-2} ((m-ℓ-1)(ℓ+1)/(m(m-1))) g^{µν} u^{ℓ+1-m} (∇_µ∇_νu) u^{-ℓ-1} - Σ_{0≤ℓ≤k≤m-2} ((2ℓ+1)(2k-2m+3)/(2m(m-1))) g^{µν} u^{k+1-m} (∇_µu) u^{ℓ-k-1} (∇_νu) u^{-ℓ-1} - Σ_{0≤ℓ≤k≤m-2} ((2k-2m+3)/(2m(m-1))) u^{k+1-m} (∇_µu) u^{ℓ-k-1} p^µ u^{-ℓ-1} - Σ_{0≤ℓ≤k≤m-2} ((2ℓ+1)/(2m(m-1))) u^{k+1-m} p_µ u^{ℓ-k-1} (∇^µu) u^{-ℓ-1} - Σ_{0≤ℓ≤k≤m-2} (1/(2m(m-1))) u^{k+1-m} p_µ u^{ℓ-k-1} p^µ u^{-ℓ-1} ]
= (1/(2^{2m}π^m)) [ (1/6)R u^{-m+1} + Σ_{0≤ℓ≤m-2} (1/(m-1)) u^{ℓ+1-m} q u^{-ℓ-1} - Σ_{0≤ℓ≤m-2} ((m-ℓ-1)/(m(m-1))) g^{µν} u^{ℓ+1-m} ∇_µ[(ℓ+1)∇_νu + p_ν] u^{-ℓ-1} - Σ_{0≤ℓ≤k≤m-2} (1/(2m(m-1))) g^{µν} u^{k+1-m} [(2k-2m+3)∇_µu + p_µ] u^{ℓ-k-1} [(2ℓ+1)∇_νu + p_ν] u^{-ℓ-1} ].
Corollary 2.10 (Case d = 4) In dimension four these expressions simplify further to:
R 2 = 1 16π 2 [ 1 6 R u -1 + u -1 qu -1 -1 2 u -1 ( ∇ µ p µ )u -1 -1 2 g µν u -1 ( ∇ µ ∇ ν u)u -1 + 1 4 g µν u -1 ( ∇ µ u)u -1 ( ∇ ν u)u -1 + 1 4 u -1 ( ∇ µ u)u -1 p µ u -1 -1 4 u -1 p µ u -1 ( ∇ µ u)u -1 -1 4 u -1 p µ u -1 p µ u -1 ] = 1 16π 2 1 6 R u -1 +u -1 qu -1 -1 2 g µν u -1 [ ∇ µ ( ∇ ν u + p ν )]u -1 + 1 4 g µν u -1 [ ∇ µ u -p µ ]u -1 [ ∇ ν u + p ν ]u -1 .
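The equality of the two forms displayed above is elementary operator algebra; the quadratic part, for instance, can be checked with noncommuting symbols (a small SymPy sketch of ours, where ui stands for u^{-1}, D for a component ∇_µu and p for the corresponding p_µ):

```python
import sympy as sp

ui, D, p = sp.symbols('ui D p', commutative=False)  # ui plays the role of u^{-1}

factored = sp.expand(sp.Rational(1, 4) * ui * (D - p) * ui * (D + p) * ui)
expanded = sp.expand(sp.Rational(1, 4) * (ui*D*ui*D*ui + ui*D*ui*p*ui
                                          - ui*p*ui*D*ui - ui*p*ui*p*ui))
assert sp.expand(factored - expanded) == 0
```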
Corollary 2.11 (Case d = 3) In dimension three the spectral functions G can be written in terms of square roots of the r_i (see [1, Prop. 3.4]) and this leads to:
R_2 = (1/(8π^{3/2})) [ (1/6)R r_0^{-1/2} E_{r_0} + (2/(√(r_0r_1)(√r_0 + √r_1))) E_{r_0} q E_{r_1} - (2/3)((2√r_0 + √r_1)/(√(r_0r_1)(√r_0 + √r_1)²)) E_{r_0}(∇_µp^µ)E_{r_1} - (2/3)((√(r_0r_1) + (√r_0 + √r_1)²)/(√(r_0r_1)(√r_0 + √r_1)³)) g^{µν} E_{r_0}(∇_µ∇_νu)E_{r_1} + (2/3)(Q_3(r_0, r_1, r_2)/(√r_1(√r_0 + √r_1)²(√r_0 + √r_2)³(√r_1 + √r_2)²)) g^{µν} E_{r_0}(∇_µu)E_{r_1}(∇_νu)E_{r_2} + (2/3) Q_4(r_0, r_1, r_2) E_{r_0}(∇_µu)E_{r_1}p^µE_{r_2} - (2/3) Q_4(r_2, r_1, r_0) E_{r_0}p_µE_{r_1}(∇^µu)E_{r_2} - (2/3)((√r_0 + √r_1 + √r_2)/(√(r_0r_1r_2)(√r_0 + √r_1)(√r_0 + √r_2)(√r_1 + √r_2))) E_{r_0}p_µE_{r_1}p^µE_{r_2} ].
To conclude this list of results given at various dimensions, we see that we have explicit expressions of R 2 for d even, and moreover simple generic expressions for d = 2m, d ≥ 4, while for d odd it is difficult to propose a generic expression.
Some direct applications
The case a = 1
When a = 1, for any k ≥ 1, a cyclic operation can be performed under the trace:
f (r 0 , r 1 , . . . , r k ) tr(E r 0 B 1 E r 1 B 2 • • • B k E r k ) = f (r 0 , r 1 , . . . , r k ) tr(E r k E r 0 B 1 E r 1 B 2 • • • B k ) = δ r 0 ,r k f (r 0 , r 1 , . . . , r k ) tr(E r k E r 0 B 1 E r 1 B 2 • • • B k ) = f (r 0 , r 1 , . . . , r 0 ) tr(E r 0 B 1 E r 1 B 2 • • • E r k-1 B k ).
This implies that in all the spectral functions f (r 0 , r 1 , . . . , r k ) one can put r k = r 0 (remember that all the spectral functions are continuous, so that r k → r 0 is well-defined).
In [1, Theorem 4.3], a 2 (1, P ) has been computed for d = 2m even, m ≥ 1. Let us first rewrite this result in terms of spectral functions:
c r -m+1 0 E 0 + r -m 0 E 0 w + m-2 6 [ 1 2 α µ -β µ ] r -m 0 E 0 (∂ µ u) -m-2 6 g µν r -m 0 E 0 (∂ µ ∂ ν u) + 1 2 β µ r -m 0 E 0 v µ -1 2 r -m 0 E 0 (∂ µ v µ ) -1 4m g µν m-1 =0 r --1 0 r -m 1 E 0 v µ E 1 v ν + 1 2m m-1 =0 (m -2 ) r --1 0 r -m 1 E 0 v µ E 1 (∂ µ u) + g µν m-1 =0 [ m-2 6 -(m--1) 2m ] r --1 0 r -m 1 E 0 (∂ µ u)E 1 (∂ ν u).
Notice that:
g µν m-1 =0 r --1 0 r -m 1 tr(E 0 v µ E 1 v ν ) = g µν m-1 =0 r --1 0 r -m 1 tr(E 1 v ν E 0 v µ ) = g µν m-1 =0 r --1 1 r -m 0 tr(E 1 v µ E 0 v ν )
if we change to m --1 in the summation and we use the symmetry of g µν . Then, to show that the spectral functions F reduce to the ones above, one has to use the symmetry r 0 ↔ r 1 for some terms. A direct computation gives
F w (r 0 , r 0 ) = r -m 0 , F µ ∂u (r 0 , r 0 ) = m-2 6 [ 1 2 α µ -β µ ] r -m 0 , F ∂∂u (r 0 , r 0 ) = -m-2 6 r -m 0 , F v,µ (r 0 , r 0 ) = 1 2 β µ r -m 0 , F ∂v (r 0 , r 0 ) = -1 2 r -m 0 , 1 2 [F v,v (r 0 , r 1 , r 0 ) + F v,v (r 1 , r 0 , r 1 )] = -1 4m m-1 =0 r --1 0 r -m 1 , 1 2 [F ∂u,∂u (r 0 , r 1 , r 0 ) + F ∂u,∂u (r 1 , r 0 , r 1 )] = m-1 =0 [ m-2 6 -(m--1) 2m ] r --1 0 r -m 1 , F v,∂u (r 0 , r 1 , r 0 ) + F ∂u,v (r 1 , r 0 , r 1 ) = 1 2m m-1 =0 (m -2 ) r --1 0 r -m 1 .
In the last expression, under the trace we have tr(E_0(∂_µu)E_1 v^µ) = tr(E_1 v^µ E_0(∂_µu)), and we need to sum the two functions F_{v,∂u} and F_{∂u,v} with their correct arguments. These relations show that (2.19) reproduces [1, eq. 4.15] when a = 1.
Minimal Laplace type operators: u = 1
Starting from (2.1) when u = 1 one gets
P = -g µν ∇ µ ∇ ν -(p ν -[ 1 2 α ν -β ν ]) ∇ ν -q
and two simplifications occur in (2.20): all derivatives of u vanish, and the spectrum of u reduces to sp(u) = {1}, so that all spectral functions are taken at r i = 1 and E r i = 1. The result is then
R 2 = 1 2 d π d/2 [ 1 6 R + G q (1, 1) q + G ∇p (1, 1) ∇ µ p µ + G p,p (1, 1, 1) p µ p µ ]. A direct computation gives G q (1, 1) = 1, G ∇p (1, 1) = -1 2 , and G p,p (1, 1, 1) = -1 4 , so that R 2 = 1 2 d π d/2 [ 1 6 R + q -1 2 ∇ µ p µ -1 4 p µ p µ ]. (3.1)
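For the reader's convenience (our added one-line computation, not in the original text), these values follow from I_{α,k}(r_0, . . . , r_0) = r_0^{-α}/k! and the list (4.1) below: G_q(1, 1) = I_{d/2,1}(1, 1) = 1, G_{∇p}(1, 1) = F_{∂v}(1, 1) = -I_{d/2+1,2}(1, 1, 1) = -1/2, and G_{p,p}(1, 1, 1) = F_{v,v}(1, 1, 1) = -½ I_{d/2+1,2}(1, 1, 1) = -1/4; in particular these values are independent of d.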
As in Prop. 2.1, we can change ∇_µ to ∇̃_µ := ∇_µ + φ_µ and solve φ_µ in order to absorb the term p^µ∇_µ. From (2.4) with u = 1, one has
g^{µν}∇̃_µ∇̃_ν + q̃ = g^{µν}(∇_µ + φ_µ)(∇_ν + φ_ν) + q̃ = g^{µν}∇_µ∇_ν + g^{µν}(∇_µφ_ν) + 2g^{µν}φ_ν∇_µ + g^{µν}φ_µφ_ν + q̃ =: g^{µν}∇_µ∇_ν + p^µ∇_µ + q
with p^µ = 2g^{µν}φ_ν and q = q̃ + g^{µν}(∇_µφ_ν) + g^{µν}φ_µφ_ν. This is solved for φ_µ = ½ g_{µν}p^ν and implies
q̃ = q - ½ ∇_µp^µ - ¼ p_µp^µ. Injected into (3.1), this gives R_2 = (1/(2^d π^{d/2}))[ (1/6)R + q̃ ] as in [23, Theorem 3.3.1].
Conformal like transformed Laplacian
Let us consider a positive invertible element k ∈ Γ(End(V)), a covariant derivative ∇_µ on V, and let ∆ = -g^{µν}∇_µ∇_ν be its associated Laplacian. Motivated by the conformal deformations worked out in [Connes–Tretkoff; 8; Fathizadeh–Khalkhali], we consider the operator
P := k∆k = -g µν k 2 ∇ µ ∇ ν -2g µν k( ∇ µ k) ∇ ν + k(∆k).
Actually, it is worthwhile to quote that the conformal change of metrics on the noncommutative tori is not as straightforward as copying the notation of the commutative tori since deep theories in operator algebras are involved. However, the result obtained in this section for the operator P will be used to compute R 2 for the noncommutative tori in Section 5.
The operator P can be written as in (2.4) with
u = k 2 , p ν = g µν k( ∇ µ k) -g µν ( ∇ µ k)k, q = -k(∆k).
Application of Theorem 2.4 gives
R k∆k 2 = 1 2 d π d/2 1 6 R r -d/2+1 0 E r 0 + F k∆k ∆k (r 0 , r 1 ) E r 0 (∆k)E r 1 + F k∆k ∇k∇k (r 0 , r 1 , r 2 )g µν E r 0 (∇ µ k)E r 1 (∇ ν k)E r 2 (3.2)
with
F^{k∆k}_{∆k}(r_0, r_1) = -√r_0 G_q(r_0, r_1) - (√r_0 + √r_1) G_{∇∇u}(r_0, r_1) - (√r_0 - √r_1) G_{∇p}(r_0, r_1), (3.3)
F^{k∆k}_{∇k∇k}(r_0, r_1, r_2) = 2G_{∇∇u}(r_0, r_2) + (√r_0 + √r_1)(√r_1 + √r_2) G_{∇u,∇u}(r_0, r_1, r_2) + (√r_0 - √r_1)(√r_1 + √r_2) G_{p,∇u}(r_0, r_1, r_2) + (√r_0 + √r_1)(√r_1 - √r_2) G_{∇u,p}(r_0, r_1, r_2) + (√r_0 - √r_1)(√r_1 - √r_2) G_{p,p}(r_0, r_1, r_2). (3.4)
Using Proposition 2.7, one has
F k∆k ∆k (r 0 , r 1 ) = - √ r 0 r 1 ( √ r 0 + √ r 1 )[G q (r 0 , r 1 ) + 2G ∇ ∇u (r 0 , r 1 )] r 0 + r 1 .
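This simplification is a direct consequence of the first relation in Proposition 2.7 and can be reproduced symbolically (a SymPy sketch of ours, with s_i = √r_i and G_q, G_{∇∇u} kept as opaque symbols):

```python
import sympy as sp

s0, s1 = sp.symbols('s0 s1', positive=True)   # s_i = sqrt(r_i)
Gq, Gddu = sp.symbols('Gq Gddu')              # G_q(r0, r1), G_{nabla nabla u}(r0, r1)

Gdp = -(s0**2 * Gq + (s0**2 - s1**2) * Gddu) / (s0**2 + s1**2)   # Proposition 2.7
F = -s0 * Gq - (s0 + s1) * Gddu - (s0 - s1) * Gdp                # eq. (3.3)
target = -s0 * s1 * (s0 + s1) * (Gq + 2 * Gddu) / (s0**2 + s1**2)
assert sp.simplify(F - target) == 0
```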
Details of the computations
In this section we give some details on the computations to establish Theorems 2.3 and 2.4. These computations can be done by hand, but the reader can also follow [Iochum–Masson].
Establishing (2.19) requires computing the terms in the sum (2.16), which itself requires computing the arguments B_1 ⊗ · · · ⊗ B_k and the operators T_{k,p}.
For r = 2, the list of arguments has been evaluated in [1, Section 4.1] (starting with the expression (1.1) of P) as well as their contractions with the tensor G(g)^{µ_1...µ_{2p}}. We make use of these results below. Then the computation of the operators T_{k,p} reduces to the computation of the universal spectral functions I_{d/2+p,k}. As noticed in [1], only the values k = 1, 2, 3, 4 have to be considered.
Below is the list of the evaluation of these arguments in the corresponding operators T k,p , where the following functional calculus rule (and its obvious generalizations) is used
f (r 0 , r 1 , . . . , r k ) E r 0 u n 0 E r 1 u n 1 E r 2 • • • u n k-1 E r k = r n 0 +n 1 +•••+n k-1 0 f (r 0 , r 0 , . . . , r 0 ) E r 0 ,
where summations over r_i in the spectrum of u are omitted. In the following, the symbol ⇝ is used to symbolize this evaluation.
For k = 1, there is only one argument:
w ⇝ I_{d/2,1}(r_0, r_1) E_{r_0} w E_{r_1}.
For k = 2, one has:
-1 2 g µν g ρσ (∂ µ ∂ ν g ρσ ) u ⊗ u -1 2 g µν g ρσ (∂ µ ∂ ν g ρσ ) r 2 0 I d/2+1,2 (r 0 , r 0 , r 0 ) E r 0 , -g µν g ρσ (∂ ν g ρσ ) u ⊗ ∂ µ u -g µν g ρσ (∂ ν g ρσ ) r 0 I d/2+1,2 (r 0 , r 0 , r 1 ) E r 0 (∂ µ u)E r 1 , -d 2 g µν u ⊗ ∂ µ ∂ ν u -d 2 g µν r 0 I d/2+1,2 (r 0 , r 0 , r 1 ) E r 0 (∂ µ ∂ ν u)E r 1 , -1 2 g ρσ (∂ µ g ρσ ) v µ ⊗ u -1 2 g ρσ (∂ µ g ρσ ) r 1 I d/2+1,2 (r 0 , r 1 , r 1 ) E r 0 v µ E r 1 , -d 2 v µ ⊗ ∂ µ u -d 2 I d/2+1,2 (r 0 , r 1 , r 2 ) E r 0 v µ E r 1 (∂ µ u)E r 2 , -1 2 g µν v µ ⊗ v ν -1 2 g µν I d/2+1,2 (r 0 , r 1 , r 2 ) E r 0 v µ E r 1 v ν E r 2 , -u ⊗ ∂ µ v µ -r 0 I d/2+1,2 (r 0 , r 0 , r 1 ) E r 0 (∂ µ v µ )E r 1 .
For k = 3, one has:
g µν g ρσ (∂ µ ∂ ν g ρσ ) + 2(∂ µ ∂ ν g µν ) + g ρσ (∂ µ g µν )(∂ ν g ρσ ) + 2g ρσ (∂ µ g νρ )(∂ ν g µσ ) + 1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) + g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) u ⊗ u ⊗ u g µν g ρσ (∂ µ ∂ ν g ρσ ) + 2(∂ µ ∂ ν g µν ) + g ρσ µ g µν )(∂ ν g ρσ ) + 2g ρσ (∂ µ g νρ )(∂ ν g µσ ) + 1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) + g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) r 3 0 I d/2+2,3 (r 0 , r 0 , r 0 , r 0 ) E r 0 , (d + 6)[ 1 2 g µν g ρσ (∂ ν g ρσ ) + (∂ ν g µν )] u ⊗ u ⊗ ∂ µ u (d + 6)[ 1 2 g µν g ρσ (∂ ν g ρσ ) + (∂ ν g µν )] r 2 0 I d/2+2,3 (r 0 , r 0 , r 0 , r 1 ) E r 0 (∂ µ u)E r 1 , [ d+4 2 g µν g ρσ (∂ ν g ρσ ) + 2(∂ ν g µν )] u ⊗ ∂ µ u ⊗ u [ d+4 2 g µν g ρσ (∂ ν g ρσ ) + 2(∂ ν g µν )] r 0 r 1 I d/2+2,3 (r 0 , r 0 , r 1 , r 1 ) E r 0 (∂ µ u)E r 1 , (d+2) 2 2 g µν u ⊗ ∂ µ u ⊗ ∂ ν u (d+2) 2 2 g µν r 0 I d/2+2,3 (r 0 , r 0 , r 1 , r 2 ) E r 0 (∂ µ u)E r 1 (∂ ν u)E r 2 , (d + 2)g µν u ⊗ u ⊗ ∂ µ ∂ ν u (d + 2)g µν r 2 0 I d/2+2,3 (r 0 , r 0 , r 0 , r 1 ) E r 0 (∂ µ ∂ ν u)E r 1 , [ 1 2 g ρσ (∂ µ g ρσ ) + g µν (∂ ρ g ρν )](v µ ⊗ u ⊗ u + u ⊗ v µ ⊗ u + u ⊗ u ⊗ v µ ) [ 1 2 g ρσ (∂ µ g ρσ ) + g µν (∂ ρ g ρν )] r 2 1 I d/2+2,3 (r 0 , r 1 , r 1 , r 1 ) + r 0 r 1 I d/2+2,3 (r 0 , r 0 , r 1 , r 1 ) + r 2 0 I d/2+2,3 (r 0 , r 0 , r 0 , r 1 ) E r 0 v µ E r 1 , d+2 2 (v µ ⊗ u ⊗ ∂ µ u) d+2 2 r 1 I d/2+2,3 (r 0 , r 1 , r 1 , r 2 ) E r 0 v µ E r 1 (∂ µ u)E r 2 , d+2 2 (u ⊗ ∂ µ u ⊗ v µ ) d+2 2 r 0 I d/2+2,3 (r 0 , r 0 , r 1 , r 2 ) E r 0 (∂ µ u)E r 1 v µ E r 2 , d+2 2 (u ⊗ v µ ⊗ ∂ µ u) d+2 2 r 0 I d/2+2,3 (r 0 , r 0 , r 1 , r 2 ) E r 0 v µ E r 1 (∂ µ u)E r 2 .
For k = 4, one has:
3 -1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) -g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) -2g ρσ (∂ µ g µν )(∂ ν g ρσ ) -2g ρσ (∂ µ g µρ )(∂ ν g νσ ) -2g ρσ (∂ µ g νρ )(∂ ν g µσ ) u ⊗ u ⊗ u ⊗ u 3 -1 2 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) -g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) -2g ρσ (∂ µ g µν )(∂ ν g ρσ ) -2g ρσ (∂ µ g µρ )(∂ ν g νσ ) -2g ρσ (∂ µ g νρ )(∂ ν g µσ ) r 4 0 I d/2+3,4 (r 0 , r 0 , r 0 , r 0 , r 0 ) E r 0 , -(d + 4)[ 1 2 g µν g ρσ (∂ ν g ρσ ) + (∂ ν g µν )][3 u ⊗ u ⊗ u ⊗ ∂ µ u + 2 u ⊗ u ⊗ ∂ µ u ⊗ u + u ⊗ ∂ µ u ⊗ u ⊗ u] -(d + 4)[ 1 2 g µν g ρσ (∂ ν g ρσ ) + (∂ ν g µν )][3r 3 0 I d/2+3,4 (r 0 , r 0 , r 0 , r 0 , r 1 ) + 2r 2 0 r 1 I d/2+3,4 (r 0 , r 0 , r 0 , r 1 , r 1 ) + r 0 r 2 1 I d/2+3,4 (r 0 , r 0 , r 1 , r 1 , r 1 )]E r 0 (∂ µ u)E r 1 , -1 2 (d + 4)(d + 2)g µν (2 u ⊗ u ⊗ ∂ µ u ⊗ ∂ ν u + u ⊗ ∂ µ u ⊗ u ⊗ ∂ ν u) -1 2 (d + 4)(d + 2)g µν [2r 2 0 I d/2+3,4 (r 0 , r 0 , r 0 , r 1 , r 2 ) + r 0 r 1 I d/2+3,4 (r 0 , r 0 , r 1 , r 1 , r 2 )] E r 0 (∂ µ u)E r 1 (∂ ν u)E r 2 .
The coefficient c and the spectral functions F are evaluated by collecting these terms:
c := 1 3 (∂ µ ∂ ν g µν ) -1 12 g µν g ρσ (∂ µ ∂ ν g ρσ ) + 1 48 g µν g ρσ g αβ (∂ µ g ρσ )(∂ ν g αβ ) + 1 24 g µν g ρσ g αβ (∂ µ g ρα )(∂ ν g σβ ) -1 12 g ρσ (∂ µ g µν )(∂ ν g ρσ ) + 1 12 g ρσ (∂ µ g νρ )(∂ ν g µσ ) -1 4 g ρσ (∂ µ g µρ )(∂ ν g νσ ), F w (r 0 , r 1 ) := I d/2,1 (r 0 , r 1 ), F ∂v (r 0 , r 1 ) := -r 0 I d/2+1,2 (r 0 , r 0 , r 1 ), F ∂∂u (r 0 , r 1 ) := -d 2 r 0 I d/2+1,2 (r 0 , r 0 , r 1 ) + (d + 2) r 2 0 I d/2+2,3 (r 0 , r 0 , r 0 , r 1 ), F µ ∂u (r 0 , r 1 ) := -α µ r 0 I d/2+1,2 (r 0 , r 0 , r 1 ) + (d + 6)[ 1 2 α µ + β µ ] r 2 0 I d/2+2,3 (r 0 , r 0 , r 0 , r 1 ) + [ d+4 2 α µ + 2β µ ] r 0 r 1 I d/2+2,3 (r 0 , r 0 , r 1 , r 1 ) -(d + 4)[ 1 2 α µ + β µ ][3r 3 0 I d/2+3,4 (r 0 , r 0 , r 0 , r 0 , r 1 ) + 2r 2 0 r 1 I d/2+3,4 (r 0 , r 0 , r 0 , r 1 , r 1 ) + r 0 r 2 1 I d/2+3,4 (r 0 , r 0 , r 1 , r 1 , r 1 )], F v,µ (r 0 , r 1 ) := -1 2 α µ r 1 I d/2+1,2 (r 0 , r 1 , r 1 ) + [ 1 2 α µ + β µ ] [r 2 1 I d/2+2,3 (r 0 , r 1 , r 1 , r 1 ) + r 0 r 1 I d/2+2,3 (r 0 , r 0 , r 1 , r 1 ) + r 2 0 I d/2+2,3 (r 0 , r 0 , r 0 , r 1 )], F v,v (r 0 , r 1 , r 2 ) := -1 2 I d/2+1,2 (r 0 , r 1 , r 2 ), F ∂u,v (r 0 , r 1 , r 2 ) := d+2 2 r 0 I d/2+2,3 (r 0 , r 0 , r 1 , r 2 ), F v,∂u (r 0 , r 1 , r 2 ) := -d 2 I d/2+1,2 (r 0 , r 1 , r 2 ) + d+2 2 r 1 I d/2+2,3 (r 0 , r 1 , r 1 , r 2 ) + d+2 2 r 0 I d/2+2,3 (r 0 , r 0 , r 1 , r 2 ), F ∂u,∂u (r 0 , r 1 , r 2 ) := (d+2) 2 2 r 0 I d/2+2,3 (r 0 , r 0 , r 1 , r 2 ) -(d+4)(d+2) 2 [2r 2 0 I d/2+3,4 (r 0 , r 0 , r 0 , r 1 , r 2 ) + r 0 r 1 I d/2+3,4 (r 0 , r 0 , r 1 , r 1 , r 2 )]. (4.1)
As in [1, Section 4.3], the strategy to compute (2.20) could be to make the change of variables (u, v µ , w) → (u, p µ , q). Here we use another strategy which simplifies the computation since it is based on (2.8), (2.9), and (2.10).
Indeed, as already noticed, one can apply verbatim the computation of the arguments and their contractions with ∂ µ replaced by ∇ µ , and at the same time, using
p µ + g µν (∇ ν u) -[ 1 2 α µ -β µ ]u in place of v µ ,
and q in place of w. So, (2.19) can be replaced by c r
-d/2+1 0 E r 0 + F µ ∂u (r 0 , r 1 ) E r 0 (∇ µ u)E r 1 + g µν F ∂∂u (r 0 , r 1 ) E r 0 (∇ µ ∇ ν u)E r 1 + g µν F ∂u,∂u (r 0 , r 1 , r 2 ) E r 0 (∇ µ u)E r 1 (∇ ν u)E r 2 + F w (r 0 , r 1 ) E r 0 wE r 1 + F v,µ (r 0 , r 1 ) E r 0 v µ E r 1 + F v,∂u (r 0 , r 1 , r 2 ) E r 0 v µ E r 1 (∇ µ u)E r 2 + F ∂u,v (r 0 , r 1 , r 2 ) aE r 0 (∇ µ u)E r 1 v µ E r 2 + g µν F v,v (r 0 , r 1 , r 2 ) E r 0 v µ E r 1 v ν E r 2 + F ∂v (r 0 , r 1 ) E r 0 (∇ µ v µ )E r 1 .
The next step is to replace ∇ µ by ∇ µ . We use (2.2), (2.3) and
∇ µ v µ = ∇ µ p µ + (∂ µ g µν )(∇ ν u) + g µν (∇ µ ∇ ν u) -[ 1 2 ∂ µ α µ -∂ µ β µ ]u -[ 1 2 α µ -β µ ](∇ µ u) = ∇ µ p µ + g µν ( ∇ µ ∇ ν u) + β µ ( ∇ µ u) -[ 1 2 ∂ µ α µ -∂ µ β µ ]u = ∇ µ p µ + 1 2 α µ p µ + g µν ( ∇ µ ∇ ν u) + β µ ( ∇ µ u) -[ 1 2 ∂ µ α µ -∂ µ β µ ]u.
This leads to a new expression containing the terms in (2.20), with the functions given in the list after Theorem 2.4, to which we have to add the two terms
G µ ∇u (r 0 , r 1 ) E r 0 ( ∇ µ u)E r 1 + G p,µ (r 0 , r 1 )E r 0 p µ E r 1 , with G µ ∇u (r 0 , r 1 ) = F µ ∂u (r 0 , r 1 ) + g µν F v,ν (r 0 , r 1 ) + β µ F ∂v (r 0 , r 1 ) + [ 1 2 α µ -β µ ][F ∂∂u (r 0 , r 1 ) -r 0 F v,∂u (r 0 , r 0 , r 1 ) -r 1 F ∂u,v (r 0 , r 1 , r 1 ) -r 0 F v,v (r 0 , r 0 , r 1 ) -r 1 F v,v (r 0 , r 1 , r 1 )], G p,µ (r 0 , r 1 ) = F v,µ (r 0 , r 1 ) + 1 2 α µ F ∂v (r 0 , r 1 ) -[ 1 2 α µ -β µ ] [r 0 F v,v (r 0 , r 0 , r 1 ) + r 1 F v,v (r 0 , r 1 , r 1 )].
A direct computation performed using the expressions of the spectral functions F in terms of I_{d/2,1} shows that G^µ_{∇u}(r_0, r_1) = G_{p,µ}(r_0, r_1) = 0, and the following symmetries: G_q(r_0, r_1) = G_q(r_1, r_0), G_{∇∇u}(r_0, r_1) = G_{∇∇u}(r_1, r_0), G_{∇u,∇u}(r_0, r_1, r_2) = G_{∇u,∇u}(r_2, r_1, r_0), so that, using Prop. 2.7, one gets
G ∇u,p (r 0 , r 1 , r 2 ) = -G p, ∇u (r 2 , r 1 , r 0 ), G p,p (r 0 , r 1 , r 2 ) = G p,p (r 2 , r 1 , r 0 ).
The coefficient in front of r_0^{-d/2+1} E_{r_0} is
(1/6)R = c - ¼ α_µβ^µ + ½ β_µβ^µ + ¼ ∂_µα^µ - ½ ∂_µβ^µ - (1/16) α_µα^µ + ¼ α_µβ^µ - ¼ β_µβ^µ,
where R is the scalar curvature of the metric g.
The spectral functions G can be written in terms of log functions for d = 2 (see [1, Cor. 3.3]), as Laurent polynomials for d ≥ 4 even (see [1, Prop. 3.5]), and in terms of square roots of the r_i for d odd (see [1, Prop. 3.4]). This completes the proof of Corollaries 2.8, 2.9, and 2.11.
Applications to the noncommutative torus
In this section, we first apply Theorem 2.4 to the noncommutative 2-torus at rational values of the deformation parameter θ, for which it is known that we get a geometrical description in terms of sections of a fiber bundle. Some computations of a_2(a, P) for specific operators P have been performed at irrational values of θ to determine the so-called scalar curvature (our R_2): see [8; 11; 14] and the related works of Connes–Tretkoff, Fathizadeh–Khalkhali, Dąbrowski–Sitarz, Sitarz, Liu, Sadeghi, and Connes–Fathizadeh on Gauss–Bonnet theorems, curved and asymmetric noncommutative tori, toric noncommutative manifolds, the logarithmic Sobolev inequality, and the a_4 term.
We now show that we can apply our general result at rational values of θ and get the same expressions for the scalar curvature R 2 which appears to be written in terms of θ-independent spectral functions. In particular, its expression is the same for rational and irrational θ.
Let Θ ∈ M d (R) be a skew-symmetric real matrix. The noncommutative d-dimensional torus C(T d Θ ) is defined as the universal unital C * -algebra generated by unitaries U k , k = 1, . . . , d, satisfying the relations
U_k U_ℓ = e^{2iπΘ_{kℓ}} U_ℓ U_k. (5.1)
This C*-algebra contains, as a dense sub-algebra, the space of smooth elements for the natural action of the d-dimensional torus T^d on C(T^d_Θ). This sub-algebra is described as elements in C(T^d_Θ) with an expansion
a = Σ_{(k_i)∈Z^d} a_{k_1,...,k_d} U_1^{k_1} · · · U_d^{k_d},
where the sequence (a_{k_1,...,k_d}) belongs to the Schwartz space S(Z^d). We denote by C∞(T^d_Θ) this algebra. The C*-algebra C(T^d_Θ) has a unique normalized faithful positive trace t whose restriction on smooth elements is given by
t(Σ_{(k_i)∈Z^d} a_{k_1,...,k_d} U_1^{k_1} · · · U_d^{k_d}) := a_{0,...,0}. (5.2)
This trace satisfies t (1) = 1 where 1 in the unit element of C(T d Θ ). The smooth algebra C ∞ (T d Θ ) has d canonical derivations δ µ , µ = 1, . . . , d, defined on the generators by δ µ (U k ) := δ µ,k iU k .
(5.3)
For any a ∈ C(T^d_Θ), one has δ_µ(a*) = (δ_µa)* (real derivations). Denote by H the Hilbert space of the GNS representation of C(T^d_Θ) defined by t. Each derivation δ_µ defines an unbounded operator on H, denoted also by δ_µ, which satisfies δ_µ† = -δ_µ (here † denotes the adjoint of the operator).
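On monomials, (5.3) gives for instance (an elementary consequence we spell out, used implicitly in Section 5.2): δ_µ(U_1^{k_1} · · · U_d^{k_d}) = i k_µ U_1^{k_1} · · · U_d^{k_d}, so that each monomial is a joint eigenvector of the derivations δ_µ with eigenvalue ik_µ, k_µ ∈ Z.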
The geometry of the rational noncommutative tori
In the following, we consider the special case of even dimensional noncommutative tori, d = 2m, with
Θ = diag(θ_1 χ, . . . , θ_m χ), where χ := ( 0 1 ; -1 0 ), (5.4)
for a family of deformation parameters θ_1, . . . , θ_m. Then C(T^{2m}_Θ) ≃ C(T²_{Θ_1}) ⊗ · · · ⊗ C(T²_{Θ_m}).
When d = 2 and θ = p/q, where p, q are relatively prime integers and q > 0, it is known that C(T²_Θ) ≃ Γ(A_θ) is isomorphic to the algebra of continuous sections of a fiber bundle A_θ in M_q(C) algebras over a 2-torus T²_B, as recalled in Section A. Similarly, for d = 4, with θ_1 = p_1/q_1 and θ_2 = p_2/q_2, C(T⁴_Θ) is the space of sections of a fiber bundle in M_{q_1q_2}(C) algebras over a 4-torus T²_{B,1} × T²_{B,2}. Moreover, in the identification C∞(T²_Θ) ≃ Γ∞(A_θ), the two derivations δ_µ are the two components of the unique flat connection ∇_µ on A_θ.
This geometrical description allows to use the results of Section 2 to compute a_2(a, P) for a differential operator on H of the form P = -g^{µν}uδ_µδ_ν - [p^ν + g^{µν}(δ_µu) - (½α^ν - β^ν)u]δ_ν - q.
The noncommutative two torus
In this section, we compute the coefficient a_2(a, P) on the rational noncommutative two torus for a differential operator P considered in [Connes–Tretkoff; Fathizadeh–Khalkhali; 8] for the irrational noncommutative two torus. Let us introduce the following notations.
Let τ = τ_1 + iτ_2 ∈ C with nonzero imaginary part. We consider the constant metric g defined by
g^{11} = 1, g^{12} = g^{21} = ℜ(τ) = τ_1, g^{22} = |τ|²,
with inverse matrix
g_{11} = |τ|²/ℑ(τ)² = |τ|²/τ_2², g_{12} = g_{21} = -ℜ(τ)/ℑ(τ)² = -τ_1/τ_2², g_{22} = 1/ℑ(τ)² = 1/τ_2².
We will use the constant tensors
ε^1 := 1, ε^2 := τ, ε̄^1 = 1, ε̄^2 = τ̄, h^{µν} := ε̄^µ ε^ν, which imply h^{11} = 1, h^{12} = τ, h^{21} = τ̄, h^{22} = |τ|².
Then the symmetric part of h^{µν} is the metric, g^{µν} = ½(h^{µν} + h^{νµ}), and g_{µν} ε̄^µ ε̄^ν = 0.
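The null relation g_{µν}ε̄^µε̄^ν = 0 encodes the fact that ε̄ is an isotropic vector for g; it is a one-line computation (a SymPy sketch of ours):

```python
import sympy as sp

t1, t2 = sp.symbols('tau1 tau2', real=True)
tau, taub = t1 + sp.I * t2, t1 - sp.I * t2
# entries of the lower-index metric g_{mu nu}
g11, g12, g22 = (tau * taub) / t2**2, -t1 / t2**2, 1 / t2**2
assert sp.expand(g11 + 2 * g12 * taub + g22 * taub**2) == 0
```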
On the (GNS) Hilbert space H, consider the following operators δ, δ† and the Laplacian:
δ := ε̄^µ δ_µ = δ_1 + τ̄ δ_2, δ† = -ε^µ δ_µ = -δ_1 - τ δ_2, ∆ := δ†δ = -ε^µ ε̄^ν δ_µ δ_ν = -h^{µν} δ_µ δ_ν = -g^{µν} δ_µ δ_ν. For k ∈ C∞(T^d_Θ)
, k > 0, the operator P is defined as
P := ( P_1 0 ; 0 P_2 ) (5.5)
with
P_1 := k∆k = -g^{µν}kδ_µδ_νk = -g^{µν}k²δ_µδ_ν - 2g^{µν}k(δ_νk)δ_µ - g^{µν}k(δ_µδ_νk) =: -u_1 g^{µν}δ_µδ_ν - v_1^µ δ_µ - w_1,
P_2 := δ†k²δ = -ε^ν ε̄^µ δ_ν k² δ_µ = -h^{µν}(δ_νk²)δ_µ - h^{µν}k²δ_µδ_ν = -g^{µν}k²δ_µδ_ν - h^{µν}(δ_νk²)δ_µ =: -u_2 g^{µν}δ_µδ_ν - v_2^µ δ_µ - w_2,
so that
u_1 = k², v_1^µ = 2g^{µν}k(δ_νk), w_1 = -k(∆k),
u_2 = k², v_2^µ = h^{µν}(δ_νk²) = h^{µν}[k(δ_νk) + (δ_νk)k], w_2 = 0.
For the forthcoming computations, since the metric g is constant, we have c = R = α µ = β µ = 0, ∇ µ = ∇ µ = δ µ (the last equality being a property of the geometrical presentation of C ∞ (T d Θ ), as recalled above), F µ ∂u = F v,µ = 0. Here |g| 1/2 = det(g µν ) 1/2 = τ -1 2 . We can write P 1 and P 2 in the covariant form (2.1) with
u 1 = k 2 , p µ 1 = g µν k(δ ν k) -(δ ν k)k , q 1 = -k(∆k), u 2 = k 2 , p µ 2 = (h µν -g µν ) k(δ ν k) + (δ ν k)k , q 2 = 0.
Let f µν := 1 2 (h µν -h νµ ), so that h µν = g µν + f µν and h νµ = g µν -f µν , and define
Q g (a, b, c) := √ a(a √ b + 3a √ c - √ ac - √ abc -2b √ c) (a -b)( √ a - √ b)( √ a - √ c) 3 , Q f (a, b, c) := a( √ b + √ c) (a -b)( √ a - √ b)(a -c) ,
with the following (spectral) functions
F_∆k(r_0, r_1) := (r_0 - r_1 - √(r_0 r_1) ln(r_0/r_1)) / (√r_0 - √r_1)³, (5.6)
F µν ∂k∂k (r 0 , r 1 , r 2 ) := g µν ( √ r 0 + √ r 2 )( √ r 0 -2 √ r 1 + √ r 2 ) + f µν ( √ r 0 - √ r 2 ) 2 ( √ r 0 - √ r 1 )( √ r 0 - √ r 2 ) 2 ( √ r 1 - √ r 2 ) + [g µν Q g (r 0 , r 1 , r 2 ) -f µν Q f (r 0 , r 1 , r 2 )] ln(r 0 /r 1 ) + [g µν Q g (r 2 , r 1 , r 0 ) -f µν Q f (r 2 , r 1 , r 0 )] ln(r 2 /r 1 ). (5.7)
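As guaranteed by the continuity of the universal spectral functions, F_∆k extends continuously to coinciding arguments, with limit F_∆k(r, r) = 1/(3√r) (a quick numerical check of ours):

```python
import numpy as np

def F_Delta_k(r0, r1):
    s0, s1 = np.sqrt(r0), np.sqrt(r1)
    return (r0 - r1 - s0 * s1 * np.log(r0 / r1)) / (s0 - s1) ** 3

r = 4.0
assert abs(F_Delta_k(r * (1 + 1e-3), r) - 1 / (3 * np.sqrt(r))) < 1e-3
```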
As in Section B, we use in the following result the simplified notation ϕ(a) instead of ϕ • L • S(a), where ϕ is defined in (1.5), while L and S are defined in Section A. Moreover, ϕ and the trace t defined in (5.2) are related by the normalization (A.3). Proposition 5.1 For the 2-dimensional noncommutative torus at rational values of the deformation parameter θ, one has a 2 (a, P ) = ϕ(aR 2 ) for any a ∈ C(T 2 Θ ) with
R 2 = 1 4 π [F ∆k (r 0 , r 1 ) E r 0 (∆k)E r 1 + F µν ∂k∂k (r 0 , r 1 , r 2 ) E r 0 (δ µ k)E r 1 (δ ν k)E r 2 ]. (5.8)
Since we are in dimension d = 2, the appearance of the log function in this result is expected. It is shown in Appendix B that this result coincides with a previous one in [8, Theorem 5.2] for the irrational noncommutative two torus.
While R_2 does depend on the deformation parameter θ, and in particular on whether it is irrational or not, the spectral functions F_∆k and F^{µν}_{∂k∂k} do not. This universality was obtained in [8]. Nevertheless, the fact that R_2 can be written in terms of θ-independent spectral functions needs a more conceptual interpretation.
The spectrum of the differential operator P depends on the differential operators δ_µ and on some multiplication operators by elements of the algebra (written here in terms of k and its derivatives δ_µk, δ_µδ_νk). On the one hand, the spectrum of the closed extension of the operator δ_µ in the Hilbert space of the GNS representation consists only of eigenvalues ik_µ, k_µ ∈ Z, associated to eigenvectors U_1^{k_1} · · · U_d^{k_d}, so that it does not depend explicitly on θ. On the other hand, the computations of R_2, performed here or in [8; Connes–Tretkoff; Fathizadeh–Khalkhali], are based on formal manipulations of the product in the algebra; in particular they do not use the defining relations (5.1). This explains why these methods bypass the θ dependency and give rise to some expressions in terms of θ-independent spectral functions. Notice that for specific values of θ, for instance θ = 0 (the commutative case), these expressions can be simplified. So, one has to look at (5.8) as a "θ universal" expression for R_2.
Proof One has a_2(a, P) = a_2(a, P_1) + a_2(a, P_2). Denote by R_2 (resp. R_2^{(1)}, R_2^{(2)}) the expressions associated to P (resp. P_1, P_2). Then one has R_2 = R_2^{(1)} + R_2^{(2)}.
The operator P_1 is a conformal like transformed Laplacian, so the computation of R_2^{(1)} is a direct consequence of (3.2) in Section 3.3. Here the metric is constant, so that R = 0, and it remains
R (1) 2 =: 1 4 π F (1)∆k (r 0 , r 1 ) E r 0 (∆k)E r 1 + F µν (1)∂k∂k (r 0 , r 1 , r 2 ) E r 0 (δ µ k)E r 1 (δ ν k)E r 2 ,
where, using (3.3) and (3.4),
F_{(1)∆k}(r_0, r_1) = -√r_0 G_q(r_0, r_1) - (√r_0 + √r_1) G_{∇∇u}(r_0, r_1) - (√r_0 - √r_1) G_{∇p}(r_0, r_1),
F^{µν}_{(1)∂k∂k}(r_0, r_1, r_2) = 2g^{µν} G_{∇∇u}(r_0, r_2) + g^{µν}(√r_0 + √r_1)(√r_1 + √r_2) G_{∇u,∇u}(r_0, r_1, r_2) + g^{µν}(√r_0 - √r_1)(√r_1 + √r_2) G_{p,∇u}(r_0, r_1, r_2) + g^{µν}(√r_0 + √r_1)(√r_1 - √r_2) G_{∇u,p}(r_0, r_1, r_2) + g^{µν}(√r_0 - √r_1)(√r_1 - √r_2) G_{p,p}(r_0, r_1, r_2).
For the operator P 2 , one applies Theorem 2.4:
R (2) 2 = 1 4 π ( √ r 0 + √ r 1 ) G ∇ ∇u (r 0 , r 1 ) g µν E r 0 (δ µ δ ν k)E r 1 + 2G ∇ ∇u (r 0 , r 2 ) g µν E r 0 (δ µ k)E r 1 (δ ν k)E r 2 + ( √ r 0 + √ r 1 )( √ r 1 + √ r 2 ) G ∇u, ∇u (r 0 , r 1 , r 2 ) g µν E r 0 (δ µ k)E r 1 (δ ν k)E r 2 + ( √ r 0 + √ r 1 )( √ r 1 + √ r 2 ) G p, ∇u (r 0 , r 1 , r 2 ) (h νµ -g µν )E r 0 (δ µ k)E r 1 (δ ν k)E r 2 + ( √ r 0 + √ r 1 )( √ r 1 + √ r 2 ) G ∇u,p (r 0 , r 1 , r 2 ) (h µν -g µν )E r 0 (δ µ k)E r 1 (δ ν k)E r 2 -( √ r 0 + √ r 1 )( √ r 1 + √ r 2 ) G p,p (r 0 , r 1 , r 2 ) g µν E r 0 (δ µ k)E r 1 (δ ν k)E r 2 =: 1 4 π [F (2)∆k (r 0 , r 1 ) E r 0 (∆k)E r 1 + F µν (2)∂k∂k (r 0 , r 1 , r 2 ) E r 0 (δ µ k)E r 1 (δ ν k)E r 2 ],
with
F_{(2)∆k}(r_0, r_1) = -(√r_0 + √r_1) G_{∇∇u}(r_0, r_1),
F^{µν}_{(2)∂k∂k}(r_0, r_1, r_2) = 2g^{µν} G_{∇∇u}(r_0, r_2) + g^{µν}(√r_0 + √r_1)(√r_1 + √r_2) G_{∇u,∇u}(r_0, r_1, r_2) + (h^{νµ} - g^{µν})(√r_0 + √r_1)(√r_1 + √r_2) G_{p,∇u}(r_0, r_1, r_2) + (h^{µν} - g^{µν})(√r_0 + √r_1)(√r_1 + √r_2) G_{∇u,p}(r_0, r_1, r_2) - g^{µν}(√r_0 + √r_1)(√r_1 + √r_2) G_{p,p}(r_0, r_1, r_2).
Then F_∆k := F_{(1)∆k} + F_{(2)∆k} and F^{µν}_{∂k∂k} := F^{µν}_{(1)∂k∂k} + F^{µν}_{(2)∂k∂k} simplify as in (5.6) and (5.7). The expression obtained for R_2 shows that it belongs to C(T²_Θ) and acts by left multiplication on H.
The noncommutative four torus
Our result applies to the computation of the conformally perturbed scalar curvature on the noncommutative four torus, computed in [11; 14]. In order to do that, as in dimension 2, we perform the computation at rational values of θ as described at the end of Appendix A.
The operator we consider is the one in [11], written as
∆_ϕ := k²∂̄_1k^{-2}∂_1k² + k²∂_1k^{-2}∂̄_1k² + k²∂̄_2k^{-2}∂_2k² + k²∂_2k^{-2}∂̄_2k²
with (in our notations) k² := e^h, ∂_1 := -δ_1 + iδ_3, ∂̄_1 := -δ_1 - iδ_3, ∂_2 := -δ_2 + iδ_4, and ∂̄_2 := -δ_2 - iδ_4. Indeed, in [11; 14], the derivations used are δ̃_µ = -iδ_µ. This leads to
∆_ϕ = -2[ k²g^{µν}δ_µδ_ν + g^{µν}(δ_νk²)δ_µ + g^{µν}(δ_µδ_νk²) - g^{µν}(δ_µk²)k^{-2}(δ_νk²) ] =: 2P.
The metric g^{µν} is the diagonal one in [11], but in the following computation, we only require g^{µν} to be constant. Let us mention that in [11; 14], the computation of the scalar curvature is done using P defined above (and not ∆_ϕ), since the symbol in [11, Lemma 3.6] is the one of P. So we will use P in the following. We get P =: -ug^{µν}δ_µδ_ν - v^µδ_µ - w with u = k², v^µ = g^{µν}(δ_νk²), w = g^{µν}(δ_µδ_νk²) - g^{µν}(δ_µk²)k^{-2}(δ_νk²).
Since g is constant, we have as before c = R = α µ = β µ = 0 and ∇ µ = ∇ µ = δ µ and this implies p µ = 0 and q = g µν (δ µ δ ν k 2 ) -g µν (δ µ k 2 )k -2 (δ ν k 2 ) in the covariant form (2.1). We then use the result of Corollary 2.10 to get the conformally perturbed scalar curvature:
R 2 = 1 2 5 π 2 [g µν k -2 (δ µ δ ν k 2 )k -2 -3 2 g µν k -2 (δ µ k 2 )k -2 (δ ν k 2 )k -2 ].
(5.9)
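For completeness, here is the arithmetic behind (5.9) (our spelled-out step): with R = 0, p^µ = 0, u = k² and q = g^{µν}(δ_µδ_νk²) - g^{µν}(δ_µk²)k^{-2}(δ_νk²), Corollary 2.10 gives
R_2 = (1/(16π²))[ k^{-2}qk^{-2} - ½ g^{µν}k^{-2}(δ_µδ_νk²)k^{-2} + ¼ g^{µν}k^{-2}(δ_µk²)k^{-2}(δ_νk²)k^{-2} ] = (1/(16π²))[ ½ g^{µν}k^{-2}(δ_µδ_νk²)k^{-2} - ¾ g^{µν}k^{-2}(δ_µk²)k^{-2}(δ_νk²)k^{-2} ],
which is (5.9) after factoring out ½.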
In Appendix B it is shown that we recover the result previously obtained in [11; 14] for the irrational noncommutative four torus.
Conclusion
In this paper, we have computed in all dimensions the local section R_2 of End(V) defined by a_2(a, P) = ∫_M tr[a(x)R_2(x)] dvol_g(x) for any section a of End(V) and for any nonminimal Laplace type operator P = -[g^{µν}u(x)∂_µ∂_ν + v^ν(x)∂_ν + w(x)] (Theorems 2.3 and 2.4). Expressions have been given for R_2 in small dimensions, d = 2, 3, 4 (Corollaries 2.8, 2.10, and 2.11) and for any even dimension d ≥ 2 (Corollary 2.9), where, as expected from the results in [1], polynomial expressions can be proposed.
Despite the difficulties, a_4(a, P) has been exhibited for d = 2 in [Connes–Fathizadeh] for the 2-dimensional noncommutative torus, leaving open the computation of R_4. We hope that our method could be used to reach R_4 in any dimension, using a computer algebra system, in the more general framework of an arbitrary P, like (1.1).
Our method still applies to more general settings than the NCT at rational values of the deformation parameter, namely to n-homogeneous C*-algebras, which can be characterized in terms of sections of fiber bundles with fiber space M_n(C) [Fell; Blackadar].
Acknowledgements
The authors are indebted to Valentin Zagrebnov for helpful discussions concerning some aspects of this paper, and to the referee for pertinent suggestions.
A. Geometrical identification of the noncommutative torus at rational values
For d = 2 and θ := p/q rational, with p, q relatively prime integers and q > 0, it is known (see [Gracia-Bondía–Várilly–Figueroa, Prop. 12.2] and [Dubois-Violette, Sect. 3]) that the algebra C(T²_Θ) of the NCT identifies with the algebra Γ(A_θ) of continuous sections of a fiber bundle A_θ in M_q(C) algebras over a 2-torus T²_B. Let us describe this identification.
Denote by T²_P the 2-torus given by identification of opposite sides of the square [0, 2π]². An element in T²_P is written as (e^{ix}, e^{iy}) for (x, y) ∈ [0, 2π]². There is a natural action of the (abelian discrete) group G := Z²_q on T²_P: (m, n) • (e^{ix}, e^{iy}) := (e^{i(x+2πpm/q)}, e^{i(y+2πpn/q)}). The quotient T²_B := T²_P/G is the 2-torus constructed by identification of opposite sides of the square [0, 2π/q]². Indeed, there are unique m ∈ Z_q and n ∈ Z_q such that e^{i(x+2πpm/q)} = e^{i(x+2π/q)} and e^{i(y+2πpn/q)} = e^{i(y+2π/q)}, so that (m, 0) (resp. (0, n)) identifies (e^{ix}, e^{iy}) with (e^{i(x+2π/q)}, e^{iy}) (resp. (e^{ix}, e^{iy}) with (e^{ix}, e^{i(y+2π/q)})) in T²_P/G. The quotient map T²_P → T²_B is a G-covering. Let us now consider the C*-algebra C(T²_P, M_q(C)) ≃ C(T²_P) ⊗ M_q(C) of matrix-valued continuous functions on T²_P, in which the space of smooth functions C∞(T²_P, M_q(C)) is a dense subalgebra. In order to describe this algebra, let us consider the two matrices U_0 (the cyclic shift matrix, with entries (U_0)_{j,j+1} = 1 for 1 ≤ j ≤ q-1, (U_0)_{q,1} = 1, and zero otherwise) and V_0 := diag(1, ξ_1, . . . , ξ_{q-1}), with ξ_n := e^{i2πnθ}, which satisfy U_0V_0 = e^{i2πθ}V_0U_0, U_0^q = V_0^q = 1_q. For (r, s) ∈ Z²_q, the U_0^rV_0^s's define a basis of M_q(C) such that tr[U_0^rV_0^s] = q δ_{(r,s),(0,0)} (here tr is the trace on M_q(C)). Then a ∈ C∞(T²_P, M_q(C)) can be decomposed as
a = Σ_{(r,s)∈Z²_q} a_{r,s} U_0^r V_0^s = Σ_{(k,ℓ)∈Z²} Σ_{(r,s)∈Z²_q} a_{k,ℓ,r,s} u^k v^ℓ U_0^r V_0^s, (A.1)
where, with u(x) := e^{ix} and v(x) := e^{iy}, the last decomposition is the Fourier series of the smooth functions a_{r,s} on T²_P. In particular, the a_{k,ℓ,r,s} are rapidly decreasing coefficients in terms of (k, ℓ) ∈ Z². The group G acts on M_q(C) by
Let us consider the subalgebra
In the form of (A.1), the G-equivariant elements in C∞(T²_P, M_q(C)) are such that their coefficients satisfy a_{k,ℓ,r,s} e^{i2πp(mk+nℓ)/q} = a_{k,ℓ,r,s} e^{i2πp(mr+ns)/q} for any (m, n) ∈ G, (k, ℓ) ∈ Z² and (r, s) ∈ Z²_q, so that a_{k,ℓ,r,s} ≠ 0 only when mk + nℓ ≡ mr + ns mod q. With (m, n) = (1, 0) and (0, 1) this implies k ≡ r mod q and ℓ ≡ s mod q. In (A.1), for a couple (k, ℓ) ∈ Z², there is a unique (r, s) ∈ Z²_q for which a_{k,ℓ,r,s} ≠ 0 (r and s are the remainders of the Euclidean divisions of k and ℓ by q). Then, the only non zero coefficients a_{k,ℓ,r,s} depend only on (k, ℓ) ∈ Z². We denote them by a_{k,ℓ}, and a smooth G-equivariant function a ∈ C∞_G(T²_P, M_q(C)) is then given by the expansion
identifies in a canonical way with the space Γ(A θ ) of continuous sections of the associated fiber bundle
By definition, A θ is the quotient of T 2 P × M q (C) by the equivalence relation ((m, n) • (e ix , e iy ), A) ∼ ((e ix , e iy ), (m, n) • A) for any (m, n) ∈ G. We denote by [(e ix , e iy ), A] ∈ A θ the class of ((e ix , e iy ), A). We denote by S : C G (T 2 P , M q (C)) → Γ(A θ ) the identification, defined by S(a)(x, y) := [(e ix , e iy ), a(e ix , e iy )]. In the GNS construction, C ∞ G (T 2 P , M q (C)) is dense in H and is contained in the domains of the δ µ 's. The fiber of the vector bundle V on which the differential operator P acts is then C N M q (C), i.e. N = q 2 . In the present situation, all used elements in Γ(End(V )) are in fact left multiplications by elements in C G (T 2 P , M q (C)) Γ(A θ ). For instance, the element a in (1.4) will be understood as the left multiplication by an element a ∈ C G (T 2 P , M q (C)).
For A ∈ M_q(C), let L(A) be the left multiplication by A on M_q(C). Then L(A) has the same spectrum as A, each eigenvalue having q times its original multiplicity. In particular, we have that tr[L(A)] = q tr(A),
where in the LHS tr is the trace of operators on M q (C) and in the RHS tr is the trace on M q (C).
The computation of R_2 uses local trivializations of sections of A_θ. Given a section S(a) ∈ Γ(A_θ), we define the local section S(a)_loc : (0, 2π/q)² → M_q(C) by S(a)_loc(x, y) := a(e^{ix}, e^{iy}). Notice that the open subset (0, 2π/q)² ⊂ T²_B is sufficient to describe the continuous section S(a) via its trivialization S(a)_loc. The (local) section R_2 relative to ϕ in (1.4) is defined by
where we suppose here that |g| 1/2 is constant (this is the case for the situations considered in the paper). Then, one has
The trace tr[U_0^k V_0^ℓ] is non-zero only when k, ℓ are multiples of q, and its value is then q, so that
… ∫_0^{2π/q} dx e^{iqkx} ∫_0^{2π/q} dy e^{iqℓy} = |g|^{1/2} q² (2π/q)² a_{0,0} = (2π)² |g|^{1/2} t(a). (A.2)
Finally we get (when |g|^{1/2} is constant)
ϕ ∘ L ∘ S(a) = (2π)² |g|^{1/2} t(a) (A.3)
when applied to any elements in C(T²_Θ).
Consider now a 4-dimensional noncommutative torus for Θ = diag(θ_1χ, θ_2χ), as in (5.4), with θ_i = p_i/q_i, p_i, q_i relatively prime integers, and C(T⁴_Θ) ≃ Γ(A_{θ_1} ⊠ A_{θ_2}),
where A_{θ_1} ⊠ A_{θ_2} is the external tensor product of the two vector bundles A_{θ_i} over the base 2-tori T²_{B,i} defined as above. Recall that, with pr_i : T²_{B,1} × T²_{B,2} → T²_{B,i} the canonical projections, A_{θ_1} ⊠ A_{θ_2} := pr_1^* A_{θ_1} ⊗ pr_2^* A_{θ_2}. (A.4) Using the same line of arguments as for the 2-dimensional case, and denoting by g_i a constant metric on T²_{B,i}, one gets, for any a ∈ C(T⁴_Θ),
ϕ ∘ L ∘ S(a) = (2π)⁴ |g_1|^{1/2} |g_2|^{1/2} t(a). (A.5)
This procedure can be extended straightforwardly to any even dimension.
B. Comparison with previous results for noncommutative tori
We would like to compare the result (5.8) with [8, Theorem 5.2]. Some transformations are in order, since some conventions are different and the results are presented using different operators. In [8, Theorem 5.2], it is presented relative to the normalized trace t on C(T 2 Θ ), while our result is presented relative to ϕ
Present results are given using functional calculus on the left and right multiplication operators L_u and R_u where u = k². The corresponding spectral decompositions give L_u(a) = Σ_{r_0} r_0 E_{r_0} a and R_u(a) = Σ_{r_1} r_1 a E_{r_1}, where E_{r_i} is the projection associated to u for the spectral value r_i. In [Fathizadeh–Khalkhali], another convention is used, namely via functional calculus on the modular operator ∆(a) := k^{-2}ak². If E^∆_y denotes the projection of ∆ associated to the spectral value y, then
E^∆_y(a) = Σ_{r_0, r_1 : r_1 = y r_0} E_{r_0} a E_{r_1},
where y := r -1 0 r 1 belongs to the spectrum of ∆.
Lemma B.1 For any spectral function F, one has
Σ_{r_0,...,r_p} F(r_0, r_1, . . . , r_p) b_0E_{r_0}b_1E_{r_1} · · · b_pE_{r_p} = Σ_{r_0,y_1,...,y_p} f(r_0, y_1, . . . , y_p) b_0E_{r_0} E^∆_{y_1}(b_1) · · · E^∆_{y_p}(b_p),
where f(r_0, y_1, . . . , y_p) := F(r_0, r_0y_1, r_0y_1y_2, . . . , r_0y_1 · · · y_p) is a spectral function of R_0(u) and the ∆_i's.
Using functional calculus notation, this lemma implies F(R_0(u), R_1(u), . . . , R_p(u)) = f(R_0(u), ∆_1, . . . , ∆_p)
as operators acting on elements b_0 ⊗ · · · ⊗ b_p. This result is very analogous to the rearrangement lemma [28, Corollary 3.9], without the integral ∫_0^∞ du of [28, eq. (3.9)].
Proof It is sufficient to show how the combinatorial aspect of the proof works for p = 2. One has
f(R_0(u), ∆_1, ∆_2)[b_0 ⊗ b_1 ⊗ b_2] = Σ_{r_0,y_1,y_2} f(r_0, y_1, y_2) b_0E_{r_0} E^∆_{y_1}(b_1) E^∆_{y_2}(b_2)
= Σ_{r_0,y_1,y_2,z_2} f(r_0, y_1, y_2) b_0E_{r_0}b_1E_{y_1r_0} E_{z_2}b_2E_{y_2z_2}
= Σ_{r_0,y_1,y_2} f(r_0, y_1, y_2) b_0E_{r_0}b_1E_{y_1r_0}b_2E_{y_2y_1r_0}
= Σ_{r_0,y_1,y_2} F(r_0, r_0y_1, r_0y_1y_2) b_0E_{r_0}b_1E_{y_1r_0}b_2E_{y_2y_1r_0}
= Σ_{r_0,r_1,r_2} F(r_0, r_1, r_2) b_0E_{r_0}b_1E_{r_1}b_2E_{r_2} = F(R_0(u), R_1(u), R_2(u))[b_0 ⊗ b_1 ⊗ b_2],
where the projection products force z_1 = r_0 (in the first step) and z_2 = y_1r_0 (in the second), and where we set r_1 := y_1r_0 and r_2 := y_2r_1 in the spectrum of u.

Let k = e^{h/2}. While the arguments b_i mentioned above are δ_µk or ∆k, they are δ_µ(ln k) = ½ δ_µh and ∆(ln k) = ½ ∆h in [8]. The second lemma gives the relations between these arguments, compare with [Fathizadeh–Khalkhali, Lemma 5.1].

Lemma B.2 If g_1(y) = (√y - 1)/ln y and g_2(y_1, y_2) = 2[√y_1(√y_2 - 1) ln y_1 - (√y_1 - 1) ln y_2]/[ln y_1 ln y_2 (ln y_1 + ln y_2)], then
δ_µk = k g_1(∆)[δ_µh] = 2k g_1(∆)[δ_µ ln k],
∆k = k g_1(∆)[∆h] - g^{µν} k m ∘ g_2(∆_1, ∆_2)[(δ_µh) ⊗ (δ_νh)] = 2k g_1(∆)[∆ ln k] - 4g^{µν} k m ∘ g_2(∆_1, ∆_2)[(δ_µ ln k) ⊗ (δ_ν ln k)].

Proof With g_1(y) := ½ ∫_0^1 ds_1 y^{s_1/2} = (√y - 1) ln^{-1} y, we get
δ_µk = δ_µ e^{h/2} = ∫_0^1 ds_1 e^{(1-s_1)h/2}(δ_µh/2)e^{s_1h/2} = ½ k (∫_0^1 ds_1 ∆^{s_1/2})[δ_µh] = k ((∆^{1/2} - 1)/ln ∆)[δ_µh] = k g_1(∆)[δ_µh] = 2k g_1(∆)[δ_µ ln k].
Similarly for the Laplacian,
∆k = -g^{µν}(δ_µδ_νk) = -½ g^{µν} δ_µ[∫_0^1 ds_1 e^{(1-s_1)h/2}(δ_νh)e^{s_1h/2}]
= -½ g^{µν} k[∫_0^1 ds_1 ∆^{s_1/2}](δ_µδ_νh) - ¼ g^{µν} k m ∘ [2 ∫_0^1 ds_1 ∫_0^{1-s_1} ds_2 ∆_1^{(s_1+s_2)/2} ∆_2^{s_1/2}][(δ_µh) ⊗ (δ_νh)]
= k g_1(∆)[∆h] - g^{µν} k m ∘ g_2(∆_1, ∆_2)[(δ_µh) ⊗ (δ_νh)],
with g_2(y_1, y_2) := ½ ∫_0^1 ds_1 ∫_0^{1-s_1} ds_2 y_1^{(s_1+s_2)/2} y_2^{s_1/2} = 2[√y_1(√y_2 - 1) ln y_1 - (√y_1 - 1) ln y_2]/[ln y_1 ln y_2 (ln y_1 + ln y_2)].
Lemma B.3 For any operators like
Thus the operators on the LHS are respectively associated, modulo the multiplication operator m, to operators defined by the spectral functions
where y 1 , y 2 belong to the spectrum of ∆ and r 0 to the spectrum of u.
Proof For the first relation, we compute the LHS on b 0 ⊗ b 1 using spectral decomposition:
the projections products imply r 0 = z = r 1 = z 1 and y 1 z 1 = yz, so this is equal to
For the second relation, we compute the LHS on b
For the third relation, we compute the LHS on b
and y 2 z 2 = y 2 z 2 , so that:
We can now change (5.8)
and
So, the sum gives
The associated spectral functions are
Another change of convention concerns the derivations of C∞(T²_Θ): in [Fathizadeh–Khalkhali], δ̃_µ := -iδ_µ is used. This implies that their expressions like (δ̃_µ ln k)(δ̃_ν ln k) correspond to our -(δ_µ ln k) ⊗ (δ_ν ln k). Thus, for a comparison of the two results, a sign has to be taken into account in the G^{µν} functions. All these relations can be checked directly. In particular, the relations on the RHS are independent of the variable r_0.
In order to compare (5.9) for the noncommutative four torus with [11, Theorem 5.4], we can use the results in [14]. As before, we need the correspondence (A.5) between our trace ϕ and their trace ϕ_0 ≡ t. Here g_i^{µν} = δ^{µν} on the base tori T²_{B,i}, so that |g_i|^{1/2} = 1. Denote by R_{FK} the curvature obtained in [11, Theorem 5.4], which is π² times [11, eq. (5.1)]. (The constant c computed in [14, eq. (3)] is not c = 1/(2π²) as claimed but c = 1/2, as shown by a direct comparison between [14, eq. (3)] and [11, eq. (5.1)], where a factor π² is not written; this explains the factor π² in our first equality.) A comparison between eqs. (1) and (3) in [14] and [11, eq. (5.1)] gives
and the two results coincide.
The dressing field method of gauge symmetry reduction, a review with examples

J. Attard, J. François, S. Lazzarini, T. Masson

PACS numbers: 02.40.Hw, 11.15.-q, 11.25.Hf
Gauge symmetries are a cornerstone of modern physics, but they come with technical difficulties when it comes to quantization, to accurately describing particle phenomenology, or to extracting observables in general. These shortcomings must be met by essentially finding a way to effectively reduce gauge symmetries. We propose a review of a way to do so which we call the dressing field method. We show how the BRST algebra satisfied by gauge fields, encoding their gauge transformations, is modified. We outline noticeable applications of the method, such as the electroweak sector of the Standard Model and the local twistors of Penrose.
Introduction
To this day, the modern Field Theory framework (either classical or quantum), so successful in describing Nature from elementary particles to cosmology, rests on a few keystones, one of which is the notion of gauge symmetry. Elementary fields are subject to local transformations which are required to leave the theory (the Lagrangian) invariant. These transformations thus form a so-called local symmetry of the theory, known as gauge symmetry. This notion originates with Weyl's 1918 unified theory resting on the idea of local scale, or gauge, invariance [49; 66; 67]. The heuristic appeal of gauge symmetries is that imposing them on a theory of free fields requires, a minima, the introduction of fundamental interactions through minimal coupling. This is the content of the so-called gauge principle for Field Theory, captured by Yang's well-known aphorism: "symmetry dictates interaction" [Yang, Selected Papers]. This is one of the major conceptual breakthroughs of the century separating us from Hilbert's lectures on the foundations of mathematics and physics. And the story of the interactions between gauge theories and differential geometry is a highlight of the long history of synergy between mathematics and physics. In spite of their great theoretical appeal, gauge theories come with some shortcomings.
First, gauge fixing: one selects a representative in the gauge orbit of each gauge field. This is usually the main approach followed to make contact with physical predictions: one only needs to make sure that the results are independent of the choice of gauge. This is also how a sensible quantization procedure is carried on, for example through the Fadeev-Popov procedure. However, a consistent choice of gauge might not necessarily be possible in all circumstances, a fact known as the Gribov ambiguity [29; 59].
Second, one can try to implement a spontaneous symmetry breaking mechanism (SSBM). This is famously the standard interpretation of the Brout-Englert-Higgs-Guralnik-Hagen-Kibble (BEHGHK) mechanism [10; 30; 31], which historically solved the mass problem for the weak gauge bosons in the electroweak unification of Glashow-Weinberg-Salam, and by extension that of the masses of particles in the Standard Model of Particle Physics. We stress that this interpretation presupposes settled the philosophical problem of the ontological status of gauge symmetries: by affirming that a gauge symmetry can be "spontaneously broken", one states that it is a structural feature of reality rather than of our description of it. While this remains quite controversial in philosophy of physics, given the empirical success of the BEHGHK mechanism, a pragmatic mind could consider the debate closed via an inference to the best explanation. We will show here that this conclusion would be hasty.
Finally, one can seek to apply the bundle reduction theorem. This is a result of the fiber bundle theory, widely known to be the geometric underpinning of gauge theories, stating the circumstances under which a bundle with a given structure group can be reduced to a subbundle with a smaller structure group. Some authors have recast the BEHGHK mechanism in light of this theorem [60; 63; 65].
In this paper we propose a brief review of another way to perform gauge symmetry reduction which we call the dressing field method. It is formalized in the language of the differential geometry of fiber bundles and it has a corresponding BRST differential algebraic formulation. The method boils down to the identification of a suitable field in the geometrical setting of a gauge theory that allows to construct partially of fully gauge invariant variables out of the standard gauge fields. This formalizes and unifies several works and approaches encountered in past and recent literature on gauge theories, whose ancestry can be traced back to Dirac's pioneering ideas [15; 16].
The paper is thus organized. In Section 2 we outline the method and state the most interesting results (pointing to the published literature for proofs), one of which being the noticeable fact that the method allows to highlight the existence of gauge fields of non-standard kind; meaning that these implement the gauge principle but are not of the same geometric nature than the objects usually encountered in Yang-Mills theory for instance.
In Sections 3 and 4 we illustrate the scheme by showing how it is applied to the electroweak sector of the standard model and GR. We argue in particular that our treatment provides an alternative interpretation of the BEHGHK mechanism that is more in line with the conclusions of the community of philosophers of physics.
In Section 5 we address the substantial example of the conformal Cartan bundle P(M, H) and connection ϖ. Standard formulations of so-called Tractors and Twistors can then be found by applying the dressing field method to this geometry. Furthermore, from this viewpoint they appear to be clear instances of gauge fields of the non-standard kind alluded to above. This fact, as far as we know, has not been recognized.
In our conclusion, Section 6, we indicate other possible applications of the method and stress the obvious remaining open questions to be addressed.
Reduction of gauge symmetries: the dressing field method
As we have stated, the differential geometry of fiber bundles supplemented by the BRST differential algebra are the mathematical underpinning of classical gauge theories. So, this is the language in which we will formalize our approach. Complementary material and detailed proofs can be found in [1; 20; 21; 23].
Let us give the main philosophy of the dressing field method in a few words. From a mathematical point of view, a gauge field theory requires some spaces of fields on which the gauge group acts in a definite way. So, to define a gauge field theory, the spaces of fields themselves are not sufficient: one has to specify the actions of the gauge group on them. This implies that the same mathematical space can be considered as a space of different fields, according to the possible actions of the gauge group.
Generally, the action is related to the way the space of fields is constructed. For instance, in the usual geometrical framework of gauge field theories, the primary structure is a principal fiber bundle P, and the gauge group is its group of vertical automorphisms. Then, the sections of an associated vector bundle to P, constructed using the action ρ of the structure group on a vector space V , support an action of the gauge group which is directly related to ρ.
The physical properties of a gauge field theory are generally encoded into a Lagrangian L written in terms of the gauge fields (and their derivatives): it is required to be invariant when the gauge group acts on all the fields involved in its writing.
The main idea behind the dressing field method is to exhibit a very special field (the dressing field) out of the gauge fields in the theory, with a specific gauge action. Then, one performs some change of field variables, very similar to some change of variables in ordinary geometry, by combining in a convenient way (through sums and products when they make sense) the gauge fields with the dressing field. The resulting "dressed fields" (new fields variables of the theory) are then subject to new actions of the gauge group, that can be deduced from the combination of fields. In favorable situations, these dressed fields are invariant under the action of a subgroup of the gauge group: a part of the gauge group does not act anymore on the new fields of the theory, that is, the symmetry has been reduced.
Notice some important facts. Firstly, the dressed fields do not necessarily belong to the original space of fields from which they are constructed. Secondly, in general the dressing (i.e. the combination of the dressing field with a field from the theory) looks like a gauge transformation. But we insist on the fact that it is not a gauge transformation. Finally, the choice of the dressing field relies sometimes on the physical content of the theory, that is on the specific form of the Lagrangian. So, the dressing field method can depend on the mathematical, as well as on the physical content of the theory.
Let us now describe the mathematical principles of the method.
Composite fields
Let P(M, H) be a principal bundle over a manifold M equipped with a connection ω with curvature Ω, and let ϕ be a ρ-equivariant V -valued map on P to be considered as a section of the associated vector bundle
E = P × H V .
The group of vertical automorphisms of P,
Aut_v(P) := {Φ : P → P | ∀h ∈ H, ∀p ∈ P, Φ(ph) = Φ(p)h and π ∘ Φ = π} is isomorphic to the gauge group H := {γ : P → H | R*_h γ(p) = h^{-1}γ(p)h}, the isomorphism being Φ(p) = pγ(p). The composition law of Aut_v(P), Φ₁ ∘ Φ₂, corresponds to the product γ₁γ₂.
In this geometrical setting, the gauge group H ≃ Aut_v(P) acts on fields via pull-backs. It acts on itself as η^γ := Φ*η = γ^{-1}ηγ, and on connections ω, curvatures Ω and (ρ, V)-tensorial forms ϕ as,

ω^γ := Φ*ω = γ^{-1}ωγ + γ^{-1}dγ,   ϕ^γ := Φ*ϕ = ρ(γ^{-1})ϕ,   (1)
Ω^γ := Φ*Ω = γ^{-1}Ωγ,   (Dϕ)^γ := Φ*Dϕ = D^γ ϕ^γ = ρ(γ^{-1})Dϕ.
These are active gauge transformations, formally identical to, but to be conceptually distinguished from, passive gauge transformations relating two local descriptions of the same global objects in local trivializations of the fiber bundles, described as follows. Given two local sections σ₁, σ₂ of P, related as σ₂ = σ₁h, either over the same open set U of M or over the overlap of two open sets U₁ ∩ U₂, one finds

σ₂*ω = h^{-1} σ₁*ω h + h^{-1}dh,   σ₂*ϕ = ρ(h^{-1}) σ₁*ϕ,   (2)
σ₂*Ω = h^{-1} σ₁*Ω h,   σ₂*Dϕ = ρ(h^{-1}) σ₁*Dϕ.
This distinction between active and passive gauge transformations is reminiscent of the distinction between diffeomorphisms and coordinate transformations in GR.

The main idea of the dressing field approach to gauge symmetry reduction is stated in the following
Proposition 1 ([20]) Let K and G be subgroups of H such that K ⊆ G ⊂ H. Note K ⊂ H the gauge subgroup associated with K. Suppose there exists a map u : P → G satisfying the K-equivariance property

R*_k u = k^{-1}u.   (3)

Then this map u, which will be called a dressing field, allows to construct, through f : P → P defined by f(p) = pu(p), the following composite fields

ω^u := f*ω = u^{-1}ωu + u^{-1}du,   ϕ^u := f*ϕ = ρ(u^{-1})ϕ,   (4)

which are K-invariant and satisfy

Ω^u := f*Ω = u^{-1}Ωu = dω^u + ½[ω^u, ω^u],   D^u ϕ^u := f*Dϕ = ρ(u^{-1})Dϕ = dϕ^u + ρ*(ω^u)ϕ^u.

These composite fields are K-horizontal and thus project onto the quotient P/K.
The K-invariance of the composite fields (4) is most readily proven. Indeed, from the definition (3) one has f(pk) = f(p), so that f factorizes through a map P → P/K, and given Φ(p) = pγ(p) with γ ∈ K ⊂ H, one has Φ*f* = (f ∘ Φ)* = f*.
The dressing field can be equally defined by its K-gauge transformation: u^γ = γ^{-1}u, for any γ ∈ K ⊂ H. Indeed, given Φ associated to γ ∈ K and using (3):

(u^γ)(p) := (Φ*u)(p) = u(Φ(p)) = u(pγ(p)) = γ(p)^{-1}u(p) = (γ^{-1}u)(p).
Several comments are in order. First, (4) looks algebraically like (1): this makes it easy to check algebraically that the composite fields are K-invariant. Indeed, let χ ∈ {ω, Ω, ϕ, ...} denote a generic field when performing an operation that applies equally well to any specific one. For two maps α, α′ with values in H, if one defines χ^α algebraically as in (1), then one has (χ^α)^{α′} = χ^{αα′}. This is for instance the usual way to compose the actions of two elements of the gauge group. But this relation is independent of the specific action of the gauge group on α and α′, which could so belong to different spaces of representation of H. Then (χ^u)^γ = (χ^γ)^{u^γ} = (χ^γ)^{γ^{-1}u} = χ^u,
where the last (and essential) equality is the one emphasized above.
Second, if K = H, then the composite fields (4) are H-invariant, the gauge symmetry is fully reduced, and they live on P/H ≃ M. This shows that the existence of a global dressing field is a strong constraint on the topology of the bundle P: a K-dressing field means that the bundle is trivial along the K-subgroup, P ≃ P/K × K, while a H-dressing field means its triviality, P ≃ M × H [20, Prop. 2].
Third, in the event that G ⊃ H, then one has to assume that the H-bundle P is a subbundle of a G-bundle, and mutatis mutandis the proposition still holds. Such a situation occurs for instance when P is a reduction of a frame bundle (of unspecified order), see Section 4 for an example.
Notice that despite the formal similarity with (1) (or (2)), the composite fields (4) are not gauge transformed fields. Indeed, the defining equivariance property (3) of the dressing field implies u ∉ H, and f ∉ Aut_v(P). As a consequence, in general the composite fields do not belong to the gauge orbits of the original fields: χ^u ∉ O(χ). Another consequence is that the dressing field method must also not be confused with a simple gauge fixing.
Residual gauge symmetry
Suppose there is a normal subgroup K and a subgroup J of H such that any h ∈ H can be uniquely written as h = jk for j ∈ J and k ∈ K. Then H = JK and J ≃ H/K, whose Lie algebra is denoted by j. Such a situation occurs for instance with H = J × K. Several examples are based on this structure, see for instance Sections 3 and 5.

The quotient bundle P/K is then a J-principal bundle P′ = P′(M, J), with gauge group J ≃ Aut_v(P′). The residual gauge symmetry of the composite fields depends on the one hand on that of the gauge fields, and on the other hand on that of the dressing field. A classification of the manifold of possible situations is impractical, but below we provide the general treatment of the two most interesting cases.
The composite fields as genuine gauge fields
With the previous decomposition of H, our first case is summarized in this next result.
Proposition 2 Let u be a K-dressing field on P. Suppose its J-equivariance is given by

R*_j u = Ad_{j^{-1}}u, for any j ∈ J.   (5)

Then the dressed connection ω^u is a J-principal connection on P′. That is, for X ∈ j and j ∈ J, ω^u satisfies ω^u(X^v) = X and R*_j ω^u = Ad_{j^{-1}} ω^u. Its curvature is given by Ω^u. Also, ϕ^u is a (ρ, V)-tensorial map on P′ and can be seen as a section of the associated bundle E′ = P′ ×_J V. The covariant derivative on such sections is given by D^u = d + ρ*(ω^u).
From this we immediately deduce the following

Corollary 3 The transformation of the composite fields under the residual J-gauge symmetry is found in the usual way to be

(ω^u)^{γ′} := Φ′*ω^u = γ′^{-1} ω^u γ′ + γ′^{-1}dγ′,   (ϕ^u)^{γ′} := Φ′*ϕ^u = ρ(γ′^{-1})ϕ^u,
(Ω^u)^{γ′} := Φ′*Ω^u = γ′^{-1} Ω^u γ′,   (D^u ϕ^u)^{γ′} := Φ′*D^u ϕ^u = ρ(γ′^{-1})D^u ϕ^u,   (6)

with Φ′ ∈ Aut_v(P′) ≃ J ∋ γ′.
A quick way to convince oneself of this is to observe that for Φ′ ∈ Aut_v(P′) one has, using (5),

u^{γ′}(p) := (Φ′*u)(p) = u(Φ′(p)) = u(pγ′(p)) = γ′(p)^{-1} u(p) γ′(p) = (γ′^{-1}uγ′)(p).

So, using again the generic variable χ one finds that (χ^u)^{γ′} = (χ^{γ′})^{u^{γ′}} = (χ^{γ′})^{γ′^{-1}uγ′} = χ^{uγ′}, which proves (6). In field theory, the relation u^{γ′} = γ′^{-1}uγ′ can be preferred to (5) as a condition on the dressing field u.
The above results show that when (5) holds, the composite fields (4) are K-invariant but genuine J-gauge fields, with residual gauge transformations given by (6). It may then be possible to perform a further dressing operation, provided a suitable dressing field exists and satisfies the compatibility condition of being invariant under the K-gauge subgroup just erased. The extension of this scheme to any number of dressing fields can be found in [François, Reduction of gauge symmetries: a new geometrical approach]. Let us now turn to our next interesting case.
The composite fields as twisted-gauge fields
To define these gauge fields with a new behavior under the action of the gauge group, we need to introduce some definitions. Let G′ ⊃ G be a Lie group for which the representation (ρ, V) of G is also a representation of G′. Let us assume the existence of a C∞-map C : P × J → G′, (p, j) ↦ C_p(j), satisfying

C_p(jj′) = C_p(j) C_{pj}(j′).   (7)

From this we have that C_p(e) = e, with e the identity in both J and G′, and C_p(j)^{-1} = C_{pj}(j^{-1}). Its differential is

dC|_{(p,j)} = dC(j)|_p + dC_p|_j : T_pP ⊕ T_jJ → T_{C_p(j)}G′,

where ker dC(j) = T_jJ and ker dC_p = T_pP, and where dC(j) (resp. dC_p) uses the differential on P (resp. J). Notice that C_p(j)^{-1} dC|_{(p,j)} : T_pP ⊕ T_jJ → T_eG′ = g′. We then state the following result.
Proposition 4 Let u be a K-dressing field on P. Suppose its J-equivariance is given by

(R*_j u)(p) = j^{-1} u(p) C_p(j), with j ∈ J and C a map as above.   (8)

Then ω^u satisfies:

1. ω^u_p(X^v_p) = c_p(X) := (d/dt) C_p(e^{tX})|_{t=0}, for X ∈ j and X^v_p ∈ V_pP.
2. R*_j ω^u = C(j)^{-1} ω^u C(j) + C(j)^{-1} dC(j).

The dressed curvature Ω^u is J-horizontal and satisfies R*_j Ω^u = C(j)^{-1} Ω^u C(j). Also, ϕ^u is a ρ(C)-equivariant map, R*_j ϕ^u = ρ(C(j))^{-1} ϕ^u. The first order differential operator D^u := d + ρ*(ω^u) is a natural covariant derivative on such ϕ^u, so that D^u ϕ^u is a (ρ(C), V)-tensorial form: R*_j D^u ϕ^u = ρ(C(j))^{-1} D^u ϕ^u and (D^u ϕ^u)_p(X^v_p) = 0.
This proposition shows that ω^u behaves "almost as a connection": we call it a C-twisted connection 1-form. There is a natural geometric structure to interpret the dressed field ϕ^u. Omitting the representation ρ of G′ on V to simplify notations, we can define the following equivalence relation on P × V: (p, v) ∼ (pj, C_p(j)^{-1}v) for any p ∈ P, v ∈ V, and j ∈ J.

Using the properties of the map C, it is easy to show that this is indeed an equivalence relation. In particular, one has (pjj′, C_p(jj′)^{-1}v) = (pjj′, C_{pj}(j′)^{-1}C_p(j)^{-1}v) ∼ (pj, C_p(j)^{-1}v) ∼ (p, v).
Then one can define the quotient vector bundle over M

E = P ×_{C(J)} V := (P × V)/∼   (9)
that we call a C(J)-twisted associated vector bundle to P. Notice that when J = {e}, one has E = P × V . Adapting standard arguments in fiber bundle theory, one can show that sections of E are C(J)-equivariant maps ϕ : P → V such that ϕ(pj) = C p (j) -1 ϕ(p) for any p ∈ P, j ∈ J.
The dressed field ϕ^u is then a section of E satisfying ϕ^u(pk) = ϕ^u(p) for any p ∈ P and k ∈ K, by construction.
We can now deduce the transformations of the composite fields under the residual gauge group J. Consider Φ′ ∈ Aut_v(P′) ≃ J ∋ γ, where γ : P → J satisfies γ(pk) = γ(p) and γ(pj) = j^{-1}γ(p)j for any p ∈ P, k ∈ K and j ∈ J, and define the map C(γ) : P → G′, p ↦ C_p(γ(p)). One then finds

u^γ(p) := (Φ′*u)(p) = u(pγ(p)) = γ(p)^{-1} u(p) C_p(γ(p)) = (γ^{-1} u C(γ))(p),   that is   u^γ = γ^{-1} u C(γ).   (10)
This relation can be taken as an alternative to (8) as a condition on the dressing field u. We have then the following proposition.
Proposition 5 Given Φ′ ∈ Aut_v(P′) ≃ J ∋ γ, the residual gauge transformations of the composite fields are

(ω^u)^γ := Φ′*ω^u = C(γ)^{-1} ω^u C(γ) + C(γ)^{-1} dC(γ),   (ϕ^u)^γ := Φ′*ϕ^u = ρ(C(γ))^{-1} ϕ^u,
(Ω^u)^γ := Φ′*Ω^u = C(γ)^{-1} Ω^u C(γ),   (D^u ϕ^u)^γ := Φ′*D^u ϕ^u = ρ(C(γ))^{-1} D^u ϕ^u.   (11)
This shows that the composite fields (4) behave as gauge fields of a new kind, on which the implementation of the gauge principle is factorized through the map C. Given (10) and the usual J-gauge transformations for the standard gauge fields χ, the above results can be obtained by a direct algebraic calculation: (χ^u)^γ = (χ^γ)^{u^γ} = (χ^γ)^{γ^{-1}uC(γ)} = χ^{uC(γ)}.
Under a further gauge transformation Ψ′ ∈ Aut_v(P′) ≃ J ∋ η, there are two ways to compute the composition Ψ′*(Φ′*u) of the two actions. First, we use the composition inside the gauge group, (Φ′ ∘ Ψ′)(p) = pγ(p)η(p), so that

(Ψ′*(Φ′*u))(p) = ((Φ′ ∘ Ψ′)*u)(p) = u(pγ(p)η(p)) = η(p)^{-1}γ(p)^{-1} u(p) C_p(γ(p)η(p));

secondly, we compute the actions successively,

(Ψ′*(Φ′*u))(p) = (γ^{-1}uC(γ))(Ψ′(p)) = γ(pη(p))^{-1} u(pη(p)) C_{pη(p)}(γ(pη(p)))
= η(p)^{-1}γ(p)^{-1}η(p) · η(p)^{-1}u(p)C_p(η(p)) · C_{pη(p)}(η(p)^{-1}γ(p)η(p))
= η(p)^{-1}γ(p)^{-1} u(p) C_p(γ(p)η(p)).

In both cases, Ψ′*(Φ′*u) = η^{-1}γ^{-1} u C(γη), which secures the fact that the actions (11) of the residual gauge symmetry on the composite fields are well behaved as representations of the residual gauge group, even if C is not a morphism of groups.
Ordinary connections correspond to C_p(j) = j for any p ∈ P and j ∈ J, in which case C is a morphism of groups.
The case of 1-α-cocycles. For p ∈ P, suppose given C_p : J → G′ satisfying C_p(jj′) = C_p(j) α_j[C_p(j′)], for α : J → Aut(G′) a continuous group morphism. Such an object appears in the representation theory of crossed products of C*-algebras and is known as a 1-α-cocycle (see [50; 69]). Then, defining C_{pj}(j′) := α_j[C_p(j′)], one has an example of (7), and the above result applies to the 1-α-cocycle C. As a particular case, consider the following

Proposition 6 Suppose J is abelian and let A_p, B : J → GL_n be group morphisms, where R*_j A_p(j′) = B(j)^{-1} A_p(j′) B(j). Then C_p := A_p B : J → GL_n is a 1-α-cocycle with α : J → Aut(GL_n) defined by α_j[g] = B(j)^{-1} g B(j) for any g ∈ GL_n.
Using the commutativity of J through B(j)B(j′) = B(jj′) = B(j′j) = B(j′)B(j), the proposition is proven as

C_p(jj′) = A_p(jj′)B(jj′) = A_p(j)A_p(j′)B(j)B(j′) = A_p(j)B(j) · B(j)^{-1}[A_p(j′)B(j′)]B(j) = C_p(j) · B(j)^{-1}[C_p(j′)]B(j).

Notice also that we have C_p(jj′) = C_p(j′j) = C_p(j′) · B(j′)^{-1}[C_p(j)]B(j′). Such 1-α-cocycles will appear in the case of the conformal Cartan geometry and the associated Tractors and Twistors in Section 5.
Application to the BRST framework
The BRST differential algebra
The BRST differential algebra captures the infinitesimal version of (1). Abstractly (see for instance [Dubois-Violette, The Weil-BRS algebra of a Lie algebra and the anomalous terms in gauge theory]), it is a bigraded differential algebra generated by {ω, Ω, v, ζ}, where v is the so-called ghost and the generators are respectively of degrees (1, 0), (2, 0), (0, 1) and (1, 1). It is endowed with two nilpotent antiderivations d and s, homogeneous of degrees (1, 0) and (0, 1) respectively, with vanishing anticommutator: d² = 0 = s², sd + ds = 0. The algebra is equipped with a bigraded commutator [α, β] := αβ − (−1)^{deg[α]deg[β]} βα, and the defining relations

sω = −dv − [ω, v],   sΩ = [Ω, v],   and   sv = −½[v, v].   (12)
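As a consistency check (a standard computation, spelled out here for the reader's convenience), the nilpotency of s on the ghost is automatic: using that s is an antiderivation and the graded Jacobi identity, which gives [[v, v], v] = 0 for the degree-1 ghost,

s²v = −½ s[v, v] = −½([sv, v] − [v, sv]) = −[sv, v] = ½[[v, v], v] = 0.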
When the abstract BRST algebra is realized in a differential geometry framework, the bigrading is according to the de Rham form degree and ghost degree: d is the de Rham differential on P (or M if one works in local trivializations) and s is the de Rham differential on H. The ghost is the Maurer-Cartan form on H, so that v ∈ Λ¹(H, Lie H), and given ξ ∈ TH, v(ξ) : P → h is an element of Lie H [Bonora]. So in practice the ghost can be seen as a map v : P → h, a placeholder that takes over the role of the infinitesimal gauge parameter. Thus the first two relations of (12) (and (13) below) reproduce the infinitesimal gauge transformations of the gauge fields (1), while the third equation in (12) is the Maurer-Cartan structure equation for the gauge group H. The BRST transformations of the section ϕ (of degrees (0, 0)) and its covariant derivative are

sϕ = −ρ*(v)ϕ,   and   sDϕ = −ρ*(v)Dϕ,   (13)

where ρ* is the representation of the Lie algebra induced by the representation ρ of the group.
The BRST framework provides an algebraic characterization of relevant quantities in gauge theories, such as admissible Lagrangian forms, observables and anomalies, all of which are required to belong to the s-cohomology group H^{*,*}(s) of s-closed but not s-exact quantities.
Modified BRST differential algebra
Since the BRST algebra encodes the infinitesimal gauge transformations of the gauge fields, it is expected that the dressing field method modifies it. To see how, let us first consider the following

Proposition 7 Consider the BRST algebra (12)-(13) on the initial gauge variables and the ghost v ∈ Lie H. Introducing the dressed ghost

v^u = u^{-1}vu + u^{-1}su,   (14)

the composite fields (4) satisfy the modified BRST algebra:

sω^u = −D^u v^u = −dv^u − [ω^u, v^u],   sϕ^u = −ρ*(v^u)ϕ^u,
sΩ^u = [Ω^u, v^u],   sv^u = −½[v^u, v^u].
This result does not rest on the assumption that u is a dressing field.
The result is easily found by expressing the initial gauge variables χ = {ω, Ω, ϕ} in terms of the dressed fields χ^u and the dressing field u, and re-injecting in the initial BRST algebra (12)-(13). At no point of the derivation does su need to be explicitly known. The result then holds whether or not u is a dressing field.
If the ghost v encodes the infinitesimal initial H-gauge symmetry, the dressed ghost v u encodes the infinitesimal residual gauge symmetry. Its concrete expression depends on the BRST transformation of u.
Under the hypothesis K ⊂ H, the ghost decomposes as v = v_k + v_{h/k}, and the BRST operator splits accordingly: s = s_k + s_{h/k}. If u is a dressing field, its BRST transformation is the infinitesimal version of its defining transformation property: s_k u = −v_k u. So the dressed ghost is

v^u = u^{-1}vu + u^{-1}su = u^{-1}(v_k + v_{h/k})u + u^{-1}(−v_k u + s_{h/k}u) = u^{-1}v_{h/k}u + u^{-1}s_{h/k}u.

The Lie K part of the ghost, v_k, has disappeared. This means that s_k χ^u = 0, which expresses the K-invariance of the composite fields (4).
Residual BRST symmetry
If K ⊂ H is a normal subgroup, then H/K = J is a group with Lie algebra h/k = j. We here provide the BRST treatment of the two cases detailed in Section 2.2.
Suppose the dressing field satisfies the condition (5), whose BRST version is s_j u = [u, v_j]. The dressed ghost is then

v^u = u^{-1}v_j u + u^{-1}s_j u = u^{-1}v_j u + u^{-1}(uv_j − v_j u) = v_j.   (15)

This in turn implies that the new BRST algebra is

sω^u = −D^u v_j = −dv_j − [ω^u, v_j],   sϕ^u = −ρ*(v_j)ϕ^u,
sΩ^u = [Ω^u, v_j],   sv_j = −½[v_j, v_j].   (16)
This is the BRST version of (6), and reflects the fact that the composite fields (4) are genuine J-gauge fields, in particular that ω^u is a J-connection. Suppose now that the dressing field satisfies the condition (8), whose BRST version is s_j u = −v_j u + u c_p(v_j). The dressed ghost is then

v^u = u^{-1}v_j u + u^{-1}s_j u = u^{-1}v_j u + u^{-1}(−v_j u + u c_p(v_j)) = c_p(v_j).   (17)

This in turn implies that the new BRST algebra is

sω^u = −dc_p(v_j) − [ω^u, c_p(v_j)],   sϕ^u = −ρ*(c_p(v_j))ϕ^u,
sΩ^u = [Ω^u, c_p(v_j)],   sc_p(v_j) = −½[c_p(v_j), c_p(v_j)].   (18)
This is the BRST version of (11), and reflects the fact that the composite fields (4) instantiate the gauge principle in a satisfactory way.
To conclude, we mention that the dressing operation is compatible with Stora's method of altering a BRST algebra so that it describes the action of infinitesimal diffeomorphisms of the base manifold on the gauge fields, in addition to their gauge transformations, as described in [36; 61] for instance; details can be found in [François, Lazzarini & Masson, Becchi-Rouet-Stora-Tyutin structure for the mixed Weyl-diffeomorphism residual symmetry].
Local construction and physics
Until now, we have focused on the global aspects of the dressing approach on the bundle P, to emphasize the geometric nature of the composite fields obtained. Most notably, we showed that the composite fields can behave as "generalized" gauge fields. But to do physics we need the local representatives, on an open subset U ⊂ M, of the global dressing and composite fields. These are obtained in the usual way from a local section σ : U → P of the bundle. The important properties they thus retain are their gauge invariance and their residual gauge transformations.
If it happens that a dressing field is defined locally on U first, and not directly on P, then the local composite fields χ u are defined in terms of the local dressing field u and local gauge fields χ by (4). The gauge invariance and residual gauge transformations of these local composite fields are derived from the gauge transformations of the local dressing field under the various subgroups of the local gauge group H loc according to (χ u ) γ = (χ γ ) u γ . The BRST treatment for the local objects mirrors exactly the one given for global objects.
This being said, note A = σ*ω and F = σ*Ω for definiteness, but keep u and ϕ to denote the local dressing field and sections of the associated vector bundle E. Suppose that the base manifold is equipped with a (r, s)-Lorentzian metric allowing for a Hodge star operator *, and that V is equipped with an inner product ⟨ , ⟩. We state the final proposition dealing with gauge theory.

Proposition 8 Given the geometry defined by a bundle P(M, H) endowed with ω and the associated bundle E, suppose we have a gauge theory given by the prototypical H_loc-invariant Yang-Mills Lagrangian

L(A, ϕ) = ½Tr(F ∧ *F) + ⟨Dϕ, *Dϕ⟩ − U(‖ϕ‖) vol,

where vol is the metric volume form on M, ‖ϕ‖ := |⟨ϕ, ϕ⟩|^{1/2} and U is a potential term. If there is a local dressing field u : U → G ⊂ H with K_loc-gauge transformation u^γ = γ^{-1}u, then the above Lagrangian is actually a "H_loc/K_loc-gauge theory" defined in terms of K_loc-invariant variables, since we have

L(A, ϕ) = L(A^u, ϕ^u) = ½Tr(F^u ∧ *F^u) + ⟨D^u ϕ^u, *D^u ϕ^u⟩ − U(‖ϕ^u‖) vol.
The relation L(A, ϕ) = L(A u , ϕ u ) is satisfied since, as already noticed, relations (4) look algebraically like gauge transformations (1) under which L is supposed to be invariant in a formal way.
The terminology "H_loc/K_loc-gauge theory" means that the Lagrangian is written in terms of fields which are invariant under the action of γ : U → K. Since the quotient H/K need not be a group, the remaining symmetries of the fields might not be described in terms of a group action.
Notice that since u is a dressing field, u ∉ H_loc, so the dressed Lagrangian L(A^u, ϕ^u) ought not to be confused with a gauge-fixed Lagrangian L(A^γ, ϕ^γ) for some chosen γ ∈ H_loc, even if it may happen that γ = u as fields if one forgets about the corresponding representations of the gauge group, a fact that might go unnoticed. As we have stressed in Section 2, the dressing field approach is distinct from both gauge-fixing and spontaneous symmetry breaking as a means to reduce gauge symmetries.
Let us highlight the fact that a dressing field can often be constructed by requiring the gauge invariance of a prescribed "gauge-like condition". Such a condition is given when a local gauge field χ (often the gauge potential), transformed by a field u with values in the symmetry group H or one of its subgroups, is required to satisfy a functional constraint: Σ(χ^u) = 0. Explicitly solved, this makes u a function of χ, u(χ), thus sometimes called a field-dependent gauge transformation. However, this terminology is valid if and only if u(χ) transforms under the action of γ ∈ H_loc as u(χ)^γ := u(χ^γ) = γ^{-1}u(χ)γ, in which case u(χ) ∈ H_loc. But if the functional constraint still holds under the action of H_loc, or of a subgroup thereof, it follows that (χ^γ)^{u^γ} = χ^u (or equivalently that sχ^u = 0). This in turn suggests that u^γ = γ^{-1}u (or su = −vu), so that u ∉ H_loc but is indeed a dressing field.
This, and the above proposition, generalizes the pioneering idea of Dirac [15; 16] aiming at quantizing QED by rewriting the classical theory in terms of gauge-invariant variables. The idea was rediscovered several times and sometimes termed Dirac variables [37; 54]. They reappeared in various contexts in gauge theory, such as QED [Lavelle & McMullan, Nonlocal symmetry for QED], quark theory in QCD [Lavelle & McMullan, Constituent quarks from QCD], and the proton spin decomposition controversy [22; 42; 43]. The dressing field approach thus gives a unifying and clarifying framework for these works, and others concerning the BRST treatment of anomalies in QFT [28; 44], Polyakov's "partial gauge fixing" for 2D-quantum gravity [41; 55], or the construction of the Wess-Zumino functional [3].
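To make the gauge-like condition construction concrete, here is a minimal abelian sketch (the signs and conventions, in particular the transformations A ↦ A + dχ, ψ ↦ e^{ieχ}ψ, are assumed here for illustration and vary across the references just cited). Imposing the Lorenz condition ∂^µ(A^u)_µ = 0 on the dressed potential A^u := A + dθ(A) determines

θ(A) = −□^{-1} ∂^µ A_µ,   so that   θ(A + dχ) = θ(A) − χ.

Hence A^u and the dressed matter field ψ^u := e^{ieθ(A)}ψ are gauge invariant; the latter is a version of Dirac's gauge-invariant electron, the dressing field being u = e^{−ieθ(A)}, which indeed satisfies u^γ = γ^{-1}u rather than the conjugation law of a genuine gauge transformation.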
In the following we provide examples of significant applications of the dressing field approach in various contexts: the electroweak sector of the Standard Model, the tetrad vs metric formulation of GR, and tractors and twistors obtained from conformal Cartan geometry.
The electroweak sector of the Standard Model
The aim of the electroweak model is to give a gauge theoretic account of the fact that there is one long range interaction mediated by a massless boson, electromagnetism, together with a short range interaction mediated by massive bosons, the weak interaction. Here we discard the spinors (matter fields) of the theory and consider only the theory describing the gauge potentials and the scalar field. The spinors could be treated along the lines of the following exposition. More details can be found in [21; 46].
Reduction of the SU(2)-symmetry via dressing
The principal bundle of the model is P(M, U(1) × SU(2)), endowed with a connection whose local representative is A = a + b. Its curvature is F = f_a + g_b. The defining representation of the structure group is (C², ℓ), with ℓ the left matrix multiplication. The associated vector bundle is E = P ×_ℓ C², and we denote by ϕ : U ⊂ M → C² a (local) section. The covariant derivative is Dϕ = dϕ + (g′a + gb)ϕ, with g′, g the coupling constants of U(1) and SU(2) respectively. The action of the gauge group H = U(1) × SU(2) (we drop the subscript "loc" from now on) is,

a^α = a + (1/g′) α^{-1}dα,   b^α = b,   ϕ^α = α^{-1}ϕ,
a^β = a,   b^β = β^{-1}bβ + (1/g) β^{-1}dβ,   ϕ^β = β^{-1}ϕ,

where α ∈ U(1) and β ∈ SU(2). The structure of direct product group is clear. The H-invariant Lagrangian form of the theory is,

L(a, b, ϕ) = ½Tr(F ∧ *F) + ⟨Dϕ, *Dϕ⟩ − U(‖ϕ‖) vol
= ½Tr(f_a ∧ *f_a) + ½Tr(g_b ∧ *g_b) + ⟨Dϕ, *Dϕ⟩ − (µ²⟨ϕ, ϕ⟩ + λ⟨ϕ, ϕ⟩²) vol,   (19)
where µ, λ ∈ R. This gauge theory describes the interaction of a doublet scalar field ϕ with two gauge potentials a and b. As it stands, neither a nor b can be massive, and indeed L contains no mass term for them. It is not a problem for a, since we expect to have at least one massless field to carry the electromagnetic interaction. But the weak interaction is short range, so its associated field must be massive. Hence the necessity to reduce the SU(2) gauge symmetry in the theory in order to allow a mass term for the weak field. Of course we know that this can be achieved via SSBM. Actually the latter is used in conjunction with a gauge fixing, the so-called unitary gauge, see e.g. [6]. Some authors have given a more geometrical account of the mechanism based on the bundle reduction theorem, see [60; 63; 65].
We now show that the SU(2) symmetry can be erased via the dressing field method. Given the gauge transformations above, we define a dressing field out of the doublet scalar field ϕ by using a polar decomposition ϕ = uη in C², with u ∈ SU(2) and η := (0, ‖ϕ‖)ᵀ ∈ R⁺ ⊂ C², so that

u^β = β^{-1}u,   (20)

as can be checked explicitly.
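For definiteness, one convenient explicit form (a standard choice, not written out in the text; it assumes ϕ ≠ 0 so that ‖ϕ‖ := (|ϕ₁|² + |ϕ₂|²)^{1/2} > 0) is

u = (1/‖ϕ‖) ( ϕ̄₂  ϕ₁ ; −ϕ̄₁  ϕ₂ ),   so that   uη = ϕ,   u*u = 1₂,   det u = 1.

Indeed, for β^{-1} = ( a  b ; −b̄  ā ) ∈ SU(2), substituting ϕ^β = β^{-1}ϕ in this formula gives u(ϕ^β) = β^{-1}u(ϕ), which is precisely (20).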
Then u is an SU(2)-dressing field that can be used to apply Prop. 1 and to construct the SU(2)-invariant composite fields
A^u := u^{-1}Au + (1/g) u^{-1}du = a + (u^{-1}bu + (1/g) u^{-1}du) =: a + B,
F^u := u^{-1}Fu = f_a + u^{-1} g_b u =: f_a + G,   with G = dB + gB²,
ϕ^u := u^{-1}ϕ = η,   and   (Dϕ)^u := u^{-1}Dϕ = Dη = dη + (g′a + gB)η.   (21)

By virtue of Prop. 8, we conclude that the theory defined by the electroweak Lagrangian (19) is actually a U(1)-gauge theory described in terms of the above composite fields,

L(a, B, η) = ½Tr(F^u ∧ *F^u) + ⟨Dη, *Dη⟩ − U(η) vol
= ½Tr(f_a ∧ *f_a) + ½Tr(G ∧ *G) + ⟨Dη, *Dη⟩ − (µ²η² + λη⁴) vol.   (22)
Notice that by its very definition η^β = η^α = η, so it is already a fully gauge-invariant scalar field, which then qualifies as an observable.
Residual U(1)-symmetry
Is a mass term allowed for the SU(2)-invariant field B? To answer, one needs to check its residual U(1)-gauge transformation B^α, which depends on the U(1)-gauge transformation of the dressing field u. One can check that

u^α = u α̃,   where   α̃ = ( α  0 ; 0  α^{-1} ).
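With the explicit form of u given above (same assumed component conventions), this is immediate: since ϕ^α = α^{-1}ϕ and |α| = 1, the conjugated entries pick up a factor α while the others pick up α^{-1}, i.e. u(ϕ^α) = u(ϕ) α̃. One also recovers the invariance of η noted earlier:

η^α = (u^α)^{-1} ϕ^α = α̃^{-1} u^{-1} α^{-1} ϕ = α^{-1} α̃^{-1} η = η,   since   α̃^{-1}η = (0, α‖ϕ‖)ᵀ.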
We therefore have

B^α = (b^α)^{u^α} = α̃^{-1} u^{-1}bu α̃ + (1/g) α̃^{-1}(u^{-1}du) α̃ + (1/g) α̃^{-1}dα̃ = α̃^{-1}B α̃ + (1/g) α̃^{-1}dα̃,
G^α = (g_b^α)^{u^α} = α̃^{-1} u^{-1} g_b u α̃ = α̃^{-1}G α̃.

In view of this, it would seem that B still cannot have mass terms. But given the decomposition B = B^a σ_a, where σ_a are the hermitian Pauli matrices and B^a ∈ iR (so that B̄^a = −B^a), we have explicitly

B = B^a σ_a = ( B³   B¹ − iB² ; B¹ + iB²   −B³ ) =: ( B³  W⁻ ; W⁺  −B³ ),

and

B^α = ( B³ + (1/g) α^{-1}dα   α^{-2}W⁻ ; α²W⁺   −B³ − (1/g) α^{-1}dα ).
The fields W ± transform tensorially under U(1), and so they can be massive: they are the (U (1)-charged) particles detected in the SPS collider in January 1983. The field B 3 transforms as a U (1)-connection, making it another massless field together with the genuine U (1)-connection a. Considering (a, B 3 ) as a doublet, one can perform a natural change of variables
( A ; Z⁰ ) := ( cos θ_W   sin θ_W ; −sin θ_W   cos θ_W ) ( a ; B³ ) = ( cos θ_W a + sin θ_W B³ ; cos θ_W B³ − sin θ_W a ),

where the so-called Weinberg (or weak mixing) angle is defined by cos θ_W = g/√(g² + g′²) and sin θ_W = g′/√(g² + g′²). By construction, it is easy to show that the 1-form Z⁰ is then fully gauge invariant and can therefore be both massive and observable: it is the neutral weak field, whose boson has been detected in the SPS collider in May 1983. Now, still by construction, we have A^β = A and A^α = A + (1/e) α^{-1}dα, with coupling constant e := gg′/√(g² + g′²) = g′ cos θ_W = g sin θ_W. So A is a U(1)-connection: it is the massless carrier of the electromagnetic interaction and e is the elementary electric charge.
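Since this invariance is central, here is the one-line check, using the residual transformations of a and B³ obtained above: under α ∈ U(1),

Z⁰ ↦ Z⁰ + (cos θ_W/g − sin θ_W/g′) α^{-1}dα = Z⁰,   since   cos θ_W/g = 1/√(g² + g′²) = sin θ_W/g′,

while

A ↦ A + (sin θ_W/g + cos θ_W/g′) α^{-1}dα = A + (√(g² + g′²)/gg′) α^{-1}dα = A + (1/e) α^{-1}dα.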
The electroweak theory (22) is then expressed in terms of the gauge-invariant fields η, Z⁰ and of the U(1)-gauge fields W±, A:

L(A, W±, Z⁰, η) = ½Tr(F^u ∧ *F^u) + ⟨Dη, *Dη⟩ − U(η) vol
= dZ⁰ ∧ *dZ⁰ + dA ∧ *dA + dW⁻ ∧ *dW⁺
+ 2g [ sin θ_W dA ∧ *(W⁻W⁺) + cos θ_W dZ⁰ ∧ *(W⁻W⁺) + dW⁻ ∧ *(W⁺A) + dW⁻ ∧ *(W⁺Z⁰) + dW⁺ ∧ *(AW⁻) + dW⁺ ∧ *(Z⁰W⁻) ]
+ 4g² [ sin²θ_W AW⁻ ∧ *(W⁺A) + cos²θ_W Z⁰W⁻ ∧ *(W⁺Z⁰) + sin θ_W cos θ_W AW⁻ ∧ *(W⁺Z⁰) + sin θ_W cos θ_W Z⁰W⁻ ∧ *(W⁺A) + ¼ W⁻W⁺ ∧ *(W⁻W⁺) ]
+ dη ∧ *dη − g²η² W⁺ ∧ *W⁻ − (g² + g′²)η² Z⁰ ∧ *Z⁰ − (µ²η² + λη⁴) vol.   (23)
We can read off all possible interactions between the four electroweak fields. Notice that there is no coupling between the fields A and Z 0 , showing the electric neutrality of the Z 0 .
The next natural step is to expand the R + -valued scalar field η around its unique configuration η 0 minimizing the potential U (η), the so-called Vacuum Expectation Value (VEV), as η = η 0 + H where H is the gauge invariant Higgs field. True mass terms for Z 0 , W ± and H depending on η 0 then appear from the couplings of the electroweak fields with η and from the latter's self interaction. The absence of coupling between η and A indicates the masslessness of the latter (the two photons decay channel of the Higgs boson involves intermediary leptons, not treated here).
The theory has two qualitatively distinct phases. In the phase where µ² > 0, the VEV vanishes and so do all masses, while in the phase where µ² < 0, the VEV is non-vanishing, η₀ = √(−µ²/2λ). The masses of the fields Z⁰, W± and H are then m_{Z⁰} = η₀√(g² + g′²), m_{W±} = η₀g, with ratio m_{W±}/m_{Z⁰} = cos θ_W, and m_H = η₀√(2λ). In this case, (23) becomes the electroweak Lagrangian form of the Standard Model in the so-called unitary gauge. But keep in mind that, as a result of the dressing field method, no gauge fixing nor SSBM is involved in obtaining it.
Discussion
Some differences with the usual viewpoint are worth stressing. The SSBM is usually constructed as follows. At high energy (i.e. in the phase µ² > 0) the symmetric VEV ϕ₀ = (0, 0)ᵀ of ϕ ∈ C² respects the full SU(2) × U(1) gauge symmetry group, so that no gauge potential in the theory can be massive. At low energy (i.e. in the phase µ² < 0) the field ϕ must fall somewhere in the space of configurations that minimize the potential U(ϕ), a space which is a 3-sphere in C², M₀ = {ϕ ∈ C² | ϕ̄₁ϕ₁ + ϕ̄₂ϕ₂ = −µ²/2λ}, whose individual points are not invariant under SU(2). Then, once an arbitrary minimum ϕ₀ ∈ M₀ is randomly selected, the gauge group is broken down to U(1) and mass terms for the SU(2)-gauge potentials are generated. See e.g. [Zinn-Justin, Quantum Field Theory and Critical Phenomena]. This usual interpretation takes place in the history of the Universe, and this "phase transition" is a contingent phenomenon, since it selects by chance one specific value in M₀. The Standard Model of Particle Physics (SMPP) then relies on two strong foundations: one is structural (in the mathematical sense), the Lagrangian of the theory; the other one is contingent, the historical aspect of the SSBM.
The dressing field approach allows to clearly distinguish the erasure of SU(2) and the generation of mass terms as two distinct operations, the former being a prerequisite of the latter but not its direct cause, as the textbook interpretation would have it. Notice also that the relevant SU(2)-invariant variables, corresponding to the physical fields (fermion fields are treated in the same manner, see [Masson & Wallet]), are identified at the mathematical level of the theory in both phases (i.e. independently of the sign of µ²). The transition between these phases, characterized by different electroweak vacua, remains a dynamical process parametrized by the sign of µ². But we stress that in our scheme there is no arbitrariness in the choice of VEV for η ∈ R⁺, since it is now unique: η₀ = √(−µ²/2λ) when µ² < 0. In particular, all the bosons Z⁰, A, W⁺, W⁻ (and fermion fields) can be identified at the level of the theory, without requiring any historical contingent process. In that respect, the contingent aspect of the SMPP is dispelled to the benefit of its unique structural foundation.
The arbitrariness of the polar decomposition ϕ = uη is discussed in [Masson & Wallet]: imposing that the final U(1) charges be clearly identified, the field content of the Lagrangian in the new variables is the same, up to rigid (global) transformations of the fields. This implies that the content of the theory in terms of SU(2)-invariant fields takes place at an ontological level, since it does not require any historical arguments.
According to [Westenholz, On spontaneous symmetry breakdown and the Higgs mechanism], the very meaning of the terminology "spontaneous symmetry breaking" lies in the fact that M₀ is not reduced to a point. Granting this reasonable observation, the dressing field approach would then lead to deny the soundness of this terminology to characterize the electroweak model: first because the symmetry reduction is not related to the choice of a VEV in M₀, then because the latter is reduced to a point. A better characterization would emphasize the link between mass generation and the electroweak vacuum phase transition: "mass generation through electroweak vacuum phase transition".
The fact that the dressing field approach to the electroweak model allows to dispense with the idea of spontaneous breaking of a gauge symmetry is perfectly in line with the so-called Elitzur theorem stating that in lattice gauge theory a gauge symmetry cannot be spontaneously broken. An equivalent theorem for gauge field theory has not been proven, but no reason has been given as to why it would fail either.
Furthermore, as we have mentioned in the introduction, the status of gauge symmetries is a disputed question in philosophy of physics. A well-argued position considers gauge symmetries as "surplus structure", as the philosopher of physics Michael Redhead calls it, that is, a redundancy in our mathematical description of reality. They would then have an epistemological status. The idea of a spontaneous breakdown of a gauge symmetry on the other hand, insofar as it implies observable qualitative physical effects (particles acquire masses in a historical process), supports an ontological view of gauge symmetries, making them a structural feature of reality rather than of our description of it. And indeed the part of the philosophy of physics community interested in this problem has struggled to reconcile the empirical success of the electroweak model with their analysis of gauge symmetries (see e.g. [8; 9]). Often a workaround is proposed, arguing that a gauge fixing removes the local dependence of the symmetry and that only a global one remains to be broken spontaneously, which by the Goldstone theorem generates Goldstone bosons "eaten up" by the gauge bosons gaining masses in the process.
These efforts of interpretation are enlightened once it is recognized that the notion of spontaneous breaking of a gauge symmetry is not pivotal to the empirical success of the electroweak model. Higgs had a glimpse of this fact [Higgs, Spontaneous symmetry breakdown without massless bosons], and Kibble saw it clearly [Kibble, Symmetry breaking in non-Abelian gauge theories] (see the paragraph just before the conclusion of his paper). Both had insights by working on toy models, just before the electroweak model was proposed by Weinberg and Salam in 1967. The invariant version of the model was first given in [Fröhlich, Morchio & Strocchi, Higgs phenomenon without symmetry breaking order parameter] in 1981 (compare their Section 6 with our exposition above), but was rediscovered independently by others [13; 19; 33; 39; 46]. The dressing field approach provides a general unifying framework for these works, and achieves the conceptual clarity philosophers of physics have been striving for [25; 62; 64].
From tetrad to metric formulation of General Relativity
Einstein teaches us that gravitation is the dynamics of space-time, the base manifold itself. It deals with spatiotemporal degrees of freedom, not "inner" ones like in Yang-Mills-type gauge theories. In the most general case there exists a notion of torsion, a concept absent in Yang-Mills theories. There are more possible invariants one can use in a Lagrangian due to index contractions impossible in Yang-Mills theories: the actual Lagrangian form for GR is not of Yang-Mills type.
All this issues from the existence in gravitational theories of the soldering form, also known as the (co-)tetrad field, which realizes an isomorphism between the tangent space at each point of space-time and the Minkowski space [Trautman, Fiber Bundles, Gauge Fields and Gravitation]. The soldering form can be seen as the formal implementation of Einstein's "happiest thought", the Equivalence Principle, which is the key physical feature distinguishing the gravitational interaction from the three other (Yang-Mills) gauge interactions.
So, while Yang-Mills fields are described by Ehresmann connections (principal connections) on a principal bundle, the gravitational field is described by both an Ehresmann connection, the Lorentz/spin connection, and a soldering form. In 1977, MacDowell and Mansouri treated the concatenation of the connection and of the soldering form as a single gauge potential [MacDowell & Mansouri, Unified geometric theory of gravity and supergravity]. The mathematical foundation of this move is Cartan geometry [70; 71]: the third additional axiom defining a Cartan connection, and distinguishing it from a principal connection, defines an absolute parallelism on P. This in turn induces, in simple cases, a soldering form [Sharpe, Differential Geometry: Cartan's Generalization of Klein's Erlangen Program]. In other words, the geometry of the bundle P is much more tightly related to the geometry of the base manifold. One can then convincingly argue that Cartan geometry is a very natural framework for classical gravitational theories.
In the following we recast the tetrad formulation of GR in terms of the adequate Cartan geometry, and show that switching to the metric formulation can be seen as an application of the dressing field method.
Reduction of the Lorentz gauge symmetry
The relevant Cartan geometry is based on the Klein model (G, H) given by G = SO(1,3) ⋉ R^{1,3}, the Poincaré group, and H = SO(1,3), the Lorentz group, so that the associated homogeneous space is G/H = R^{1,3}, the Minkowski space. The infinitesimal Klein pair is (g, h) with g = so(1,3) ⊕ R^{1,3} and h = so(1,3). The principal bundle of this Cartan geometry is P(M, SO(1,3)). The local Cartan connection and its curvature are the 1-form ϖ ∈ Λ¹(U, g) and the 2-form Ω ∈ Λ²(U, g), which can be written in matrix form

ϖ = ( A  θ ; 0  0 ),   Ω = ( R  Θ ; 0  0 ) = ( dA + A∧A   dθ + A∧θ ; 0  0 ),

where A ∈ Λ¹(U, so) is the spin connection, with Riemann curvature 2-form R and torsion Θ = Dθ, and θ ∈ Λ¹(U, R^{1,3}) is the soldering form. In other words, this Cartan geometry is just the usual Lorentz geometry (with torsion). We can thus consider the Cartan connection as the gravitational gauge potential. The local gauge group is SO := SO(1,3), and its action by an element γ : U → SO, assuming the matrix form γ = ( S  0 ; 0  1 ), is

ϖ^γ = γ^{-1}ϖγ + γ^{-1}dγ = ( S^{-1}AS + S^{-1}dS   S^{-1}θ ; 0  0 ),   Ω^γ = γ^{-1}Ωγ = ( S^{-1}RS   S^{-1}Θ ; 0  0 ).
Given these geometrical data, the associated Lagrangian form of GR is given by

L_Pal(A, θ) = −(1/32πG) Tr( R ∧ *(θ ∧ θᵗ) ) = −(1/32πG) Tr( R ∧ *(θ ∧ θᵀη) ),   (24)

with η the metric of R^{1,3} and G the gravitational constant. Given the action S = ∫ L_Pal, variation w.r.t. θ gives Einstein's equation in vacuum, and variation w.r.t. A gives an equation for the torsion, which in the vacuum vanishes (even in the presence of matter, the torsion does not propagate).

Looking for a dressing field liable to neutralize the SO-gauge symmetry, given the gauge transformation of the Cartan connection, the tetrad field e = (e^a_µ) in the soldering form θ^a = e^a_µ dx^µ is a natural candidate: θ^S = S^{-1}θ implies e^S = S^{-1}e, so that we define

u = ( e  0 ; 0  1 )   and we get   u^γ = γ^{-1}u.

Then u is an SO-dressing field. Notice that its target group G = GL is bigger than the structure group, which happens here to be also its equivariance group, H = K = SO. We can apply Prop. 1 and construct the SO-invariant composite fields,
ϖ^u = u^{-1}ϖu + u^{-1}du = ( e^{-1}Ae + e^{-1}de   e^{-1}θ ; 0  0 ) =: ( Γ  dx ; 0  0 ),
Ω^u = u^{-1}Ωu = ( e^{-1}Re   e^{-1}Θ ; 0  0 ) =: ( R  T ; 0  0 ),

where Γ = Γ^µ_ν = Γ^µ_{ν,ρ} dx^ρ is the linear connection 1-form on U ⊂ M, and R and T are the Riemann curvature and torsion 2-forms written in the coordinate system {x^µ} on U. We have their explicit expressions as functions of the components of the dressed Cartan connection on account of

Ω^u = dϖ^u + ϖ^u ∧ ϖ^u = ( dΓ   d²x ; 0  0 ) + ( Γ∧Γ   Γ∧dx ; 0  0 ) = ( dΓ + Γ∧Γ   Γ∧dx ; 0  0 ).
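Spelling this out in components makes the next remark immediate: the torsion entry reads

T^µ = (Γ∧dx)^µ = Γ^µ_{ν,ρ} dx^ρ ∧ dx^ν = ½(Γ^µ_{ν,ρ} − Γ^µ_{ρ,ν}) dx^ρ ∧ dx^ν,

i.e. it is the antisymmetric part of the linear connection coefficients in their lower indices.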
We see clearly that if Γ is symmetric in its lower indices, the torsion vanishes. A Cartan connection always induces a metric on the base manifold U ⊂ M by g(X, Y) = η(θ(X), θ(Y)), with X, Y ∈ T_xU. In components this reads g_{µν} = e_µ^a η_{ab} e^b_ν, or in index-free notation g = eᵀηe. Notice that by definition g is SO-gauge-invariant. It is easy to show that in this formalism the metricity condition is necessarily satisfied: Dg := ∇g = dg − Γᵀg − gΓ = −eᵀ(Aᵀη + ηA)e = 0, since A is so(1,3)-valued. Therefore, if T = 0, Γ is the Levi-Civita connection. Now, by application of Prop. 8, we see that the classic calculation that allows to switch from the SO-gauge formulation to the metric formulation can be seen as an example of the dressing field method,

L_Pal(A, θ) = −(1/32πG) Tr( R ∧ *(θ ∧ θᵗ) ) = −(1/32πG) Tr( Rg ∧ *(dx ∧ dxᵀ) ) = (1/16πG) √|g| Ricc d⁴x =: L_EH(Γ, g).
The last equation defines the Einstein-Hilbert Lagrangian form, depending on the SO-invariant composite fields Γ and g.
Residual symmetry
The SO-invariant fields g, ϖ^u = (Γ, dx) and Ω^u = (R, T) belong to the natural geometry of the base manifold M, i.e. the geometry defined only in terms of its frame bundle and its associated vector bundles. The only residual transformations these fields can display are coordinate transformations. On the overlap of two patches of coordinates {x^µ} and {y^µ} in a trivializing open set U ⊂ M, the initial gauge fields ϖ and Ω, as differential forms, are well defined and invariant. But obviously θ = e dx = e′ dy implies that the tetrad undergoes the transformation e′ = eG, with G = G^µ_ν = ∂x^µ/∂y^ν. The dressing field then transforms as u′ = uG̃, with G̃ = ( G  0 ; 0  1 ), so the composite fields have coordinate transformations

ϖ^{u′} = G̃^{-1}ϖ^u G̃ + G̃^{-1}dG̃ = ( G^{-1}ΓG + G^{-1}dG   G^{-1}dx ; 0  0 ) =: ( Γ′  dy ; 0  0 ),
Ω^{u′} = G̃^{-1}Ω^u G̃ = ( G^{-1}RG   G^{-1}T ; 0  0 ) =: ( R′  T′ ; 0  0 ),
g′ = e′ᵀηe′ = GᵀgG.
This gives the well known transformations of the linear connection, of the metric, Riemann and torsion tensors under general changes of coordinates. Of course the Lagrangian form, L EH , is invariant.
Discussion
The tetrad as a dressing field does not belong to the gauge group SO of the theory. So, strictly speaking, the invariant composite field ϖ^u is not a gauge transformation of the Cartan connection ϖ. In particular this means that, contrary to what is sometimes said, Γ is not a gauge transform of the Lorentz connection A. Indeed, Γ is an SO-invariant gl-valued 1-form on M; clearly it does not belong to the initial space of connections of the theory. Even if one considers that the gauge symmetries of GR are the coordinate changes, thinking of it as a gauge theory on the frame bundle LM with gauge group GL, the tetrad e^a_µ still does not belong to GL. So one cannot view Γ and A as gauge related. To obtain A from Γ, one needs the bundle reduction theorem, which allows to reduce LM to the subbundle P(M, SO(1,3)). To recover Γ from A, one needs to think in terms of the dressing field method.
Conformal Cartan geometry, tractors and twistors
In this Section we show how tractors and twistors, which are conformal calculi for torsionless manifolds [5; 53], can be derived from conformal Cartan geometry via the dressing field method. We thus start with a brief description of this geometry, and then deal with tractors and twistors.
Conformal Cartan geometry in a nutshell
A conformal Cartan geometry (P, ϖ) can be defined over n-manifolds M for any n ≥ 3 and signature (r, s), thanks to the group SO(r+1, s+1). We will admit that the base manifold is such that a corresponding spinorial version (P̄, ϖ̄) exists, based on the group Spin(r+1, s+1), so that we have the two-fold covering P̄ → P. Since we seek to reproduce twistors in signature (1,3), as spinors corresponding to tractors, we are here interested in conformal Cartan geometry over 4-manifolds, and thus take advantage of the accidental isomorphism Spin(2,4) ≃ SU(2,2).

We then treat in parallel the conformal Cartan geometry (P(M, H), ϖ) modeled on the Klein model (G, H) and its naturally associated vector bundle E, as well as the spinorial version (P̄(M, H̄), ϖ̄) modeled on the Klein model (Ḡ, H̄) and its naturally associated vector bundle Ē. For simplicity we designate them as the real and complex cases respectively. By dressing, the real case will yield tractors and the complex case will yield twistors.
In the real case, we have
$$G = PSO(2,4) = \left\{ M \in GL_6(\mathbb{R}) \,\middle|\, M^T \Sigma M = \Sigma,\ \det M = 1 \right\}/\pm\mathrm{id}, \qquad \Sigma = \begin{pmatrix} 0 & 0 & -1\\ 0 & \eta & 0\\ -1 & 0 & 0 \end{pmatrix},$$
the group metric, η the flat metric of signature (1, 3), and H is a parabolic subgroup comprising Lorentz, Weyl and conformal boost symmetries: it has the following matrix presentation [11; 58], with W := R*_+ (Weyl dilation group),
$$H = K_0 K_1 = \left\{ \begin{pmatrix} z & 0 & 0\\ 0 & S & 0\\ 0 & 0 & z^{-1} \end{pmatrix} \begin{pmatrix} 1 & r & \tfrac{1}{2}\, r r^t\\ 0 & 1_4 & r^t\\ 0 & 0 & 1 \end{pmatrix} \,\middle|\, z \in W,\ S \in SO(1,3),\ r \in \mathbb{R}^{4*} \right\},$$
where K_0 (resp. K_1) corresponds to the matrices on the left (resp. right) in the product. Clearly K_0 ≃ CO(1, 3) via (S, z) ↦ zS, and K_1 is the abelian group of conformal boosts. Here ^T is the usual matrix transposition, r^t = (rη^{-1})^T stands for the η-transposition, and R^{4*} is the dual of R^4. The corresponding Lie algebras (g, h) are graded: [g_i, g_j] ⊆ g_{i+j}, i, j = 0, ±1, with the abelian Lie subalgebras [g_{-1}, g_{-1}] = 0 = [g_1, g_1]. They decompose respectively as g = g_{-1} ⊕ g_0 ⊕ g_1 ≃ R^4 ⊕ co(1, 3) ⊕ R^{4*}, with co(1, 3) = so(1, 3) ⊕ R, and h = g_0 ⊕ g_1 ≃ co(1, 3) ⊕ R^{4*}. In matrix notation we have,
$$\mathfrak{g} = \left\{ \begin{pmatrix} \varepsilon & \iota & 0\\ \tau & s & \iota^t\\ 0 & \tau^t & -\varepsilon \end{pmatrix} \,\middle|\, (s - \varepsilon 1_4) \in \mathfrak{co}(1,3),\ \tau \in \mathbb{R}^4,\ \iota \in \mathbb{R}^{4*} \right\} \supset \mathfrak{h} = \left\{ \begin{pmatrix} \varepsilon & \iota & 0\\ 0 & s & \iota^t\\ 0 & 0 & -\varepsilon \end{pmatrix} \right\}.$$
The graded structure of the Lie algebras is automatically handled by the matrix commutator.
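As a quick consistency check (our own computation, using the η-transposition convention above), the grading relation [g_1, g_{-1}] ⊆ g_0 can be read off directly from the block pattern:
$$\left[\begin{pmatrix} 0 & \iota & 0\\ 0 & 0 & \iota^t\\ 0 & 0 & 0 \end{pmatrix},\; \begin{pmatrix} 0 & 0 & 0\\ \tau & 0 & 0\\ 0 & \tau^t & 0 \end{pmatrix}\right] = \begin{pmatrix} \iota\tau & 0 & 0\\ 0 & \iota^t\tau^t - \tau\iota & 0\\ 0 & 0 & -\iota\tau \end{pmatrix} \in \mathfrak{g}_0,$$
since τ^tι^t = (ιτ)^T = ιτ is the scalar ε-part; the vanishing of [g_{±1}, g_{±1}] is equally immediate from the same block pattern.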
In order to introduce the complex case, let us first consider the canonical isomorphism of vector spaces between Minkowski space R^4 and hermitian 2 × 2 matrices Herm(2, C) = {M ∈ M_2(C) | M* = M}, where * means trans-conjugation: R^4 → Herm(2, C), x ↦ x̄ = x^a σ_a (σ_0 = 1_2 and σ_{i=1,2,3} are the Pauli matrices). There is a corresponding double covering group morphism SL(2, C) → SO(1, 3) (2:1), S̄ ↦ S (so that S^{-1}x ↦ S̄^{-1} x̄ S̄^{-1*} and x^t S ↦ S̄* x̄^t S̄), and its associated Lie algebra isomorphism so(1, 3) ≃ sl(2, C) is denoted by s ↦ s̄. In the following, the bar notation will relate the "real" and "complex" cases in a natural way by using the same letters, generalizing the above maps.
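Under this map the Minkowski norm is encoded by the determinant, which is what makes the covering morphism work (a standard observation, recalled here for convenience):
$$\det \bar x = \det\begin{pmatrix} x^0 + x^3 & x^1 - i x^2\\ x^1 + i x^2 & x^0 - x^3 \end{pmatrix} = (x^0)^2 - (x^1)^2 - (x^2)^2 - (x^3)^2 = \eta_{ab}\,x^a x^b,$$
so S̄ ∈ SL(2, C) acting as x̄ ↦ S̄ x̄ S̄^* preserves det x̄ and hence defines a Lorentz transformation.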
For the complex case, we have then Ḡ = SU(2, 2) ≃ Spin(2, 4), which is the group preserving the metric Σ̄ = ( 0 1_2 ; 1_2 0 ), and H̄ is given in matrix notation by
$$\bar H = \bar K_0 \bar K_1 := \left\{ \begin{pmatrix} z^{1/2}\,\bar S^{-1*} & 0\\ 0 & z^{-1/2}\,\bar S \end{pmatrix} \begin{pmatrix} 1_2 & -i\bar r\\ 0 & 1_2 \end{pmatrix} \,\middle|\, z \in W,\ \bar S \in SL(2,\mathbb{C}),\ \bar r \in \mathrm{Herm}(2,\mathbb{C}) \right\}. \tag{25}$$
There is a double covering H̄ → H (2:1) which reduces to a double covering K̄_0 → K_0 (2:1) and a natural isomorphism K̄_1 ≃ K_1. Using the bar notation, the Lie algebra isomorphism so(2, 4) = g → su(2, 2) = ḡ is explicitly given by
$$\bar{\mathfrak g} = \bar{\mathfrak g}_{-1} + \bar{\mathfrak g}_0 + \bar{\mathfrak g}_1 = \left\{ \begin{pmatrix} -(\bar s^* - \tfrac{\varepsilon}{2} 1_2) & -i\bar\iota\\ i\bar\tau & \bar s - \tfrac{\varepsilon}{2} 1_2 \end{pmatrix} \,\middle|\, \varepsilon \in \mathbb{R},\ \bar s \in \mathfrak{sl}(2,\mathbb{C}),\ \bar\tau, \bar\iota \in \mathrm{Herm}(2,\mathbb{C}) \right\} \supset \bar{\mathfrak h} = \bar{\mathfrak g}_0 + \bar{\mathfrak g}_1. \tag{26}$$
Once given two Cartan bundles such that P̄(M, H̄) → P(M, H) (2:1), we endow P(M, H) with a conformal Cartan connection whose local representative on U ⊂ M is ϖ ∈ Λ^1(U, g), with curvature Ω ∈ Λ^2(U, g). In matrix presentation, one has
$$\varpi = \begin{pmatrix} a & P & 0\\ \theta & A & P^t\\ 0 & \theta^t & -a \end{pmatrix} \qquad\text{and}\qquad \Omega = d\varpi + \varpi^2 = \begin{pmatrix} f & C & 0\\ \Theta & W & C^t\\ 0 & \Theta^t & -f \end{pmatrix}.$$
In the same way, P̄(M, H̄) is endowed with a spinorial Cartan connection
$$\bar\varpi = \begin{pmatrix} -(\bar A^* - \tfrac{a}{2} 1_2) & -i\bar P\\ i\bar\theta & \bar A - \tfrac{a}{2} 1_2 \end{pmatrix} \qquad\text{and}\qquad \bar\Omega = \begin{pmatrix} -(\bar W^* - \tfrac{f}{2} 1_2) & -i\bar C\\ i\bar\Theta & \bar W - \tfrac{f}{2} 1_2 \end{pmatrix}.$$
The soldering part of ϖ is θ = e · dx, i.e. θ^a := e^a_µ dx^µ.⁸ Denote by g the metric of signature (1, 3) on M induced from η via ϖ according to g(X, Y) := η(θ(X), θ(Y)) = θ(X)^T η θ(Y), or in a way more familiar to physicists g := e^T η e, so that g_{µν} = e_µ^a η_{ab} e^b_ν. The action of H on ϖ induces, through θ, a conformal class of metrics c := [g] on M. But (P, ϖ) is not equivalent to (M, c). Nevertheless, there is a distinguished choice, the so-called normal conformal Cartan connection ϖ_N, which is unique in satisfying the conditions Θ = 0 and W^a_{bad} = 0 (which in turn, through the Bianchi identity, implies f = 0), so that (P, ϖ_N) is indeed equivalent to a conformal manifold (M, c).
Still, it would be hasty to identify A in ϖ or ϖ_N with the Lorentz connection one is familiar with in physics, and as a consequence to take R := dA + A² and P as the Riemann and Schouten tensors. Indeed, contrary to expectations, A is invariant under Weyl rescaling and neither R nor P have the well-known Weyl transformations. It turns out that one recovers the spin connection and the mentioned associated tensors only after a dressing operation, as shown in [START_REF] Attard | Tractors and Twistors from conformal Cartan geometry: a gauge theoretic approach I[END_REF].
Using the natural representation of H on R^6, we can introduce the associated vector bundle E = P ×_H R^6. A section of E is an H-equivariant map on P whose local expression is ϕ : U ⊂ M → R^6, given explicitly as a column vector
$$\varphi = \begin{pmatrix} \rho\\ \ell\\ \sigma \end{pmatrix}, \quad\text{with } \ell = \ell^a \in \mathbb{R}^4, \text{ and } \rho, \sigma \in \mathbb{R}.$$
The covariant derivative induced by the Cartan connection is Dϕ = dϕ + ϖϕ, with D²ϕ = Ωϕ. The group metric Σ defines an invariant bilinear form on sections of E: for any ϕ, ϕ′ ∈ Γ(E), one has ⟨ϕ, ϕ′⟩ = ϕ^T Σ ϕ′ = −σρ′ + ℓ^T η ℓ′ − ρσ′. The covariant derivative D preserves this bilinear form since ϖ is g-valued: DΣ = dΣ + ϖ^T Σ + Σϖ = 0.
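The identity ϖ^T Σ + Σϖ = 0 is a finite, purely matrix statement and can be checked mechanically. Below is a minimal sympy sketch (our own check; since no products of form components appear, the entries can be treated as commuting symbols):

```python
import sympy as sp

eta = sp.diag(1, -1, -1, -1)
a = sp.Symbol('a')
theta = sp.Matrix(4, 1, sp.symbols('t0:4'))      # soldering components (column)
P = sp.Matrix(1, 4, sp.symbols('p0:4'))          # R^4*-valued row
M = sp.Matrix(4, 4, sp.symbols('m0:16'))
A = eta * (M - M.T)                              # generic so(1,3): A.T*eta + eta*A = 0

P_t = (P * eta.inv()).T                          # eta-transposition of a row vector
theta_t = (eta * theta).T                        # eta-transposition of a column vector

varpi = sp.Matrix(sp.BlockMatrix([
    [sp.Matrix([[a]]),  P,              sp.zeros(1, 1)],
    [theta,             A,              P_t],
    [sp.zeros(1, 1),    theta_t,        sp.Matrix([[-a]])]]))
Sigma = sp.Matrix(sp.BlockMatrix([
    [sp.zeros(1, 1),    sp.zeros(1, 4), sp.Matrix([[-1]])],
    [sp.zeros(4, 1),    eta,            sp.zeros(4, 1)],
    [sp.Matrix([[-1]]), sp.zeros(1, 4), sp.zeros(1, 1)]]))

assert (varpi.T * Sigma + Sigma * varpi).expand() == sp.zeros(6, 6)
```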
We now follow the same line of constructions in the complex case, using the natural representation C^4 of H̄ to define the associated vector bundle Ē = P̄ ×_H̄ C^4. A section of Ē is an H̄-equivariant map on P̄ whose local expression is ψ : U ⊂ M → C^4, given as
$$\psi = \begin{pmatrix} \pi\\ \omega \end{pmatrix}, \quad\text{with } \pi, \omega \in \mathbb{C}^2 \text{ dual Weyl spinors}.$$
The covariant derivative is now D̄ψ = dψ + ϖ̄ψ, with D̄²ψ = Ω̄ψ. The group metric Σ̄ defines an invariant bilinear form on sections of Ē: for any ψ, ψ′ ∈ Γ(Ē), one has ⟨ψ, ψ′⟩ = ψ^* Σ̄ ψ′ = π^*ω′ + ω^*π′. Again, the covariant derivative D̄ preserves this bilinear form.
The gauge groups H = K_0K_1 and H̄ = K̄_0K̄_1 act on the gauge variables, with γ ∈ H and γ̄ ∈ H̄, as
$$\varpi^\gamma = \gamma^{-1}\varpi\gamma + \gamma^{-1}d\gamma, \quad \varphi^\gamma = \gamma^{-1}\varphi, \qquad \bar\varpi^{\bar\gamma} = \bar\gamma^{-1}\bar\varpi\bar\gamma + \bar\gamma^{-1}d\bar\gamma, \quad \psi^{\bar\gamma} = \bar\gamma^{-1}\psi.$$
This induces the actions Ω^γ = γ^{-1}Ωγ, Ω̄^γ̄ = γ̄^{-1}Ω̄γ̄, (Dϕ)^γ = γ^{-1}Dϕ, and (D̄ψ)^γ̄ = γ̄^{-1}D̄ψ. Given γ_0 ∈ K_0, the soldering part of the gauge transformed Cartan connection ϖ^{γ_0} is θ^{γ_0} = zS^{-1}θ, so that the metric induced by ϖ^{γ_0} is z²g. On the other hand, θ^{γ_1} = θ for γ_1 ∈ K_1. So, as mentioned above, the action of the gauge group induces a conformal class of metrics c on M.
Tractors and twistors: constructive procedure via dressing
It has been noticed that tractor and twistor vector bundles are associated to the conformal Cartan bundle, and that tractor and twistor connections are related to the conformal Cartan connection [5; 26]. However as it stands, the gauge transformations above show that ϕ is not a tractor and that ψ is not a twistor. It turns out that to recover tractors and twistors one needs to erase the conformal boost symmetry K_1 ≃ K̄_1. We outline the procedure below and give the important results. Details can be found in [1; 2].
Given the decompositions H = K_0K_1 and H̄ = K̄_0K̄_1, the most natural choice of dressing field to erase the conformal boost gauge symmetry is u_1 : U → K_1 in the real case and its corresponding element ū_1 : U → K̄_1 ≃ K_1 in the complex case, given by
$$u_1 = \begin{pmatrix} 1 & q & \tfrac{1}{2}\, q q^t\\ 0 & 1_4 & q^t\\ 0 & 0 & 1 \end{pmatrix}, \qquad \bar u_1 = \begin{pmatrix} 1_2 & -i\bar q\\ 0 & 1_2 \end{pmatrix}.$$
It turns out that u_1 can be defined via the "gauge-like" constraint Σ(ϖ^{u_1}) := Tr(A^{u_1} − a^{u_1}) = −n a^{u_1} = 0. Indeed, this gives the equation a − qθ = 0, which once solved for q gives q_a = a_µ e^µ_a, or in index-free notation q = a · e^{-1}.⁹ Now, from ϖ^{γ_1} one finds that q^{γ_1} = a^{γ_1} · (e^{γ_1})^{-1} = (a − re) · e^{-1} = q − r. One then checks easily that the constraint Σ(ϖ^{u_1}) = 0 is K_1-invariant and that u_1 is a dressing field for K_1: from q^{γ_1} = q − r one shows that u_1^{γ_1} = γ_1^{-1}u_1. In the same way, one has ū_1^{γ̄_1} = γ̄_1^{-1}ū_1. With these K_1-dressing fields, we can apply (the local version of) Prop. 1 and form the K_1 ≃ K̄_1-invariant composite fields in the real and complex cases:
$$\varpi_1 := \varpi^{u_1} = u_1^{-1}\varpi u_1 + u_1^{-1}du_1 = \begin{pmatrix} 0 & P_1 & 0\\ \theta & A_1 & P_1^t\\ 0 & \theta^t & 0 \end{pmatrix}, \qquad \bar\varpi_1 = \bar\varpi^{\bar u_1} = \begin{pmatrix} -\bar A_1^* & -i\bar P_1\\ i\bar\theta & \bar A_1 \end{pmatrix},$$
$$\Omega_1 := \Omega^{u_1} = u_1^{-1}\Omega u_1 = d\varpi_1 + \varpi_1^2, \qquad \bar\Omega_1 = \bar\Omega^{\bar u_1} = \bar u_1^{-1}\bar\Omega\,\bar u_1,$$
$$\varphi_1 := u_1^{-1}\varphi, \qquad D_1\varphi_1 = d\varphi_1 + \varpi_1\varphi_1 = \begin{pmatrix} d\rho_1 + P_1\ell_1\\ d\ell_1 + A_1\ell_1 + \theta\rho_1 + P_1^t\sigma\\ d\sigma + \theta^t\ell_1 \end{pmatrix} = \begin{pmatrix} \nabla\rho_1 + P_1\ell_1\\ \nabla\ell_1 + \theta\rho_1 + P_1^t\sigma\\ \nabla\sigma + \theta^t\ell_1 \end{pmatrix},$$
$$\psi_1 := \bar u_1^{-1}\psi, \qquad \bar D_1\psi_1 = d\psi_1 + \bar\varpi_1\psi_1 = \begin{pmatrix} d\pi_1 - \bar A_1^*\pi_1 - i\bar P_1\omega_1\\ d\omega_1 + \bar A_1\omega_1 + i\bar\theta\pi_1 \end{pmatrix} = \begin{pmatrix} \nabla\pi_1 - i\bar P_1\omega_1\\ \nabla\omega_1 + i\bar\theta\pi_1 \end{pmatrix},$$
with obvious notations. As expected, D_1²ϕ_1 = Ω_1ϕ_1 and D̄_1²ψ_1 = Ω̄_1ψ_1. Notice also that f_1 = P_1 ∧ θ is the antisymmetric part of the tensor P_1.
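The dressing property u_1^{γ_1} = γ_1^{-1}u_1 underlying the K_1-invariance of the fields just displayed reduces to the matrix identity u_1(q − r) = γ_1(r)^{-1}u_1(q), which can also be verified symbolically (a sketch of our own check):

```python
import sympy as sp

eta = sp.diag(1, -1, -1, -1)

def k1(row):
    """K_1 matrix built from a row vector in R^4* (the u_1 / gamma_1 shape)."""
    rt = (row * eta.inv()).T                       # eta-transposition: column vector
    rr = (row * rt)[0, 0]                          # scalar r r^t
    return sp.Matrix(sp.BlockMatrix([
        [sp.eye(1),      row,            sp.Matrix([[rr / 2]])],
        [sp.zeros(4, 1), sp.eye(4),      rt],
        [sp.zeros(1, 1), sp.zeros(1, 4), sp.eye(1)]]))

q = sp.Matrix(1, 4, sp.symbols('q0:4'))
r = sp.Matrix(1, 4, sp.symbols('r0:4'))

# u_1 built from the transformed q equals gamma_1(r)^{-1} u_1(q).
assert (k1(q - r) - k1(r).inv() * k1(q)).expand() == sp.zeros(6, 6)
```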
We claim that ϕ_1 is a tractor and that the covariant derivative D_1 is a "generalized" tractor connection [START_REF] Bailey | Thomas's structure bundle for conformal, projective and related structures[END_REF]. In the same way, we assert that ψ_1 is a twistor and that the covariant derivative D̄_1 is a generalized twistor connection [START_REF] Penrose | Spinors and Space-Time[END_REF]. Both assertions are supported by the analysis of the residual gauge symmetries.
Residual gauge symmetries. Being by construction K_1 ≃ K̄_1-invariant, the composite fields collectively denoted by χ_1 are expected to display K_0-residual and K̄_0-residual gauge symmetries. The group K_0 breaks down as a direct product of the Lorentz and Weyl groups, K_0 = SO(1, 3)W, and in the same way, K̄_0 = SL(2, C)W, with respective matrix presentations
$$K_0 = \{SZ\} := \left\{ \begin{pmatrix} 1 & 0 & 0\\ 0 & S & 0\\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} z & 0 & 0\\ 0 & 1_4 & 0\\ 0 & 0 & z^{-1} \end{pmatrix} \,\middle|\, z \in W,\ S \in SO(1,3) \right\} \tag{27}$$
$$\bar K_0 = \{\bar S \bar Z\} := \left\{ \begin{pmatrix} \bar S^{-1*} & 0\\ 0 & \bar S \end{pmatrix} \begin{pmatrix} z^{1/2} & 0\\ 0 & z^{-1/2} \end{pmatrix} \,\middle|\, z \in W,\ \bar S \in SL(2,\mathbb{C}) \right\} \tag{28}$$
We focus on Lorentz symmetry first, then bring our attention to Weyl symmetry. In the following, we will use the above matrix presentations S and S̄ for elements of the Lorentz gauge group SO and the SL(2, C)-gauge group SL̄. The residual gauge transformations of the composite fields under SO are inherited from those of the dressing field u_1. Using ϖ^{γ_0} to compute q^S = a^S · (e^S)^{-1} = qS, one easily finds that u_1^S = S^{-1}u_1S, and correspondingly, ū_1^{S̄} = S̄^{-1}ū_1S̄. This is a local instance of Prop. 2, which then allows to conclude that the composite fields χ_1 are genuine gauge fields (see Section 2.2.1) w.r.t. Lorentz gauge symmetry. Hence, from Cor. 3 follows that the residual SO-gauge and SL̄-gauge transformations are:
$$\varpi_1^S = S^{-1}\varpi_1 S + S^{-1}dS = \begin{pmatrix} 0 & P_1 S & 0\\ S^{-1}\theta & S^{-1}A_1S + S^{-1}dS & S^{-1}P_1^t\\ 0 & \theta^t S & 0 \end{pmatrix}, \tag{29}$$
$$\bar\varpi_1^{\bar S} = \bar S^{-1}\bar\varpi_1\bar S + \bar S^{-1}d\bar S = \begin{pmatrix} -(\bar S^{-1}\bar A_1\bar S + \bar S^{-1}d\bar S)^* & -i\,\bar S^*\bar P_1\bar S\\ i\,\bar S^{-1}\bar\theta\,\bar S^{-1*} & \bar S^{-1}\bar A_1\bar S + \bar S^{-1}d\bar S \end{pmatrix}, \tag{30}$$
and
$$\Omega_1^S = S^{-1}\Omega_1 S, \quad \varphi_1^S = S^{-1}\varphi_1, \quad (D_1\varphi_1)^S = S^{-1}D_1\varphi_1, \tag{31}$$
$$\bar\Omega_1^{\bar S} = \bar S^{-1}\bar\Omega_1\bar S, \quad \psi_1^{\bar S} = \bar S^{-1}\psi_1, \quad (\bar D_1\psi_1)^{\bar S} = \bar S^{-1}\bar D_1\psi_1. \tag{32}$$
See [1; 2] for details. Notice that ϕ_1 and ψ_1 transform as sections of the SO(1, 3)-associated bundle E_1 = E^{u_1} = P ×_{SO} R^6 and the SL(2, C)-associated bundle Ē_1 = Ē^{ū_1} = P̄ ×_{SL} C^4 respectively. We repeat the exact same procedure to analyze the Weyl gauge symmetry, using again the matrix notations defined in (27) and (28) for Z in the Weyl group W ⊂ K_0 and Z̄ in its complex counterpart W̄ ⊂ K̄_0. We first compute the action of W on the dressing field: using ϖ^{γ_0} to compute q^Z = a^Z · (e^Z)^{-1}, one easily finds that
u_1^Z = Z^{-1}u_1C(z), where C : W → K_1W ⊂ H is defined by
$$C(z) := k_1(z)Z = \begin{pmatrix} 1 & \Upsilon & \tfrac{1}{2}\Upsilon^2\\ 0 & 1_4 & \Upsilon^t\\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} z & 0 & 0\\ 0 & 1_4 & 0\\ 0 & 0 & z^{-1} \end{pmatrix} = \begin{pmatrix} z & \Upsilon & z^{-1}\tfrac{1}{2}\Upsilon^2\\ 0 & 1_4 & z^{-1}\Upsilon^t\\ 0 & 0 & z^{-1} \end{pmatrix} \tag{33}$$
where explicitly Υ = Υ_a = Υ_µ e^µ_a, with Υ_µ := z^{-1}∂_µz, and Υ² = Υ_a η^{ab} Υ_b. The corresponding complex case is ū_1^{Z̄} = Z̄^{-1}ū_1C̄(z), where C̄ : W → K̄_1W̄ ⊂ H̄ is defined by, with Ῡ = Υ_a σ^a,
$$\bar C(z) := \bar k_1(z)\bar Z = \begin{pmatrix} 1_2 & -i\bar\Upsilon\\ 0 & 1_2 \end{pmatrix} \begin{pmatrix} z^{1/2}\,1_2 & 0\\ 0 & z^{-1/2}\,1_2 \end{pmatrix} = \begin{pmatrix} z^{1/2}\,1_2 & -i\,z^{-1/2}\bar\Upsilon\\ 0 & z^{-1/2}\,1_2 \end{pmatrix}. \tag{34}$$
The map C is not a group morphism, C(z)C(z′) ≠ C(zz′), but is a local instance of a 1-α-cocycle satisfying Prop. 6: C(zz′) = C(z′z) = C(z′)·(Z′^{-1}C(z)Z′).
Under a further W-gauge transformation Z′, and due to e^Z = ze, one has k_1(z)^{Z′} = Z′^{-1}k_1(z)Z′, which implies C(z)^{Z′} = Z′^{-1}C(z)Z′. So, if u_1 undergoes a further W-gauge transformation Z′, we get u_1^{ZZ′} = (Z^{Z′})^{-1}u_1^{Z′}C(z)^{Z′} = Z′^{-1}Z^{-1}u_1C(z′)·Z′^{-1}C(z)Z′ = (ZZ′)^{-1}u_1C(zz′). Mutatis mutandis, all this is true for C̄ in (34) and for ū_1 as well. We have then a well-behaved action of the gauge groups W and W̄ in the real and complex cases.
From this we conclude that the composite fields χ_1 are instances of generalized gauge fields described in Section 2.2.2. By Prop. 5, the residual W-gauge and W̄-gauge transformations are ϖ_1^Z = C(z)^{-1}ϖ_1C(z) + C(z)^{-1}dC(z) and ϖ̄_1^{Z̄} = C̄(z)^{-1}ϖ̄_1C̄(z) + C̄(z)^{-1}dC̄(z), explicitly given by
$$\varpi_1^Z = \begin{pmatrix} 0 & z^{-1}\left(P_1 + \nabla\Upsilon - \Upsilon\theta\Upsilon + \tfrac{1}{2}\Upsilon^2\theta^t\right) & 0\\ z\theta & A_1 + \theta\Upsilon - \Upsilon^t\theta^t & *\\ 0 & z\theta^t & 0 \end{pmatrix}, \tag{35}$$
$$\bar\varpi_1^{\bar Z} = \begin{pmatrix} -\bar A_1^* - (\bar\Upsilon\bar\theta)_0 & -i\,z^{-1}\left(\bar P_1 + d\bar\Upsilon - \bar\Upsilon\bar A_1 - \bar A_1^*\bar\Upsilon - \bar\Upsilon\bar\theta\bar\Upsilon\right)\\ i\,z\,\bar\theta & \bar A_1 + (\bar\theta\bar\Upsilon)_0 \end{pmatrix}, \tag{36}$$
where (θ̄Ῡ)_0 is the sl(2, C) part of θ̄Ῡ = (θ̄Ῡ)_0 + ½(Υθ)1_2. And (see [1; 2] for details):
$$\Omega_1^Z = C(z)^{-1}\Omega_1 C(z), \qquad \bar\Omega_1^{\bar Z} = \bar C(z)^{-1}\bar\Omega_1\bar C(z), \tag{37}$$
$$\varphi_1^Z = C(z)^{-1}\varphi_1 = \begin{pmatrix} z^{-1}\left(\rho_1 - \Upsilon\ell_1 + \tfrac{\sigma}{2}\Upsilon^2\right)\\ \ell_1 - \Upsilon^t\sigma\\ z\sigma \end{pmatrix}, \qquad (D_1\varphi_1)^Z = C(z)^{-1}D_1\varphi_1, \tag{38}$$
$$\psi_1^{\bar Z} = \bar C(z)^{-1}\psi_1 = \begin{pmatrix} z^{-1/2}\left(\pi_1 + i\bar\Upsilon\omega_1\right)\\ z^{1/2}\,\omega_1 \end{pmatrix}, \qquad (\bar D_1\psi_1)^{\bar Z} = \bar C(z)^{-1}\bar D_1\psi_1. \tag{39}$$
From (35), we see that A_1 exhibits the known Weyl transformation of the Lorentz connection, and P_1 transforms as the Schouten tensor (in an orthonormal basis). But, actually, the former genuinely reduces to the latter only when one restricts to the dressing of the normal Cartan connection ϖ_{N,1}, so that A_1 is a function of θ and P_1 = P_1(A_1) is the symmetric Schouten tensor. So f_1 vanishes and we have,
$$\Omega_{N,1} = d\varpi_{N,1} + \varpi_{N,1}^2 = \begin{pmatrix} 0 & C_1 & 0\\ 0 & W_1 & C_1^t\\ 0 & 0 & 0 \end{pmatrix}, \tag{40}$$
$$\Omega_{N,1}^Z = C(z)^{-1}\Omega_{N,1}C(z) = \begin{pmatrix} 0 & z^{-1}(C_1 - \Upsilon W_1) & 0\\ 0 & W_1 & *\\ 0 & 0 & * \end{pmatrix}, \tag{41}$$
$$\bar\Omega_{N,1} = d\bar\varpi_{N,1} + \bar\varpi_{N,1}^2 = \begin{pmatrix} -\bar W_1^* & -i\bar C_1\\ 0 & \bar W_1 \end{pmatrix}, \tag{42}$$
$$\bar\Omega_{N,1}^{\bar Z} = \bar C(z)^{-1}\bar\Omega_{N,1}\bar C(z) = \begin{pmatrix} -\bar W_1^* & -i\,z^{-1}\left(\bar C_1 - \bar\Upsilon\bar W_1 - \bar W_1^*\bar\Upsilon\right)\\ 0 & \bar W_1 \end{pmatrix}. \tag{43}$$
We see that C_1 = ∇P_1 is the Cotton tensor, and indeed transforms as such, while W_1 is the invariant Weyl tensor.
From ϕ_1^Z in (38), we see that the dressed section ϕ_1 is a section of the C(W)-twisted vector bundle E_1 = E^{u_1} = P ×_{C(W)} R^{n+2} (see [START_REF] Brading | Symmetry and symmetry breaking[END_REF]). But this same relation is also precisely the defining Weyl transformation of a tractor field as derived in [START_REF] Bailey | Thomas's structure bundle for conformal, projective and related structures[END_REF]. Then E_1 is the so-called standard tractor bundle. Since C(z) ∈ K_1W ⊂ H, we have (C(z)^{-1})^T Σ C(z)^{-1} = Σ. So the bilinear form on E defined by the group metric Σ is also defined on E_1: ⟨ϕ_1, ϕ′_1⟩ = ϕ_1^T Σ ϕ′_1. This is otherwise known as the tractor metric. Furthermore, (D_1ϕ_1)^Z in (38) shows that the operator D_1 := d + ϖ_1 is a generalization of the tractor connection [5; 14]. The term "connection", while not inaccurate, could hide the fact that ϖ_1 is no longer a geometric connection w.r.t. Weyl symmetry. So we shall prefer to call D_1 a generalized tractor covariant derivative. The standard tractor covariant derivative is recovered by restriction to the dressing of the normal Cartan connection, D_{N,1} = d + ϖ_{N,1}, and Ω_{N,1} in (40) is known as the tractor curvature.
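That Σ still defines a metric after dressing is itself a two-line check (our remark): with C(z) = k_1(Υ)Z as in (33),
$$Z^T\Sigma Z = \Sigma \quad (\text{the anti-diagonal entries pick up } z\cdot z^{-1} = 1), \qquad k_1(\Upsilon)^T\,\Sigma\, k_1(\Upsilon) = \Sigma \quad (k_1(\Upsilon) \in K_1 \subset PSO(2,4)),$$
hence (C(z)^{-1})^T Σ C(z)^{-1} = Σ, as stated.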
In the same way, ψ_1^{Z̄} in (39) shows that the dressed section ψ_1 is a section of the C̄(W̄)-twisted vector bundle Ē_1 = Ē^{ū_1} = P̄ ×_{C̄(W̄)} C^4. This same relation is also, modulo the z factors, the defining Weyl transformation of a local twistor as given by Penrose [START_REF] Penrose | Spinors and Space-Time[END_REF]. So Ē_1 is identified with the local twistor bundle. It is endowed with a bilinear form defined by the group metric Σ̄ of SU(2, 2): ⟨ψ_1, ψ′_1⟩ = ψ_1^* Σ̄ ψ′_1. It is well-defined since, in view of C̄(z) ∈ K̄_1W̄ ⊂ H̄, we have (C̄(z)^{-1})^* Σ̄ C̄(z)^{-1} = Σ̄. In the twistor literature, the quantity ½⟨ψ_1, ψ_1⟩ is known as the helicity of the twistor field ψ_1 [51; 52]. Also, (D̄_1ψ_1)^{Z̄} in (39) shows that the operator D̄_1 := d + ϖ̄_1 is a generalization of the twistor connection. For the reason stated above, we shall prefer to call D̄_1 a generalized twistor covariant derivative. The usual twistor covariant derivative is recovered by restriction to the normal case, D̄_{N,1} = d + ϖ̄_{N,1}, and Ω̄_{N,1} in (42) is known as the twistor curvature.
Remark that the actions of the Lorentz/SL(2, C) and Weyl gauge groups on the composite fields χ_1 commute. In the real case for instance, we have S^W = S so that (χ_1^{SO})^W = (χ_1^S)^W = (χ_1^W)^{S^W} = (χ_1^{C(z)})^S = χ_1^{C(z)S}. But we also have C(z)^{SO} = S^{-1}C(z)S, so we get (χ_1^W)^{SO} = (χ_1^{C(z)})^{SO} = (χ_1^{SO})^{C(z)^{SO}} = (χ_1^S)^{S^{-1}C(z)S} = χ_1^{C(z)S}. Our notations for the tractor and twistor bundles can then be refined to reflect this: E_1 = P ×_{C(W)·SO} R^6 and Ē_1 = P̄ ×_{C̄(W̄)·SL} C^4.
Following the ending considerations of Section 2.2.1, the fact that the composite fields ϖ_1, ϕ_1 are genuine Lorentz-gauge fields satisfying (29) and (31) suggests that a further dressing operation aiming at erasing Lorentz symmetry is possible. In [START_REF] Attard | Tractors and Twistors from conformal Cartan geometry: a gauge theoretic approach I[END_REF] we showed that in the case of tractors, the vielbein e = e^a_µ could be used for this purpose since it has the transformation e^S = S^{-1}e, characteristic of an SO-dressing field. This is the same process as in the example of GR, treated in Section 4. The difference is that in GR one erases Lorentz symmetry and ends up with "nothing", that is no gauge symmetry but only coordinate transformations characteristic of geometric objects living on M, while in the tractor case one ends up with Weyl rescalings as residual gauge symmetry in addition to coordinate transformations. Computing the residual Weyl symmetry after this second dressing displays a slightly different C-map to be used to perform the transformation of the composite fields, see [START_REF] Attard | Tractors and Twistors from conformal Cartan geometry: a gauge theoretic approach I[END_REF]. As a matter of fact, in the literature two kinds of transformation law for tractors can be found, which in our framework correspond to either erasing only the K_1-symmetry [56; 57], or to erasing both K_1 and Lorentz symmetries [5; 14].
Since there is no finite dimensional spin representation of GL, in the twistor case the vielbein cannot be used as a second dressing field. Moreover, looking at the SL(2, C) gauge transformation of the vielbein, one sees that it is unsuited as an SL̄-dressing field. So, as far as twistors are concerned, the process of symmetry reduction ends here.
BRST treatment
The gauge groups of the initial Cartan geometries are H and H̄. The associated ghosts v ∈ Lie H and v̄ ∈ Lie H̄ split along the gradings of h and h̄,
$$v = v_0 + v_\iota = v_\varepsilon + v_s + v_\iota = \begin{pmatrix} \varepsilon & \iota & 0\\ 0 & s & \iota^t\\ 0 & 0 & -\varepsilon \end{pmatrix}, \qquad \bar v = \bar v_0 + \bar v_\iota = \bar v_\varepsilon + \bar v_s + \bar v_\iota = \begin{pmatrix} -(\bar s^* - \tfrac{\varepsilon}{2}) & -i\bar\iota\\ 0 & \bar s - \tfrac{\varepsilon}{2} \end{pmatrix}.$$
The BRST operator splits accordingly as s = s_0 + s_1 = s_W + s_L + s_1. The algebra satisfied by the gauge fields χ = {ϖ, Ω, ϕ, ϖ̄, Ω̄, ψ}, noted BRST, is
$$s\varpi = -Dv = -dv - [\varpi, v], \quad s\Omega = [\Omega, v], \quad sv = -v^2, \quad s\varphi = -v\varphi,$$
$$s\bar\varpi = -\bar D\bar v = -d\bar v - [\bar\varpi, \bar v], \quad s\bar\Omega = [\bar\Omega, \bar v], \quad s\bar v = -\bar v^2, \quad s\psi = -\bar v\psi.$$
From Section 2.3, the composite fields χ_1 = {ϖ_1, Ω_1, ϕ_1, ψ_1} satisfy a modified BRST algebra, formally similar but with composite ghost v_1 := u_1^{-1}v u_1 + u_1^{-1}s u_1. From the finite gauge transformations of u_1, and the linearizations γ_1 ≃ 1 + v_ι and S ≃ 1 + v_s, the BRST actions of K_1 and SO are found to be: s_1 u_1 = −v_ι u_1 and s_L u_1 = [u_1, v_s].
This shows that the Lorentz sector is an instance of the general result [START_REF] Dirac | Gauge-invariant formulation of quantum electrodynamics[END_REF]. Using the linearizations Z ≃ 1 + v_ε and k_1(z) ≃ 1 + κ_1(ε), so that C(z) = k_1(z)Z ≃ 1 + c(ε) = 1 + κ_1(ε) + v_ε, the BRST action of W is s_W u_1 = −v_ε u_1 + u_1 c(ε). This shows that the Weyl sector is an instance of the general result [START_REF] Dubois-Violette | The Weil-BRS algebra of a Lie algebra and the anomalous terms in gauge theory[END_REF]. After a straightforward computation and a similar analysis for the complex case, we get the composite ghosts
$$v_1 = c(\varepsilon) + v_s = \begin{pmatrix} \varepsilon & \partial\varepsilon & 0\\ 0 & s & \partial\varepsilon^t\\ 0 & 0 & -\varepsilon \end{pmatrix}, \qquad \bar v_1 = \bar c(\varepsilon) + \bar v_s = \begin{pmatrix} -(\bar s^* - \tfrac{\varepsilon}{2}1_2) & -i\,\overline{\partial\varepsilon}\\ 0 & \bar s - \tfrac{\varepsilon}{2}1_2 \end{pmatrix}, \quad\text{where } \partial\varepsilon := \partial_a\varepsilon = \partial_\mu\varepsilon\, e^\mu{}_a.$$
The ghost of conformal boosts, v_ι, has disappeared from these new ghosts, replaced by the first derivative of the Weyl ghost. This means that s_1χ_1 = 0, which reflects the K_1-gauge invariance of the composite fields χ_1. The composite ghost v_1 only depends on v_s and ε: it encodes the residual K_0-gauge symmetry. The algebra satisfied by the composite fields χ_1, denoted by BRST_{W,L}, is then simply
$$s\varpi_1 = -D_1v_1 = -dv_1 - [\varpi_1, v_1], \quad s\Omega_1 = [\Omega_1, v_1], \quad sv_1 = -v_1^2, \quad s\varphi_1 = -v_1\varphi_1,$$
$$s\bar\varpi_1 = -\bar D_1\bar v_1 = -d\bar v_1 - [\bar\varpi_1, \bar v_1], \quad s\bar\Omega_1 = [\bar\Omega_1, \bar v_1], \quad s\bar v_1 = -\bar v_1^2, \quad s\psi_1 = -\bar v_1\psi_1,$$
and reproduces the infinitesimal version of (29), (30), (31), (32) (Lorentz/SL(2, C) sector) and (35), (36), (37), (38), (39) (Weyl sector). Explicit results are obtained via simple matrix calculations; we refer to [1; 2] for all details.
Since v 1 = c(ε) + v s , BRST W,L splits naturally as Lorentz and Weyl subalgebras, s = s W + s L . The Lorentz sector (s L , v s ) shows the composite fields χ 1 to be genuine Lorentz gauge fields. While the Weyl sector (s W , c(ε)) shows χ 1 to be generalized Weyl gauge fields.
Discussion
Today, tractors and twistors are terms whose meaning extends beyond their original context of definition, conformal (and projective) geometry, and are quite broad concepts in the theory of parabolic geometries [START_REF] Cap | Parabolic Geometries I: Background and General Theory[END_REF]. In their original meaning, most often tractor and local twistor bundles are constructed in a "bottom-up" way, starting with a conformal manifold (M, c) and building a gauge structure on top of it.
First, one poses a defining differential equation on (M, c). In the case of tractors, this is the almost Einstein equation (AE)
$$\nabla_\mu\nabla_\nu\sigma - P_{\mu\nu}\sigma - \frac{g_{\mu\nu}}{n}\left(\Delta\sigma - P\sigma\right) = 0,$$
with σ a 1-conformal density (σ^Z = z^{-1}σ), ∇ the Levi-Civita connection associated to a choice of metric g_{µν} ∈ c, Δ := g^{µν}∇_µ∇_ν, and P := g^{µν}P_{µν}. For twistors, one defines the twistor equation
$$\nabla_{A'}{}^{(A}\,\omega^{B)} = 0, \quad\text{or equivalently}\quad \nabla_{A'A}\,\omega^{B} - \tfrac{1}{2}\,\delta_A{}^B\,\nabla_{A'C}\,\omega^{C} = 0,$$
where ω^B : M → C² is a Weyl spinor. Then one prolongs these equations, recasting them as first-order systems. These are interpreted as first-order differential operators acting on multiplets: ∇^T V = 0 and ∇^T̄ Z = 0 respectively, where V = (σ, µ^a, ρ) and Z = (ω^A, π_{A'}). The transformations of the components of V and Z under Weyl rescaling of the metric are given either by definition, when the components are functions of the metric (V), or by choice (Z). This takes some algebra to prove. With still more algebra, one shows that these transformation laws also apply to ∇^T V and ∇^T̄ Z. But then V and Z are interpreted as parallel sections of some vector bundles over M, the standard tractor bundle T and local twistor bundle T̄ respectively, which are endowed with their linear connections, the tractor connection ∇^T and twistor connection ∇^T̄ (hence the notation). Their commutators [∇^T, ∇^T]V = κV and [∇^T̄, ∇^T̄]Z = KZ are said to define respectively the tractor and twistor curvatures. Thus, starting from differential equations on (M, c), one ends-up with a gauge structure on top of it in the form of the tractor and twistor bundles and their connections. The latter provide natural conformally covariant calculi for torsion-free conformal manifolds. We refer the reader to [5; 14] for detailed calculations of this bottom-up procedure in the tractor case, and to the classic [53, Sec. 6.9] for the twistor case. See also [START_REF] Eastwood | Complex paraconformal manifolds -their differential geometry and twistor theory[END_REF], Sec. 6.1, which extends the twistor construction to paraconformal manifolds. It has been noticed that the tractor and twistor bundles can be seen as associated bundles of the principal Cartan bundle P(M, H), and a link between the normal conformal Cartan connection and the twistor 1-form was drawn by Friedrich [START_REF] Friedrich | Twistor connection and normal conformal cartan connection[END_REF]. Nevertheless, the construction via prolongation has been deemed more explicit in [START_REF] Eastwood | Complex paraconformal manifolds -their differential geometry and twistor theory[END_REF], and more intuitive and direct in [START_REF] Bailey | Thomas's structure bundle for conformal, projective and related structures[END_REF], than the viewpoint in terms of associated vector bundles.
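For concreteness, the prolongation step in the tractor case can be sketched as follows (our paraphrase of the standard construction of [5; 14], written with the sign conventions of the AE equation as stated above): setting µ_a := ∇_aσ and ρ := −(1/n)(Δσ − Pσ), the AE equation is equivalent to the parallelism of V = (σ, µ_b, ρ) for the operator
$$\nabla^{\mathcal T}_a \begin{pmatrix} \sigma\\ \mu_b\\ \rho \end{pmatrix} := \begin{pmatrix} \nabla_a\sigma - \mu_a\\ \nabla_a\mu_b - P_{ab}\sigma + g_{ab}\rho\\ \nabla_a\rho + P_{ab}\mu^b \end{pmatrix},$$
the first line defining µ, the trace-free part of the second reproducing the AE equation (its trace defines ρ), and the third closing the system; the sign of the P-term in the last line depends on the curvature conventions used.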
However, our procedure presents several advantages. Starting from a "bigger" gauge structure over M controlled by the conformal Cartan bundle P and its double cover complex version P̄, we obtain the vector bundles endowed with covariant derivatives (E_1, D_1) and (Ē_1, D̄_1) in a very straightforward and systematic way by symmetry reduction. So, our constructive procedure via the dressing method is "top-down" and involves much less calculation than the usual "bottom-up" approach outlined above, and is arguably more direct and intuitive.
Furthermore, these bundles reduce to the usual tractor and twistor bundles and their respective covariant derivatives when restricted to the normal Cartan geometry, and one gets (E_1, D_{N,1}) = (T, ∇^T) and (Ē_1, D̄_{N,1}) = (T̄, ∇^T̄). So, here we effortlessly generalize the tractor and twistor derivatives, providing essentially tractor and twistor calculi for conformal manifolds with torsion. It follows that if ϖ_{N,1} and ϖ̄_{N,1} are the genuine tractor and twistor 1-forms, then ϖ_1 and ϖ̄_1 may be labeled as generalized tractor and twistor 1-forms.
Our approach allows one to clearly highlight the fact that, while tractors, twistors, and the associated (generalized) 1-forms and curvatures are genuine Lorentz/SL(2, C) gauge fields, they are gauge fields of a generalized kind w.r.t. Weyl rescaling gauge symmetry, transforming via a 1-α-cocycle on the Weyl group. A fact that, as far as we know, has never been noticed.
Let's finally notice that in this framework, one can easily write a Yang-Mills-type Weyl-invariant Lagrangian and compute the corresponding field equations. It turns out that this Lagrangian reproduces Weyl gravity if one restricts to a normal Cartan connection, as was shown in [START_REF] Attard | Weyl gravity and cartan geometry[END_REF]. This by the way explains the equivalence between the Bach equation and the Yang-Mills equation for the normal conformal Cartan connection [START_REF] Korzyński | The normal conformal Cartan connection and the Bach tensor[END_REF] or the twistor 1-form [START_REF] Merkulov | The twistor connection and gauge invariance principle[END_REF].
Conclusion
The dressing field method of gauge symmetry reduction is a fourth way, besides gauge fixing, SSBM, and the bundle reduction theorem, to handle challenges one faces in gauge theories. As a matter of fact, as mentioned at the end of Section 2.4, it is relevant in many places in gauge field theories, from QCD to anomalies in QFT. In this review paper we have outlined the main general results of the method concerning the construction of partially gauge invariant composite fields out of the usual gauge variables, and discussed two important cases where their residual gauge transformations can be treated on a general ground. Interestingly, we saw a case in which the composite fields are gauge fields of an unusual geometric nature, so that we label them "generalized" gauge fields.
We have shown that the method applies to the BEHGHK mechanism pivotal to the electroweak model. In doing so, we highlighted the fact that the notion of spontaneously broken gauge symmetry, which has long raised doubts among both philosophers of science and lattice gauge theorists (in view of the Elitzur theorem), is dispensable and anyway unnecessary for the empirical success of the Standard Model. This result is thus satisfying from a philosophical standpoint, and does not question the heuristic power of the gauge principle.
We have argued that the usual switching between the tetrad and metric formulations of GR is a simple application of the dressing field method. In doing so, we have stressed that, contrary to what is sometimes said, the linear connection Γ and the Lorentz connection A are not mutual gauge transforms, even if one considers GR as a gauge theory on the frame bundle LM. Actually, to recover A from Γ one needs the bundle reduction theorem, and to get Γ from A one needs the dressing field method. So that, in this instance, these are reciprocal operations.
The method applied to the conformal Cartan geometry and its spinorial version allows to obtain generalizations of the tractor and twistor calculi for conformal manifolds, extending them to manifolds with torsion, in a very straightforward "top-down" way. It happens to be computationally much more economical than the usual "bottom-up" approach by prolongation of the Almost Einstein and twistor equations, and arguably more direct and intuitive. Also, we have seen that tractors and twistors, while being genuine Lorentz gauge fields, are generalized gauge fields as far as Weyl rescaling symmetry is concerned.
One suspects that still more instances of the dressing field method could be found in the literature on gauge theories. Furthermore, its simplicity may put within reach results otherwise difficult to obtain by other approaches; the example of tractor calculi for various parabolic geometries and their application to physics comes to mind. It is our hope that this approach could contribute to clarify and enrich some aspects of gauge field theories in physics.
In its present form, the method relies on the defining (structural) relations for gauge transformations: as already mentioned, while the field contents are different, definitions (4) look algebraically like gauge transformations (1). 10 This is a key ingredient of the method. One can raise the question about some possible other routes one could elaborate to define dressed fields on which a part of the gauge symmetry is erased, but not using gauge transformation-like relations.
Finally, to make the dressing field method a full-fledged approach to gauge QFT, the question of its compatibility with quantization must be addressed. In particular, do the operations of quantization and of reduction by dressing commute? So far, the question has not been fully addressed. One can find in [START_REF] Masson | A remark on the spontaneous symmetry breaking mechanism in the standard model[END_REF] some hints that the problem is not easy and straightforward, mainly because we may first face the problem of the definition of a mathematically sound, let alone unique, quantization scheme. A rich topic in itself, that again exemplifies the fruitful cross-fertilization between physics and mathematics.
The action of d is defined on the generators by: dω = Ω − ½[ω, ω] (Cartan structure equation), dΩ = [Ω, ω] (Bianchi identity), dv = ζ and dζ = 0. The action of the BRST operator on the generators gives the usual defining relations of the BRST algebra,
See[START_REF] Martin | Gauge principles, gauge arguments and the logic of nature[END_REF] for a critical discussion of its scope and limits.
Weyl topped this with an even stronger endorsement of the importance of symmetries in physics: "As far as I see, all a priori statements in physics have their origin in symmetry"[START_REF] Weyl | Symmetry[END_REF].
See the nice short appendix by S. S. Chern of the book on differential geometry he co-authored[START_REF] Chern | Lectures on differential geometry[END_REF].
In the general theory the group G is replaced by a C * -algebra A.
For instance, such a term is the one for a spontaneous symmetry breaking mechanism.
In fact, it could even be reduced to a technical step useful to perform the usual field quantization procedure, which relies heavily on the identification of propagators and mass terms in the Lagrangian.
While in the previous example we had G = K = SU(2) ⊂ H = U(1) × SU(2).
Notice that from now on we shall make use of "•" to denote Greek indices contractions, while Latin indices contraction is naturally understood from matrix multiplication.
Beware of the fact that in this index free notation a is the set of components of the 1-form a. This should be clear from the context.
Let us mention here how it has been difficult, on several occasions, to convince some colleagues that these relations are not mathematically on the same footing.
T Jane Stockmann
email: [email protected]
Léo Angelé
Vitor Brasiliense
Catherine Combellas
Frédéric Kanoufi
email: [email protected]
Platinum nanoparticle impacts at a liquid|liquid interface
Keywords: Nanoparticles, impacts, O 2 reduction, bipolar, liquid|liquid interface
Single nanoparticle (NP) electrochemistry detection at a micro liquid|liquid interface (LLI) is exploited through the catalysis of the oxygen reduction reaction (ORR). In this way, current spikes reminiscent of nano-impacts were recorded owing to electrocatalytic enhancement of the ORR by Pt-NPs. The nature of the LLI allows the exploration of new phenomena in single NP electrochemistry. The current impacts are due to a bipolar reaction occurring at the Pt-NP straddling the LLI: O2 reduction takes place in the aqueous phase, while ferrocene hydride (Fc-H+), a complex generated upon facilitated interfacial proton transfer by Fc, is oxidized in the organic phase. Ultimately, the role of reactant partitioning, of NP bouncing, and of the ability of NPs to induce Marangoni effects is evidenced.
Body
The understanding of charge transfer processes at the nanoscale has been fueled by the emergence of single NP studies, such as electrochemical nanoimpact experiments, where insight into the reactivity of a NP is gained from the electrochemical detection of the collision of individual NPs onto a polarized microelectrode. [1] However such studies, recently reviewed by Compton et al., [1a] as well as by Robbs and Rees, [1b] are complicated by the rigorous control required of the microelectrode detector surface chemistry and activity. The polarized interface between two immiscible liquids or electrolytes (ITIES) offers a more reproducible electrochemical soft interface. Moreover, the use of nanoparticles (NPs) for catalysis at a polarized liquid|liquid interface (LLI), between water (w) and oil (o), particularly for the case of the oxygen reduction reaction (ORR), has gained prevalence. [2] The ORR is then controlled by modification of the Galvani potential difference across the ITIES, Δ_o^w φ = φ_w − φ_o, while NPs enhance the reaction by effectively behaving as multivalent redox species, or electron reservoirs. [2c, 3] Single entity studies with immiscible liquid systems are few. Ensemble polarographic measurements of TiO2, SnO2, and Fe2O3 nanoparticles at renewed Hg droplet electrodes were first performed by Heyrovsky et al. [4] Later, the group of Scholz investigated soft matter microparticle impacts, such as liposomes [5] and vesicles/organelles, [6] at a Hg electrode. These were expanded upon using solid ultramicroelectrodes (UMEs) to investigate nanodroplet impacts. [7] Laborda et al. [8] recently reported on single emulsion fusion events triggering a large flux of ions at a macro w|o interface and generating a spike-shaped current similar to the destructive impacts of metal-NPs. [1f, 9] However, the study of metal-NP impacts at LLIs only concerns those at liquid Hg/Pt UMEs or dropping Hg electrodes. [4,10] We propose herein to transpose the concept of electrochemical nanoimpact to a water|1,2-dichloroethane (w|DCE) soft interface for the case of the catalytic activation of the ORR by Pt-NPs, as illustrated in Scheme 1.
For this purpose, a microITIES platform (25μm diameter) housed at the tip of a pulled borosilicate glass capillary was employed. O 2 reduction is catalyzed by Pt-NPs present in w, using ferrocene (Fc) as a sacrificial electron donor in o, and H 2 SO 4 as the proton source/supporting electrolyte in w.
The system was first analyzed without Pt-NPs and with 5 mM of H2SO4 added to w. In the absence of Fc in o (trace a in Figure 1A), the potential window is limited on either end by the transfer of the supporting electrolyte ions: H+ and HSO4− or SO4^2− at positive and negative potentials, [11] respectively. After addition of 50 mM Fc to the DCE phase (trace b in Figure 1A), a peak-shaped wave was observed at −0.150 V. As the Fc concentration was increased, the half-wave potential (Δ_o^w φ_1/2) shifted towards more negative potentials while the peak current remained the same. This is indicative of facilitated ion transfer (FIT) [12] of H+ by Fc. The variation of the half-wave transfer potential, Δ_o^w φ_{Fc-H+,1/2}, with the Fc bulk concentration, c*_Fc (Figure 1B),
suggests the FIT produces a 1:1 metallocene-hydride complex Fc-H+ with a β value of 4.2×10^12 L mol^−1. This agrees with recent soft-interfacial FIT studies. [12c, 12g] Since Fc can itself catalyze O2 reduction, although slowly, the slight negative baseline shift for the red trace in Figure 1A is likely due to Fc+ transfer from o→w, which becomes noticeable at the high [Fc] used here.
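In the standard treatment of facilitated ion transfer by an excess neutral ionophore (our summary of the usual result, with z = 1 for H+), a 1:1 complex gives
$$\Delta_o^w\phi_{1/2} = \Delta_o^w\phi^{\circ\prime}_{\mathrm{H}^+} - \frac{RT}{zF}\,\ln\!\left(\beta\, c^*_{\mathrm{Fc}}\right),$$
so that a plot of $-\frac{zF}{RT}\left(\Delta_o^w\phi_{1/2} - \Delta_o^w\phi^{\circ\prime}_{\mathrm{H}^+}\right)$ against $\ln c^*_{\mathrm{Fc}}$ (Figure 1B) is linear with unit slope for a 1:1 stoichiometry and intercept $\ln\beta$.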
After addition of Pt-NPs, chronoamperograms (CAs) were recorded at 0.20V, after the FIT wave. Figures 2 A-C depict the CAs recorded using 5, 30, and 50nm diameter citrate capped Pt-NPs in w (1.6×10 11 NP L -1 ); D shows the response with no NPs added. Once NPs are added, current spikes are observed reminiscent of electrocatalytic nanoimpacts of Pt-NPs at metallic UMEs. [1c-f, 9b, 9c, 13] This suggests the spikes are related to Pt-NP impacts at the w|o microinterface. Owing to the charge convention at a LLI, the positive spikes could either be a positive charge transfer from w→o or a negative one from o→w. The latter is consistent with electrons passed from o→w, as illustrated in Scheme 1, where the NP catalyses the ORR in a bipolar electron transfer mode. Samec et al. [14] electrochemically synthesized Pt-NPs at a macro w|o interface creating a nanofilm and proposed a similar interfacial NP catalysed electron transfer reaction, where decamethylferrocene acts as the electron donor.
No current spikes were observed at potential steps recorded before the FIT wave (≤-0.20 V) indicating that here the hydride Fc-H + is the electron donor and that the NP catalyses the combined two interfacial reactions:
Fc-H⁺(o) → Fc⁺(o) + H⁺(o) + e⁻    (1)
2 H⁺(w) + O₂(w) + 2 e⁻ → H₂O₂(w)    (2)
The standard redox potential for Fc oxidation in DCE is 0.640 V, [START_REF] Fermin | Liquid Interfaces In Chemical, Biological And Pharmaceutical Applications[END_REF] while for (2) E° = 0.695 V. [START_REF] Her | Encyclopedia of Electrochemistry[END_REF] The standard potential of the combined interfacial redox reaction: [14]
E°′ = E°(Fc⁺/Fc)ₒ − E°(O₂/H₂O₂) + 0.059 pH    (3)
is then E°′ = 0.081 V, which provides only a limited driving force at the applied potential (Δ_o^w φ_1/2 > 0.2 V). It is enhanced with the hydride Fc-H+, an activated form of Fc, which likely has a higher E°.
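A quick arithmetic check of Eq. (3) is sketched below (the pH value is our assumption for 5 mM H2SO4, taking the first dissociation as complete and the second as partial; it is not a value from the text):

```python
E_Fc = 0.640   # V, standard potential of Fc+/Fc in DCE (quoted above)
E_O2 = 0.695   # V, standard potential of O2/H2O2 (quoted above)
pH = 2.3       # assumed for 5 mM H2SO4, i.e. [H+] of a few mM

E_combined = E_Fc - E_O2 + 0.059 * pH   # Eq. (3)
print(f"E°' = {E_combined:.3f} V")      # ~0.081 V, matching the value quoted
```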
The form of the spikes detected, see inset in Figure 2, consists of three regions, as described recently by Stevenson et al. [10a, 10b] for Pt NPs at a Hg/Pt UME and Compton et al. [10c] for Hg2Cl2 NPs at a Hg/carbon UME: (i) a period before impact with no catalysis (baseline), (ii) a sudden onset catalytic current when the NP collides, and (iii) a seconds-long current decay. The latter suggests a deactivation of the NP active surface, either by displacement of the citrate capping agent by Fc+, or from the action of reactive oxygen species produced during the ORR.
The liquid tube (of height L ≈ 150 µm and radius a) restricts NP diffusion toward the LLI. Moreover, this restricted volume, ~23 pL, contains (under Figure 2 conditions) ~4 NPs and allows the visualisation of single entity behaviour, especially for slowly diffusing entities. For example, the CA in Figure S1 shows two sets of multiple peaks: (i) a doublet at 45 and 47 s and (ii) 3 large current spikes at t > 82 s within 40 s. Owing to their uniform intensity (resp. 150 and 600 pA) and residency time (resp. 1 and 2.5 s), it can be assumed that each multiplet corresponds to the same large cluster/aggregate of 30 nm diameter NPs bouncing on the LLI. Such bouncing also likely occurs for individual NPs, but the knowledge of their diffusion coefficient and concentration, D_NP and c_NP respectively, allows predicting the frequency of impacts f from the equation of NP diffusive flux in the liquid tube:
f = D_NP c_NP πa² / (a + L)    (4)
Using Stokes-Einstein estimates of D_NP for the 5, 30 and 50 nm diameter NPs, respectively, for c_NP = 1.6×10^11 NP L^−1 in Figure 2, one predicts ~4, 0.67 and 0.4 current spikes per 100 s. Spikes were selected, based on the above spike profile and excluding clusters, within a range of 8-100 pA. Additional CAs can be found in the SI. Averaging over ~10 CAs of 120 s each yields 2.5, 1, and 0.7 impacts per 100 s, respectively. The concentration of the 30 nm NPs was changed from 1.6 to 7.0 and 16.0×10^11 NP L^−1, giving f ≈ 2.6 and 8.0 impacts per 100 s (3 and 6.7 predicted, respectively). While the observed frequency of impacts is in general agreement with the predicted flux of NPs to the interface, the occurrence of frequent spikes, even with lower current amplitude, detected for the largest 30 and 50 nm NPs (Figure 2B,C) could indicate NP rolling/bouncing or physical perturbation of the LLI owing to the presence of the NP. In this way, NPs may instigate Marangoni-type instabilities, generated by rapid local changes in surface tension within the back-to-back double layers upon adsorption.
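Equation (4) is straightforward to evaluate. The sketch below combines it with the Stokes-Einstein estimate D_NP = k_BT/(6πηr); the temperature and water viscosity are our assumed inputs, not values from the text:

```python
import math

kB, T, eta = 1.381e-23, 298.0, 0.89e-3   # J/K, K, Pa.s (assumed water viscosity)
a, L = 12.5e-6, 150e-6                    # m: interface radius (25 um dia.), tube height
c_NP = 1.6e11 * 1e3                       # NP L^-1 converted to NP m^-3

for d_nm in (5, 30, 50):
    r = d_nm * 1e-9 / 2
    D = kB * T / (6 * math.pi * eta * r)        # Stokes-Einstein diffusivity
    f = D * c_NP * math.pi * a**2 / (a + L)     # Eq. (4), impacts per second
    print(f"{d_nm} nm: D = {D:.2e} m2/s, f = {100*f:.2f} impacts per 100 s")
```

With these inputs one obtains roughly 4.7, 0.8 and 0.5 impacts per 100 s, of the same order as the ~4, 0.67 and 0.4 quoted above; the small differences reflect the assumed temperature and viscosity.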
Despite the possible aggregate formation, multiple bouncing, LLI deactivation process (and possible Marangoni effects) encountered, the observed catalytic current spikes were discussed more quantitatively. The histograms of their amplitude are plotted in Figure 4A-C.
Even if the presence of NP aggregates is suspected, [START_REF] Shimizu | [END_REF] the smaller current region of each histogram has been fitted using a log-normal distribution to establish average spike amplitudestaken from the log-normal peakof 10, 40, and 46pA for the 5, 30 and 50 nm particles, respectively.
The observed current amplitudes were rationalized by a Comsol 2D simulation (details in SI). As presented in Scheme 1, the model considered the two interfacial reactions (1) and
(2) occurring on each electrolyte side of the NP interface. The bipolar condition ensures that the reaction is driven with equal diffusional flux on each NP side. Even though it may depend on the penetration of the NP in the organic phase (see NP positions in Figure S7), a situation explored for solid sphere-cap electrodes resting on a plane, [18] here we explore the situation of one hemisphere placed in each phase. The simulation predicts the concentration profiles (Figure S5 for O 2,w ) and flux (or equivalent current flow at the ITIES) for the consumption of each reactant. Particularly (see Figure S6) the availability of O 2 in the organic phase and its partitioning, allows local feeding of the aqueous phase and increase of O 2,w mass-transfer at the NP-water interface. Based on the simulation, the simulated steady-state current (i lim ) is limited by O 2,w and values of 4, 26, and 43pA were estimated for the 5, 30, and 50nm NPs respectively, which are in fair agreement with the experimental values.
The limiting current decreases when a larger portion of the NP is present in the DCE phase (subhemisphere). In the reverse situation (superhemisphere) higher currents are expected if, still in a bipolar fashion, O 2 is reduced at the superhemispherical surface, while
Fc-H + oxidation occurs in the LLI ideally polarized at the same potential as the lower cap of the NP, present or contacting the DCE phase. This extreme situation of a nanosphere sitting on the interface provides the highest i lim for O 2 reduction (see [O 2 ] w profiles in Figure S8).
Simulated values, obtained with O 2 partitioning are 6, 37 and 61pA respectively for 5, 30 and 50nm NPs. This suggests that the smallest NPs are sitting on the LLI, while the larger ones are likely penetrating the LLI.
This communication shows a proof of concept towards liquid|liquid interfacial NP detection/characterization through stochastic collisions that broadens the number of platforms upon which this technique can be employed, thereby, expanding the 'state-of-the-art'. Owing to the non-destructive nature and flexibility of the LLI, bouncing and/or rolling of NPs was observed, demonstrating deactivation then re-activation without overall NP catalytic activity loss. Furthermore, this method provides sensitive individual as well as ensemble NP information not possible for nanofilms synthesized in situ at the LLI. Since the LLI is much easier to fabricate/more reproducible, it could also replace carbon nanoelectrodes in doublebarrel multimode sensing [19] which combines resistive pulse and NP impact analysis. The fair agreement between the simulated and observed spike amplitudes, along with the calculated and experimental impact frequencies, demonstrate the sensitivity of this method.
(
baseline),(ii) a sudden onset catalytic current when the NP collides, and (iii) a seconds-long current decay. The latter suggests a deactivation of the NP active surface, either by displacement of the citrate capping agent by Fc + , or from the action of reactive oxygen species produced during the ORR.The liquid tube (of height L~150µm and radius a) restricts NP diffusion toward the LLI.Moreover, this restricted volume, ~23pL, contains (under Figure2conditions) ~4 NPs and allows the visualisation of single entity behaviourespecially for slowly diffusing entities.For example, the CA in FigureS1shows two sets of multiple peaks: (i) a doublet at 45 and 47s and (ii) 3 large current spikes at t>82s within 40s. Owing to their uniform intensity (resp. 150 and 600pA) and residency time (resp. 1 and 2.5s), it can be assumed that each multiplet corresponds to the same large cluster/aggregate of 30nm diameter NPs bouncing on the LLI. Such bouncing also likely occurs for individual NPs, but the knowledge of their diffusion coefficient and concentration, D NP and c NP respectively, allows predicting the frequency of impacts f from the equation of NP diffusive flux in the liquid tube through: 𝑓 = (𝐷 𝑁𝑃 𝑐 𝑁𝑃 𝜋𝑎 2 )/(𝑎 + 𝐿) (4) Using Stokes-Einstein estimates of D NP for the 5, 30 and 50nm diameter NPs, respectively, for c NP =1.6×10 11 NP L -1 in Figure 2, one predicts ~4, 0.67 and 0.4 current spikes per 100s. Spikes were selected, based on the above spike profile and excluding clusters, within a range of 8-100pA. Additional CAs can be found in the SI. Averaging over ~10 CAs of 120s each, yields 2.5, 1, and 0.7 impacts per 100s, respectively. The concentration of the 30nm NPs was changed from 1.6 to 7.0 and 16.0×10 11 NP L -1 giving f ~2.6 and 8.0 impacts per 100s (3 and 6.7 predicted resp.). If the observed frequency of impacts is in general agreement with the predicted flux of NPs to the interface, the occurrence of frequent spikes, even with lower current amplitude, detected for the largest 30 and 50 nm NPs (Figure 2B,C) could indicate NP rolling/bouncing or physical perturbation of the LLI owing to the presence of the NP. In this way, NPs may instigate Marangoni-type instabilities, generated by rapid local changes in surface tension within the back-to-back double layers upon adsorption.
Figures and Schemes
Figure 1 .
1 Figure 1. (A) Cyclic voltammograms in Cell 1 without (curve a) and with (curve b) 50mM ferrocene (Fc) with no NP added. (B) Trend of -zF RT (∆ 𝑜 𝑤 𝜙 Fc-H,1/2 -∆ 𝑜 𝑤 𝜙 H 𝑜′ ) versus ln[𝑐 Fc * ].
Figure 2 :
2 Figure 2: Chronoamperograms (CA) using Cell 1 (5mM Fc) with a potential step from -0.20 to 0.20V with 5, 30, and 50nm diameter Pt-NP (1.6×10 11 NP L -1 ) for curves A-C, respectively (D: no NP added); inset is a typical current spike as marked with an *. Arrows: presumed current spikes.
Figure 3 :
Figure 3. Chronoamperograms (CAs) using Cell 1 (5 mM Fc) with 5 nm diameter Pt-NPs dissolved at 1.6 (A and B), 7.0 (C), and 16.0×10^11 NP L^−1 (D) in the aqueous phase. A was recorded after a cyclic voltammogram was made to establish the polarizable potential window, but before any further positive polarization. Curves B-D were recorded after CAs were performed at positive potentials. CA potential steps were from −0.20 to −0.30 V.
4 Figure 4: Histogram with log-normal trend (solid curve) of current spikes for (A) 5, (B) 30, and (C) 50 nm diameter Pt-NPs ([NP] = 1.6, 7.0, and 16.0×10 11 NP L -1 ).
Acknowledgements
TJS is grateful to the European Commission for a H2020-MSCA-IF grant, project number DLV-708814. |
01570245 | en | [
"sdv.ee",
"sde.be"
] | 2024/03/05 22:32:15 | 2017 | https://amu.hal.science/hal-01570245/file/01570245%20Lavoie-ecological%20indicators%202017.pdf | Isabelle Lavoie
Paul B Hamilton
Soizic Morin
Sandra Kim Tiam
Maria Kahlert
Sara Gonçalves
Elisa Falasco
Claude Fortin
Brigitte Gontero
David Heudre
Mila Kojadinovic-Sirinelli
Kalina Manoylov
Lalit K Pandey
Jonathan C Taylor
Diatom teratologies as biomarkers of contamination: Are all deformities ecologically meaningful?
Keywords:
Contaminant-related stress on aquatic biota is difficult to assess when lethal impacts are not observed. Diatoms, by displaying deformities (teratologies) in their valves, have the potential to reflect sub-lethal responses to environmental stressors such as metals and organic compounds. For this reason, there is great interest in using diatom morphological aberrations in biomonitoring. However, the detection and mostly the quantification of teratologies is still a challenge; not all studies have succeeded in showing a relationship between the proportion of abnormal valves and contamination level along a gradient of exposure. This limitation in part reflects the loss of ecological information from diatom teratologies during analyses when all deformities are considered. The type of deformity, the severity of aberration, species proneness to deformity formation, and propagation of deformities throughout the population are key components and constraints in quantifying teratologies. Before a metric based on diatom deformities can be used as an indicator of contamination, it is important to better understand the "ecological signal" provided by this biomarker. Using the overall abundance of teratologies has proved to be an excellent tool for identifying contaminated and non-contaminated environments (presence/absence), but refining this biomonitoring approach may bring additional insights allowing for a better assessment of contamination level along a gradient. The dilemma: are all teratologies significant, equal and/or meaningful in assessing changing levels of contamination? This viewpoint article examines numerous interrogatives relative to the use of diatom teratologies in water quality monitoring, provides selected examples of differential responses to contamination, and proposes solutions that may refine our understanding and quantification of the stress. This paper highlights the logistical problems associated with accurately evaluating and interpreting teratologies and stimulates more discussion and research on the subject to enhance the sensitivity of this metric in bioassessments.
Introduction
Diatoms are useful tools in the bioassessment of freshwater ecosystem integrity and are presently included in numerous water quality monitoring programs worldwide. A variety of diatom-based indices have been developed using different approaches (e.g., Lavoie et al., 2006, 2014 andreferences therein;[START_REF] Smol | The Diatoms. Applications for the Environmental and Earth Sciences[END_REF] and references therein). Most indices were created to assess ecosystem health ⁎ Corresponding author.
E-mail address: [email protected] (I. Lavoie).
MARK
reflecting general water quality and regional climate. There are also countless studies reporting the response of diatom assemblages to metal contamination (see review in [START_REF] Morin | Consistency in diatom response to metalcontaminated environments[END_REF] and to organic contaminants [START_REF] Debenest | Effects of pesticides on freshwater diatoms[END_REF]. However, diatom-based indices have not been developed to directly assess toxic contaminants (e.g., metals, pesticides, hydrocarbons). Contaminant-related stress on biota is difficult to assess when lethal impacts are not observed. Diatoms, by displaying aberrations in their valves (deviation from normal shape or ornamentation), have the potential to reflect sub-lethal responses to environmental stressors including contaminants. Observed deformities can affect the general shape of the valve, the sternum/raphe, the striation pattern, and other structures, or can be a combination of various alterations (Falasco et al., 2009a). Other stressors such as excess light, nutrient depletion, and low pH also have the potential to induce frustule deformities (Fig. 1; see review in Falasco et al., 2009a). However, the presence of abnormal frustules (also called teratologies or deformities) in highly contaminated environments is generally a response to toxic chemicals. For this reason, there is great interest in using morphological aberrations in biomonitoring. Teratologies may be a valuable tool to assess ecosystem health and it can be assumed that their frequency and severity are related to magnitude of the stress. We focussed our main discussion on teratologies as biomarkers although other descriptors such as valve densities, species diversity and assemblage structure are also commonly used to evaluate the response of diatom assemblages to contaminants.
Based on the current literature, the presence of deformities in contaminated environments is considered an indication of stress; however, detection and quantification of teratologies is still a challenge. In other words, not all studies have succeeded in showing a relationship between the proportion of abnormal valves and contamination level along a gradient of exposure (see Sections 3.2 and 5.1 for examples). Before a metric based on diatom teratologies can be used as an indicator of contamination, we believe it is imperative to better understand the "ecological information" provided by the different types of deformities and their severity. Furthermore, how are teratologies passed through generations of cell division? These aspects may influence our assessment and interpretation of water quality. This paper will not provide a detailed review of the abundant literature on the subject of diatom valve morphogenesis or the different types of teratologies and their causes, but will examine numerous interrogatives relative to the use of diatom teratologies for the assessment of various types of contamination. This work is an extension of the discussion issued from the collaborative poster entitled "Diatom teratologies in bioassessment and the need for understanding their significance: are all deformities equal?" presented at the 24th International Diatom Symposium held in Quebec City (August 2016). The participants were invited to take part in the project by adding comments, questions and information directly on the poster board, and by collaborating on the writing of the present paper. Numerous questions were presented (Table 1) related to the indicator potential of different types of deformities and their severity, the transmission of teratologies as cells divide, and species proneness to deformities. These questions, we believe, are of interest when using diatom teratologies as biomarkers of stress. This topic is especially of concern because diatom teratologies are increasingly used in biomonitoring as shown by the rising number of publications on diatom malformations (Fig. 2). With this paper, we aim to initiate a discussion on the subject. Hopefully, this discussion will create new avenues for using teratologies as biomarkers of stress and contamination. The ultimate goal would be the creation of an index including additional biological descriptors to complement the teratology-based metric.
Teratology formation and transmission
Valve formation
Current routine identifications of diatom species are based on morphological characters such as symmetry, shape, stria density, and ornamentation. The characteristic shape of each diatom species results from a combination of genetic and cellular based processes that are regulated by environmental factors. There is a wealth of literature on valve morphogenesis, based both on ultrastructure observations and cellular (molecular and biochemical) processes. Descriptions of the processes involved in valve formation are provided, among others, by
Altered individuals
(teratologies)
Stressed
Non-stressed
Biological (e.g., crowding) Chemical (e.g., pH, nutrients) Physical (e.g., flow, light, temp.)
Environment
Diatom assemblage
Contamination Response
Normal individuals
Fig. 1. Conceptual model representing the response of a diatom assemblage to environmental and anthropogenic perturbations.
Table 1
List of questions that initiated this communication as well as questions raised by participants during the 24th International Diatom Symposium (IDS 2016, Quebec City).
Teratology formation and transmission
• How are deformities transmitted to the subsequent generations?
• The newly-formed valve is an exact copy (or smaller) of the mother cell; in this case, how does the first deformity of the valve outline appear?
• Are abnormal ornamentation patterns observed on both valves?
• Are deformed cells able to survive and reproduce?
Ecological meaning
• Are deformities equal between different species? Are all types of deformities equal within the same species?
• Are all toxicants likely to induce similar deformities? (or are deformities toxicant-specific?)
• Should a deformity observed on a "tolerant" species (versus a "sensitive" species) have more weight as an indicator of stress?
Issues with teratology assessment
• Certain types of deformities are difficult or impossible to see under a light microscope, particularly for small species. Should problematic taxa be included in bioassessments based on teratologies?
• How to assess deformities on specimens that are in girdle view?
• How should the "severity" of a teratology be assessed?
Implications for biomonitoring
• The sternum is the initial structure to be formed; should an abnormal sternum (including the raphe) be considered more important/significant than other types of aberrations?
• Proneness to produce abnormal valves and sensitivity to specific contaminants are key factors for the inclusion of teratological forms in diatom indices. How to quantify them?
• What is the significance of deformities in a single species versus multiple species in an assemblage?
Diatoms have external cell walls (frustules) composed of two valves made of amorphous polymerized silica. They reproduce mainly asexually during the life cycle, with short periods of sexual activity. During cell division (mitosis), a new hypotheca (internal valve) is formed after cytokinesis. Silica polymerization occurs in a membrane-bound vesicle (the silica deposition vesicle, SDV) within the protoplast (Knight et al., 2016). In pennate species, a microtubule center is associated with the initiation of the SDV (Pickett-Heaps et al., 1979, 1990). The sternum (with or without a raphe) is the first structure to be formed, followed by the perpendicular development of virgae (striae). In raphid diatoms, the primary side of the sternum develops, then curves and fuses with the later-formed secondary side; the point of fusion generally appears as an irregular stria called the Voigt discontinuity or Voigt fault (Mann, 1981). Sketches and pictures of valve morphogenesis are presented in Cox (2012), Cox et al. (2012) and in Sato et al. (2011). The size of the new hypotheca formed by each daughter cell is constrained by the size of the parent valves, resulting in a gradual size reduction over time. Sexual reproduction initiates the formation of auxospores, which can ultimately regenerate large initial frustules (see Sato et al., 2008 for information on auxosporulation). Asexual spore formation (Drebes, 1966; Gallagher, 1983) may also lead to large initial frustules and a larger population. Auxospore initial cells may differ greatly in morphology from cells formed during division, and these differences should not be confused with deformities. Such initial cells are, however, rather rare.
Overview of teratogenesis
Deformities are commonly observed in natural diatom assemblages, but their frequency of occurrence is generally low (< 0.5% according to Arini et al., 2012; Morin et al., 2008a). The presence of multiple stressors, however, can significantly increase the proportion of deformed individuals. Falasco et al. (2009a) reviewed the different types of deformities observed on diatom valves and the various potential mechanisms involved, as well as numerous environmental factors known to be responsible for such aberrations. We are aware that various stresses may induce teratologies, but here we focus our observations and discussion on the effects of toxic contaminants such as metals and organic compounds.
Based on the current literature, the mechanisms inducing teratologies are not fully understood. Under physical (e.g., crowding, grazing) or chemical stresses (e.g., metals, pesticides, nutrient depletion), the cellular processes involved in cell division and valve formation may be altered (Barber, 1977; Cox). One reliable explanation for teratology formation involves the microtubular system, an active part in the movement of silica towards the SDV. Exposure to anti-microtubule drugs (Schmid, 1980) or to a pesticide (Debenest et al., 2008) can affect the diatom microtubular system (including microfilaments), leading to abnormal nucleus formation during cell division and to the deformation of the new valve. Licursi and Gómez (2013) observed a significant increase in the production of abnormal nuclei (dislocation and membrane breakage) in mature biofilms exposed to hexavalent chromium. No teratological forms were observed, but the biofilm was exposed to the contaminant only for a short duration (96 h).
Malformations can also be induced by other, independent factors. For instance, malfunctions of the proteins involved in silica transport and deposition (Knight et al., 2016; Kröger et al., 1994, 1996, 1997; Kröger and Poulsen, 2008), or of the proteins responsible for the maintenance and the structural and mechanical integrity of the valve (Kröger and Poulsen, 2008; Santos et al., 2013), would have significant impacts on teratologies. Metals could also inhibit silica uptake due to metal ion binding on the cell membrane (Falasco et al., 2009a). Likewise, the initial formation of the valve can be affected by a lack of transverse perizonial bands on the initial cell (Chepurnov et al., 2004; Mann, 1982, 1986; Sabbe et al., 2004; Sato et al., 2008; Toyoda et al.; von Stosch, 1982; Williams, 2001). Finally, biologically-induced damage related to bottom-up and top-down processes (e.g., parasitism, grazing, crowding) represents natural stress that may result in abnormal valves (Barber, 1977; Huber-Pestalozzi; Stoermer).
Deformities can also be the consequence of plastid abnormalities or mis-positioning during cell division, as observed in standard laboratory cultures of Asterionella formosa Hassall (Kojadinovic-Sirinelli, Bioénergétique et Ingénierie des Protéines Laboratory UMR7281 AMU-CNRS, France; unpublished results) and under metal exposure in Tabellaria flocculosa (Roth) Kütz. (Kahlert, Swedish University of Agricultural Sciences; unpublished results). When considering normal cellular morphotypes of A. formosa, plastids are symmetrically positioned within the dividing cell (Fig. 3A). In some cases, the plastids are significantly larger than normal, which may be the consequence of a microtubular system defect. This seems to induce the formation of curved epivalve walls (Fig. 3B). As a consequence, daughter cells appear deformed (Fig. 3C). Extreme curvature of the valve results in the formation of much smaller daughter cells (15-20 μm; Fig. 3C) compared to the mother cells (about 40-50 μm). The "small-cell" characteristic is then transmitted to subsequent daughter cells, resulting in colonies of small individuals. In this case, the deformity and reduction in size do not seem to decrease cell fitness, because the small-sized cells reproduce as efficiently as the normally-sized cells, or even faster; the abrupt size reduction is certainly a response to the environment. Interestingly, abnormally small cells seem to appear at the end of the exponential growth phase and to increase in frequency as cultures age (Falasco et al., 2009b). This may suggest that the "small-size aberration" was a consequence of nutrient depletion or of the production of secondary metabolites that could stress A. formosa. Sato et al. (2008) also reported a sharp decrease in cell size accompanied by deformed individuals bearing two valves of unequal size in old cultures.

According to Hustedt (1956) and Granetti (1968), certain morphological alterations are not induced by genetic changes, because the diatoms return to their typical form during the subsequent sexual cycle. In contrast, other authors have elevated altered forms to the variety or species level (e.g., Jüttner et al., 2013), thus assuming taxonomic distinctness. Biochemical and molecular investigations of clones with distinct morphotypes would thus be required to assess whether deformities are short-term phenotypic responses, problems with gene expression (i.e., assembly-line malfunction) or true alterations in the genes. The evolution of a species is, at least in part, a temporal process of physiological (teratological) changes resulting in "deviations from the normal type of organism/species". The gain or loss of any structure, such as rimoportulae, potentially represents a new species; even a change in the position of a structure can constitute a new species. Teratologies under temporal changes can thus influence populations or species. For the purpose of this discussion paper, longer temporal events of teratology (reproduction of a selected deformity over generations) can lead to speciation events, while short-term teratologies (not reproductively viable in the next generation after sexual reproduction) are considered dead ends and non-taxonomically significant.

Abnormal overall shape

The initial question here would be: "when does an atypical valve outline fall into the abnormal category?" For the purpose of this discussion, an outline is abnormal when aberrations affect valve symmetry, or when defects alter the "normal" shape of the diatom. This working definition excludes deviations from expected shape changes as cells get smaller (natural variability). Variability in shape related to post-auxosporulation is difficult to differentiate from an abnormal form, but such forms are considered rare. The second question is: when does the deviation from the "common shape" become significant enough to be considered deformed? This question is particularly relevant when aberrations are subtle and subjectively identified, with variability between analysts. On the other hand, marked deviations from the normal shape are easy to notice and classify as aberrant. Deformities affecting the general valve outline are assumed to be passed along from generation to generation through asexual cell division. Replication of the deformity happens because the newly formed valves must "fit into" the older valves; thus, the aberration is copied and the number of abnormal valves increases even though "new errors" do not occur. This scenario is clearly stated in numerous publications, as for instance:

"A morphological variation in the frustules outline is easily transmitted through generations, others, like the pattern and distribution of the striae, are not: this is the reason for the lower frequency of the latter alterations." (Falasco et al., 2009a)

"If the damaged cells survive, they will be able to reproduce: in this case, the daughter clones will build their hypotheca on the basis of the damaged epitheca, spreading the abnormal shape through the generations." (Stoermer)

This propagation of abnormal valves during cell division may explain why valve outline deformities are the most frequently reported in the literature, and with the highest abundances. For example, Leguay et al. (2015) observed high abundances of individuals presenting abnormal valve outlines in two small effluents draining abandoned mine tailings (50% and 16%, all observed on the same Eunotia species). Valve outline deformities reaching 20-25% (on Fragilaria pectinalis (O.F.Müll.) Lyngb.) were observed at a site located downstream of textile industries introducing glyphosate in the Cleurie River, Vosges, France (Heudre, DREAL Grand Est, Strasbourg, France; unpublished results). Kahlert (2012) found deformities of up to 22% on Eunotia species at a Pb-contaminated site. The effect of carry-over from cell division could explain the high frequency of abnormal individuals (reaching > 90%, with a marked indentation) in a culture of Gomphonema gracile Ehrenb. from the IRSTEA-Bordeaux collection in France (Morin, IRSTEA-Bordeaux, France; unpublished results).

If cell division is the key agent for the transmission of valves with abnormal outlines due to the "copying effect", then how does the first frustule get deformed? An initial abnormal valve must start the cascade of teratologies: logically, we could argue that the initial deformity appears during sexual reproduction, when the frustule of a new cell is formed without the presence of an epivalve as a template. Hustedt (1956) discussed this scenario, suggesting that particular environmental conditions during auxospore formation may induce morphological changes that are perpetuated during vegetative reproduction, giving rise to a population with a morphology different from the parental line. This new abnormal cell would then divide by mitosis and pass the abnormal shape on to all subsequent daughter cells, as also suggested by Stoermer (1967). This is in line with the observation that the above-mentioned G. gracile bearing the marked incision on the margin is ca. 50% larger than its "normal" congeners of the same age. On the other hand, there is also the hypothesis of a gradual appearance of an abnormal outline that is accentuated from generation to generation. First, a very subtle deviation from the normal pattern appears on the forming hypovalve and a deformity is not noticed. This subtle deviation from the normal shape is progressively accentuated by the newly forming hypovalve, leading to a very mild abnormality of the overall shape, and so on through multiple successive divisions, resulting in a population of slightly abnormal to markedly deformed individuals. If this scenario is possible, then the opposite situation could also be plausible: the subtle deviation from the normal overall shape is "fixed" or "repaired" during subsequent cell divisions instead of being accentuated. In another scenario, the epivalve could be normal and the hypovalve markedly deformed, potentially resulting in an individual that would not be viable. Sato et al. (2008) reported something similar in old cultures of Grammatophora marina (Lyngb.) Kütz., where drastic differences in valve length between epivalve and hypovalve (up to 50% relative to the epitheca) were observed, suggesting that a "perfect fit" is not always necessary. These authors also observed cells that had larger hypothecae than epithecae, implying expansion before or during cell division. In this case, are these growth forms viable and sustainable?

Other deformities

Although irregular valve outlines appear to be a common and frequent type of teratology, it is not always the dominant type of deformity observed within a given population. For instance, Arini et al. (2013) found abnormal striation patterns and mixed deformities to be the most frequently observed aberrations in a Cd exposure experiment using a culture of Planothidium frequentissimum Lange-Bertalot. Deformities on the same species were observed more frequently on the rapheless valve, and the structure affected was generally the cavum and less frequently the striae (Falasco, Aquatic Ecosystem Lab., DBIOS, Italy; unpublished results from field samples).
The ecological meaning of teratological forms
Types of deformities
A good fit was observed in certain studies between the abundance of teratologies and the presence of a contaminant (review in Morin et al., 2012). However, other studies have failed to show a clear relationship between the frequency of abnormal forms and the level of contamination along a gradient (e.g., Fernández et al., 2018; Lavoie et al., 2012; Leguay et al., 2015); this is the "raison d'être" of this paper. Here we discuss potential avenues to deepen our interpretation of the ecological signal provided by diatoms. Do deformed cells reproduce normally? Do they consistently reproduce the teratology? These questions are intimately linked to the various types of teratologies observed. The type of deformity may therefore be an important factor to consider in biomonitoring, because the different types may not all provide equivalent information (Fig. 5). Most authors agree on categorizing teratological forms based on their type, summarized as follows: (i) irregular valve outline/abnormal shape, (ii) atypical sternum/raphe, (iii) aberrant striae/areolae pattern, (iv) mixed deformities. Despite the fact that various types of aberrations are reported, most authors pool them together as an overall percentage of teratologies (e.g., Lavoie et al., 2012; Leguay et al., 2015; Morin et al., 2008a, 2012; Roubeix et al., 2011) and relate this stress indicator to contamination. Only a few studies report the proportion of each type of deformity (e.g., Arini et al., 2013; Pandey et al., 2014, 2015, 2016).
Based on a literature review of more than 100 publications on diatoms and teratologies, we created an inventory of > 600 entries concerning various diatom taxa reported as deformed (and the type of teratology observed) in response to diverse stresses (Appendix A). This database is an updated version of the work presented in Falasco et al. (2009a). We assigned each of the reported teratologies to one of the four types of aberrations, which revealed a clear dominance of abnormalities affecting valve outlines (Fig. 6). The sternum is the first structure produced by the SDV; if an aberration occurs in this region, other/additional aberrations may subsequently appear in the striation patterns formed later during valve morphogenesis. These could therefore be considered "collateral damage" of an abnormal sternum (including the raphe), leading to mixed deformities. For example, Estes and Dute (1994) have shown that raphe aberrations can lead to subsequent valve and virgae (striae) distortions. However, abnormal striation patterns have also been observed on valves showing a normal raphe or sternum system. Because the appearance of striae aberrations is believed to happen later during valve formation, should these teratologies be considered as a signal reflecting a mild deleterious effect? The same reasoning applies to the general valve outline; should it be considered as a minor response to stress or as collateral damage? Another interesting deformity is the presence of multiple rimoportulae on Diatoma vulgaris valves. Rimoportulae are formed later in the morphogenesis process; should this type of alteration be considered equal to raphe or striae abnormalities? Our observations on raphid diatoms suggest that individuals generally exhibit abnormal striation or sternum/raphe anomalies on only one valve, while the other valve is normal (Fig. 4). The possibility of an abnormal structure on the two valves of a cell is not excluded, and would suggest two independent responses to stress. A mother cell with one abnormal valve (e.g., a raphe aberration) will produce one normal daughter cell and one abnormal daughter cell, resulting in a decreasing proportion of teratologies if no additional "errors" occur. This makes deformities in diatom valve structure, other than the abnormal outline category, good biomarkers of stress, because the deformity is not directly transmitted and multiplied through cell division. In other words, aberrations occurring at different stages of valve formation may not all have the same significance/severity or ecological signal, and this may represent important information to include in bioassessments. The problem, however, is that these abnormalities are often rare.
Are deformed diatoms viable, fit and able to reproduce?
Based on numerous laboratory observations made by authors of this publication, it seems clear that deformed diatoms in culture are able to reproduce, sometimes even better than the normal forms (e.g., deformed Asterionella formosa Hassall, Section 2.2, and deformed Gomphonema gracile, Section 2.3). However, the ability of abnormal cells to survive and compete in natural environments is potentially affected. Teratologies have different impacts on physiological and ecological sustainability depending on the particular valve structure that was altered. Valve outline deformation, for instance, could prevent the correct linking of spines during colony formation. Alterations in the raphe system could limit the locomotion of motile diatoms (although this has not been observed in preliminary experiments conducted on G. gracile; Morin, IRSTEA-Bordeaux, France; unpublished results). Motility represents an important ecological trait, especially in unstable environmental conditions, because species can move to find refuge in more suitable habitats. Alterations in the areolae patterns located within the apical pore fields may prevent the correct adhesion of erect or pedunculated taxa to the substrate, impairing their ability to reach the top layer of the biofilm and compete for light and nutrients.

Are deformities toxicant-specific?
As deformities are expected to occur during morphogenesis, different types of deformities may result from exposure to contaminants with different toxic modes of action. Are all toxicants likely to induce similar deformities? From our database, the occurrences of the different types of deformities were grouped into three categories of hypothesized cause (including single sources and mixtures): metal(s), organic compound(s), and a third with all other suspected causes (a priori non-toxic) such as crowding, parasitism, and excess nutrients (excluding unspecified causes). The results presented in Fig. 7 should be interpreted with caution given the unequal data available for the different categories (in particular, the low number of data for organic compounds). Similar patterns in the distribution of deformities were found with exposure to organic and inorganic toxicants; in more than 50% of the cases, solely the valve outline was mentioned as being affected. Other types of deformities were, by decreasing order of frequency: striation (ca. 20%), followed by mixed deformities (ca. 14%) and sternum/raphe alterations (ca. 12%). This is in concordance with other observations indicating that exposures to metals led to about the same degree of deformation as exposures to herbicides; in both cases, the highest toxin concentrations caused the highest ratio of sternum/raphe deformities to outline deformities (Kahlert, 2012). In contrast, stresses other than toxic exposure (or of unknown cause) resulted in deformities affecting the cell outline in 45% of the cases, while 30% were mixed teratologies, 20% affected the striae and less than 10% the sternum/raphe system. Thus, the distributions of deformity types for toxic and non-toxic exposures were slightly different, which underscores the potential of deformity type to clarify the nature of environmental pressures and strengthens the need to describe precisely the deformities observed. Fig. 7 suggests that mixed deformities occur more frequently for environmental stresses (including various perturbations such as nutrient depletion) than for contaminant-related stresses. However, timing could also be a potential cause of differentiation between the various types of aberrations. Timing here can be interpreted in two very different ways. First, it can be related to the chronology of teratology appearance in ecosystems or cultures. For example, if an abnormal valve outline aberration occurs early during an experiment, then this deformity will be transmitted and multiplied through cell division. However, if the individual bearing the abnormal valve shape appears later in time (or if this type of deformity does not occur), then other types of deformities may appear and become dominant. On the other hand, the presence of one type of deformity over another could also be associated with the moment during cell formation at which the stress occurs, i.e., whether the contaminant reached the inner cell during the formation of one structure or another. There is also the possibility that an abnormal outline deformity is a secondary result of an impact affecting another mechanism of valve formation.

Fig. 7. Deformity occurrence (expressed as %) classified by types and reported causes of stress in field and laboratory studies. The data were gathered from the information available in the publications presented in Appendix A. Data were not considered for this graph when the cause of teratology was not specified.
Proneness to deformities and tolerance to contamination
Are all diatom species equally prone to the different types of deformities? From the literature published over the past ca. 70 years, we compiled the species observed, the types of deformities noted and the tolerance to contamination when reported (Appendix A). Based on these data, we observed that the most common aberration concerns valve shape (as also presented in Fig. 6) and that this aberration is particularly evident for araphid species, in which ca. 60% of the reported deformities were irregular shapes. This finding suggests that araphid diatoms may be more "prone" to showing abnormal valve outlines compared to raphid or centric diatoms. Therefore, araphid diatoms may not be good biomarkers compared to other species, especially considering that shape aberrations are multiplied by cell division (see above discussion). However, proneness to the different types of deformities differed among long and narrow araphids: Fragilaria species mostly exhibited outline deformities (67%), compared to the more robust valves of Ulnaria species (29%).
In addition to araphids, Eunotia species also tend to show abnormal shapes (> 75% in our database). This suggests that the formation of a long and narrow valve may provide more opportunity for errors to occur, or that the araphid proneness to deform may result from the absence of a well-developed primary and secondary sternum/raphe structure that could strengthen the valve. This argument may also be valid for Eunotia species, which have short raphes at the apices, as supported by irregularities mostly observed in the middle portion of the valve. Specimens of the Cocconeis placentula Ehrenb. complex (monoraphids) from natural assemblages collected in contaminated and uncontaminated waters have also frequently been observed with irregular valve outlines in Italian streams (Falasco, Aquatic Ecosystem Lab., DBIOS, Italy; unpublished results). This genus might be considered unreliable for the detection of contamination because it seems to be prone to teratologies (mainly affecting the valve outline, which is transmitted during cell division).
A puzzling observation is the presence of deformities affecting only one species among the array of other species composing the assemblage. The abnormal specimens may all belong to the dominant species in the assemblage or not. When this situation is encountered for irregular shape teratologies, we can argue that it is in part due to the transmission of the aberration during cell division. This was the case at a mine site (with an assemblage almost entirely composed of two species) where 16% of the valves showed an abnormal outline, all observed on species of Eunotia, while no teratology was observed on the other dominant species (Leguay et al., 2015). The same situation was noted in the previously mentioned example from the French river contaminated by a pesticide, where 20-25% of abnormal shapes were observed on F. pectinalis (O.F. Müll.) Gray. On the other hand, when only one species in the assemblage presents deformities of the sternum/raphe structure and/or the striae, this suggests a true response to a stress event by a species prone to deformities. This has been observed at a mine site (high Cu) where deformities reached 8% and were always observed on Achnanthidium deflexum (Reimer) Kingston (Leguay et al., 2015).

Numerous species are known to be tolerant to contaminants. For example, Morin et al. (2012) provide a list of diatom species that are cited in the literature as tolerant or intolerant to metals. As explained in their review, species that are able to tolerate toxic stress will thrive and dominate over sensitive species. Similar observations led to a concept called Pollution-Induced Community Tolerance (PICT) developed by Blanck et al. (1988). According to this paradigm, the structure of a stressed assemblage is rearranged in a manner that increases the overall assemblage tolerance to the toxicant. Considering an assemblage where most species are tolerant, we would expect to observe fewer teratologies. However, this is not necessarily the case, as aberrations are commonly encountered on tolerant species. This observation is not a surprise because even tolerant and dominant species are still under stress conditions (Fig. 8, scenario A). In this scenario, most teratologies are observed on tolerant species and very few on sensitive species due to their rarity in the assemblage. However, this is not always the case, as some tolerant species are less prone to deformities than others (Fig. 8, scenario B), resulting in fewer deformed valves in highly contaminated environments. This raises the question as to whether deformities should be weighted as a function of species proneness to abnormalities. Furthermore, species have been shown to develop tolerance resulting in a population adapted to certain stressors, which then may or may not show deformities. For example, Roubeix et al. (2012) observed that the same species isolated from upstream and downstream of a Cu-contaminated site had different sensitivities to Cu, i.e., not all populations of a species have the same tolerance. We should therefore expect variability in the sensitivity to deformation, even within tolerant species.

Fig. 8. Conceptual model for the occurrence of teratologies from contaminant exposure among tolerant and sensitive species. In scenario A, the assemblage is prone to deformities and their occurrence increases with contaminant concentration. In scenario B, the occurrence of deformities is low due to the predominance of cells that typically do not exhibit structural changes in the presence of contaminants. Since sensitive species are likely to be eliminated from the assemblage as contamination increases, the occurrence of valve deformities observed on sensitive species in this assemblage for scenarios A and B is low. Finally, in scenario C, short term or pulse exposures are not likely to alter the assemblage composition and the occurrence of deformities is likely to affect mostly sensitive species as tolerant species are rare.
There is also the scenario where diatom assemblages are stressed by intermittent contamination events, for example a spill from a mine tailings pond. If such assemblages are dominated by metal-sensitive species, we would expect to observe more teratologies on these species and very few on tolerant species, as the latter are rare (Fig. 8, scenario C). This, of course, is based on the hypothesis that deformities will appear on sensitive species faster than the time it takes the assemblage to restructure towards a dominance of tolerant species (which would bring us back to the above-mentioned scenarios; also see Section 5.3).
We would furthermore expect tolerance to deformities to be not only species-dependent, but also environment-dependent. In general, we hypothesize that suboptimal conditions (e.g., pH, nutrients, light, competition) favour the occurrence of teratological forms, while optimal conditions decrease their occurrence. Environmental conditions would then set the baseline for how sensitive a diatom assemblage is to toxic impacts. For example, some samples from pristine forest wetlands/swamps with low pH and no source of contaminants in the Republic of the Congo showed cell outline deformities (2%) (Taylor, Unit for Environmental Sciences and Management, NWU, South Africa; unpublished results). The presence of teratologies was therefore assumed to be attributable to the low pH of the environment or to the fact that these isolated systems had become nutrient limited. The key message from this section is to acknowledge that deformities may be found under different stresses (not only contamination by metals or organic compounds), and also that deformed diatoms are not always observed in highly contaminated environments.
Issues with teratology assessment
Small species and problematic side views
Certain abnormalities are more or less invisible under a light microscope, particularly for small species. There are numerous publications reporting valve aberrations observed with a scanning electron microscope that would otherwise be missed with a regular light microscope (e.g., Morin et al., 2008c). This is problematic in a biomonitoring context, especially when a contaminated site is dominated by small species such as Fistulifera saprophila (Lange-Bertalot & Bonik) Lange-Bertalot, Mayamaea atomus (Kütz.) Lange-Bertalot or Achnanthidium minutissimum (Kütz.) Czarn., or by densely striated species like Nitzschia palea (Kütz.) W.Sm. In these cases, the frequency of deformities may be underestimated. Would it be more appropriate to calculate a percentage of teratologies considering only the species for which all structures are easily seen under a light microscope? In the same line of thought, how should we deal with specimens observed in girdle view, where deformities are often impossible to see? This situation is of concern when the dominant species tend to settle on their side, such as species belonging to the genera Achnanthidium, Gomphonema, and Eunotia. It could therefore be more appropriate for bioassessment purposes to calculate teratology percentages based on valve-view specimens only, recognizing that the proportion of aberrations on certain species, often seen in girdle view, may consequently be underestimated. A separate count of deformities for species regularly observed side-ways could also be performed considering only valve-view specimens, and the percentage of teratologies could then be extrapolated to the total number of valves enumerated for this species. This proposal of a separate count is based on the likely hypothesis that a deformed diatom has the same probability as normal specimens of lying in one or the other view.
How to score the severity of the teratology?
The severity of teratologies, i.e., the degree of deviation from the "normal" valve, is usually not assessed in biomonitoring (Fig. 9). Would this information be useful to better interpret the magnitude of the stress? This question leads to another: how can the severity of valve deformities be quantified depending on the type of abnormality? The line between normal variation and a slight aberration is already difficult to draw (Cantonati et al., 2014); is it possible to go further in this teratology assessment and score the deformities under slight-medium-pronounced deviations from the normal shape/pattern? This additional information could be of ecological interest, but might also be very subjective and limited to individual studies or situations. Image analysis might help to solve this problem in the future, although preliminary tests using valve shape have been inconclusive so far (Falasco et al., 2009b).
Implications for biomonitoring
Deformities as an indicator of unhealthy conditions
The frequency of deformities has been reported as a good biomarker of metal contamination and, in fewer studies, of organic contamination. In most cases, the effects of contamination on diatom teratologies were evaluated using the percentage of deformities regardless of their type. The majority of the studies either compared a contaminated site with a reference site or tested experimental conditions with a control and one or two contamination levels. As examples, Duong et al. (2010) and Morin et al. (2008a) found a significantly higher presence of teratologies in a stream contaminated by metals (Cd and Zn) compared to its upstream control. In laboratory experiments using a monospecific diatom culture or biofilm communities exposed to three levels of Cd (control, 10-20 μg/l and 100 μg/l), Arini et al. (2013), Gold et al. (2003) and Morin et al. (2008b) observed significantly higher proportions of deformed individuals in the contaminated conditions, but the overall difference in % teratologies between concentrations of Cd was not statistically significant. These examples underscore the usefulness of teratologies as a biomarker of stress. However, linking the magnitude of the response to the level of contamination is not as straightforward as comparing contaminated and reference conditions. For example, Cattaneo et al. (2004) only found a weak relationship between deformities and metal concentrations in lake sediments. Fernández et al. (2018) and Lavoie et al. (2012) were not able to correlate the occurrence of valve deformities with a gradient in metal concentrations in a contaminated stream. Leguay et al. (2015) observed the highest proportions of deformities at the most contaminated sites, but significant correlations were not observed using each metal separately, and the confounding effects of metal contamination and low pH (∼3) made the direct cause-effect link difficult to assess. In these last studies, more aberrant diatom valves were observed at the contaminated sites compared to the reference sites, but the correlation between teratologies and metal concentrations collapsed in the middle portion of the contamination gradient. In laboratory cultures, a linear correlation has been observed between the frequency of deformities and metal concentrations, except for the highest concentration in the gradient where fewer deformations were noted (Gonçalves, University of Aveiro, Portugal and Swedish University of Agricultural Sciences, Uppsala, Sweden; unpublished results). This result could be explained by the fact that deformed cells may be less viable at very high metal concentrations.

Using an estimate of metal exposure/toxicity (e.g., CCU, cumulative criterion unit score; Clements et al., 2000) may result in a better fit between metal contamination (expressed as categories of CCU) and deformity frequency. Using this approach, Morin et al. (2012) demonstrated that > 0.5% of deformities were found in "high metal" conditions. Falasco et al. (2009b) used a similar approach and also observed a significant positive correlation between metals in river sediments (Cd and Zn expressed as a toxicity coefficient) and deformities (expressed as deformity factors). Some metric of integrated information summarizing (i) the response of diatoms to contaminants (e.g., a score based on teratologies) and (ii) the cumulative stresses (e.g., using an overall "stress value") seems to be an interesting approach to establishing a link between contamination level and biomarker response.
Refining ecological signals by weighing teratologies

Water quality assessment with respect to toxic events linked to diatom indices could potentially be refined by "weighting the deformities" as a function of deformation type. Moreover, this assessment could be pushed further by considering the severity of the deformity, the proneness of the species to present abnormal forms, and the diversity of the species affected. Although abnormal cells are often classified by type, there seems to be no ecological information extracted from this approach. Here, we raise the discussion on how (or if!) we could improve biomonitoring by considering the specific teratologies and their severity by modifying their weight/importance. A systematic notation/description of the type and severity of deformation and of the species affected would be required. Thus, "ecological profiles" of teratologies could be determined as a function of the species affected (as suggested in Fernández et al., 2018) and the type of deformity. Indeed, improving our understanding of life cycle processes and the various types of deformations would greatly enhance the assignment of impact scores for biomonitoring, which is the essence of this paper.

The observation that valve aberrations are routinely found in extremely contaminated conditions led Coste et al. (2009) to include the occurrence and abundance of deformed individuals in the calculation of the Biological Diatom Index (BDI). In their approach, observed deformities were assigned the worst water quality profile, meaning that their presence tends to lower the final water quality score. This means that the severity and type of malformation, and the species involved, were not considered; all teratologies were scored equally. However, based on the discussion presented in Section 4, this approach may be simplistic, and valuable ecological information on the characteristics of the deformities is lost. For example, in the case of araphid diatoms prone to deformation (even in good quality waters, i.e., Cremer et al.), the presence of teratologies may not always reflect the true degree of contamination. As a case example, Lavoie et al. (2012) observed 0.25-1% deformations at a site highly contaminated by metals and dominated by A. minutissimum, while the number of abnormal forms increased up to 4% downstream at less contaminated sites with species potentially more prone to deformation. More specifically, all aberrations affected valve outline and were mostly observed on Fragilaria capucina Desm. For this reason, it was impossible for the authors to correlate metal concentrations with teratologies. In this particular scenario, changing the weight of the deformations based on the type of deformity recorded and by considering the species (and their proneness to form abnormal valves) would potentially better reflect the environmental conditions.

An experiment on the effect of Cd on a Pinnularia sp. (Lavoie, INRS-ETE, Quebec, Canada; unpublished results) will serve as an example illustrating the potential interest in scoring teratology severity. In this experiment, a higher percentage of deformed valves was observed after 7 days of exposure to Cd compared to a control. The observed teratologies were almost exclusively mild aberrations of the striation pattern. The proportions of deformed valves increased even more after 21 days of exposure, with more severe teratologies of different types (sternum/raphe, striae). In this experiment, considering the types and severity of the deformities (mild vs severe) would better define the response to Cd between 7 days and 21 days of exposure, which would bring additional information on toxicity during longer exposure times. Developing the use of geometric morphometry approaches could also help to quantitatively assess the deviation from normal symmetry/ornamentation.
Also worth discussing is the presence of abnormally shaped valves in high abundances. If mitosis is the main precursor for the occurrence of abnormal valve shape, then it is legitimate to wonder whether these aberrations really reflect a response to a stressor or whether they are the result of an error "inherited" from the mother cell. If cell division multiplies the number of valves showing abnormal outlines, then this type of deformity should potentially be down-weighted or not considered for biomonitoring. However, identifying valves with irregular shapes as a result of contamination versus inherited irregularities is near impossible without running parallel control studies.
Finally, the score related to the frequency of deformities could also be weighted by species diversity estimates. For example, if species diversity in the community is very low (e.g. one species, or one strongly dominating species and some rare species) there is a potential bias in the assessment of the response to a stressor. The impact may be overestimated if the species is prone to deformity, and underestimated otherwise. Therefore, in addition to considering the proneness to deformity, teratology-based monitoring could also include a metric where the % deformity is combined with information on species diversity. This should improve ecological interpretations. However, low diversity and strong dominance of one species are also typical symptoms of certain stresses such as metal contamination (see section 5.3).
Biological descriptors complementing a teratology-based metric
This paper has focused on the presence of diatom valve teratologies as an indicator of environmental stress, specifically for contaminants such as metals and pesticides; this excludes eutrophication and acidification for which diatom-based indices and metrics already exist (Lavoie et al., 2006, 2014 andreferences therein). The teratology metric is gaining in popularity as seen by the number of recent publications on the subject. However, other biological descriptors or biomarkers have been reported to reflect biological integrity in contaminated environments. Although it is generally impossible to examine all metrics due to limited resources and time, the most informative approach would undoubtedly be based on incorporating multiple indicators.
One very simple metric that does not require any taxonomic knowledge is diatom cell density. Lower diatom cell counts are expected as a result of altered algal growth under contaminated conditions. This has, for example, been reported in metal-contaminated environments (e.g., Duong et al., 2008; Gold et al., 2002; Pandey et al., in press). Very low species diversity is another common symptom, with assemblages sometimes composed of ∼100% Achnanthidium minutissimum (Lainé et al., 2014; Lavoie et al., 2012). In these cases, low diversity was not exclusively linked to metal contamination but also to low nutrients. Species diversity increased downstream in both systems, which matched the dilution of the contamination; however, this could also be attributed to cell immigration and to increased nutrient concentrations downstream.
Assemblage structure also provides valuable information on ecosystem health, as a shift from sensitive to tolerant species reflects a response to environmental characteristics. This assemblage-level response is believed to operate on a longer temporal scale than the appearance of teratologies. This has been observed, for example, in a study of chronic metal exposure where deformed individuals were outcompeted and replaced by contamination-tolerant species, and abnormal valves thus slowly disappeared from the assemblage (Morin et al.). This suggests that the presence of deformities may be an early warning of short/spot events of high contamination, while the presence of tolerant species may reflect chronic exposure. This apparent temporal disparity could in part explain the unclear response patterns observed under natural conditions when teratologies alone are documented as a biological descriptor.
Diatom frustule size is considered an indicator of environmental conditions, and selection towards small-sized individuals and/or species has been observed under contamination/stress conditions (Barral-Fraga et al., 2016; Ivorra et al., 1999; Luís et al., 2009; Pandey et al., in press; Tlili et al., 2011). This metric is not commonly used in bioassessment, although it has potential for contributing additional information on ecosystem health. The time required for valve measurements may be one limiting factor making cell-size metrics currently unpopular in biomonitoring studies. Studies have also reported deformities or shape changes in diatom frustules as a result of size reduction (Hasle and Syvertsen, 1997).
Assessment of diatom health (live, unhealthy and dead cells) is also an interesting but unconventional descriptor to consider when assessing a response to contamination (Gillett et al., 2011; Morin et al., 2010; Pandey et al., in press; Stevenson et al., 2010). It however requires relatively early observations of the sample. This analysis of fresh material could be coupled with cell motility (Coquillé et al., 2015) and life-form (or guild or trait) assessments. These biological descriptors, also not commonly used, have shown relationships with ecological conditions (e.g., Berthon et al., 2011; Passy, 2012; Rimet and Bouchez, 2011). The live/dead status assessment can also be coupled with teratology observations. For example, live and dead diatoms were differentiated at sites affected by metals and acid mine drainage, and the results showed a large amount of deformities and a high percentage of dead diatoms (> 15%) (Manoylov, Phycology lab, Georgia College and State University, Georgia, USA; unpublished results).
The presence of lipid bodies or lipid droplets in diatoms can be a descriptor of ecosystem health. Lipid bodies are produced by all algae as food reserves, and their production can be stimulated under various conditions (d'Ippolito et al., 2015;[START_REF] Liang | Dynamic oil body generation in the marine oleaginous diatom Fistulifera solaris in response to nutrient limitation as revealed by morphological and lipidomic analysis[END_REF][START_REF] Wang | Algal lipid bodies: stress induction, purification and biochemical characterization in wild-type and starchless Chlamydomonas reinhardtii[END_REF][START_REF] Yang | Molecular and cellular mechanisms of neutral lipid accumulation in diatom following nitrogen deprivation[END_REF]. This biomarker has shown a good fit with contamination, with lipid bodies increasing in number and size under metal contamination (Pandey et al., in press;[START_REF] Pandey | Exploring the status of motility, lipid bodies, deformities and size reduction in periphytic diatom community from chronically metal (Cu, Zn) polluted waterbodies as a biomonitoring tool[END_REF]. Lipid analysis does not require taxonomic skills and can be quantified using dyes and fluorescence. However, depending on the level of contamination, the cell may be excessively stressed and the lipid bodies could be oxidized in order to reduce the overproduction of reactive oxygen species (ROS) (as observed in the green alga Dunaliella salina; [START_REF] Yilancioglu | Oxidative stress is a mediator for increased lipid accumulation in a newly isolated Dunaliella salina strain[END_REF]. Moreover, lipid bodies are produced under many environmental conditions (e.g., lipids, and more specifically triacylglycerols (TAGs), increase under high bicarbonate levels; [START_REF] Mekhalfi | Effect of environmental conditions on various enzyme activities and triacylglycerol contents in cultures of the freshwater diatom, Asterionella formosa (Bacillariophyceae)[END_REF], and the correlation with metal contamination may be subject to fluctuation.
Finally, antioxidant enzymes are also good biomarkers of stress [START_REF] Regoli | Oxidative pathways of chemical toxicity and oxidative stress biomarkers in marine organisms[END_REF]. Under stress conditions, organisms suffer cellular alterations, such as the overproduction of ROS, which can cause damage to lipids, proteins and DNA. Cells have defense mechanisms against ROS, and once these are activated, several biochemical markers are available to assess different types of contamination. These classical tests, adapted to diatoms, are associated with the measurement of ROS-scavenging enzymes or of non-enzymatic processes such as the production and oxidation of glutathione and phytochelatins, or with measuring lipid peroxidation and pigment content. More studies are being developed to find specific biomarkers for toxicants in order to effectively assess their impact on diatoms [START_REF] Branco | Sensitivity of biochemical markers to evaluate cadmium stress in the freshwater diatom Nitzschia palea (Kützing) W. Smith[END_REF][START_REF] Corcoll | The use of photosynthetic fluorescence parameters from autotrophic biofilms for monitoring the effect of chemicals in river ecosystems[END_REF][START_REF] Guasch | The use of biofilms to assess the effects of chemicals on freshwater ecosystems[END_REF].
Considering the number of available diatom-based biological descriptors, we recommend the development of a multi-metric index for contamination assessment. Keeping in mind the limited time and resources available (money, analysts, equipment), it would not be reasonable to include all metrics. In the future, new technologies combining genetic, physiological and environmental measures may contribute to developing routine biomonitoring tools. As a first step to facilitate future bioassessments, a library of teratological metrics rated against environmental health will be required. Currently, the complementary information issued from the combination of certain selected metrics could significantly enhance the ecological information provided by diatoms, and therefore improve our understanding of ecosystem status. The assessment of contamination using biological descriptors could also be refined by combining the responses of organisms from different trophic levels. For example, diatom-based metrics could be combined with invertebrate-teratology metrics such as chironomid larvae mouthpart deformities.
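Where laboratories wish to operationalize such a combination, a simple aggregation of standardized metrics is one possible starting point. The following minimal sketch combines three diatom-based metrics into a single condition score; the metric selection, reference statistics, equal weighting and all numbers are illustrative assumptions, not a validated index.

```python
# Minimal sketch of a multi-metric diatom condition index. Metric names, weights
# and the z-score aggregation are illustrative assumptions, not a published index.
import statistics

def z_score(value, reference_mean, reference_sd):
    """Standardize a site metric against reference-site statistics."""
    return (value - reference_mean) / reference_sd

def multimetric_index(site_metrics, reference_stats, orientation):
    """Average z-scores, flipping sign so higher always means better condition.

    orientation[m] = +1 if the metric increases with ecosystem health,
    -1 if it increases with degradation (e.g., % teratologies).
    """
    scores = [orientation[m] * z_score(v, *reference_stats[m])
              for m, v in site_metrics.items()]
    return statistics.mean(scores)

# Hypothetical site data: % deformed valves, % live cells, mean valve length (um).
site = {"pct_teratologies": 4.2, "pct_live": 55.0, "mean_length": 18.0}
reference = {"pct_teratologies": (0.5, 0.3), "pct_live": (80.0, 10.0),
             "mean_length": (25.0, 4.0)}
orientation = {"pct_teratologies": -1, "pct_live": +1, "mean_length": +1}

print(f"Multi-metric condition score: {multimetric_index(site, reference, orientation):.2f}")
```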
Conclusions and perspectives
Are teratologies alone sufficient to adequately assess a response to contamination? Is this biological descriptor ecologically meaningful? These are the fundamental questions of this discussion paper. The answer is undoubtedly yes for selected taxa, based on the number of studies that successfully correlated % deformities with contamination (mostly metals and pesticides). However, taxa prone to shape deformities under natural conditions (e.g., Fragilaria, Eunotia) may provide a false positive in terms of a response to contamination, and thus deformities in these taxa alone within a community should not be overinterpreted. Sharing current experiences and knowledge among colleagues has certainly raised numerous questions and underscored certain limitations in the approach. This paper provides various paths forward to refine our understanding of diatom teratologies and, hence, increase the sensitivity of this metric in bioassessments. Many suggestions were presented, and they all deserve more thorough consideration and investigation. One more opinion to share is that the occurrence of teratologies is a red flag for contamination, even though teratologies do not always correlate with the level of contamination. Teratologies, at the very least, are good "screening" indicators providing warnings that water quality measurements are needed at a site. This alone is interesting for water managers trying to save on unnecessary and costly analyses. Moreover, the general ecological signal provided could suggest the presence of a stressor that may affect other organisms, and ultimately ecosystem integrity and functions (ecosystem services). We anticipate that enumerating and identifying diatom deformities can add value to routine biomonitoring programs.
Fig. 2. Number of papers on the topic of diatom teratologies in freshwater environments (natural and laboratory conditions) published from 1890 to 2015. Database provided in Supplementary Material.
Fig. 3. Light micrographs of Asterionella formosa grown in laboratory conditions. Micrographs were made on a culture in late exponential growth phase. A: Normal cellular morphotype of an A. formosa colony. The dashed line represents septum position in a dividing cell. B: Abnormal morphotype. The arrow points to a curved epivalve wall. C: Colony of normally-sized cells (about 50 μm long) cohabiting with a colony of small and deformed cells (about 15-20 μm long). Scale bars represent 10 μm.
Fig. 4. Examples of diatom frustules showing deformities on one valve, while the other valve is normal. The first three examples represent striae aberrations, while the last two pictures show mixed deformities with raphe and striae aberrations (see section 3.1. Types of deformities). Scale bar = 10 μm.
Fig. 5. Examples of different types (i, ii, iii and iv) and degrees of deformities observed on Pinnularia sp. valves in a culture exposed to cadmium. (i) Irregular valve outline/abnormal shape, (ii) atypical sternum/raphe, (iii) aberrant striae/areolae pattern, (iv) mixed deformities. Should they all be considered equally meaningful for biomonitoring purposes? Scale bar = 10 μm.
Fig. 6. Types of deformities reported in the literature for various diatom species. The data used to create this graph come from the publications reported in Appendix A.
Fig. 9. Normal valve, slightly deformed valve, and markedly deformed valve of Nitzschia palea, Eunotia sp., and Achnanthidium minutissimum exposed to metals. Scale bar = 10 μm.
Acknowledgements

C. Fortin and I. Lavoie acknowledge the financial support of the Fonds de recherche du Québec -Nature et technologies through the Développement durable du secteur minier programme. C. Fortin is supported by the Canada Research Chair programme. The contribution of S. Morin is through the framework of the Cluster of Excellence COTE (ANR-10-LABX-45). Funding for this work and curation of the National Phycology Collection (CANA) for P.B. Hamilton comes from the Canadian Museum of Nature (RAC 2014-2016). The contribution of M. Kahlert and S. Gonçalves is partly funded by SLU's Environmental monitoring and assessment (EMA) programme "Non-Toxic environment". M. Kahlert is also thankful for funding from The Swedish Agency for Marine and Water Management sub-programme "Diatoms". L.K. Pandey is supported by Incheon National University (International Cooperative Research Grant). J.C. Taylor is the recipient of South African National Research Foundation (NRF) incentive funding. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and therefore the NRF does not accept any liability in regard thereto. The authors would like to thank Prof. G. Badino for helpful discussion and comments on this paper.
Author contributions
The first five authors contributed most significantly to the paper. The other co-authors, who collaborated to the work, are placed subsequently in alphabetical order.
Appendix A. Supplementary data
Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.ecolind.2017.06.048.
01508534 | en | [
"shs.eco"
] | 2024/03/05 22:32:15 | 2018 | https://hal.science/hal-01508534v2/file/GreenDynamics-16042018.pdf | Jean-Marc Bourgeon
email: [email protected]
Margot Hovsepian
Green Technology Adoption and the Business Cycle
Keywords: Growth, sustainability, uncertainty. JEL: O33, O44, E37, Q55
We analyze the adoption of green technology in a dynamic economy affected by random shocks where demand spillovers are the main driver of technological improvements. Firms' beliefs about the best technology and consumers' anticipations drive the path of the economy. We derive the optimal policy of investment subsidy and the expected time and likelihood of reaching a targeted level of environmental quality under economic uncertainty. This allows us to estimate, as a function of the strength of spillover effects, the value that should be given to the environment in order to avoid an environmental catastrophe.
Introduction
The increasing number of environmental issues that the world is facing has triggered a wide debate on how to switch to sustainable development paths. Adoption of green technologies (AGT) is one of the main channels through which countries will be able to avoid environmental disasters without overly harming their well-being. However, adopting new, cleaner technologies is risky for firms. At first, they incur a switching cost: green technologies are often more expensive and less productive, and the workforce may not have the skills to operate the new technology. Moreover, in the long run, investment choices may turn out to be inefficient, harming the firms' profitability. 1 Hence, even if a technology is available (at the turn of the 20th century, 38% of American automobiles were powered by electricity) and is endorsed by even such a preeminent inventor and businessman as Thomas Edison, that does not guarantee that it is the best choice to be made. Both network externalities and technological spillovers play an important role in determining what is the optimal technology for firms to adopt. Public policies may be designed to help firms to overcome this risk and to invest in green technologies, but the extent of this investment effort also depends on the prevailing economic environment. If the economy is facing a recession, available funds may be scarce, leading firms to postpone investment projects. We may thus expect the path toward environmental sustainability to be stochastically affected by the same economic shocks as those that generate the business cycle.
In this paper, we analyze this problem using a simple AGT model, focusing on an environmental policy that takes the form of subsidizing investment in green technologies. We determine the relationship between the volatility in the adoption path of technology and the value that should be given to the environmental quality (EQ) in order to avoid an environmental disaster. We consider a setting where industrial production continuously harms the environment. By investing in less-polluting technologies, firms contribute to lowering their impact on the environment. Firms' profit depends on their past and present investment choices that are subsumed in a "technology mix" index, a parameter that measures the pollution intensity of their production process. A key feature of our framework is that a firm being equipped at a given date with the most profitable production process does not mean that its machines are the most recent or innovative ones. Instead, the profitability of the various production processes depends also on technological spillovers like the skills of the workforce that use the technology, and the availability of the inputs and of the maintenance services required by the technology. The more a technology is used, the better the diffusion of knowledge and the networks of related services, including the research effort of the machine industry sector, implying that the "optimal technology mix" depends on the investment choices made by all firms. Each period, firms devote a certain amount of resources to conform their technology to the most efficient one. To decide on their investment, they form expectations based on private and public information and may undershoot or overshoot the optimal technology index. The imperfect assessment firms have about their future economic environment results in an industrial sector composed of firms with heterogeneous technological processes. This explains why along any equilibrium path different technologies coexist at the same time, some more largely adopted than others.
Because markets do not account for the environmental footprint of the economy, the social planner designs a policy that redirects the firms' technology indexes toward greener mixes. The policy implemented by the social planner internalizes the marginal benefit of AGT and thus of a better EQ on social welfare. There is often substantial scientific uncertainty about the impact of human activities on environmental resilience (the capacity of ecosystems to respond to shocks). It is generally considered that the environment has an intrinsic ability to partly regenerate and recover from past damages as long as EQ has not dropped below a tipping point, i.e. a possible threshold for abrupt changes, from one stable state to another, catastrophic one, after a transition period that may be irreversible. 2 One of the public objectives is thus to avoid EQ crossing the tipping point and wandering around hazardous levels. However, since business cycles and uncertainty affecting AGT make the path of the economy stochastic, the economy may at some point hit or even pass the tipping point while transitioning toward a desired long-term value.
Our model captures these various uncertainties in a very parsimonious way thanks to two convenient analytical tools commonly used in financial economics: global games and chance-constrained stochastic programming. 3 The firms' problem of adapting their production process over time is framed as a dynamic global game where a continuum of firms, characterized at date t by their current technological mix, faces a coordination problem guided by private and public signals about the next period's optimal technology mix. We focus on Markov perfect equilibria where firms' optimal investment strategy is linear and signals are normally distributed. The social planner's problem is expressed as a chance-constrained stochastic program: the difference between the level of EQ and the tipping point can be compared to a "budget" (in the case of the global warming problem that we consider in our numerical simulations, a "carbon" budget) that, in order to avoid being at risk of bankruptcy, society should not exhaust. As the path of the economy is stochastic, the social planner must find the optimal policy such that this budget is not exhausted with a certain level of confidence. Depending on the value attached by individuals to the environment, this constraint could be binding over a long period, i.e., securing only a minimum carbon budget for decades could be optimal if the individuals' direct utility from the carbon budget is low. However, just as a financial firm that manages its portfolio of assets using the value-at-risk criterion may, because of unexpected large shocks, go bankrupt nevertheless, society still faces the risk of an environmental catastrophe even while maintaining EQ at the chance-constrained level. 4 A relevant benchmark is thus the minimal value that should be given to the environment so that this budget constraint along the optimal path is never binding: this Chance-Constrained Value of the Environment (CCVE) can serve as the Social Cost of Carbon (SCC) in the case of the global warming problem. [START_REF] Nordhaus | A review of the stern review on the economics of climate change[END_REF] We exhibit a framework where the average technological mix and the other state variables of the economy follow a Gaussian random walk that permits estimation of confidence intervals for the realized paths of AGT and EQ indexes. The optimal subsidy policy is characterized through benchmark values of the interest rate that should prevail at equilibrium on the financial market. The targeted rate is high at the beginning of the public intervention: firms benefit from large subsidies to improve their technology mix, which leads to a high demand of capital. The average interest rate is higher the higher the marginal value of the environment. It stays at this level as long as the economy traverses a transition period that is eventually followed by a carbon-neutral trajectory, i.e. a situation where the industrial sector no longer impacts the environment. Along this path, the targeted rate decreases toward its stationary value and so does the public subsidy.
We illustrate this approach with numerical simulations that show a positive relationship between the technological spillovers and the CCVE. The more spillover effects are at work, the more incentives should be given to direct firms' choices toward a sustainable (carbon-neutral) path, but also the faster this aim is reached under the optimal policy. The major impact of technological spillovers on the optimal policy comes from their impact on catastrophic risk: a large subsidy is required because the risk of reaching the environmental tipping point is high.
Our model is related to the large literature on growth and sustainability. The endogenous green growth literature focuses on productivity improvements and frontier innovation. This is the case in the AK paradigm where capital-knowledge accumulates with learning-by-doing (Stockey, 1998), in Lucas-like extensions (Bovenberg & Smulders, 1995), in a product variety framework (Gerlagh & Kuik, 2007) or in the Schumpeterian growth paradigm of creative destruction and directed technical change (Acemoglu et al., 2012), where the most productive innovations are adopted by firms as soon as they are discovered. While frontier innovation is needed to make green technologies competitive compared to "brown" ones, our focus is on the adoption of existing technologies, which goes with spillover effects causing a gradual replacement of old and polluting machines by greener ones. Our approach is thus close to the literature on endogenous growth viewed as a process of adoption of existing ideas and mutual imitation between firms, as exposed by Eaton & Kortum (1999); Lucas Jr & Moll (2014); Lucas (2009); Perla & Tonetti (2014). In these papers, it is assumed that each agent in the economy is endowed with a certain amount of knowledge ("ideas") and this knowledge evolves through contact with the rest of the population. We adopt a similar approach to describe AGT: while there is no explicit R&D sector in our model, there is a pool of existing technologies with potentials that are more or less exploited depending on the proportion of firms that use them. The distribution of technologies used among firms shifts over time according to the firms' incentives to adopt new techniques.

5 In these models, the SCC reflects the damage of an additional ton of carbon dioxide emissions that creates lasting reductions of output through the rise in temperatures (see, e.g. Nordhaus, 2014). We consider instead (arguably equivalently) that, in addition to consumption, individuals enjoy having a good EQ, but their willingness to pay for it is not related to the risks faced by future generations.
Most of the literature on sustainable growth focuses on environmental uncertainty, that is, uncertainty on the frequency of catastrophic environmental events, on the size of the damage, or on the ability of the environment to recover from pollution. 6 Contrary to these analyses, we suppose that the hazard rate function (which links the likelihood of the catastrophe to EQ) and the extent of the damages on social welfare caused by the catastrophe are unknown. The social planner knows only that doomsday is more likely the lower the EQ, particularly once this index has reached the tipping point. Its objective is thus to maximize the expected intertemporal utility, avoiding EQ passing the environmental threshold. The few papers that describe the responsiveness of optimal abatement policy to business cycles (Jensen & Traeger, 2014) or that assess the optimal policy instrument in stochastic environments (Heutel, 2012; Fischer & Heutel, 2013) do not consider AGT. In the few integrated assessment models encompassing economic risk such as Golosov et al. (2014) or Traeger (2015), there is no absorbing lower bound in the dynamics of the environmental quality, and even though there is a risk that the environmental policy is not optimal given the realized shocks, there will be no irreversible consequences. Our model instead gives us a simple tool to derive the SCC such that given the optimal green technology subsidy, EQ avoids reaching a tipping point with a large enough probability.
The remainder of the paper is organized as follows: In section 2, we describe the economy and the dynamics of the AGT index. In section 3, we discuss the social planner's program and we characterize the optimal policy of investment subsidy. In section 4, we specify this policy in a framework that allows the economy to follow a Gaussian random path. Section 5 is devoted to numerical simulations. The last section concludes.
Green technology dynamics
The economy is composed of a continuum of firms, of total mass equal to one, that collectively produce at date t an amount q t of output, taken as the numeraire, which corresponds to the GDP of the economy. In the following, we abstract from the problem of production per se (in particular, the demand and supply of labor) to focus on the cleanliness of the production processes, i.e. the extent to which firms harm the environment while producing. We take an agnostic stance on the relative productivity of 'brown' and 'green' technologies by supposing that the GDP follows the same dynamic whatever the economy's technological mix of productive capital, and more specifically that it is given by the following first-order autoregressive dynamic
$$q_t = g\,q_{t-1} + \hat g + \kappa_t, \qquad (1)$$
with $1 > g \geq 0$, $\hat g \geq 0$, and where $\kappa_t$ corresponds to exogenous shocks that affect the economy at date t and is the realization of the random variable $\tilde\kappa_t \sim \mathcal N(0, \sigma^2_\kappa)$. 7 As $g < 1$, the per-period expected increase in GDP, $\hat g - (1-g)q_{t-1}$, diminishes over time and converges to $q^S = \hat g/(1-g)$. 8 The production processes used by firms are diverse and their environmental impacts are captured by real-valued parameters dubbed 'technology mixes' or 'green technology' indexes, denoted $x_{it}$ for firm i at date t, which on average lead to an AGT index of the industrial sector given by $\mu_t \equiv \int x_{it}\,di$. Each period, firm i may decide to spend $I_{it}$ to improve its technology mix, which evolves according to
$$x_{it+1} = x_{it} + I_{it}. \qquad (2)$$
The spending (or saving) $I_{it}$ comes in addition to the capital necessary to increase the firm's productivity; it corresponds to outlays on productive investments that are due to the buying of technologies greener or browner than the ones embedded in $x_{it}$. With green technologies costlier than brown ones, a positive $I_{it}$ indicates that firm i is greening its production process. These spendings allow firms to adapt their mix to the economic environment that will prevail in the next period. A mix different from the optimal one is costly because of the specific knowledge and skills required to operate and maintain technologies that are not widely used, or because of the relative scarcity of the inputs employed. Firms would ideally be equipped with the most efficient mix, but it evolves with the diffusion of knowledge and the know-how of workers (the so-called knowledge spillover: the more firms invest, the better the workers' knowledge of new technologies in general), network externalities (how easy it is to find specific inputs and parts to service the technology) and the engineering and research effort exerted by the machine-industry firms that compete to satisfy the demand for means of production in the economy. To grasp these various effects in a parsimonious way, we suppose that firm i's ideal mix at time t is given by
$$\bar x_{it} = \lambda_{it}\,\mu_t + (1-\lambda_{it})\,\mu_{t-1} \qquad (3)$$
where $\lambda_{it}$ corresponds to firm i's characteristics at that time (i.e. the composition of its local economic environment, from its network of suppliers to the know-how of its employees). $\lambda_{it} > 1$ corresponds to a firm at the front edge of the spectrum, because of a favorable local environment, while $\lambda_{it} < 1$ denotes a backward situation. Since the actual technology mix $x_{it}$ results from investment choices made in previous periods, firm i's date-t revenue net of the spending to adapt its mix for the next period is given by
$$\pi(x_{it}, I_{it}; \bar x_{it}) = \Pi(x_{it}) - (x_{it} - \bar x_{it})^2/2 - I_{it} \qquad (4)$$
where $\Pi(x_{it})$ is the firm's potential revenue and $(x_{it} - \bar x_{it})^2/2$ the loss due to a less effective mix than $\bar x_{it}$. As firms' local environments evolve (due, e.g., to new hires and choices made by other firms around), we suppose that the firms' specificities of the next period, $\lambda_{it+1}$, are unknown to their managers at the time they make their investment choices and correspond to idiosyncratic and time-independent draws from the same time-invariant distribution. Denoting by $\mathcal I_{it}$ the information of firm i at time t, we thus have $E_t[\tilde\lambda_{it+1}\,|\,\mathcal I_{it}] = \lambda$ for all i and t. The next-period optimal index of the industrial sector is given by
$$x^*_{t+1} = \int \bar x_{it+1}\,di = \lambda\mu_{t+1} + (1-\lambda)\mu_t = \mu_t + \lambda\int I_{it}\,di \qquad (5)$$
where λ captures the green technology demand spillover. We assume that λ is positive but lower than 1 because of the slow spreading of new technologies. 9 If λ = 0, the average best mix for the next period corresponds to the current average mix $\mu_t$. When demand spillovers are at work (λ > 0), it also depends on the investment choices made by all firms, which creates a coordination problem. As firms make their investment decisions simultaneously each period, they must somehow anticipate the extent of the resulting total investment. This intertemporal coordination problem is formalized as a succession of global games. 10 Each period, firms form their expectations on $x^*_{t+1}$ thanks to a public signal and firms' private (idiosyncratic) signals on the average optimal index, $\tilde\omega_t = \tilde x^*_{t+1} + \tilde\eta_t$ and $\tilde w_{it} = \tilde x^*_{t+1} + \tilde\varepsilon_{it}$ respectively, where $\tilde\eta_t$ and $\tilde\varepsilon_{it}$ are normally distributed time-independent noises with zero means and precisions $\tau_\eta = \sigma_\eta^{-2}$ and $\tau_\varepsilon = \sigma_\varepsilon^{-2}$, verifying $E[\tilde\varepsilon_{it}\tilde\varepsilon_{jt}] = 0$ for all $i \neq j$ and $\int\varepsilon_{it}\,di = 0$. These signals allow firms to (imperfectly) coordinate their investment levels each period although their decisions are taken independently. This dynamic setup is solved sequentially: our focus is on Markov perfect equilibria where the economic fundamental $x^*_t$ is a state variable that firms must anticipate each period to decide on their investment levels. More specifically, Bayesian updating implies that firm i's posterior beliefs about $\tilde x^*_{t+1}$ given its signals are normally distributed with mean
$$\hat x_{it+1} \equiv E[\tilde x^*_{t+1}\,|\,\omega_t, w_{it}] = \frac{\tau_\eta\,\omega_t + \tau_\varepsilon\,w_{it}}{\tau_\eta + \tau_\varepsilon} = x^*_{t+1} + \tau\eta_t + (1-\tau)\varepsilon_{it}, \qquad (6)$$
where τ = τ η /(τ η + τ ε ) is the relative precision of the public signal, and variance
$$\hat\sigma^2_{it} = (\tau_\eta + \tau_\varepsilon)^{-1}, \qquad (7)$$
for all i and t. The firm chooses its investment plan to maximize the discounted sum of per-period profits (4), which is equivalent to minimizing the expected sum of the discounted revenue losses compared with the optimal mix. Applying the principle of optimality, the firm's optimal plan is derived by solving the Bellman equation
$$\mathcal V(x_{it}; \bar x_{it}) = \min_{I_{it}}\left\{(x_{it} - \bar x_{it})^2/2 + I_{it} + \delta_t\,E_t[\mathcal V(x_{it} + I_{it}; \bar x_{it+1})\,|\,\omega_t, w_{it}]\right\} \qquad (8)$$
where $\delta_t$ is the discount factor corresponding to interest rate $r_t$, i.e. $\delta_t = (1+r_t)^{-1}$. It is shown in the appendix that:

Proposition 1 Firm i's equilibrium investment strategy at time t is given by
$$I_{it} = \hat x_{it+1} - x_{it} - r_t. \qquad (9)$$
The firms' technology levels at $t+1$ are normally distributed with mean
$$\mu_{t+1} = \mu_t - (r_t - \tau\eta_t)/(1-\lambda) \qquad (10)$$
corresponding to the date-$(t+1)$ AGT level of the economy, and variance $\sigma^2_x \equiv (1-\tau)^2\sigma^2_\varepsilon$.
According to (9), firms' investment strategy is to adapt their production process to their estimate of the average most efficient mix diminished by the price of capital $r_t$, which leads firms to disinvest. For firms with a low mix, this strategy corresponds to buying more environmentally friendly equipment. For the others, their investment is directed in the opposite direction: they can save on new equipment spending by buying less expensive brown technologies. On average, this heterogeneity in investment policies should somehow cancel out, but while this is true for the idiosyncratic noises, the public signal $\eta_t$ distorts firms' choices in the same direction to an extent that depends on its reliability τ: the better the signal's precision, the larger the distortion. Indeed, given the firms' investment strategy
$$I(\omega_t, w_{jt}, x_{jt}, r_t) = \tau\omega_t + (1-\tau)w_{jt} - r_t - x_{jt} \qquad (11)$$
we obtain that on average, as $\int\varepsilon_{jt}\,dj = 0$,
$$\int I(\omega_t, w_{jt}, x_{jt}, r_t)\,dj = x^*_{t+1} + \tau\eta_t - r_t - \mu_t,$$
and thus that at a rational expectation equilibrium, the optimal average mix satisfies
$$x^*_{t+1} = \mu_t + \lambda[x^*_{t+1} + \tau\eta_t - r_t - \mu_t] = \mu_t + \lambda(\tau\eta_t - r_t)/(1-\lambda). \qquad (12)$$
The shock $\eta_t$ also modifies the dynamic of the AGT index given by (10), which is derived from (12) using (5). This shock is magnified by a factor equal to $(1-\lambda)^{-1}$: the larger λ, the larger the effects of the public signal and of the cost of capital $r_t$. Without governmental intervention, as $E[\mu_{t+1}] = \mu_t - r_t/(1-\lambda)$, the larger λ, the lower the AGT index. Indeed, as investment is costly in the absence of an environmental policy, each firm anticipates that other firms' investments will be low, which leads to an equilibrium that tends to the lower bound of the AGT index.
Due to the idiosyncratic shocks on beliefs, $\varepsilon_{it}$, firms have different expectations on $\tilde x^*_{t+1}$. These discrepancies lead to a Gaussian distribution of firms' green indexes around the AGT level, with a dispersion that is larger the better the relative precision $1-\tau$ of the idiosyncratic signals. Hence, the industrial sector can be thought of as a 'cloud' of firms whose technology levels are drawn each period from a normal distribution centered on the AGT index $\mu_t$ with standard deviation $\sigma_x$.
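The mechanics of this equilibrium are easy to verify numerically. The sketch below simulates one period of the game, drawing the public and private signals and applying the investment rule (9); it then checks the simulated average and dispersion of the technology indexes against the predictions of eqs. (10) and (12). All parameter values are illustrative assumptions.

```python
# Minimal simulation of one period of the investment game (eqs (6), (9), (10), (12)).
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_firms, lam, r_t, mu_t = 100_000, 0.6, 0.02, 1.0
sigma_eta, sigma_eps = 0.05, 0.10
tau = sigma_eta**-2 / (sigma_eta**-2 + sigma_eps**-2)   # relative precision of the public signal

eta = rng.normal(0.0, sigma_eta)                        # public noise, common to all firms
x_star = mu_t + lam * (tau * eta - r_t) / (1 - lam)     # equilibrium optimal mix, eq. (12)
omega = x_star + eta                                    # public signal
w = x_star + rng.normal(0.0, sigma_eps, n_firms)        # private signals
x_hat = tau * omega + (1 - tau) * w                     # posterior means, eq. (6)

x_it = rng.normal(mu_t, 0.1, n_firms)                   # current technology mixes
investment = x_hat - x_it - r_t                         # optimal strategy, eq. (9)
x_next = x_it + investment

print("simulated mu_{t+1}:", x_next.mean())
print("eq. (10) prediction:", mu_t - (r_t - tau * eta) / (1 - lam))
print("cross-section s.d. vs (1-tau)*sigma_eps:", x_next.std(), (1 - tau) * sigma_eps)
```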
To counteract the negative effect of the interest rate on green investment, we assume in the following that the government implements at date $t_0$ a subsidy policy plan $\{z_t\}_{t\geq t_0}$ leading to a per-period investment cost $r_t - z_t$. The dynamic of the AGT index (10) then becomes
$$\mu_{t+1} = \mu_t + (z_t - r_t + \tau\eta_t)/(1-\lambda) \qquad (13)$$
which increases in expectation if z t > r t , i.e. if the net cost of capital is negative.
Consumers
Consumers maximize their intertemporal utility by arbitraging between consumption and savings every period. Their well-being mainly comes from consumption, but it is also impacted by the environment, either from private awareness and concerns for environmental issues, or because of tangible consequences of climate change such as damages due to more frequent extreme weather events. More precisely, we consider that their per-period preferences on consumption c and environmental quality e are represented by a utility function u φ (c, e) increasing in c and e and concave, that belongs to a parametric family where φ indexes the consumers' marginal rate of substitution (MRS) of consumption for EQ:
$$\frac{d}{d\phi}\left[\frac{\partial u_\phi(c,e)/\partial e}{\partial u_\phi(c,e)/\partial c}\right] > 0.$$
We suppose that all consumers are identical and thus that φ characterizes society as a whole. We also suppose that, while aware of the impact of EQ on their welfare, consumers do not try to modify the environment through their consumption and saving plans. This could be the case because they consider that they are too numerous for their individual behavior to have a significant impact on the environmental path.11 Accordingly, we model their behavior by considering a representative consumer whose saving and consumption plans solve
$$\max\;E_t\sum_{\tau=t}^{+\infty}\beta^{\tau-t}u_\phi(\tilde c_\tau, \tilde e_\tau) \quad : \quad \tilde c_\tau = \tilde R_\tau + r_{\tau-1}\tilde S_{\tau-1} - \tilde s_\tau,\;\; \tilde s_\tau = \tilde S_\tau - \tilde S_{\tau-1} \qquad (14)$$
where $\tilde R_t$ is her date-t revenue, $S_{t-1}$ her savings from the previous period, $r_{t-1}S_{t-1}$ the corresponding date-t capital earnings, $\tilde s_t$ the savings adjustment of period t and β the psychological discount factor. Solving the consumer's program, we obtain

Lemma 1 The consumption rule that solves (14) satisfies
$$\partial u_\phi(c_t, e_t)/\partial c = (1 + r_t)\,\beta\,E_t[\partial u_\phi(\tilde c_{t+1}, \tilde e_{t+1})/\partial c] \qquad (15)$$
at each date t.
Equation (15) corresponds to the Ramsey-Euler rule in our setup. It also defines the supply function of capital, while (13) defines the demand side coming from firms. At the date-t equilibrium on the capital market, the interest rates embodied in (15) and (13) are equal. Moreover, at the goods market equilibrium, aggregate production net of investment must be equal to total consumption, i.e. 12
$$c_t = q_t - \int I_{it}\,di = q_t - \mu_{t+1} + \mu_t. \qquad (16)$$
Together, these two conditions allow us to determine the global equilibrium that the economy reaches each period.
The environment
Production generates pollution, which harms the environment, but this detrimental effect can be reduced if firms improve their technology parameter. This mechanism is embodied in the following dynamic of the EQ index
$$e_{t+1} = \theta e_t + \hat e - \iota_t q_t \qquad (17)$$
where $0 < \theta < 1$, $\hat e \geq 0$ and where $\iota_t$ is the emission intensity of the technology mix at date t, which measures the damage to the environment coming from human activities per unit of GDP. $\hat e$ is the per-period maximum regeneration capacity of the environment, the actual regeneration level reached at period t being $\hat e - (1-\theta)e_{t-1}$, which depends (positively) on the environmental inertia rate θ. Without human interference ($\iota_t = 0$), the EQ index is at its pristine level $e_N = \hat e/(1-\theta)$. Emission intensity is related to the AGT index by
$$\iota_t = \varphi - \xi\mu_t/q_t \qquad (18)$$
where φ is the maximum emission intensity and $\xi > (1-\theta)\varphi$, so that green technologies are sufficiently effective. Substituting for $\iota_t$ in (17), we obtain a dynamic of EQ that follows the linear first-order recursive equation
$$e_{t+1} = \theta e_t + \xi\mu_t - \varphi q_t + \hat e. \qquad (19)$$
We suppose that $\varphi q^S > \hat e$, implying that without AGT the environment will collapse once production is sufficiently large and will eventually reach the "tipping point" $\bar e$ that should not be passed permanently: if pollution is too high too often, the resilience of the environment is at stake, i.e. abrupt shifts in ecosystems may happen with dire and irreversible consequences for society. 13 On the other hand,

Definition 1 (Environmentally Neutral Path) The economy has reached at date T an Environmentally Neutral Path (ENP) if for all $t \geq T$, $E[\iota_t] = 0$.

Therefore, an ENP is a sustainable situation in which the expected emission intensity of the economy is nil. The AGT subsidy that is required once the economy has reached an ENP should allow $\mu_t$ to stay at $\varphi q_t/\xi$ in expectation. ENPs are of most interest because we consider green technologies that aim at reducing emissions (i.e. they do not allow for direct improvement of EQ), and thus environmental neutrality is the best society can achieve. Along an ENP, thanks to the natural regeneration capacity of the environment, the average EQ increases and tends toward its pristine level $e_N$.

13 There is often substantial scientific uncertainty about the resilience of ecosystems and thus a debate about the relevant value of these environmental thresholds: drastic changes may happen when EQ is larger than a given referential $\bar e$ (or nothing at all while EQ is lower), suggesting that the date of the catastrophic event, $\tilde T$, depends on EQ stochastically. Formally, this corresponds to a survival function at time t, $S_t = \Pr\{\tilde T > t\,|\,e_t\}$, that increases with $e_t$, but the lack of scientific knowledge prevents us from specifying it. The debate over $\bar e$ is thus about the "reasonable" maximum exposure of society to a potential catastrophic collapse, but since there is always such a risk, a cautious strategy is to make sure that EQ is as large as possible.
However, reaching an ENP may prove to be too demanding and society may end up stabilizing around an expected EQ level e S < e N . This sustainable situation corresponds to the notion of a stable environment path:
Definition 2 (Stable Environment Path) The economy has reached at date T a Stable Environment Path (SEP) at level $e^S$ if for all $t \geq T$, $E[\iota_t\tilde q_t] = (1-\theta)(e_N - e^S)$.
Under an SEP, technological improvements must fill the gap between the environmental damages due to economic growth and the regenerative capacity of the environment to maintain EQ at the desired level over time. Compared to its ENP levels, the expected AGT index is lower by a constant proportional to the difference between the pristine EQ level e N and the stabilized one e S : (1 -θ)(e N -e S ) corresponds to the per-period loss of EQ compared to the pristine level that society does not compensate for. Nevertheless, to stay at a constant EQ level, the AGT index must increase at the same pace as the emissions coming from aggregate production.
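To make the tipping-point logic concrete, the following sketch iterates the GDP and EQ recursions (1) and (19) under laissez-faire, i.e. with a frozen AGT index, and reports whether and when the threshold $\bar e$ is crossed. The parameter values are illustrative assumptions chosen to satisfy $\xi > (1-\theta)\varphi$ and $\varphi q^S > \hat e$.

```python
# Minimal sketch of the coupled GDP/environment recursions (1) and (19),
# with illustrative parameter values, under laissez-faire (constant AGT index).
import numpy as np

rng = np.random.default_rng(1)
g, g_hat, sigma_kappa = 0.98, 1.0, 0.5          # GDP dynamic (1)
theta, e_hat, xi, phi = 0.95, 2.0, 1.5, 0.05    # environment dynamic (19), xi > (1-theta)*phi
e_bar = 20.0                                     # tipping point (illustrative)
q, e, mu = 10.0, e_hat / (1 - theta), 0.0        # start at the pristine EQ level e_N

for t in range(300):
    q = g * q + g_hat + rng.normal(0.0, sigma_kappa)     # eq. (1)
    e = theta * e + xi * mu - phi * q + e_hat            # eq. (19)
    if e <= e_bar:
        print(f"tipping point reached at t={t}, e={e:.1f}")
        break
else:
    print(f"no crossing within horizon, final e={e:.1f}")
```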
Environmental policy
We consider a benevolent social planner who decides on a policy that modifies the dynamics of the AGT level (13) through setting z t for all t ≥ t 0 where t 0 is the first period where the policy is designed and implemented. The aim of the policy is twofold: first, it has to correct for the fact that the consumer's value of the environment is not reflected in the interest rate that prevails at equilibrium on the capital market. Second, while the MRS at the citizen level allows individuals to link EQ with the damages they encounter, these evaluations are likely to be underestimations of the actual contribution of EQ to global welfare. Indeed, a measure of the social value of EQ should encompass all the services provided by the environment and, in particular, the fact that the lower EQ, the more at risk the resilience of the environment is and thus the larger the risk of a catastrophe for future generations. Hence, since there is always a non-zero probability that the actual environmental path hits ē under the optimal policy, the social planner must also ensure that an environmental disaster is avoided with at least a certain level of confidence, i.e. that Pr{ẽ t ≤ ē} ≤ α for all t ≥ t 0 , where α corresponds to the chosen confidence level. The chance-constrained problem that she must solve is thus 14
$$\max\;E_{t_0}\sum_{t=t_0}^{+\infty}\beta^{t-t_0}u_\phi(\tilde c_t, \tilde e_t) \quad : \quad E[\iota_t] \geq 0,\;\; \Pr\{\tilde e_t \leq \bar e\} \leq \alpha \qquad (20)$$
where the first constraint corresponds to the ENP constraint (emission intensity cannot be negative) and the second to the Environmental Safety (ES) constraint. 15 The public policy affects the dynamic of the economy in the following way. From (13), the path of the AGT index evolves stochastically according to
$$\tilde\mu_{t+1} = \mu_t + (z_t - \tilde r_t + \tau\tilde\eta_t)/(1-\lambda). \qquad (22)$$
In addition to directly modifying the AGT path, it also indirectly affects the equilibrium on the goods market since we have, from (22) and (16),
$$\tilde c_t = q_t - (z_t - \tilde r_t + \tau\tilde\eta_t)/(1-\lambda). \qquad (23)$$
Finally, as we suppose that the social planner does not intervene directly in the capital market, the interest rate at equilibrium must satisfy the Euler equation
$$E[\partial u_\phi(\tilde c_t, \tilde e_t)/\partial c] = \beta\,E[(1 + \tilde r_t)\,\partial u_\phi(\tilde c_{t+1}, \tilde e_{t+1})/\partial c] \qquad (24)$$
which is the ex ante equivalent of (15). Consider the problem of the social planner at date t when the state of the economy is far from both the ENP and ES constraints [START_REF] De Zeeuw | Regime shifts and uncertainty in pollution control[END_REF].

14 The term "chance-constrained programming" was first coined by Charnes & Cooper (1959). Yaari (1965) provides an early comparison of this approach with the penalty/loss function procedure in a lifetime expected utility maximization problem.

15 Alternatively, we may consider an intertemporal state-dependent utility setup (see e.g., Tsur & Zemel 2008, 2009; Bommier et al. 2015, who propose an Epstein-Zin recursive utility framework). This can be done by assuming, e.g., that the random date of the catastrophe $\tilde T$ follows an exponential distribution that depends on EQ, so that the survival function at time t is given by $S_t = \Pr\{\tilde T > t\,|\,e_t\} = 1 - e^{-p e_t}$. The public objective then becomes
$$E_{t_0}\sum_{t=t_0}^{+\infty}\beta^{t-t_0}\left[(1 - e^{-p\tilde e_t})\,u(\tilde c_t, \tilde e_t) + e^{-p\tilde e_t}\,\underline u\right] \qquad (21)$$
where $\underline u$ is the doomsday continuation utility. Compared to the chance-constrained approach, this setup requires knowing the relevant values of p and $\underline u$ and, more generally, the hazard rate of the catastrophe as a function of EQ and its impacts on humanity, which is beyond the current level of scientific knowledge.
The corresponding Bellman equation is given by
$$W(\mu_t, e_t, q_t) = \max_{z_t}\left\{E_t[u_\phi(\tilde c_t, e_t)] + \beta E_t[W(\tilde\mu_{t+1}, \tilde e_{t+1}, \tilde q_{t+1})] \;:\; (1), (19), (22), (23), (24)\right\} \qquad (25)$$
where $\mu_t$, $e_t$ and $q_t$ are state variables and $z_t$ the control variable. It is shown in the appendix that:
Proposition 2 The production/environment state that solves (25) satisfies
$$r^e_{t+1} + r^e_t r^e_{t+1} - \theta r^e_t = \xi\,\frac{E[\partial u_\phi(\tilde c_{t+2}, \tilde e_{t+2})/\partial e]}{E[\partial u_\phi(\tilde c_{t+2}, \tilde e_{t+2})/\partial c]}, \qquad (26)$$
where
$$r^e_t \equiv \frac{E[\tilde r_t\,\partial u_\phi(\tilde c_{t+1}, \tilde e_{t+1})/\partial c]}{E[\partial u_\phi(\tilde c_{t+1}, \tilde e_{t+1})/\partial c]} = E[\tilde r_t] + \frac{\mathrm{cov}(\tilde r_t,\,\partial u_\phi(\tilde c_{t+1}, \tilde e_{t+1})/\partial c)}{E[\partial u_\phi(\tilde c_{t+1}, \tilde e_{t+1})/\partial c]}. \qquad (27)$$
The unconstrained dynamic of the economy is thus determined through the sequence of the "corrected" expected optimal interest rate r e t that solves (26). Compared to E[r t ], it takes into account the (positive) correlation between the marginal utility of consumption and the interest rate, as shown in (27).
If the consumer's valuation of EQ is very low (i.e., φ low), it could be the case that the ES constraint is binding at some date T and that the regulated economy follows an SEP during a long period at expected level e S > ē, the edge depending on α. Or, for larger φ, that this constraint is only temporarily binding while the regulated economy is on its way toward an ENP. To gauge the value of the environment along a stochastic path of the economy, we consider the discounted MRS of consumption for EQ along this path as expected at time t 0 , i.e.
$$\rho_\phi = (1-\beta)\,E_{t_0}\sum_{t=t_0}^{+\infty}\beta^{t-t_0}\,\frac{\partial u_\phi(\tilde c_t, \tilde e_t)/\partial e}{\partial u_\phi(\tilde c_t, \tilde e_t)/\partial c}$$
which increases with φ. A benchmark for the public policy is given by

Definition 3 The chance-constrained value of the environment (CCVE) is the lowest value of the discounted MRS evaluated along the optimal expected path of the economy such that the ES constraint is never binding: it is given by $\rho_{\phi^*}$ where
$$\phi^* = \arg\min_\phi\left\{\max\,E_{t_0}\sum_{t=t_0}^{+\infty}\beta^{t-t_0}u_\phi(\tilde c_t, \tilde e_t)\;:\;E[\iota_t] \geq 0\;\;\Big|\;\;\Pr\{\tilde e_t \leq \bar e\} < \alpha\right\}$$
The CCVE ρ φ * corresponds to hypothetical preferences φ * that separate two types of societies: those with φ > φ * , for which the SE constraint is not binding, and those with φ < φ * which are more at risk of being struck by a catastrophe in some distant future. It is an interesting benchmark because, as the path of the economy of any society depends on λ, this edge is affected by the technological spillovers.
Because we consider a public objective that integrates a safety constraint that could be binding, the social planner may be considered "paternalistic", i.e. imposing a route to individuals for their own sake. This is even more so if it considers that the CCVE must inform the policy because of the selfishness of individuals regarding future generations and the lack of scientific knowledge that makes it difficult to thoroughly evaluate the risk that those future generations face. In this regard, the SE constraint is a minimum safety measure, and avoiding its binding corresponds to a sensible safeguard.
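Numerically, φ* can be bracketed by bisection over Monte Carlo paths of the economy. The sketch below illustrates the procedure in a deliberately stylized way: the reduced-form rule linking the environmental value ρ to the AGT drift is an assumption standing in for the optimal policy characterized below, so the number it produces only illustrates the search logic, not the paper's estimates.

```python
# Stylized numerical sketch of Definition 3: find the smallest environmental value rho
# such that the chance constraint Pr{e_t <= e_bar} <= alpha never binds. The reduced-form
# policy mu_{t+1} = mu_t + rho*kappa_mu (AGT drift proportional to rho) is an assumption
# standing in for the full optimal policy, which requires the recursion of Section 4.
import numpy as np

g, g_hat, s_k = 0.98, 1.0, 0.5
theta, e_hat, xi, phi = 0.95, 2.0, 1.5, 0.05
e_bar, alpha, kappa_mu = 20.0, 0.05, 0.02
T, n_paths = 200, 2000

def max_crossing_prob(rho):
    rng = np.random.default_rng(2)        # common random numbers across candidate rho values
    q = np.full(n_paths, 10.0)
    e = np.full(n_paths, e_hat / (1 - theta))
    mu = np.zeros(n_paths)
    worst = 0.0
    for _ in range(T):
        q = g * q + g_hat + rng.normal(0.0, s_k, n_paths)
        e = theta * e + xi * mu - phi * q + e_hat
        mu = mu + rho * kappa_mu          # assumed reduced-form policy
        worst = max(worst, np.mean(e <= e_bar))
    return worst

lo, hi = 0.0, 50.0                        # bisection on rho (crossing prob. decreases in rho)
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if max_crossing_prob(mid) > alpha else (lo, mid)
print(f"approximate CCVE (stylized): rho* ~ {hi:.2f}")
```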
The constant MRS -CARA utility case
In the rest of the article, we derive the optimal policy assuming that the representative consumer's preferences are CARA and that consumption and environmental quality can be subsumed in a 'global wealth index' denoted $y_t \equiv c_t + \rho e_t$, so that the MRS is the same whatever the GDP of the economy (we thus have $\phi = \rho = \rho_\phi$ in that case). 16 Under these assumptions, the dynamic of the economy follows a Gaussian random walk and the optimal policy is easily characterized. 17 With a constant MRS, (26) simplifies to
$$r^e_{t+1} + r^e_t r^e_{t+1} - \theta r^e_t = \rho\xi \qquad (28)$$
which prescribes the evolution of the interest rate from one period to the next along an optimal path when neither the ENP constraint nor the ES constraint is binding.

16 A constant MRS is a reasonable approximation as long as g < 1 (i.e. wealth is bounded by $q^S$) and the environment has not incurred dramatic changes ($e_t$ is above $\bar e$). Assuming steady growth (g > 1), as the environmental quality is bounded upward, this price should increase at the same pace as GDP along the balanced growth path: $\rho_t = g^t\rho_0$.

17 Interestingly, under these assumptions the problem of finding the CCVE may be written as
$$\min_p\,\max\,E_{t_0}\sum_{t=t_0}^{+\infty}\beta^{t-t_0}e^{-p\tilde e_t}\,u(\tilde c_t)\;:\;E[\iota_t] \geq 0,\;\Pr\{\tilde e_t \leq \bar e\} < \alpha$$
which can be interpreted as the expected sum of losses in consumption well-being: compared to (21) in footnote 15, only the last sum appears, with the continuation utility $\underline u$ replaced by $u(\tilde c_t)$, the utility of consumption of the period. Increasing p diminishes the per-period probability of the catastrophe and thus the total losses: the larger p, the more optimistic the viewpoint. Hence, by considering the lowest p such that the environmental safety constraint is not binding, the social planner adopts a careful standpoint (the minimax regret criterion). The difference between programs (20) and (21) is particularly apparent in the case where $\underline u = 0$ in the latter: then, the social planner only considers the well-being of generations that have avoided the catastrophe.
Observe that this dynamic does not depend on the state of the economy (none of the state variables $q_t$, $e_t$ and $\mu_t$ is involved). From its initial value $r^e_{t_0}$, it may converge to a long-run level that solves
$$(r^e + 1 - \theta)\,r^e = \rho\xi. \qquad (29)$$
The left-hand side of (29) is a quadratic expression which is positive for either $r^e \leq -(1-\theta)$ or $r^e \geq 0$, while the right-hand side is strictly positive. Hence $r^e$ may take two values, one being positive and the other negative and lower than $-(1-\theta)$. Fig. 1 depicts the situation.
[Figure 1 about here.]
The parabola corresponds to the left-hand side of (29) which crosses the x-axis at 0 and -(1 -θ). The horizontal line corresponds to the right-hand side of (29). The negative root of ( 29) is lower than -(1 -θ), and increasingly so the larger ρξ. As r e t is greater than the optimal expected interest rate that must prevail in the long term on the capital market, only the positive root of (29) is relevant.
The following proposition describes the transition of the economy toward this long-term interest rate.
Proposition 3 With a constant MRS, the solution of (28) is given by
$$r^e_t = r^e + \frac{A\,(r^e_{t_0} - r^e)\,k^{t-t_0}}{A + (1 - k^{t-t_0})(r^e_{t_0} - r^e)} \qquad (30)$$
for all $t \geq t_0$, where $A = \sqrt{(1-\theta)^2 + 4\rho\xi}$ is the square root of the discriminant of (29),
$$r^e = (A - 1 + \theta)/2 \qquad (31)$$
the positive root of (29), and
$$k = (1 + \theta - A)/(1 + \theta + A). \qquad (32)$$
$r^e_t$ converges to $r^e$ provided $r^e_{t_0} > -(A+1-\theta)/2$; convergence is immediate if $\rho\xi = \theta$ (in which case $k = 0$), monotonic if $\rho\xi < \theta$, and oscillatory if $\rho\xi > \theta$.
The sequence of rates given by (30) allows the social planner to estimate at time t 0 the expected path of the interest rate that should prevail on the capital market using (40) and thus to decide on the policy measure z t to be implemented. It depends on an initial value r e t 0 deduced from the current state of the economy that should not be too negative. This sequence converges to (31) which depends on the social value of the environment ρ and on the parameters that govern the dynamic of the environment: the environmental inertia θ and the parameter that captures the impact of the technology index on the environment ξ. It should be noted that Prop. 3 describes the dynamic of the economy when neither the ENP constraint nor the ES one is binding.
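The transition path of Proposition 3 is straightforward to evaluate. The sketch below computes A, the long-run rate $r^e$ and k from (29)-(32) and tabulates the sequence (30); the parameter values are illustrative assumptions.

```python
# Minimal sketch of the transition path of Proposition 3 (eqs (29)-(32));
# parameter values are illustrative assumptions.
import math

rho, xi, theta, r0 = 0.01, 1.0, 0.95, 0.20    # r0 = initial corrected rate r^e_{t0}
A = math.sqrt((1 - theta)**2 + 4 * rho * xi)  # square root of the discriminant of (29)
r_e = (A - 1 + theta) / 2                     # positive root of (29), eq. (31)
k = (1 + theta - A) / (1 + theta + A)         # eq. (32)

def r_e_path(h):
    """Corrected expected interest rate h periods after t0, eq. (30)."""
    gap0 = r0 - r_e
    return r_e + A * gap0 * k**h / (A + (1 - k**h) * gap0)

print("long-run rate r^e =", round(r_e, 4), " k =", round(k, 4))
print("path:", [round(r_e_path(h), 4) for h in range(6)])
```

With these values $\rho\xi < \theta$, so the tabulated path decays monotonically toward $r^e$, as stated in Proposition 3.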
To detail the convergence of the interest rate given by (30), it is convenient to use the normalized gap between the expected interest rate at horizon h and its long-run level, which is defined as
$$d_{t_0+h} \equiv (r^e_{t_0+h} - r^e)/A = (E[\tilde r_{t_0+h}] - r^e)/A. \qquad (33)$$
It is shown in the appendix that:

Proposition 4 With a constant MRS, the normalized gap at horizon h from $t_0$ can be derived recursively using $d_{t_0+h} = f(d_{t_0+h-1})$, where
$$f(x) \equiv kx/[1 + (1-k)x] \qquad (34)$$
with k defined by (32) in Prop. 3. This gap converges to 0 provided $d_{t_0} > -1$.
A phase diagram of this recursion is given in Fig. 2, which depicts the relationship $d_{t+1} = f(d_t)$ in the $(d_t, d_{t+1})$ plane and where the bisector is used as a "mirror" to project the value $d_{t+1}$ back on the $d_t$ axis.

[Figure 2 about here.]

Function $f(\cdot)$ is concave, with a vertical asymptote at $d_t = -1/(1-k)$ and a horizontal one for large values of $d_t$. We have $f(0) = 0$ and $f'(0) = k < 1$, hence the bisector is located above the graph of $f(\cdot)$ in the positive quadrant and, since $f(-1) = -1$, below it in the negative quadrant for all $d_t \leq -1$. From an initial gap $d_{t_0}$, the arrows describe the recursion toward 0, which occurs for all $d_{t_0} > -1$.
Applying recursively d t 0 +h = f (d t 0 +h-1 ) gives the expected value of the optimal interest rate from one period to the next and thus allows the government to define the entire policy at time t 0 . To initiate the recursion, we suppose that the government is able to determine the expected value of the interest rate at date t 0 -1, the period before the policy is implemented, which allows it to derive the initial normalized gap:
$$d_{t_0-1} = (r_{t_0-1} - r^e)/A.$$
The normalized gap of the first period is deduced from this value using $d_{t_0} = f(d_{t_0-1})$. The first-period expected optimal interest rate is deduced from $E_{t_0}[\tilde r_{t_0}] = r^e + A d_{t_0}$, and the sequence of all future ones from applying $E_{t_0}[\tilde r_t] = r^e + A d_t$ with $d_t = f(d_{t-1})$ for all $t > t_0$. This allows us to analyze the commitment (also dubbed open-loop) strategy, which supposes that the government sticks to this plan whatever the state of the economy in the subsequent periods. 18

To complete the analysis, we derive the Rational Expectation Equilibrium (REE) of this economy assuming that the consumer's preferences over global wealth are CARA, i.e. that we have $u_\phi(c_t, e_t) = -e^{-\gamma(c_t + \rho e_t)}$ where γ corresponds to the coefficient of absolute risk aversion.
E[e -γ ỹ] = e -γ(E[ỹ]-γV[ỹ]/2) , β = e -ψ
where ψ is the intrinsic discount factor, and 1 + r t ≈ e rt , a supply function of capital satisfying19
r t = ψ + γ(E t [ỹ t+1 ] -y t ) -γ 2 V t [ỹ t+1 ]/2 (35)
which exhibits the familiar effects that determine the rental price of capital: the intrinsic preference for immediate consumption ψ, the economic trend of the global wealth index, which also encourages immediate consumption if it is positive, and, as revealed by the last term, a precautionary effect that operates in the opposite direction and corresponds to a risk premium due to the uncertainty affecting the economy. Given the environmental policy that will be implemented by the government, $\{z_{t+h}\}_{h\geq 0}$, we obtain that

Proposition 5 Under an REE with preferences over global wealth and CARA utility, the dynamic of the technological parameter can be approximated by
$$\mu_{t+1} = a_1\mu_t + a_2 e_t + a_3 q_t + a_4 + a_5\tau\eta_t + Z_t \qquad (36)$$
where
$$Z_t = a_5\sum_{i=0}^{+\infty}(\gamma a_5)^i z_{t+i}, \qquad (37)$$
and the distribution of the wealth index ỹt+1 by a normal distribution with variance
$$\sigma^2_y \equiv V_t[\tilde y_{t+1}] = (1-a_3)^2\sigma^2_\kappa + a_5^2\tau^2\sigma^2_\eta \qquad (38)$$
leading to the stationary value of the interest rate $r^S \equiv \psi - \gamma^2\sigma^2_y/2$. Moreover, we have $a_1 < 1$ (and $a_1 > 0$ if $2 + (1-\lambda)/\gamma - \theta > \rho\xi$), $0 < a_2 < \rho$, $a_3 > 0$, $0 < a_5 < 1/\gamma$ and $a_4 < 0$ whenever $r^S \geq 0$; furthermore, $da_1/d\lambda < 0$, $da_2/d\lambda > 0$, $da_3/d\lambda > 0$ and $da_5/d\lambda > 0$.
The equilibrium dynamic of the AGT index follows a linear first-order recursive equation in the state variables $(\mu_t, e_t, q_t)$ and a forward-looking term $Z_t$ given by (37) that accounts for the anticipated effects of the environmental policy. This policy index is an exponential smoothing of the policy measures to come. This dynamic is thus consistent only if the policy plan is known. Assuming that ρ is not too large, all coefficients are positive except the constant $a_4$, which is negative ($r^S \geq 0$ is only a sufficient condition for $a_4$ to be negative). 20 One may thus expect that under laissez-faire, even if the AGT and EQ indexes have reached very low levels, the increase in the GDP could reverse a negative trend: this is indeed the case when $q_t > -a_4/a_3$ (assuming that a catastrophe is avoided). As expected, the stronger the spillover effects captured by λ, the less the dynamic of the AGT index depends on its previous value and the more it depends on the GDP and the economic shocks. But as $a_5$ determines the exponential smoothing coefficient of $Z_t$, the dynamic is also more reactive to the public policy when λ is large.
Because of the linearity of (36), the AGT index μt+2 as estimated at date t is Gaussian (while the date-t realizations of the shocks are known, μt+2 depends on the next period shocks κt+1 and ηt+1 ). From (16), ct+1 is thus normally distributed, resulting in global wealth index ỹt+1 which is also Gaussian with variance given by (38). Knowing {z t+h } h≥0 , it is possible to infer statistically the state of the economy at horizon h from an initial state at date t by applying recursively (36) together with (1), (19), thanks to the normal distribution of the random shocks.
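Given values for the reduced-form coefficients, this statistical inference can be carried out by Monte Carlo. In the sketch below the coefficients $a_1, \dots, a_5$ and a constant policy index Z are illustrative assumptions (in the model they are pinned down in the appendix and by eq. (37)); the simulation propagates (1), (19) and (36) and reports a confidence band for EQ at the horizon.

```python
# Monte Carlo forecast of the REE dynamic: iterate (1), (19) and (36) forward.
# The reduced-form coefficients a1..a5 and the constant policy index Z are
# illustrative assumptions (in the model they come from the appendix and eq. (37)).
import numpy as np

rng = np.random.default_rng(3)
a1, a2, a3, a4, a5 = 0.8, 0.01, 0.05, -0.5, 0.4
g, g_hat, s_k = 0.98, 1.0, 0.5
theta, e_hat, xi, phi = 0.95, 2.0, 1.5, 0.05
tau, s_eta, Z = 0.5, 0.05, 1.0
T, n = 100, 5000

mu = np.zeros(n); e = np.full(n, e_hat / (1 - theta)); q = np.full(n, 10.0)
for _ in range(T):
    eta = rng.normal(0.0, s_eta, n)
    mu, e, q = (a1 * mu + a2 * e + a3 * q + a4 + a5 * tau * eta + Z,   # eq. (36)
                theta * e + xi * mu - phi * q + e_hat,                 # eq. (19)
                g * q + g_hat + rng.normal(0.0, s_k, n))               # eq. (1)
    # the tuple assignment evaluates all right-hand sides with the *previous*
    # period's states, as the recursions require

lo, hi = np.percentile(e, [2.5, 97.5])
print(f"95% band for e_T: [{lo:.1f}, {hi:.1f}]; mean AGT index mu_T = {mu.mean():.2f}")
```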
Using (13) or (35) and (36), it is possible at the beginning of each period to specify the distribution of the equilibrium interest rate. More precisely, we have

Lemma 2 Under an REE with preferences over global wealth and CARA utility, before the current period shock and signals are known, the current interest rate and the next period wealth index are normally distributed. Their variances are given by
$$\sigma^2_r = (1-\lambda)^2 a_3^2\sigma^2_\kappa + [1 - (1-\lambda)a_5]^2\tau^2\sigma^2_\eta \qquad (39)$$
and
$$\sigma^2_{y+1} = \sigma^2_y + [(1-a_3)g + (1-a_1)a_3 + (a_2-\rho)\varphi]^2\sigma^2_\kappa + [(1-a_1)a_5\tau\sigma_\eta]^2$$
respectively. The expected interest rate corresponding to the optimal policy (27) is given by
$$E_t[\tilde r_t] = r^e_t - (1-\lambda)a_3\gamma\left(\sigma^2_{y+1} - \sigma^2_y\right) + \gamma[1 - (1-\lambda)(a_5 - a_3)](1-a_1)^2 a_5^2\tau^2\sigma^2_\eta \qquad (40)$$
Observe that the variance of the interest rate is different from γ²σ_y². This is due to the fact that the two bracketed terms in (35) are correlated random variables when the current shocks are unknown: the expectation of ỹ_{t+1} without knowing the realization of ỹ_t is a random variable. Similarly, the variance of the next period wealth index differs from σ_y² by terms accounting for the correlation between ỹ_{t+1} and ỹ_t. Finally, we obtain that Proposition 6 The economy follows an ENP or an SEP reached at date T if the government implements the policy {z_t}_{t≥T} given by
z t = R t + (1 -λ)(1 -g)g t-T (q S -q T )ϕ/ξ (41)
where
R t = r S + γ(1 -g)g t-T (q S -q T )[1 + (1 -g)ϕ/ξ] + γρ(1 -θ)θ t-T (e S -e T ) (42)
converges toward r S as t goes to infinity, with e T = e S < e N under an SEP and e T < e S = e N under an ENP. Under an REE with preferences over global wealth and CARA utility, the corresponding policy index {Z t } t≥T is given by
Z_t = a_5 { r^S/(1 - a_5γ) + (1 - g)g^{t-T}(q_S - q_T)[γ + (1 - λ + γ(1 - g))ϕ/ξ]/(1 - a_5γg) + γρ(1 - θ)θ^{t-T}(e_S - e_T)/(1 - a_5γgθ) }. (43)
In case of an SEP, denoting by Φ the CDF of the standardized Gaussian variable, e S = ē + σ e Φ -1 (1 -α) where σ e is independent of the initial state of the economy.
Sequence {R t } t≥T given by (42) corresponds to the path of the expected interest rate E[r t ] along an ENP or an SEP. From (41), it follows that in both cases the gap between the governmental subsidy and the expected interest rate shrinks exponentially. The expected interest rate also diminishes steadily toward its stationary state value r S (which should not be confused with the long-run level r toward which the expected interest rate converges during the transition phase). The last term of (42) which involves the difference between the state of the environment at date T and its long term level, is positive in the case of an ENP since the environment keeps improving, while it is equal to zero for an SEP since, by definition, the environment is stabilized at level e S < e N .
Policy implementation and Environmental forecast
Assuming that the social planner implements a policy that results in a sequence of expected interest rates E t 0 [r t ] given by (40), eventually followed by a sequence satisfying (42) once the economy has reached either an SEP or an ENP, the expected paths of the AGT and EQ indexes can be anticipated using the system of equations ( 1), ( 19) and (36) which gives the recursion
Ỹt = B t Ỹt-1 + H νt (44)
where Ỹt = (μ t , ẽt , qt , 1) is the column vector of state values (with the constant), B t is the time-dependent transition matrix
B_t = ( a_1  a_2  a_3  a_4 + Z_t
        ξ    θ    -ϕ   ê
        0    0    g    ĝ
        0    0    0    1 ),   H = ( a_5τσ_η  0
                                    0        0
                                    0        σ_κ
                                    0        0 ),
and ν̃_t = (ν̃_{1t}, ν̃_{2t}) is a column vector of independent standardized Gaussian variables. The transition matrix B_t is time-dependent because of the policy index Z_t defined by (37), which is an exponential smoothing of the policy measures that will be implemented in the following periods. This index is given by (43) after the economy has reached either an SEP or an ENP. For the transitory period where the state of the economy is not constrained, using (13) taken in expectation at date t_0, we obtain that the optimal subsidy scheme satisfies
z_{t+i} = E[r_{t+i}] + (1 - λ)(E[μ_{t+i+1}] - E[μ_{t+i}]). (45)
Multiplying each side of (45) by a_5(γa_5)^i and summing over all i ≥ 0 gives
Z_t = a_5 Σ_{i=0}^{T-t-1} (γa_5)^i E[r_{t+i}] + a_5 Σ_{i=T-t}^{+∞} (γa_5)^i R_{t+i} + a_5(1 - λ)(1 - γa_5) Σ_{i=0}^{+∞} (γa_5)^i (E[μ_{t+i+1}] - E[μ_{t+i}]) (46)
for all t < T . Because the last term of (46) involves values of the AGT index that depend on the public policy, it is not possible to obtain the optimal public policy schedule explicitly and a numerical (recursive) procedure is necessary.
The recursion proceeds as follows: given a first set of values {E[μ_t]^(1)}_{t≥t_0}, it is possible to estimate {Z_t^(1)}_{t≥t_0} from (46) and to obtain an initial set of transition matrices {B_t^(1)}_{t≥t_0}. Then, using (44), a second set of estimated values {E[μ_t]^(2)}_{t≥t_0} is derived which, once plugged into (46), gives a second estimated set {Z_t^(2)}_{t≥t_0} and thus {B_t^(2)}_{t≥t_0}. One may iterate this procedure until iteration i is such that {Z_t^(i)}_{t≥t_0} is sufficiently close to {Z_t^(i-1)}_{t≥t_0}.
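This fixed-point loop is straightforward to implement. The following Python sketch shows only the outer iteration; `update_mu` (evaluation of the expected AGT path through (44)) and `update_Z` (re-evaluation of (46)) are left as user-supplied callables since they depend on the full calibration, and all names here are illustrative, not taken from the paper.

```python
import numpy as np

def solve_policy_index(Z_init, update_mu, update_Z, tol=1e-8, it_max=500):
    """Fixed-point iteration Z -> E[mu](Z) -> Z for the policy index.

    update_mu(Z): expected AGT path {E[mu_t]} implied by (44) given Z.
    update_Z(mu): re-evaluation of (46) on that expected path.
    Both callables are hypothetical placeholders for the model equations.
    """
    Z = np.asarray(Z_init, dtype=float)
    for _ in range(it_max):
        Z_new = np.asarray(update_Z(update_mu(Z)), dtype=float)
        if np.max(np.abs(Z_new - Z)) < tol:   # stop when Z^(i) ~ Z^(i-1)
            return Z_new
        Z = Z_new
    raise RuntimeError("policy-index iteration did not converge")
```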
As the recursive dynamic (44) is linear, the paths it generates follow Gaussian random walks, with expected values and variances at time t given by
E[Ỹ_t] = (Π_{i=0}^{t-t_0} B_i) Ỹ_{t_0} and V[Ỹ_t] = Σ_{i=0}^{t-t_0} (Π_{j=0}^{i} B_j) HH′ (Π_{j=0}^{i} B_j)′.
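Equivalently, the moments can be propagated recursively period by period, which avoids forming the matrix products explicitly. A minimal Python sketch (the inputs are hypothetical placeholders):

```python
import numpy as np

def propagate(B_seq, H, Y0):
    """Mean and covariance of Y_t = B_t Y_{t-1} + H nu_t, nu_t ~ N(0, I),
    starting from the state Y0 known exactly at date t0.

    B_seq : iterable of transition matrices B_{t0+1}, ..., B_T
            (each must already embed its policy index Z_t).
    """
    mean = np.asarray(Y0, dtype=float)
    cov = np.zeros((mean.size, mean.size))      # Y_{t0} is deterministic
    means, covs = [mean.copy()], [cov.copy()]
    for B in B_seq:
        mean = B @ mean                          # E[Y_t] = B_t E[Y_{t-1}]
        cov = B @ cov @ B.T + H @ H.T            # V[Y_t] = B V B' + HH'
        means.append(mean.copy()); covs.append(cov.copy())
    return means, covs
```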
Numerical simulations
In this section, we illustrate the impact of the technological spillovers on the optimal path of the economy in the case of the global warming problem. The expected trends of the main economic variables and the confidence intervals in which they lie can be derived under various assumptions using numerical simulations. 22 Of particular interest is the relationship between the technological spillovers and the CCVE (SEPs are not considered) given a confidence level. [START_REF] Yaari | Uncertain lifetime, life insurance, and the theory of the consumer[END_REF]
Calibration of the model
In the following, EQ is defined as the difference between a tipping point level of CO₂ in the atmosphere M̄ and the level of GHG at date t, ℓ_t, expressed in CO₂ equivalent:
e_t = M̄ - ℓ_t. 24 Hence, e_t can be thought of as a global "carbon budget" at date t that the economy should not deplete entirely to avoid being environmentally bankrupt. The CCVE value ρ* corresponds to the minimal MRS value that allows EQ to stay above 0 along the optimal path with confidence level α = 1. Parameter θ is determined from the IPCC Fifth Assessment Report, which estimates that 100 years after a 100 Gt CO₂ pulse in the preindustrial atmosphere, there would remain 40% of the 100 Gt CO₂ emitted, while after 1000 years 25% would remain. Accordingly, denoting by ℓ̂ the preindustrial level of CO₂, after an initial period ℓ_0 = ℓ̂ + 100, we have ℓ_100 = ℓ̂ + 40 and ℓ_1000 = ℓ̂ + 25. Using (19) without industrial interferences and solving the recursion gives
ℓ_t = θ^t(ℓ̂ + 100) + (1 - θ^t)[M̄ - ê/(1 - θ)].
Using the IPCC's estimates for ℓ_100 and ℓ_1000 we obtain
M̄ - ℓ̂ - ê/(1 - θ) = (40 - 100θ^100)/(1 - θ^100) = (25 - 100θ^1000)/(1 - θ^1000).
The last equality can be expressed as 1 - 5x + 4x^10 = 0 where x = θ^100, which gives θ ≈ (1/5)^{1/100} ≈ 0.984 independently of the choice of ℓ̂ and M̄. 25 We deduce parameter ê by assuming that EQ has reached its long term equilibrium e_N = M̄ - ℓ̂ in the preindustrial period, which gives ê = (1 - θ)e_N. Considering that ℓ̂ = 2176 Gt CO₂ (280 ppm) and M̄ = 5439 Gt CO₂ (700 ppm), we obtain ê ≈ 52.1. 26 Our reference year is 2005, which corresponds to a GHG level equal to 2945 Gt CO₂ (379 ppm), hence an initial EQ index e_0 = 2494.17 Gt CO₂.
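This calibration is easy to verify numerically. A minimal Python sketch using the values quoted above (the root-finding step and variable names are ours, not the paper's):

```python
import numpy as np

# x = theta**100 solves 1 - 5x + 4x**10 = 0 (i.e. 4x**10 - 5x + 1 = 0);
# x = 1 is the trivial root, the relevant real root in (0, 1) is ~ 1/5.
roots = np.roots([4, 0, 0, 0, 0, 0, 0, 0, 0, -5, 1])
x = next(r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < 0.99)
theta = x ** (1 / 100)                         # ~ (1/5)**(1/100) ~ 0.984

# The two IPCC data points are matched (this difference should be ~ 0):
gap = (40 - 100 * x) / (1 - x) - (25 - 100 * x**10) / (1 - x**10)

e_N = 5439 - 2176                              # e_N = M - l_hat, in Gt CO2
e_hat = (1 - theta) * e_N                      # e_hat ~ 52.1 Gt CO2
print(theta, gap, e_hat)
```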
The AGT index in the reference year is deduced from the World Bank's estimates of the world CO₂ intensity, which in our framework is given by function (18). 27 The data show that the world CO₂ intensity sharply decreased over the second half of the 20th century and has been plateauing since 2000 (the period covered by the data is 1960-2013), with a value in the reference year ι_2005 = 0.51 kg/US$. We assume in our baseline setup (table 1) that the maximum emission intensity ϕ is equal to 6.5 kg/US$ (for static comparative exercises, we also consider the cases ϕ = 7.5 kg/US$ and ϕ = 5 kg/US$, see tables 6 and 7), so that the level of the AGT index in the reference year is given by µ_2005 = q_2005(ϕ - ι_2005)/ξ where q_2005 = 47 trillions US$. The effectiveness of the green technologies is captured by the ratio ξ/ϕ. It is set to 1/2 in the baseline setup, and we also consider the strong effectiveness (ξ/ϕ = 1) and the weak effectiveness (ξ/ϕ = 1/3) cases (cf. tables 4 and 5).
Parameters of the GDP dynamics (1) are set to g = 0.99 and ĝ = 0.2. These values correspond to a growth rate of 3.1% for the first year (2005), a growth rate of 0.9% in year 2055 (t = 50) and then 0.4% in year 2105 (t = 100). 28 The risk aversion coefficient γ is set to 3 and ψ to 10%, which corresponds to a stationary state value of the interest rate r^S of approximately 6% in the baseline scenario. The standard deviation of the shocks affecting the GDP is set to σ_κ = 1.5 (trillions US$), which corresponds to 3.19% of the 2005 GDP value. We also consider large and small GDP shocks, corresponding to σ_κ = 2 (table 2) and σ_κ = 1 (table 3) respectively. The shocks affecting firms' beliefs are set to σ_ε = σ_η = 0.01 (trillions US$), hence 0.21% of the 2005 GDP value in the baseline. Table 8 reports values assuming σ_η = 1, which is extremely large, but small variations around the baseline proved insignificant (we examined cases where they are tenfold the baseline values and found no significant effects). For each set of parameters, the consumer's MRS ρ is derived from (35) so that the interest rate matches its 2005 value, equal to 6%. 29 All tables show the resulting estimates when λ ranges from 0 to 0.9 in decimal steps.
Results
[Table 1 about here.] Table 1 presents the results of our baseline scenario. For each value of λ indicated in the first column, the second column indicates the implied consumer's valuation of the environment ρ given the other parameter values. The following columns σ_y, σ_µ and r^S give the corresponding values of the (one period) standard deviations of the total wealth y_t = c_t + ρe_t and of the AGT index µ_t respectively, and the long-term stationary level of the interest rate. The rest of the table reports the environmental policy parameters.
Column ρ* gives the CCVE at confidence level 1, column r# the corresponding long-term expected interest rate during the transitory phase toward an ENP, and column T indicates the date at which the expected EQ level under this policy merges with the expected ENP. A value equal to 199 indicates that the expected EQ does not reach the ENP within that time horizon (the maximum considered in the simulations). Finally, column e_T indicates the expected EQ level reached at date T under the policy.
We observe that standard deviations do not evolve monotonically with λ: σ_y first decreases and then increases, while the reverse holds for σ_µ. This is due to the changes in coefficients {a_k} that are triggered by the variations of λ and ρ. The stationary level of the interest rate r^S is also non-monotonic: it first increases and then decreases. Still, it stays relatively close to r_0 = 6%. The expected optimal interest rate r# targeted by the social planner is particularly high compared to r^S: while r^S stays around 6%, r# ranges from 18% to almost 33%.
The consumer's valuation ρ is very large for low values of λ and decreases as λ increases, while the trend for the CCVE ρ* is the opposite: it increases with λ. One can observe that in this baseline case, ρ* is lower than ρ as long as λ is lower than 0.7. For example, if λ = 0.5, the CCVE is 15.93 $/t, while the implicit consumers' valuation is 23.79 $/t. [START_REF] Perla | Equilibrium imitation and growth[END_REF] Figure 3 depicts the evolution of these values and shows that they cross approximately at λ = 0.65.
[Figure 3 about here.] Finally, under the baseline scenario, carbon neutrality is a very long term objective: only when λ is larger than 0.7 is carbon neutrality reached within two centuries (but less than a century if λ is very large).
The fact that ρ decreases with λ is easily understood. The larger λ, the less dependent the economy is on past values of the AGT index and the more reactive investments are to market prices (which would graphically correspond to a flat capital demand curve). Under laissez-faire, when λ is large consumers expect a low demand of AGT investments and thus an increase in consumption and a decrease in EQ over time. As we calibrate the model so that the interest rate matches its 2005 value, equal to 6%, whatever the value of λ, it comes from (35) that ρ should be lower the larger λ is. The impact of λ on r# may come from two channels: a direct one, through the covariance term of (40), and an indirect one through changes in ρ*. Numerical computations of the first effect with our baseline calibration show a negative but very small impact. On the other hand, (31) and fig. 1 show that r^e_# (and thus r#) increases with ρ. Intuitively, the social planner must implement a larger subsidy when EQ matters more from a social viewpoint, which increases capital demand and results in a larger long run optimal expected interest rate. Hence, the increase in r# with λ shown in table 1 comes from the sharp increase in the MRS ρ*. Therefore, the major impact of technological spillovers on the CCVE comes from their impact on catastrophic risk: a large subsidy is required when λ is large because the risk of hitting the environmental tipping point is large. Otherwise, assuming that the subsidy is small while λ is large, EQ would reach a lower minimum than the one prescribed by the environmental safety constraint. The environmental path is then more risky and requires a larger MRS ρ* to compensate. These CCVE values can be compared to the social costs of carbon found in the literature: Nordhaus (2007) finds an SCC of 11.5 $/t CO₂ for 2015; more recently, with similar calibrations for the rate of pure time preference, Traeger (2015) and Golosov et al. (2014) find SCCs of 15.5 and 16.4 $/t CO₂ respectively. Tables 2 and 3 allow us to assess the effects of the magnitude of the shocks affecting the GDP.
[Table 2 about here.]
[Table 3 about here.] Not surprisingly, the standard deviations of ỹ_t and μ̃_t increase with σ_κ, whereas the implied consumers' valuation ρ and the long-term interest rate r^S decrease. The policy parameters are also negatively affected by a change in σ_κ, but to a lesser extent: large shocks slightly lower the optimal interest rate r# and the CCVE, while the ENP is reached faster (and conversely for small shocks). Large GDP shocks lower the market interest rates by increasing the precautionary motive for savings. This compensates for the increased variances and explains a lower CCVE. Variations in the effectiveness of the green technologies, reported in tables 4 and 5, have an intuitive impact on the policy parameters: they are larger the less effective the green technologies are.
[Table 4 about here.]
[Table 5 about here.] Similarly, the variations in the environmental damages due to industrial emissions reported in tables 6 and 7 show that the more they impact the environment, the larger the policy parameters are.
[Table 6 about here.] [Table 7 about here.] Finally, table 8 shows the impact of a variation in the public signal.
[Table 8 about here.]
It should be noted that the change in σ_η is 100 times larger than the baseline value. Smaller values (a tenfold increase of either the public or the private signal standard deviation) did not show any significant effect. Nevertheless, the standard deviations of ỹ_t and μ̃_t are not much affected. Interestingly, the range of ρ* is larger than in the baseline (its smaller value is reduced while its larger value is increased) while r# is reduced (by approximately 3 points). Using our baseline calibration and assuming λ = .8, fig. 4 and fig. 5 illustrate the dynamics of the AGT and EQ indexes respectively, starting from the first period of the implementation of the environmental policy (2006). In these figures, the expected paths and the upper and lower limits of a 95% confidence interval around these paths are depicted. Stochastically generated shocks allow us to illustrate the difference between a laissez-faire situation and the corresponding "realized" paths of the indexes under the environmental policy.
[Figure 4 about here.] Also depicted in fig. 4 is the carbon neutral path (the dashed blue curve, indicated µ_t^CN) that the expected value of the AGT index under the policy joins at date T = 147. The laissez-faire curve shows a positive trend, but significantly below the one under the policy (actually, the curve stays below the lower bound of the 95% interval). As a result, fig. 5 shows that EQ deteriorates rapidly under laissez-faire (the carbon budget is entirely depleted in 25 years).
[Figure 5 about here.] As mentioned above, EQ also deteriorates under the optimal policy at first, reaching its minimum after approximately 80 years, but it remains sufficiently high to safely reduce the odds of reaching 0. It then increases, but the level reached at the time the economy is carbon neutral (e_T = 1541 Gt CO₂) is considerably lower than the 2005 level (e_0 = 2494 Gt CO₂). The policy scheme z_t and the resulting policy index Z_t are illustrated in fig. 6.
[Figure 6 about here.] The subsidy z_t increases rapidly during the first 5 years (from 15% to 33%) and then slowly decreases toward 28% over the following century. As the targeted interest rate level is 27.58%, the net expected subsidy rate is thus around 5%. The actual level is stochastic, as shown in fig. 7 where the dynamics of the interest rate under the policy and under laissez-faire are depicted together with the corresponding expected rates. Interest rates are similarly affected by the GDP shocks, the amplitude of the variations being larger under the policy than under laissez-faire.
[Figure 7 about here.] Fig. 8 illustrates the per-period investment rates (relative to the AGT index level) under laissez-faire and under the environmental policy. The investment is negative under laissez-faire in the first 5 periods, that is, firms tend to invest increasingly in polluting technologies, while investment in green technology is always positive under the policy. Investment rates are comparable after 15 years (but not the levels, as noted above), and converge to the growth rate of the economy.
[Figure 8 about here.] Fig. 9 shows the consumption path under the optimal policy and under laissez-faire. A higher consumption differential between laissez-faire and the regulated economy means that the policy implies a higher investment level and thus a higher cost in terms of postponed welfare.
[Figure 9 about here.] Fig. 10 shows total wealth dynamics. Because of the decrease in the environmental quality, the laissez-faire curve decreases rapidly.
[Figure 10 about here.]
There are two aspects to the welfare impact of an environmental policy that can be analyzed here. First, implementing a subsidy for AGT has the immediate and most direct effect of increasing the rate of investment in green technologies, thus introducing a trade-off between current consumption and future environmental quality. Since our model disentangles green technology investment from productivity growth, increasing investment has no stock effect on the GDP level. On the other hand, the environmental quality is a stock, and reducing the size of the negative impact of production on the environment has long term effects on welfare through environmental quality accumulation.
Conclusion
In this paper, we analyze the AGT process when demand spillovers are a key determinant of technical efficiency and when the economy is subject to uncertainty. We consider the problem of a social planner in charge of determining an investment subsidy policy to incentivize firms to increase their AGT. Because investment choices are ultimately made by private agents who react to the economic context according to their beliefs, and because the efficiency of technologies is the result of their many choices, the public policy can only imperfectly direct the economy to a sustainable path. Besides, the value that should be ascribed to the environment to guide the public policy is difficult to assess because of the lack of scientific knowledge and the uncertainty that affects the economy. While extremely stylized, our model allows us to ascertain the effects of these uncertainties on the optimal policy. It provides a tool to estimate the value of the environment considering that society should decide on a safety level in order to avoid an environmental catastrophe. Of course, the CCVE values that we obtain depend on the safety level chosen but, more importantly, they are very dependent on the spillover effects at work in the AGT process. Our numerical simulations show that the sharper the demand spillovers are, the higher the CCVE is and the larger the subsidies given to firms should be to direct their choices toward a carbon neutral path. We have considered in this analysis that productivity growth is unaffected by the technologies employed: technologies only differ in their impact on the environment. Including in the analysis the process that leads to productivity growth would make it possible to assess the intertemporal trade-off between growth, consumption and environmental safety when society faces both economic and environmental risks. This undertaking could also include other policy instruments than investment subsidies, like the environmental tax, and would certainly give better estimates of the CCVE.
where S t and s t are the state and the control variables respectively. The first-order equation is given by
∂u_φ(c_t, e_t)/z_t - r_t + τη_t = βE_t[∂v(S_t; ẽ_{t+1})/∂S] (50)
C Proof of Proposition 2
From (22) and (23), the first-order condition with respect to z_t is given by
E[∂u φt /∂c] = βE [∂W t+1 /∂µ] (51)
where u_φt and W_t are abbreviated notations for u_φ(c_t, e_t) and W(µ_t, e_t, q_t) respectively. The envelope theorem gives
∂W_t/∂µ = ξβE[∂W_{t+1}/∂e] + βE[∂W_{t+1}/∂µ] (52)
and, using (60), (57) yields
k = (θ - r^e)/(r^e + 1). (61)
We have k > 0 if -1 < r^e < θ (as we cannot have r^e > θ and r^e < -1). The condition k < 1 implies θ - r^e < 1 + r^e, hence (θ - 1)/2 < r^e. We thus have 1 > k > 0 if (θ - 1)/2 < r^e < θ, which rules out the negative root of (29), since we cannot have (θ - 1)/2 < -(A + 1 - θ)/2. The lower-bound condition on r^e gives (θ - 1)/2 < (A - 1 + θ)/2, which is always true. The upper-bound condition on r^e can be written as θ > (A - 1 + θ)/2, which gives A < θ + 1. Squaring both terms, we arrive at 4ρξ < (1 + θ)² - (1 - θ)² = 4θ, hence ρξ < θ.
We have k < 0 if either r e < -1 or if r e > θ . Moreover we have |k| < 1 if r e -θ < 1 + r e which is always true. Hence, we have -1 < k < 0 if r e > θ or if r e < -1. Taking the positive root, this condition gives A -1 + θ > 2θ i.e. A > 1 + θ hence ρξ > θ.
Substituting (A -1 + θ)/2 for r e in (61) yields (32). From (60), the condition c = θ is equivalently stated as r e = -1 which is always the case with the positive root. Using (59) and as initial value r e t 0 = 1/v t 0 -c, the solution of (28) at period t ≥ t 0 satisfies
r^e_t = (1 - k)(r^e_{t_0} + c) / [(1 - k)k^{t-t_0} + k_0(1 - k^{t-t_0})(r^e_{t_0} + c)] - c.
Using (1 - k)/k_0 = r^e + c, we get
r^e_t = r^e + (r^e + c)(r^e_{t_0} - r^e)k^{t-t_0} / [r^e + c + (1 - k^{t-t_0})(r^e_{t_0} - r^e)].
Using (60) and (31), we obtain r e + c = 2r e + 1 -θ = A. Substituting allows us to obtain (30) which converges to r e when r e > r e t 0 if A > r e -r e t 0 hence r e t 0 > -(A+1-θ)/2.
E Proof of Proposition 4
This result can be obtained recursively by observing that (30) can be rewritten in terms of normalized gaps as
d_{t_0+h} = f^h(d_{t_0}) with f^0(x) = x and f^h(x) ≡ f ∘ f^{h-1}(x) for all h ≥ 1. Indeed, it is true for h = 1 since d_{t_0+1} = f(d_{t_0}), and supposing it is true for t + h, it is true for t + h + 1: we have f(d_{t+h}) = f(f^h(d_t)) = f^{h+1}(d_t) = d_{t+h+1}. A direct and alternative proof is obtained using
f^h(x) = k^h x/[1 + (1 - k^h)x]: we get
f(f^h(x)) = k f^h(x) / [1 + (1 - k)f^h(x)] = k^{h+1} x / [1 + (1 - k^h)x + (1 - k)k^h x] = k^{h+1} x / [1 + (1 - k^{h+1})x] = f^{h+1}(x).
F Proof of Proposition 5
Using (1), (16) and (19), we have
ỹ_{t+1} = c̃_{t+1} + ρẽ_{t+1} = gq_t + ĝ + κ̃_{t+1} - (μ̃_{t+2} - µ_{t+1}) + ρ(θe_t + ξµ_t - ϕq_t + ê) (62)
in which, using (36),
μ̃_{t+2} = a_1µ_{t+1} + a_2(θe_t + ξµ_t - ϕq_t + ê) + a_3(gq_t + ĝ + κ̃_{t+1}) + a_4 + a_5(τη̃_{t+1}) + Z_{t+1} (63)
where Z_{t+1} is a function of the z_{t+h}, h = 2, 3, ... As the resulting expression of ỹ_{t+1} is a linear combination of iid normally distributed random shocks, it is also normally distributed, with variance σ_y² given by (38), which is independent of t.
The coefficients {a_j}_{j=1,...,5} and Z_t in (36) are derived as follows. Using y_t = q_t - µ_{t+1} + µ_t + ρe_t yields
E_t[ỹ_{t+1}] - y_t = ĝ - (1 - g + ρϕ)q_t - E_t[μ̃_{t+2}] + 2µ_{t+1} - (1 - ρξ)µ_t - ρ[(1 - θ)e_t - ê]
which gives, using (63) and collecting terms,
E_t[ỹ_{t+1}] - y_t = ĝ(1 - a_3) - [1 - (1 - a_3)g + (ρ - a_2)ϕ]q_t - (a_4 + Z_{t+1}) + (2 - a_1)µ_{t+1} - [1 - (ρ - a_2)ξ]µ_t - [ρ(1 - θ) + a_2θ]e_t + (ρ - a_2)ê.
Replacing into (35) gives
r t = ψ -γ 2 σ 2 y /2 + γ{[ϕa 2 -(1 -g + ϕρ) -a 3 g]q t -(a 1 -2)µ t+1 -[1 + ξ(a 2 -ρ)]µ t } + γ{ĝ(1 -a 3 ) -e t [a 2 θ + ρ(1 -θ)] -ê(a 2 -ρ) -a 4 -Z t+1 }. (64)
Equalizing with (13), which can be rewritten as
r t = z t + τ η t -(1 -λ)(µ t+1 -µ t ), (65)
As the RHS of this inequality is increasing in ρ and null when ρ = 0, this is the case when ρ is not too large. Since we assume 2 + (1 -λ)/γ -θ > ρξ (to have a 1 > 0), a sufficient condition for a 3 < 1 is given by
(1/2){√([1 - θ + (1 - λ)/γ]² + 4[2 + (1 - λ)/γ - θ]) - 1 + θ - (1 - λ)/γ} ≤ ξ/ϕ
which can be written as (1/2)(√(y² + 4y + 4) - y) ≤ K where y = 1 - θ + (1 - λ)/γ and K = ξ/ϕ, which gives
0 ≥ y² + 4y + 4 - (2K + y)² = 4y(1 - K) + 4(1 - K²) = 4(1 - K)(y + 1 + K).
Consequently, a sufficient condition for a_3 < 1 is ξ ≥ ϕ. We also have a_5 > 0, and γa_5 < 1 if γ < 1 - λ + γ(2 - a_1), hence if 0 < 1 - λ + γ(1 - a_1),
which is always the case since a 1 < 1. Z t is thus an exponential smoothing of the public policy scheme {z t+h } h≥0 .
Differentiating P(a_1 - 1) = 0 with respect to λ yields
da_1/dλ = -(1 - a_1) / {γ[a_1^0 - θ + 2(1 - a_1)]} < 0
which also gives, using (71),
da_2/dλ = -[ρ(1 - θ)/ξ] da_1/dλ > 0.
Differentiating a_5 with respect to λ gives
da_5/dλ = a_5²(1 + γ da_1/dλ)
where, using P(a_1 - 1) = 0,
1 + γ da_1/dλ = [a_1^0 - θ + (1 - a_1)] / [a_1^0 - θ + 2(1 - a_1)] = ρξ / {(1 - a_1)[a_1^0 - θ + 2(1 - a_1)]} = -(da_1/dλ) γρξ/(1 - a_1)² > 0.
We thus have da_5/dλ > 0. Differentiating a_3 with respect to λ yields
da_3/dλ = γ{-ϕ[1 - λ + γ(2 - a_1 - g)](da_2/dλ) + [1 - g + ϕ(ρ - a_2)](1 + γ da_1/dλ)} / [1 - λ + γ(2 - a_1 - g)]²
where, using a_1^0 = 1 + (1 - λ)/γ,
1 - λ + γ(2 - a_1 - g) = γ(a_1^0 - a_1 + 1 - g)
and, using (69) and (71),
ρ - a_2 = ρ(a_1^0 - a_1)/(a_1^0 - a_1 + 1 - θ) = (a_1^0 - a_1)(1 - a_1)/ξ.
Replacing, we obtain that da_3/dλ > 0 if
0 > [ϕ(1 - θ)/ξ](a_1^0 - a_1 + 1 - g) - [ξ/(1 - a_1)²][1 - g + ϕ(a_1^0 - a_1)(1 - a_1)/ξ]
= (1 - g)[ϕ(1 - θ)/ξ - ξ/(1 - a_1)²] + ϕ(a_1^0 - a_1)[(1 - θ)/ξ - 1/(1 - a_1)]
= [(1 - g)ξ/(1 - a_1)²][ϕa_2²/((1 - θ)ρ²) - 1] + [ϕ(a_1^0 - a_1)/(1 - a_1)][a_2/ρ - 1].
As a_2 < ρ, this is always the case if a_2 < ρ√((1 - θ)/ϕ), hence if a_1 > 1 - ξ/√((1 - θ)ϕ), which yields
(1/2){√([1 - θ + (1 - λ)/γ]² + 4ρξ) - 1 + θ - (1 - λ)/γ} < ξ/√((1 - θ)ϕ).
As the LHS of this inequality is increasing in ρ and null when ρ = 0, we thus have da_3/dλ > 0 when ρ is not too large. Since we assume 2 + (1 - λ)/γ - θ > ρξ (to have a_1 > 0), a sufficient condition for da_3/dλ > 0 is given by
(1/2){√([1 - θ + (1 - λ)/γ]² + 4[2 + (1 - λ)/γ - θ]) - 1 + θ - (1 - λ)/γ} < ξ/√((1 - θ)ϕ)
which can be written as (1/2)(√(y² + 4y + 4) - y) < K where y = 1 - θ + (1 - λ)/γ and K = ξ/√((1 - θ)ϕ), which gives
0 > y² + 4y + 4 - (2K + y)² = 4y(1 - K) + 4(1 - K²) = 4(1 - K)(y + 1 + K).
Consequently, a sufficient condition for a_3 to increase with λ is ξ > √((1 - θ)ϕ).
for all t > T, where Γ_t = gΓ_{t-1} with
Γ_T ≡ (1 - g)(q_S - q_T)[γ + (1 - λ + γ(1 - g))ϕ/ξ]/(1 - a_5γg).
We can write the dynamic of the economy along the SEP as
Ỹt = B Ỹt-1 + H νt (74)
where Ỹt = (μ t , ẽt , qt , Γ t , 1) is a column vector,
B = ( a_1  a_2  a_3  1  a_4 + a_5r^S/(1 - a_5γ)
      ξ    θ    -ϕ   0  ê
      0    0    g    0  ĝ
      0    0    0    g  0
      0    0    0    0  1 ),   H = ( a_5τσ_η  0
                                     0        0
                                     0        σ_κ
                                     0        0
                                     0        0 ),
, and νt = (ν 1t , ν2t ) is a column vector of independent standardized Gaussian variables.
We have E[Ỹ_t] = BE[Ỹ_{t-1}], which gives
Ỹ_t - E[Ỹ_t] = B(Ỹ_{t-1} - E[Ỹ_{t-1}]) + Hν̃_t.
The covariance matrix thus satisfies
E[( Ỹt -E[ Ỹt ])( Ỹt -E[ Ỹt ]) ] = B[E( Ỹt-1 -E[ Ỹt-1 ])( Ỹt-1 -E[ Ỹt-1 ]) ]B + HH .
By definition of an SEP, we have E[ẽ t ] = e S and E[ι t qt ] = ϕE[q t ] -ξE[μ t ] = (1 -θ)(e N -e S ) for all t. As q t converges to its stationary value q S , the stationary value of the AGT index is given by µ S ≡ q S ϕ/ξ + (1 -θ)(e N -e S )/ξ. Hence, the stationary value of Ỹt is given by Y S = (µ S , e S , q S , 0, 1) and satisfies Y S = BY S . The stationary covariance matrix satisfies
V S ≡ E[( Ỹt -Y S )( Ỹt -Y S ) ] = BE[( Ỹt-1 -Y S )( Ỹt-1 -Y S ) ]B + HH ,
and thus solves the Lyapunov equation V S = BV S B + HH . As neither B nor H depend on e S and q T , V S is independent of the date at which the SEP is reached (it only depends on the parameters of the model). As the dynamic (74) is linear, the distribution of Ỹt follows a Gaussian distribution with mean Y S and covariance matrix V S at the stationary equilibrium. Denoting by σ e the standard deviation corresponding to EQ, its stationary distribution must satisfy α = Pr{ẽ t ≤ ē} = Pr{(ẽ t -e S )/σ e ≤ (ē -e S )/σ e }.
Denoting by Φ(x) the CDF of the standardized Gaussian variable at level x and using Φ(-x) = 1 -Φ(x), we get α = Pr{ẽ t ≤ ē} = 1 -Φ((e S -ē)/σ e ), hence e S = ē + σ e Φ -1 (1 -α).
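Numerically, the stationary covariance and the stabilized level e_S can be computed with standard tools. The sketch below uses an arbitrary stable placeholder pair (B, H) rather than the calibrated matrices of (74):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov
from scipy.stats import norm

# Placeholder stable transition block and shock loading (NOT the calibration;
# state ordering assumed (mu, e, q), with e in second position).
B = np.array([[0.80, 0.01,  0.05],
              [0.30, 0.95, -0.20],
              [0.00, 0.00,  0.90]])
H = np.array([[0.01, 0.00],
              [0.00, 0.00],
              [0.00, 0.15]])

# Stationary covariance of Y_t = B Y_{t-1} + H nu_t solves V = B V B' + HH'.
V_S = solve_discrete_lyapunov(B, H @ H.T)
sigma_e = np.sqrt(V_S[1, 1])          # stationary std deviation of EQ

alpha, e_bar = 0.001, 0.0             # safety level and EQ floor
e_S = e_bar + sigma_e * norm.ppf(1 - alpha)   # e_S = e_bar + sigma_e Phi^{-1}(1 - alpha)
```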
The covariance term in (27) is derived using u φ (y t ) = -e -γyt and E[e -γ ỹ] = e -γ(E[ỹ]-γV[ỹ]/2) which gives
u φ (ỹ t+1 ) E t-1 u φ (ỹ t+1 ) = e -γ(ỹ t+1 -E t-1 [ỹ t+1 ]+γσ 2 y +1 /2) .
Consequently,
Cov t-1 rt , u φ (ỹ t+1 ) E t-1 u φ (ỹ t+1 ) = E (r t -E t-1 [r t ]) u φ (ỹ t+1 ) E t-1 u φ (ỹ t+1 ) -1 = E {[1 -(1 -λ)a 5 ]a 5 τ ηt -(1 -λ)a 3 κt } e -γ(ỹ t+1 -E t-1 [ỹ t+1 ]+γσ 2 y +1 /2) -1 = e -γ 2 σ 2 y +1 /2 {[1 -(1 -λ)a 5 ]E τ ηt e -γ(ỹ t+1 -E[ỹ t+1 ]) -(1 -λ)a 3 E κt e -γ(ỹ t+1 -E t-1 [ỹ t+1 ]) }
where the last term can be written as
E κt e -γ(ỹ t+1 -E t-1 [ỹ t+1 ]) = E[κ t e -γ[(1-a 3 )g+(1-a 1 )a 3 +(a 2 -ρ)ϕ]κt ]
× E[e -γ[(1-a 3 )κ t+1 +(1-a 1 )a 5 τ ηt-a5τ ηt+1 ] from independence. Using E[e -γ X ] = e -γ(E X-γσ 2 X /2) for a normal random variable X, it comes that the last term is equal to e γ 2 {σ 2 y +(1-a 1 ) 2 a 2 5 τ 2 σ 2 η }/2 . Moreover, using
E Xe -γ X = - d dγ E[e -γ X ] = - d dγ e -γ(E X-γσ 2 X /2) = (E X -γσ 2 X )e -γ(E X-γσ 2 X /2) ,
we get
E[κ t e -γ[(1-a 3 )g+(1-a 1 )a 3 ]κt ] = -γ[(1-a 3 )g+(1-a 1 )a 3 +(a 2 -ρ)ϕ] 2 σ 2 κ e γ 2 [(1-a 3 )g+(1-a 1 )a 3 ] 2 σ 2 κ /2
which gives
E κt e -γ(ỹ t+1 -E t-1 [ỹ t+1 ]) = -γ[(1 -a 3 )g + (1 -a 1 )a 3 + (a 2 -ρ)ϕ] 2 σ 2 κ e γ 2 σ 2 y t+1 /2 .
Similarly, we have E (τ ηt )e -γ(ỹ t+1 -Eỹ t+1 ) = E (τ ηt )e -γa 5 (1-a 1 )(τ ηt)
× E[e -γ{(1-a 3 )κ t+1 +[(1-a 3 )g+(1-a 1 )a 3 ]κt-a 5 (τ ηt+1 )} ] = -γ(1 -a 1 ) 2 a 2 5 τ 2 σ 2 η e γ 2 σ 2 y t+1 /2 .
Collecting terms, we get
cov(r̃_t, u′_φ(ỹ_{t+1})/E_{t-1}[u′_φ(ỹ_{t+1})]) = (1 - λ)a_3γ[(1 - a_3)g + (1 - a_1)a_3 + (a_2 - ρ)ϕ]²σ_κ² - [1 - (1 - λ)a_5]γ(1 - a_1)²a_5²τ²σ_η² = γ(1 - λ)a_3(σ_{y+1}² - σ_y²) - γ[1 - (1 - λ)(a_5 - a_3)](1 - a_1)²a_5²τ²σ_η², which gives (40).
Figure 1: Long run optimal interest rate.
Parameters (baseline, table 1): σ_η = 0.01, σ_κ = 0.15, σ_ε = 0.01, ξ = 6.5/2, ϕ = 6.5, γ = 3, ψ = 0.1, r_0 = 6%, α = 0.1%, µ_0 = 8.66, σ_κ/q_0 = 3.19%. Unit: 10 trillions US$.
Parameters (large GDP shocks, table 2): σ_η = 0.01, σ_κ = 0.2, σ_ε = 0.01, ξ = 6.5/2, ϕ = 6.5, γ = 3, ψ = 0.1, r_0 = 6%, α = 0.1%, µ_0 = 8.66, σ_κ/q_0 = 4.26%.
Parameters (small GDP shocks, table 3): σ_η = 0.01, σ_κ = 0.1, σ_ε = 0.01, ξ = 6.5/2, ϕ = 6.5, γ = 3, ψ = 0.1, r_0 = 6%, α = 0.1%, µ_0 = 8.66, σ_κ/q_0 = 2.13%.
Parameters (highly effective GT, table 4): σ_η = 0.01, σ_κ = 0.15, σ_ε = 0.01, ξ = 6.5, ϕ = 6.5, γ = 3, ψ = 0.1, r_0 = 6%, α = 0.1%, µ_0 = 4.3, σ_κ/q_0 = 3.19%.
Parameters (less effective GT, table 5): σ_η = 0.01, σ_κ = 0.15, σ_ε = 0.01, ξ = 6.5/3, ϕ = 6.5, γ = 3, ψ = 0.1, r_0 = 6%, α = 0.1%, µ_0 = 12.99, σ_κ/q_0 = 3.19%.
Parameters (large emissions potential, table 6): σ_η = 0.01, σ_κ = 0.15, σ_ε = 0.01, ξ = 7.5/2, ϕ = 7.5, γ = 3, ψ = 0.1, r_0 = 6%, α = 0.1%, µ_0 = 8.76, σ_κ/q_0 = 3.19%.
Parameters (low emissions potential, table 7): σ_η = 0.01, σ_κ = 0.15, σ_ε = 0.01, ξ = 2.5, ϕ = 5, γ = 3, ψ = 0.1, r_0 = 6%, α = 0.1%, µ_0 = 8.44, σ_κ/q_0 = 3.19%.
Figure 2: Convergence of the normalized gap.
Figure 3: Evolution of the implied consumers' valuation ρ and of the CCVE ρ* with λ (baseline). Unit: US$/t CO₂.
Figure 10: Total wealth dynamic. Baseline with λ = .8. Unit: 10 trillions US$.
and the envelope theorem gives
∂v(S_{t-1}; e_t)/∂S = r_{t-1} ∂u_φ(c_t, e_t)/∂c + βE_t[∂v(S_t; ẽ_{t+1})/∂S].
Replacing the last term using (50), we get
∂v(S_{t-1}; e_t)/∂S = (1 + r_{t-1}) ∂u_φ(c_t, e_t)/∂c.
Taking the expectation and replacing in (50) yields (15), where 1 + r_t on the RHS is factorized out of the expected value since the date-t interest rate is a known parameter.
and
∂W_t/∂e = E[∂u_φt/∂e] + θβE[∂W_{t+1}/∂e]. (53)
Evaluating (52) in expectation one period ahead and using (51) gives
E[∂W_{t+1}/∂µ] = E[∂u_φt+1/∂c] + ξβE[∂W_{t+2}/∂e] = E[∂u_φt/∂c]/β,
hence
E[∂W_{t+2}/∂e] = E[∂u_φt/∂c]/(ξβ²) - E[∂u_φt+1/∂c]/(ξβ).
Using (51), (52) can be written as
∂W_t/∂µ = ξβE[∂W_{t+1}/∂e] + E[∂u_φt/∂c]. (54)
Evaluating (53) in expectation one period ahead gives
E[∂W_{t+1}/∂e] = E[∂u_φt+1/∂e] + (E[∂u_φt/∂c]/β - E[∂u_φt+1/∂c])θ/ξ.
We can thus express (54) as
∂W_t/∂µ = (1 + θ)E[∂u_φt/∂c] + ξβE[∂u_φt+1/∂e] - θβE[∂u_φt+1/∂c]
which, evaluated one period ahead, gives
E[∂W_{t+1}/∂µ] = (1 + θ)E[∂u_φt+1/∂c] + ξβE[∂u_φt+2/∂e] - θβE[∂u_φt+2/∂c] = E[∂u_φt/∂c]/β
using (51). Reorganizing terms, we obtain
E[∂u_φt/∂c] = (1 + θ)βE[∂u_φt+1/∂c] - θβ²E[∂u_φt+2/∂c] + β²ξE[∂u_φt+2/∂e]. (55)
From (24) and (27), we have E[∂u_φt/∂c] = (1 + r_t^e)βE[∂u_φt+1/∂c]. Substituting in (55) for initial dates t and t + 1 yields
(β²/(δ_t^e δ_{t+1}^e))E[∂u_φt+2/∂c] = (1 + θ)(β²/δ_{t+1}^e)E[∂u_φt+2/∂c] - θβ²E[∂u_φt+2/∂c] + β²ξE[∂u_φt+2/∂e]
where δ_t^e ≡ (1 + r_t^e)^{-1}, which upon simplifying and rearranging terms yields
ξE[∂u_φt+2/∂e]/E[∂u_φt+2/∂c] = (1 + r_t^e)(1 + r_{t+1}^e) - (1 + r_t^e)(1 + θ) + θ = r_{t+1}^e + r_t^e r_{t+1}^e - r_t^e θ.
Plugging this expression in (53) evaluated one period ahead yields (28).
Figure 4: AGT index dynamic. Baseline with λ = .8.
Figure 5: Environmental quality dynamic. Baseline with λ = .8. Unit: Gt CO₂.
Figure 6: Optimal policy scheme. Baseline with λ = .8.
Figure 7: Interest rates. Baseline with λ = .8.
Figure 8: Investment rates. Baseline with λ = .8.
Figure 9: Consumption dynamic. Baseline with λ = .8. Unit: 10 trillions US$.
Table 1: Carbon price evaluation - Baseline
λ    ρ      σ_y        σ_µ        r^S   ρ*     r#     T    e_T
0    31.88  9.08·10⁻²  5.92·10⁻²  6.29  11.51  17.96  199  775.08
.1   30.64  8.97·10⁻²  6.03·10⁻²  6.38  12.08  18.48  199  799.46
.2   29.26  8.87·10⁻²  6.13·10⁻²  6.46  12.75  19.08  199  835.86
.3   27.7   8.79·10⁻²  6.21·10⁻²  6.52  13.57  19.79  199  894.33
.4   25.91  8.73·10⁻²  6.27·10⁻²  6.57  14.6   20.58  199  971.57
.5   23.79  8.72·10⁻²  6.28·10⁻²  6.58  15.93  21.59  199  1,110.62
.6   21.2   8.79·10⁻²  6.21·10⁻²  6.52  17.76  22.91  199  1,356.75
.7   17.88  8.99·10⁻²  6.01·10⁻²  6.36  20.47  24.73  199  1,855.88
.8   13.35  9.47·10⁻²  5.53·10⁻²  5.96  25.08  27.58  147  1,541.66
.9   6.76   0.11       4.44·10⁻²  4.98  34.85  32.76  82   1,046.54
Table 2: Carbon price evaluation - Large GDP shocks
λ    ρ      σ_y   σ_µ        r^S   ρ*     r#     T    e_T
0    29.31  0.13  7.44·10⁻²  2.9   11.18  17.15  199  1,189.55
.1   28.2   0.12  7.60·10⁻²  3.08  11.68  17.65  199  1,216.78
.2   26.95  0.12  7.74·10⁻²  3.23  12.27  18.24  199  1,262.76
.3   25.52  0.12  7.86·10⁻²  3.36  13     18.92  199  1,336.38
.4   23.85  0.12  7.94·10⁻²  3.45  13.9   19.68  199  1,439.4
.5   21.86  0.12  7.96·10⁻²  3.47  15.06  20.65  199  1,621.95
.6   19.39  0.12  7.87·10⁻²  3.38  16.63  21.87  199  1,946.53
.7   16.21  0.12  7.60·10⁻²  3.07  18.91  23.52  166  1,768.81
.8   11.86  0.13  6.95·10⁻²  2.34  22.64  26.01  123  1,492.76
.9   5.72   0.14  5.54·10⁻²  0.59  29.94  30.24  68   1,057.43
Table 3: Carbon price evaluation - Small GDP shocks
λ    ρ      σ_y        σ_µ        r^S   ρ*     r#     T    e_T
0    33.51  5.92·10⁻²  4.09·10⁻²  8.43  11.37  18.18  199  491.92
.1   32.19  5.85·10⁻²  4.16·10⁻²  8.46  11.97  18.68  199  504.08
.2   30.73  5.78·10⁻²  4.22·10⁻²  8.5   12.69  19.32  199  539.44
.3   29.09  5.73·10⁻²  4.27·10⁻²  8.52  13.56  20.01  199  578.9
.4   27.22  5.69·10⁻²  4.31·10⁻²  8.54  14.66  20.86  199  645.85
.5   25.02  5.69·10⁻²  4.32·10⁻²  8.54  16.08  21.91  199  756.72
.6   22.36  5.73·10⁻²  4.27·10⁻²  8.52  18.06  23.29  199  954.69
.7   18.95  5.87·10⁻²  4.14·10⁻²  8.45  21.04  25.23  199  1,360.06
.8   14.31  6.18·10⁻²  3.82·10⁻²  8.28  26.22  28.3   169  1,596.58
.9   7.47   6.92·10⁻²  3.08·10⁻²  7.84  37.74  34.19  94   1,036.41
Table 4: Carbon price evaluation - Highly effective GT
λ    ρ      σ_y   σ_µ        r^S   ρ*     r#     T    e_T
0    28.29  0.11  4.50·10⁻²  5.03  5.35   17.12  199  685.81
.1   27.01  0.1   4.53·10⁻²  5.06  5.68   17.75  199  742.45
.2   25.59  0.1   4.54·10⁻²  5.08  6.06   18.46  199  821.93
.3   23.97  0.1   4.54·10⁻²  5.08  6.54   19.27  199  927.89
.4   22.1   0.1   4.51·10⁻²  5.05  7.14   20.26  199  1,091.92
.5   19.91  0.11  4.44·10⁻²  4.98  7.93   21.48  199  1,355.49
.6   17.25  0.11  4.29·10⁻²  4.84  9.03   23.07  199  1,821.79
.7   13.94  0.11  4.04·10⁻²  4.6   10.68  25.29  158  1,569.13
.8   9.68   0.11  3.60·10⁻²  4.15  13.48  28.63  111  1,218.52
.9   4.3    0.12  2.85·10⁻²  3.35  19.16  34.41  59   790.66
Table 5: Carbon price evaluation - Less effective GT
λ    ρ      σ_y        σ_µ        r^S   ρ*     r#     T    e_T
0    32.88  8.33·10⁻²  6.67·10⁻²  6.88  20.8   19.96  199  1,063.11
.1   31.63  8.17·10⁻²  6.83·10⁻²  7     21.6   20.39  199  1,075.53
.2   30.25  8.01·10⁻²  6.99·10⁻²  7.11  22.57  20.92  199  1,103.45
.3   28.71  7.86·10⁻²  7.14·10⁻²  7.22  23.75  21.55  199  1,150.22
.4   26.96  7.73·10⁻²  7.27·10⁻²  7.31  25.24  22.25  199  1,213.49
.5   24.91  7.64·10⁻²  7.36·10⁻²  7.37  27.15  23.16  199  1,329.64
.6   22.44  7.62·10⁻²  7.38·10⁻²  7.39  29.72  24.32  199  1,533.17
.7   19.3   7.74·10⁻²  7.26·10⁻²  7.3   33.48  25.91  199  1,932.57
.8   14.95  8.16·10⁻²  6.84·10⁻²  7     39.72  28.39  154  1,689.05
.9   8.25   9.35·10⁻²  5.66·10⁻²  6.07  52.79  32.94  92   1,258.7
Table 6: Carbon price evaluation - Large emissions potential
λ    ρ      σ_y        σ_µ        r^S   ρ*     r#     T    e_T
0    28.69  8.92·10⁻²  6.08·10⁻²  6.42  12.2   20.02  199  1,007.51
.1   27.54  8.82·10⁻²  6.18·10⁻²  6.5   12.73  20.52  199  1,046.48
.2   26.27  8.73·10⁻²  6.27·10⁻²  6.57  13.35  21.11  199  1,104.98
.3   24.83  8.65·10⁻²  6.35·10⁻²  6.63  14.11  21.8   199  1,191.06
.4   23.19  8.61·10⁻²  6.39·10⁻²  6.67  15.07  22.57  199  1,303.89
.5   21.25  8.61·10⁻²  6.39·10⁻²  6.67  16.29  23.56  199  1,497.03
.6   18.89  8.69·10⁻²  6.31·10⁻²  6.6   17.96  24.83  199  1,828.48
.7   15.88  8.91·10⁻²  6.09·10⁻²  6.43  20.41  26.59  170  1,736.22
.8   11.79  9.42·10⁻²  5.58·10⁻²  6.01  24.5   29.34  127  1,456.48
.9   5.93   0.11       4.46·10⁻²  4.99  32.94  34.26  71   1,001.47
Table 7: Carbon price evaluation - Low emissions potential
λ    ρ      σ_y        σ_µ        r^S   ρ*     r#     T    e_T
0    38.28  9.40·10⁻²  5.60·10⁻²  6.02  10.17  14.53  199  546.22
.1   36.87  9.28·10⁻²  5.72·10⁻²  6.12  10.79  15.06  199  548.52
.2   35.3   9.17·10⁻²  5.83·10⁻²  6.22  11.53  15.67  199  558.9
.3   33.52  9.07·10⁻²  5.93·10⁻²  6.3   12.43  16.39  199  580.72
.4   31.45  9.00·10⁻²  6.01·10⁻²  6.36  13.57  17.2   199  610.94
.5   28.99  8.96·10⁻²  6.04·10⁻²  6.39  15.06  18.23  199  677.41
.6   25.97  9.00·10⁻²  6.00·10⁻²  6.36  17.13  19.58  199  808.42
.7   22.06  9.16·10⁻²  5.84·10⁻²  6.22  20.29  21.46  199  1,098.21
.8   16.63  9.59·10⁻²  5.42·10⁻²  5.86  25.85  24.44  194  1,807.6
.9   8.59   0.11       4.39·10⁻²  4.94  38.4   30.1   104  1,162.06
Table 8: Carbon price evaluation - Very large public signal shocks
λ    ρ      σ_y        σ_µ        r^S   ρ*     r#     T    e_T
In climatology, several tipping points have been identified, e.g. the Amazon rainforest dieback, the loss of Polar ice packs and melting of Greenland and Antarctic ice sheets, the disruption of Indian and West African monsoons, the loss of permafrost. The IPCC Fifth Assessment Report states that precise levels remain uncertain.
Global games are used in the literature on crises in financial markets (see[START_REF] Morris | Global Games: Theory and Applications[END_REF] for an overview). More generally, they allow us to model the problem of aggregating information in markets where agents have imperfect information on economic fundamentals and whose payoffs are interdependent. The "Value at Risk" criterion in portfolio selection is an instance of chance-constrained stochastic programming (see, e.g.,[START_REF] Shapiro | Lectures on Stochastic Programming: Modeling and Theory[END_REF] for an extensive discussion of stochastic programming methods).
4 Even before the Industrial Revolution and the current debate on sustainable carbon dioxide concentrations in the atmosphere, the world climate went through some dramatic changes, like the so-called "Little Ice Age" of the 16th century, or the "Great Frost" of 1709, which is believed to have caused 600,000 deaths in France.
5 [START_REF] Nordhaus | A review of the stern review on the economics of climate change[END_REF] In the Integrated Assessment Model literature, the SCC corresponds to the sum of the marginal welfare losses (evaluated along the optimal policy path) caused by the immediate release of an ad-
Throughout the paper, a random variable is topped with a tilde symbol (˜) to distinguish it from its possible values.
The case g > 1 corresponds to a steady growth in economic wealth. Sustainable states in that case are not stationary: the economy should instead follow a sustainable balanced path.
As stressed by[START_REF] Battisti | Innovations and the economics of new technology spreading within and across users: gaps and way forward[END_REF], a substantial literature has shown that, even when a clean or a cost-reducing technology is readily available in the market, its spreading takes several years.
These coordination problems with strategic complementarity are known as "beauty contests" (see, e.g.,[START_REF] Angeletos | Transparency of information and coordination in economies with investment complementarities[END_REF], 2007[START_REF] Morris | Social value of public information[END_REF]. Compared to this literature focusing on the learning of some exogeneous economic fundamentals that influence the firms' investment strategies, we suppose instead that the economic fundamental to be anticipated by the firms is precisely the result of their investments.
The environmental quality being a public good, this reasoning is grounded by the standard freerider argument that results in the underprovision of public goods.
12 Observe that the firm's relative loss (x it -x t ) 2 /2 does not appear in (16) since it corresponds to expenses (e.g. payroll outlays, external services) that are revenues for other agents in the economy.
Note that (34) can also be used as a tool to update the environmental policy each period taking as initial value the observed current gap.
While (35) is derived using the approximation 1+r t ≈ e rt , it should be noted that the discrepancy between the exact formula for the interest rate and (35) is only an artifact of the discrete time setup: the shorter the time period, the better the approximation.
If ρ is very large, coefficient a 1 is negative which generates oscillations in the AGT dynamic (36). We suppose that it is not the case and thus that fluctuations in the AGT index are only the consequences of the shocks affecting the economy.
Of course, date T at which one of the constraints is binding should be derived in the process.
These simulations were realized using Mathematica. The corresponding programs are available from the authors upon request.
The CCVE in this context could be compared to the SCC. While different from the usual interpretation of the SCC, it is in the spirit of approaches that account for catastrophic damages (see, e.g.,[START_REF] Weitzman | Tail-hedge discounting and the social cost of carbon[END_REF].
The unit used in the following is the gigaton or Gt shorthand, i.e. 10 9 metric tons. Theses levels are also commonly expressed in atmospheric concentration, the unit being the part per million or ppm shorthand, i.e. 0.01%. Each ppm represents approximately 2.13 Gt of carbon in the atmosphere as a whole, equivalent to 7.77 Gt of CO 2 . Conversion values can be found on the dedicated US department of energy website http://cdiac.ornl.gov/pns/convert.html#3.
An obvious root of this equation is x = 1. The other roots are complex numbers.
According to the IPCC Fifth Assessment Report, 700 ppm lead to a temperature increase of approximately 4°C, a situation where "many global risks are high to very high." Acemoglu et al. (2012) use the atmospheric CO 2 concentration that would lead to an approximate 6°C increase in temperatures.[START_REF] Stern | The economics of climate change: the Stern review[END_REF] reports that increases in temperature of more than 5°C will lead among other things to the melting of the Greenland Ice Sheet.
The data can be found on the world bank website: http://data.worldbank.org/indicator/EN.ATM.CO2E.KD.GD;
In[START_REF] Acemoglu | The environment and directed technical change[END_REF], the innovation function is calibrated so as to obtain a 2% long run growth. In DICE 2010, the economy grows at a rate equal to 1.9% from 2010 to 2100 and 0.9% from 2100 to 2200.
See[START_REF] King | Measuring the"world"real interest rate[END_REF], for a measure of world interest rates. According to this study, the 2005 quarterly values of the interest rate are 1.479%, 1.456%, 1.449% and 1.542%.
Appendix (For Online Publication) A Proof of Proposition 1
Minimizing (8) with respect to I_it leads to the first-order condition
-1 + δ_t E[∂V(x_it + I_it; x̃_it+1)/∂x | ω_t, w_it] = 0 (47)
while the envelope condition yields
implying ∂V(x_it; x̄_it)/∂x = x̄_it - x_it + 1. Plugging this expression in (47) evaluated in expectation for period t + 1 yields
Reorganizing terms gives (9). Replacing into (2) and using (6), we obtain that firm i next period mix satisfies
As ´εit di = 0, we have
and thus x it+1 = µ t+1 +(1-τ )ε it . As idiosyncratic investments depend on the firms' current technologies and on signals that are normally distributed, their next period technologies are also normally distributed around µ t+1 with variance
which gives (10).
B Proof of Lemma 1
At each date t, the Bellman equation corresponding to (14) can be written as
D Proof of Proposition 3
Expression (28) gives the recursive equation
which can be solved as follows. Defining v t = (r e t + c) -1 or equivalently r e t = 1/v t -c where c is a constant to determine, (56) becomes
which gives
an equation that simplifies to
under the conditions c = -θ and
Provided that k ≠ 1, the solution of the recurrence equation (57) is given by
with oscillations along its path if k < 0. The corresponding solution of (28) would converge to
which must be equal to the solution of (29), i.e. we must have
where r e is either equal to (A -1 + θ)/2 > 0 or -(A + 1 -θ)/2 < 0. Using (60) to substitute r e + 1 -θ for c in (58) yields (29). ( 58) is thus satisfied if (60) is true. Using and collecting terms yields
.
Identifying with (36) gives
and
The first two equations of (66) form a system involving only coefficients a 1 and a 2 that can be solved separately from the others. Observe also that a 2 = ρ and a 1 = a 0 1 ≡ 1 + (1 -λ)/γ is a degenerate solution of this system. More precisely, using 1 -λ + γ(2 -a 1 ) = γ(a 0 1 -a 1 + 1), we can express (66) as
and we get, assuming that the solution is a 2 = ρ and a 1 = a 0 1 , that a 3 = 1, a 5 = 1/γ but a 4 diverges unless ψ = γ 2 σ 2 y /2 = τ 2 σ 2 η /2 in which case a 4 is indefinite. Alternative solutions can be derived as follows. From the expression of a 2 , we get
which, plugged into the expression for a 1 , gives
that can also be expressed as (a 0 1 -a 1 )P (a 1 -1) = 0 where P (x) ≡ x(a 0 1 -θ -x) + ρξ is a second degree polynomial. Non-degenerate solutions must solve P (a 1 -1) = 0. As P (0) = P (a 0 1 -θ) = ρξ and P (x) is concave, a 1 is either lower than 1 (a 1 -1 is equal to the negative root of P (x) = 0) or greater than a 0 1 -θ + 1 = 2 -θ + (1 -λ)/γ (a 1 -1 is then equal to the positive root of P (x) = 0). As z t impacts positively µ t+1 for all t, it comes from (67) that a 5 > 0 which implies a 1 -1 < 1 + (1 -λ)/γ and thus rules out the positive root of P (a 1 -1) = 0. Consequently, a 1 -1 corresponds to the negative solution of P (x) = 0 which is given by
After substituting 1 + (1 -λ)/γ for a 0 1 , we get
The condition a 1 > 0 can be expressed as
and the other coefficients are deduced from
.
(72) As a 0 1 > 1 > a 1 we have a 2 > 0 and from (69), a 2 < ρ. We obtain from (68) that a 3 > 0, and we have a 3 < 1 if
. Replacing, the condition can be expressed as
G Proof of lemma 2
Using (64), we get
Using the expressions of a 3 and a 5 in (66), it can be expressed as
that can also be derived from (65), which becomes
. The two-period-ahead wealth index is deduced from ỹt+1 = ct+1 + ρẽ t+1 = qt+1 + μt+1 -μt+2 + ρẽ t+1 where μt+2 = a 1 μt+1 + a 2 ẽt+1 + a 3 qt+1 + a 4 + a 5 (τ ηt+1 ) + Z t+1 .
Replacing leads to ỹt+1 = (1 -a 3 )q t+1 + (1 -a 1 )μ t+1 + (ρ -a 2 )ẽ t+1 -a 4 -a 5 (τ ηt+1 ) -Z t+1 which gives
and
H Proof of Proposition 6
The expected AGT index satisfies E[μ t ] = E[q t ]ϕ/ξ for all t > T along an ENP, so that expected wealth satisfies
while it is given by
along an SEP. From (35), the expected interest rate along an ENP satisfies
where, solving the recursion from T to t > T , E[q t ] = q S -g t-T (q S -q T ) and E[ẽ t ] = e S -θ t-T (e S -e T ) with e T < e S = e N along an ENP and e S = e T < e N in the case of an SEP. Substituting gives (42). Now, from (13) and E[μ t ] = E[q t ]ϕ/ξ +(1-θ)(e N -e S )/ξ, we get
z_t = E[r_t] + (1 - λ)(E[q_{t+1}] - E[q_t])ϕ/ξ = E[r_t] + (1 - λ)(1 - g)g^{t-T}(q_S - q_T)ϕ/ξ, hence (41). Replacing in (37) and (42) yields
Z_t = a_5 Σ_{i=0}^{+∞} (a_5γ)^i { r^S + (1 - g)g^{t+i-T}(q_S - q_T)[γ + (1 - λ + γ(1 - g))ϕ/ξ] + γρ(1 - θ)θ^{t+i-T}(e_S - e_T) },
which gives (43). The SEP stationary state e_S is derived as follows. Suppose the economy has reached an SEP at date T with corresponding GDP level q_T and EQ level e_T = e_S. Using (43) to substitute for Z_t in (36), we obtain
µ_{t+1} = a_1µ_t + a_2e_t + a_3q_t + Γ_t + a_4 + a_5r^S/(1 - a_5γ) + a_5τη_t
00461538 | en | ["phys.meca.solid"] | 2024/03/05 22:32:15 | 2010 | https://hal.science/hal-00461538/file/ballard2010.pdf
Frictional Contact Problems for Thin Elastic Structures and Weak Solutions of Sweeping Processes
The linearized equilibrium equations for straight elastic strings, beams, membranes or plates do not couple tangential and normal components. In the quasi-static evolution occurring above a fixed rigid obstacle with Coulomb dry friction, the normal displacement is governed by a variational inequality, whereas the tangential displacement is seen to obey a sweeping process, the theory of which was extensively developed by Moreau in the 1970s. In some cases, the underlying moving convex set has bounded retraction and, in these cases, the sweeping process can be solved by directly applying Moreau's results. However, in many other cases, the bounded retraction condition is not fulfilled and this is seen to be connected to the possible event of moving velocity discontinuities. In such a case, there are no strong solutions and we have to cope with weak solutions of the underlying sweeping process.
Motivation and outline
Background
The frictionless equilibrium of linearly elastic strings and beams (or membranes and plates) above a fixed rigid obstacle provides an archetypical example of variational inequality, the theory of which was extensively developed in the 1970s. This paper deals with the situation where Coulomb dry friction between the elastic structure and the obstacle should be assumed to occur in addition. More specifically, it focuses here on cases where the linearized equilibrium equation can be used and consider the quasi-static evolution problem given by the usual Coulomb friction law. Surprisingly, this seems to be the first time this class of problems has been investigated. One specific (and comfortable) feature of these problems is the fact that the linearized equilibrium equations do not couple the normal and tangential components of the displacement. The problem that governs the normal displacement is, therefore, the same as that arising in the frictionless situation, that is, a variational inequality at every instant. Solving this variational inequality at every instant gives the normal component of the reaction force exerted by the obstacle and therefore gives the threshold for the friction law, which generally depends on the time and the position. The evolution problem that governs the tangential displacement is shown to provide an archetypical example of a sweeping process in a Hilbert space, the theory of which was developed in the seventies by Moreau [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] with a view to applying it to elastoplastic systems.
The fact that the linearized equilibrium equations do not couple the normal and tangential components of the displacement lend the situation under consideration some similarity to perfect plasticity. Also, the moving tangential velocity discontinuities that will be exhibited in this paper should certainly be brought alongside the velocity discontinuities that are well known to spontaneously occur in perfect plasticity [START_REF] Suquet | Discontinuities and plasticity[END_REF].
This uncoupling is a specific feature of the straight thin elastic structures that are the only ones considered in this paper. The situation is rather different in the more usual situation where a massive elastic body is considered. Indeed, in that case, the linearized equilibrium equations couple normal and tangential components so that monotonicity is lost. This raises important mathematical difficulties in the analysis. An existence result for the corresponding evolution problem (quasi-static contact problem in linear elasticity with Coulomb friction) was obtained only in 2000 by Andersson [START_REF] Andersson | Existence result for quasistatic contact problem with Coulomb friction[END_REF] using the approach developed in the pioneering work of Jarušek [START_REF] Jarušek | Contact problems with bounded friction[END_REF]. Very little is known about uniqueness, but the lack of monotonicity makes the situation tricky [START_REF] Ballard | A counter-example to uniqueness in quasi-static elastic contact problems with friction[END_REF]. For a recent survey on the analysis of frictional contact problems for massive bodies, the reader is referred to [START_REF] Eck | Unilateral Contact Problems in Mechanics. Variational Methods and Existence Theorems[END_REF].
The basic evolution problem
Let us consider a straight elastic string which is uniformly tensed in its reference configuration and an orthonormal basis (e x , e y ) with e x chosen along the direction of the string. A fixed rigid obstacle is described by the function y = ψ(x). The string is loaded with a given body force f e x +g e y and displacements u p 0 e x +v p 0 e y , u p 1 e x + v p 1 e y are prescribed at extremities x = 0, 1. Let u e x + v e y denote the displacement field in the string and r e x + s e y denote the reaction force exerted by the obstacle on the string. Assuming that the linearized equilibrium equations can be used, one finds that the quasi-static evolution of that string above the obstacle with unilateral contact condition and Coulomb dry friction during the time interval [t 0 , T ] is governed by
u + f + r = 0, in ]0, 1[ × [t 0 , T ], r ( û -u) + μs | û| -| u| 0, ∀ û ∈ R, in ]0, 1[ × [t 0 , T ], u(0) = u p 0 , u(1) = u p 1 , on [t 0 , T ], v + g + s = 0, in ]0, 1[ , v -ψ 0, s 0, s(v -ψ) ≡ 0, in ]0, 1[ × [t 0 , T ], v(0) = v p 0 , v(1) = v p 1 , on [t 0 , T ]. ( 1
)
where μ is the friction coefficient, which is assumed to be given.
The last three lines of system (1) govern the normal component v of the displacement and are not coupled with the other equations of system (1). Therefore, at every instant, v obeys the same variational inequality as that governing the more usual frictionless situation. Assuming that this problem has been solved, the normal component s of the reaction is now supposed to be given in the study of the tangential problem, that is, the first three lines of system (1). It is of course necessary to know what regularity s can be expected to show, and this question requires a detailed analysis of the normal problem governed by the variational inequality. As we will see, the regularity of s is crucial to the analysis of the tangential problem.
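As an illustration of how such a normal problem can be solved at a fixed instant, the obstacle problem can be discretized by finite differences and treated by projected Gauss-Seidel. The following sketch is ours (unit tension, uniform grid, hypothetical inputs), not part of the original analysis:

```python
import numpy as np

def obstacle_problem(psi, g, v0, v1, n=200, tol=1e-10, it_max=50_000):
    """Projected Gauss-Seidel for the discretized obstacle problem
    -v'' = g + s,  v >= psi,  s >= 0,  s (v - psi) = 0  on (0, 1),
    with Dirichlet data v(0) = v0, v(1) = v1 (tension normalized to 1).
    psi, g : vectorized callables on [0, 1]."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    lo = psi(x)                      # obstacle values on the grid
    b = h**2 * g(x)                  # load term of the tridiagonal system
    v = np.maximum(np.linspace(v0, v1, n + 1), lo)  # feasible start
    v[0], v[-1] = v0, v1
    for _ in range(it_max):
        delta = 0.0
        for i in range(1, n):
            # Gauss-Seidel update of 2 v_i - v_{i-1} - v_{i+1} = b_i ...
            vi = 0.5 * (v[i - 1] + v[i + 1] + b[i])
            vi = max(vi, lo[i])      # ... projected onto {v_i >= psi_i}
            delta = max(delta, abs(vi - v[i]))
            v[i] = vi
        if delta < tol:
            break
    # discrete reaction s = -v'' - g, nonzero only on the contact set
    s = np.zeros_like(v)
    s[1:-1] = (2 * v[1:-1] - v[:-2] - v[2:]) / h**2 - g(x[1:-1])
    return x, v, np.maximum(s, 0.0)
```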
Introducing for every t ∈ [t 0 , T ], the closed, convex subset of H 1 (0, 1; R) defined by
$$
C(t) = \Bigl\{ u \in H^1 \Bigm| u(x{=}0) = u_0^p,\ u(x{=}1) = u_1^p, \ \text{and } \forall \varphi \in H^1_0,\ \langle u'' + f, \varphi\rangle_{H^{-1},H^1_0} \le \langle \mu s, |\varphi|\rangle_{H^{-1},H^1_0} \Bigr\}, \qquad (2)
$$
and equipping $H^1$ with the scalar product
$$(\varphi \mid \psi)_{H^1} = \int_0^1 \tilde\varphi'(x)\,\tilde\psi'(x)\,dx + \varphi(0)\,\psi(0) + \varphi(1)\,\psi(1), \qquad \tilde\varphi(x) \stackrel{\text{def}}{=} \varphi(x) - \varphi(0) - x\,\bigl(\varphi(1) - \varphi(0)\bigr) \in H^1_0,$$
the evolution problem that governs the tangential displacement u can be written in the following concise form:
$$-\dot u(t) \in \partial I_{C(t)}\bigl[u(t)\bigr],$$
after eliminating the unknown reaction force r (see section 4 for details). In this differential inclusion, I C(t) [•] denotes the indicatrix function of C(t) (which equals 0 at any point of C(t) and +∞ elsewhere), and ∂ I C(t) [•] its subdifferential in the sense of the above scalar product in H 1 , that is, the cone of all the outward normals to C(t) (which is empty at any point not belonging to C(t), and reduces to {0} at an interior point, if any).
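For the reader's convenience, this subdifferential can be spelled out (a standard fact of convex analysis, not specific to this paper): it is the outward normal cone in the sense of the chosen scalar product,
$$\partial I_{C(t)}[u] = \bigl\{ w \in H^1 \bigm| \forall v \in C(t),\ (w \mid v - u)_{H^1} \le 0 \bigr\} \ \text{ for } u \in C(t), \qquad \partial I_{C(t)}[u] = \emptyset \ \text{ otherwise}.$$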
Weak solutions of sweeping processes
Let H be a Hilbert space and C(t) a set-valued mapping defined on a time interval [t 0 , T ] and the values of which are closed, convex and nonempty. A sweeping process is the evolution problem defined by
$$-\dot u(t) \in \partial I_{C(t)}\bigl[u(t)\bigr], \quad \text{in } [t_0,T], \qquad u(t_0) = u_0,$$
with the given initial condition u 0 ∈ C(t 0 ). This abstract evolution problem was introduced and studied by Jean Jacques Moreau [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] with a view to using it in the analysis of elastoplastic systems. To discuss the existence of solutions to the sweeping process, some regularity assumptions about the set-valued mapping C(t) must be made. Actually, regularity is needed only when the set retracts, thus effectively sweeping the point u(t). Jean Jacques Moreau defined and extensively studied the class of set-valued mappings C(t) with bounded retraction (see [START_REF] Moreau | Multiapplications à rétraction finie[END_REF] or appendix A). In particular, set-valued mappings C(t) with bounded retraction admit a left limit C(t-), in the sense of Kuratowski (see appendix A), at any t ∈]t 0 , T ] and a right limit C(t+) at any t ∈ [t 0 , T [.
Taking an arbitrary subdivision P (finite partition into intervals of any sort) of [t 0 , T ] and denoting by I i the corresponding intervals (which are indexed according to their successive order) with origin t i (left extremity, which does not necessarily belong to I i ), one can build the piecewise constant set-valued mapping C P with closed convex values by using the following definition:
$$C_P(I_i) = C_i = \begin{cases} C(t_i) & \text{if } t_i \in I_i,\\ C(t_i+) & \text{if } t_i \notin I_i. \end{cases}$$
Given the initial condition u 0 ∈ C(t 0 ), the "catching-up algorithm" is based on the inductive projections given by
u i+1 = proj (u i , C i+1 )
to build a step function u P : [t 0 , T ] → H , defined by
u P (I i ) = u i .
This is simply a version of the implicit Euler algorithm for ordinary differential equations adapted to the differential inclusion involved. Assuming that C(t) has bounded retraction, Moreau [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] proved that the net u P (P covering all the subdivisions of [t 0 , T ]) converges strongly in H , uniformly on t ∈ [t 0 , T ], towards a function u : [t 0 , T ] → H , which Moreau calls a weak solution of the sweeping process. He then proved that this weak solution u : [t 0 , T ] → H has bounded variation and solves the sweeping process in the sense of "differential measures" (see [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] or appendix B). If C(t) has not only bounded retraction, but absolutely continuous retraction, it turns out that the weak solution u : [t 0 , T ] → H is absolutely continuous and is a strong solution of the sweeping process, that is
$$-\dot u(t) \in \partial I_{C(t)}\bigl[u(t)\bigr], \quad \text{for a.a. } t \in [t_0,T].$$
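To fix ideas, here is a minimal numerical sketch of the catching-up algorithm (an editorial illustration, not taken from the paper: the Hilbert space is replaced by $\mathbb{R}^2$ and $C(t)$ by a moving disk, whose projection is explicit; all names are choices made here):

```python
import numpy as np

def proj_ball(x, c, r):
    # Projection onto the closed ball B(c, r): the simplest explicit convex projection.
    d = x - c
    nd = np.linalg.norm(d)
    return x.copy() if nd <= r else c + (r / nd) * d

def catching_up(u0, times, center, r):
    # Moreau's inductive projections u_{i+1} = proj(u_i, C(t_{i+1})) over a subdivision.
    u = np.array(u0, dtype=float)
    for t in times[1:]:
        u = proj_ball(u, center(t), r)
    return u

# C(t): a disk of radius 0.2 whose center travels along the unit circle.
# Here e(C(s), C(t)) <= |c(t) - c(s)|, so the retraction is Lipschitz-continuous
# and Moreau's convergence theorem applies.
c = lambda t: np.array([np.cos(t), np.sin(t)])
for m in (10, 100, 10000):
    print(m, catching_up([1.2, 0.0], np.linspace(0.0, np.pi, m), c, 0.2))
# the approximations u_P stabilize as the subdivision is refined
```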
The quasi-static evolution of the elastic string above the rigid obstacle, when Coulomb friction is taken into account, provides some natural examples of sweeping processes in the Hilbert space $H = H^1$. Some of these examples will be given in this paper, in cases where the underlying sweeping process has bounded retraction and Moreau's theory provides a unique weak solution which is also a solution in the sense of differential measures. In some of these examples, this solution also turns out to be a strong solution, but this is not always the case. More interestingly, it is easy to design an evolution problem for the elastic string where the underlying sweeping process turns out not to have bounded retraction. Sticking to the standpoint of numerical computation, such examples require an extension of the definition of weak solutions for sweeping processes to a more general class of set-valued mappings $C(t)$ than that of bounded retraction. Since the catching-up algorithm requires the existence of a right limit $C(t+)$ in the sense of Kuratowski, the class of $C(t)$ suitable for defining weak solutions of sweeping processes in general seems to be the class of so-called Wijsman-regulated set-valued mappings, which is exactly the class of those $C(t)$ with closed convex values that admit a left limit $C(t-)$, in the sense of Kuratowski, at every $t \in\,]t_0,T]$, and a right limit $C(t+)$ at every $t \in [t_0,T[$. Wijsman-regulated $C(t)$ are also characterized by the condition that for every $x \in H$, the function:
t → proj [x; C(t)]
is regulated (that is, is the uniform limit of a sequence of step functions, or, equivalently, admits a left and a right limit at every t). The name given to this class of set-valued mappings originates from the fact that the class of all closed nonempty subsets of H can be equipped with a complete metrizable topology called the Wijsman topology. This is the weakest topology generated by the set functions C → d(x, C) when x covers H (here d(x, C) denotes the distance of the point x to the set C). Wijsman-regulated C(t) are exactly those set-valued mappings that are regulated in the sense defined by the Wijsman topology on the class of all closed non-empty subsets in H .
Weak solutions of sweeping processes associated with Wijsman-regulated C(t), when they exist, are proved to enjoy the same general properties as those established by Moreau in the case of weak solutions of sweeping processes based on C(t) with bounded retraction. Some examples of weak solutions of sweeping processes based on Wijsman-regulated C(t) that do not have bounded retraction are displayed in this paper. As we will see, these weak solutions do not necessarily have bounded variation. Examples will also be given of sweeping processes based on Wijsman-regulated C(t) that do not have any weak solution at all.
Frictional contact problems for the elastic string
Recalling that the tangential displacement of elastic strings obeys a sweeping process based on the set-valued mapping (2), a sufficient condition for C(t) to have bounded retraction is proved to be:
$$u_0^p,\ u_1^p \in BV([t_0,T];\mathbb{R}), \qquad f \in BV\bigl([t_0,T]; H^{-1}\bigr), \qquad s \in BV\bigl([t_0,T]; M\bigr). \qquad (3)$$
Here, BV stands for "Bounded Variation" and $M$ denotes the Banach space of the Radon measures on $[0,1]$, that is, the topological dual of $C^0([0,1];\mathbb{R})$. The first two lines in (3) give regularity assumptions about the data involved in the evolution problem, but the last line refers to the regularity of the solution of the normal problem governed by the variational inequality, and therefore cannot be controlled directly. It may occur that these regularity conditions are met; a detailed example is discussed in Section 4.3. In such a case, Moreau's results provide a unique solution:
$$u \in BV\bigl([t_0,T]; H^1\bigr),$$
and if the regularity that is met by the data is not only that of functions with "bounded variation" in time, but that of "absolutely continuous" functions, then the same will be true of $u$, which is a strong solution of the sweeping process. In such a circumstance, the tangential velocity $\dot u$ will belong to $H^1(0,1;\mathbb{R})$ at almost all values of $t \in [t_0,T]$, and will therefore be spatially continuous. However, it may occur that the condition $s \in BV([t_0,T]; M)$ is not fulfilled. A simple example of this occurrence is that of a string whose reference configuration lies on a rectilinear rigid obstacle (see Fig. 2). The data of the evolution problem are defined by $u_0^p = v_0^p = v_1^p \equiv 0$, $u_1^p$ the function which takes the value 0 at $t = 0$ and the value 1 at every $t > 0$, $f \equiv 0$ and $g = -\delta_{x=1/2-t}$ (the body force is a "moving transverse punctual force"). The unique solution of the normal problem is given by $v \equiv 0$, which entails $s \equiv -g$. Since for all $t_1 < t_2 \in\,]0,1[$,
$$\|\delta_{t_2} - \delta_{t_1}\|_M = 2,$$
the normal reaction s : [t 0 , T ] → M is neither a function with bounded variation nor a continuous function. Assuming for the sake of convenience that μ > 2, one still can arbitrarily subdivide the time interval [0, 1/3] and perform the successive projections of Moreau's catching-up algorithm. It then can be seen that the corresponding approximating step functions u P converge strongly in H 1 , uniformly with respect to t ∈ [0, 1/3], towards the following function:
$$u(x,t) = \begin{cases} 0, & \text{if } 0 \le x \le 1/2 - t,\\[4pt] \dfrac{x + t - 1/2}{t + 1/2}, & \text{if } 1/2 - t \le x \le 1. \end{cases}$$
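Differentiating in time (a one-line check of the velocity field displayed in Fig. 3, spelled out here for convenience):
$$\dot u(x,t) = \frac{\partial}{\partial t}\,\frac{x+t-1/2}{t+1/2} = \frac{(t+1/2)-(x+t-1/2)}{(t+1/2)^2} = \frac{1-x}{(t+1/2)^2} \ \text{ for } x > 1/2-t, \qquad \dot u \equiv 0 \ \text{ for } x < 1/2-t,$$
so the velocity exhibits a jump of amplitude $1/(t+1/2)$ at the moving point $x = 1/2 - t$.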
The graph of this function, together with that of the velocity $\dot u$, is plotted in Fig. 3. The velocity can be seen to show a moving discontinuity; therefore, it does not belong to $H^1$ at any $t$. Consequently, the underlying $C(t)$ does not have bounded retraction; however, it is Wijsman-regulated and the function $u$ is a weak solution of the underlying sweeping process (in the sense of Definition 9 in Appendix B). It is worth noting that since the velocity is discontinuous, its value at the point of the string just below the load is not defined. One therefore cannot check whether the Coulomb friction law is satisfied by the solution in the strong sense (that is, pointwise). The picture looks like that of perfect plasticity [START_REF] Suquet | Discontinuities and plasticity[END_REF], where the spontaneous occurrence of velocity discontinuities requires one to cope with weak solutions only. Extending Moreau's definition of weak solutions for sweeping processes to the case of Wijsman-regulated set-valued mappings leads to the appropriate definition of what should be called a weak solution of the frictional contact problem. This definition sticks to the standpoint of computational approximations. Another approach would consist of using a regularization procedure. A natural regularization method in the example under consideration would consist of "spreading out" the moving load a little bit by performing a spatial convolution. For example, the Dirac measure at $x$ can be approximated by the function taking the value $1/(2\varepsilon)$ on $]x-\varepsilon, x+\varepsilon[$ and $0$ elsewhere. This suffices for the underlying $C_\varepsilon(t)$ to have bounded retraction. The unique solution $u_\varepsilon$ of the corresponding sweeping process is given explicitly in Section 4.4. It can therefore be seen that $u_\varepsilon$ converges strongly in $H^1$, uniformly with respect to $t \in [0,1/3]$, towards the previously calculated weak solution $u$.
Replacing the string by a beam
Replacing the string by an elastic beam in the evolution problem (1) requires changing only the last three lines governing the normal displacement v, whereas the tangential problem governed by the first three lines remains unchanged. In particular, the equilibrium equation satisfied by v is now an equation of order 4. The normal component s of the reaction force, which is now obtained after solving a variational inequality associated with the biharmonic operator, is therefore seen to be possibly a "moving Dirac measure" even in cases where all the data of the normal problem are C ∞ in space and time. This means that moving tangential velocity discontinuities should generically be expected to occur in the case of the beam, and the underlying sweeping process should be expected to admit only weak solutions, even when arbitrarily smooth data are involved.
In this paper, it is proved that it suffices to require that the data,
$$u_0^p,\ u_1^p,\ v_0^p,\ v_1^p : [t_0,T] \to \mathbb{R}, \qquad f,\ g : [t_0,T] \to H^{-1},$$
should be regulated functions (that is, are the uniform limit of a sequence of step functions, or, equivalently, admit a left and a right limit at every t) to ensure that the moving set C(t) associated with the sweeping process governing the tangential problem will be Wijsman-regulated, so as to be able to speak about possible weak solutions. This claim, which relies on regularity analysis on the variational inequalities associated with the harmonic and biharmonic operators, holds true for strings as well as for beams.
However, these regularity assumptions are too weak to systematically ensure the existence of a weak solution to the underlying sweeping process. An example is provided that shows, in particular, that sweeping processes based on Wijsman-regulated set-valued mappings need not have weak solutions. The question as to what regularity assumptions about the data should be required to ensure the existence of a weak solution to the frictional contact problem is left open in this paper.
Statement of the evolution problem for an elastic string
The orthonormal basis (e x , e y ) will be used here in the affine Euclidean plane. Let us consider a string having the segment [0, 1] × {0} as its reference configuration. This configuration undergoes some homogeneous tension T 0 > 0 and is an equilibrium configuration when the string is free of body forces.
Next, let us consider the given external body force, f e x + g e y .
Taking
u e x + v e y ,
to denote the displacement field in the string, one finds that the linearized equations that govern the equilibrium of the string, which is assumed to be elastic with stiffness k, will read as follows:
$$k\,u'' + f = 0, \ \text{in } ]0,1[\,, \quad u(0) = u_0^p,\ u(1) = u_1^p; \qquad T_0\,v'' + g = 0, \ \text{in } ]0,1[\,, \quad v(0) = v_0^p,\ v(1) = v_1^p,$$
where primes denote differentiation with respect to the space variable $x$. A fixed rigid obstacle is also considered and described by the function:
y = ψ(x).
The reaction force possibly exerted by this obstacle on the string will be written r e x + s e y .
In the above expression, $r$ and $s$ are, respectively, the tangential and normal components of the reaction force with respect to the reference configuration. It should be underlined that, in the linearized framework adopted here, $r$ and $s$ cannot be distinguished from the tangential and normal components of the reaction force with respect to the deformed configuration, since the difference is of higher order.
Assuming that the contact between the string and the obstacle obeys the dry friction Coulomb law with a friction coefficient denoted by μ, one can read the equations that govern the quasi-static evolution of the elastic string above the obstacle formally as follows:
$$
\begin{aligned}
&k\,u'' + f + r = 0, && \text{in } ]0,1[\ \times [t_0,T],\\
&r\,(\hat u - \dot u) + \mu s\,\bigl(|\hat u| - |\dot u|\bigr) \ge 0, \quad \forall \hat u \in \mathbb{R}, && \text{in } ]0,1[\ \times [t_0,T],\\
&u(0) = u_0^p, \quad u(1) = u_1^p, && \text{on } [t_0,T],\\
&T_0\,v'' + g + s = 0, && \text{in } ]0,1[\,,\\
&v - \psi \ge 0, \quad s \ge 0, \quad s\,(v - \psi) \equiv 0, && \text{in } ]0,1[\ \times [t_0,T],\\
&v(0) = v_0^p, \quad v(1) = v_1^p, && \text{on } [t_0,T].
\end{aligned}
$$
It can be easily checked that the pointwise weak formulation of the Coulomb law used here is equivalent to the usual pointwise formulation. It is worth noting that the equations that govern the transverse component v of the displacement are not coupled with the ones that govern the tangential component. By changing the value μ of the friction coefficient, one can always suppose T 0 = k = 1. This choice will be made systematically in what follows.
Analysis of the "normal problem" for the string
The problem that governs the transverse component v of the displacement will be solved first. This problem is the same as that arising in the more usual frictionless situation. At every instant, the problem is classically governed by a variational inequality, which is solved using standard tools (see for example [START_REF] Kinderlehrer | An Introduction to Variational Inequalities and their Applications[END_REF]). The purpose of the following theorem is to express how the regularity of the dependence of the data on time can be transferred to the solution, in order to obtain some information on the regularity of the normal component s(t) of the reaction force as it will be used as input data in the analysis of the "tangential problem".
Theorem 1. Let us assume that
$\psi \in H^1(0,1;\mathbb{R})$, $g : [t_0,T] \to H^{-1}$, and that the functions $v_0^p, v_1^p : [t_0,T] \to \mathbb{R}$ satisfy the strong compatibility condition
$$\inf_{t\in[t_0,T]} v_0^p(t) > \psi(0), \qquad \inf_{t\in[t_0,T]} v_1^p(t) > \psi(1), \qquad (4)$$
setting the following:
$$K(t) = \bigl\{ \varphi \in H^1(0,1;\mathbb{R}) \bigm| \varphi(0) = v_0^p(t),\ \varphi(1) = v_1^p(t),\ \forall x \in\,]0,1[\,,\ \varphi(x) \ge \psi(x) \bigr\},$$
then there exists a unique function $v : [t_0,T] \to H^1(0,1;\mathbb{R})$ such that
• $\forall t \in [t_0,T]$, $v(t) \in K(t)$,
• $\forall t \in [t_0,T]$, $\forall \varphi \in K(t)$, $\displaystyle\int_0^1 v'\,(\varphi' - v') \ge \langle g, \varphi - v\rangle_{H^{-1},H^1_0}$.
Moreover, if $v_0^p, v_1^p : [t_0,T] \to \mathbb{R}$ and $g : [t_0,T] \to H^{-1}$ are regulated (respectively with bounded variation, absolutely continuous, Lipschitz-continuous), then the same is true of the function $v : [t_0,T] \to H^1$, and therefore of the function $s \stackrel{\text{def}}{=} -v'' - g : [t_0,T] \to H^{-1}$. Also, for every $t \in [t_0,T]$, $s(t)$ is a positive measure with support contained in $[\alpha,\beta] \subset\,]0,1[$ ($\alpha$, $\beta$ independent of $t$), and its total mass is a bounded function of $t$.
Proof. Step 1. Existence of v(t).
For every t ∈ [t 0 , T ], we take w(•, t) ∈ H 1 (0, 1; R) to denote the solution of the linear problem:
$$w'' + g = 0, \ \text{in } ]0,1[\,, \qquad w(0) = v_0^p, \quad w(1) = v_1^p.$$
It can be readily checked that if $v_0^p, v_1^p : [t_0,T] \to \mathbb{R}$ and $g : [t_0,T] \to H^{-1}$ are regulated (respectively with bounded variation, absolutely continuous, Lipschitz-continuous), then the same will be true of the function $w : [t_0,T] \to H^1$.
Let us then proceed by changing the unknown function
$$\bar v(x,t) = v(x,t) - w(x,t).$$
Setting
$$\bar K(t) = \bigl\{ \varphi \in H^1_0 \bigm| \forall x \in\,]0,1[\,,\ \varphi(x) \ge \psi(x) - w(x,t) \bigr\},$$
one must now prove the existence of a unique function $\bar v : [t_0,T] \to H^1_0(0,1;\mathbb{R})$, having the required regularity in time and satisfying
• $\forall t \in [t_0,T]$, $\bar v(t) \in \bar K(t)$,
• $\forall t \in [t_0,T]$, $\forall \varphi \in \bar K(t)$, $\displaystyle\int_0^1 \bar v'\,(\varphi' - \bar v') \ge 0$.
For every $t \in [t_0,T]$, the use of the Lions–Stampacchia theorem [START_REF] Kinderlehrer | An Introduction to Variational Inequalities and their Applications[END_REF] associated with the Poincaré inequality gives a unique $\bar v(t) \in \bar K(t)$.
Step 2. Properties of the function s
: [t 0 , T ] → H -1 .
It is deduced from the variational inequality satisfied by $\bar v(t)$ that at every $t$, the distribution $s(t) = -\bar v''(t)$ is non-negative (that is, it takes a non-negative value at every $C^\infty$ compactly supported non-negative test function). This classically entails that the distribution $s(t)$ is actually a measure.
Since w is a regulated function on
$[t_0,T]$ into $H^1 \subset C^0$, given the compactness of the sets $\{0\}\times[t_0,T]$, $\{1\}\times[t_0,T]$ and the conditions (4), one can find $\alpha, \beta \in\,]0,1[$ such that
$$\forall x \in [0,\alpha],\ \forall t \in [t_0,T], \quad \psi(x) - w(x,t) < 0; \qquad \forall x \in [\beta,1],\ \forall t \in [t_0,T], \quad \psi(x) - w(x,t) < 0. \qquad (5)$$
The support of the measure s(t) is therefore contained in [α, β].
It now remains only to prove that the total mass of this measure is bounded with respect to $t$. Take $s = -\bar v''$ to denote the measure $s(t)$ at an arbitrarily fixed $t$. For any compact subset $K \subset\,]0,1[$, there exists a non-negative function $\xi \in C^\infty_0(]0,1[)$ which is identically 1 on $K$. For this function,
$$s(K) \le \int \xi\, ds \le \|\xi'\|_{L^2}\,\|\bar v'\|_{L^2}. \quad\text{Since}\quad \int_0^1 \bar v'^{\,2} = \int_{[0,1]} (\psi - w)\, ds \le \bigl\|(\psi-w)^+\bigr\|_{L^\infty}\, s\bigl(\operatorname{supp}\,(\psi-w)^+\bigr),$$
where $x^+ = \max\{x,0\}$, adopting $K_1 = \operatorname{supp}\,(\psi-w)^+$ yields
$$\int_0^1 \bar v'^{\,2} \le \bigl\|(\psi-w)^+\bigr\|_{L^\infty}\,\|\xi_1'\|_{L^2}\,\|\bar v'\|_{L^2}, \quad\text{that is,}\quad \|\bar v'\|_{L^2} \le \bigl\|(\psi-w)^+\bigr\|_{L^\infty}\,\|\xi_1'\|_{L^2}.$$
It then suffices to take $K_2 = [\alpha,\beta]$ to obtain the required estimate of the total mass of the non-negative measure $s$,
$$s(]0,1[) = s([\alpha,\beta]) \le \|\xi_1'\|_{L^2}\,\|\xi_2'\|_{L^2}\,\bigl\|(\psi-w)^+\bigr\|_{L^\infty}, \quad\text{since}\quad \|w(t)\|_{L^\infty} \le C\Bigl(\bigl|v_0^p(t)\bigr| + \bigl|v_1^p(t)\bigr| + \|g(t)\|_{H^{-1}}\Bigr),$$
and since any regulated function is bounded.
Step 3. Regularity of the function $\bar v : [t_0,T] \to H^1_0(0,1;\mathbb{R})$. The claimed regularity of the dependence of the solution on $t$ will be ensured if there exists a constant $C$, independent of $t_1, t_2 \in [t_0,T]$, such that
$$\|\bar v(t_2) - \bar v(t_1)\|_{H^1_0} \le C\,\|w(t_2) - w(t_1)\|_{H^1}. \qquad (6)$$
Taking arbitrary t 1 , t 2 ∈ [t 0 , T ] and recalling (5), we set the following:
$$
\begin{aligned}
\tilde\psi_i(\lambda\alpha) &= \lambda\,\bigl[\psi(\alpha) - w(\alpha,t_i)\bigr],\\
\tilde\psi_i\bigl(\lambda\alpha + (1-\lambda)\beta\bigr) &= \psi\bigl(\lambda\alpha + (1-\lambda)\beta\bigr) - w\bigl(\lambda\alpha + (1-\lambda)\beta,\ t_i\bigr),\\
\tilde\psi_i\bigl(\lambda\beta + (1-\lambda)\bigr) &= \lambda\,\bigl[\psi(\beta) - w(\beta,t_i)\bigr],
\end{aligned}
$$
for all $\lambda \in [0,1]$ and $i \in \{1,2\}$. The functions $\tilde\psi_i$ defined in this way belong to $H^1_0$ and satisfy
$$\|\tilde\psi_2' - \tilde\psi_1'\|_{L^2} \le C\,\|w(t_2) - w(t_1)\|_{H^1},$$
where C is a real constant which is independent of t 1 , t 2 . Moreover, the functions
$\tilde\psi_i \in H^1_0$ differ from $\psi(\bullet) - w(\bullet,t_i)$ only at those $x$ where $\psi(x) - w(x,t_i) < 0$. Also, the two functions $\bar v(t_i)$ are concave, since their second derivatives are non-positive measures. As they vanish at both ends, these functions are non-negative. Therefore, the function $\bar v(t_i)$, which solves the obstacle problem associated with $\psi - w(t_i)$, is also the solution of the obstacle problem associated with $\tilde\psi_i$. From the variational inequalities satisfied by $\bar v(t_1)$ and $\bar v(t_2)$, respectively, one deduces the following:
$$\int_0^1 \bar v'(t_1)\,\bigl(\bar v'(t_2) - \tilde\psi_2' + \tilde\psi_1' - \bar v'(t_1)\bigr) \ge 0, \qquad \int_0^1 \bar v'(t_2)\,\bigl(\bar v'(t_1) - \tilde\psi_1' + \tilde\psi_2' - \bar v'(t_2)\bigr) \ge 0.$$
Taking the sum of these two inequalities, we obtain
$$\int_0^1 \bigl(\bar v'(t_2) - \bar v'(t_1)\bigr)^2 \le \int_0^1 \bigl(\bar v'(t_2) - \bar v'(t_1)\bigr)\,\bigl(\tilde\psi_2' - \tilde\psi_1'\bigr),$$
and therefore reach the desired conclusion (6) by the Cauchy–Schwarz inequality.
Analysis of the "tangential problem"
Structure of the evolution problem
Once the transverse problem has been solved, the function s : [t 0 , T ] → M becomes part of the input data in the study on the tangential problem. We now examine the structure of the corresponding evolution problem.
After the unknown r has been eliminated, the problem now consists of finding u : [t 0 , T ] → H 1 such that
• $u(x, t_0) = u_0(x)$,
• $u(x{=}0, t) = u_0^p(t)$, $u(x{=}1, t) = u_1^p(t)$,
• $\forall \varphi \in \{\dot u\} + H^1_0$, $\ \langle u'' + f, \varphi - \dot u\rangle_{H^{-1},H^1_0} \le \langle \mu s, |\varphi| - |\dot u|\rangle_{H^{-1},H^1_0}$.
For $\varphi \in H^1(0,1;\mathbb{R})$, one can set $\tilde\varphi(x) = \varphi(x) - \varphi(0) - x\,(\varphi(1) - \varphi(0)) \in H^1_0$.
The isomorphism
$$H^1 \to H^1_0 \times \mathbb{R} \times \mathbb{R}, \qquad \varphi \mapsto \bigl(\tilde\varphi,\ \varphi(0),\ \varphi(1)\bigr),$$
together with the Poincaré inequality can then be used to endow H 1 with the scalar product defined by
$$(\varphi \mid \psi)_{H^1} = \int_0^1 \tilde\varphi'(x)\,\tilde\psi'(x)\,dx + \varphi(0)\,\psi(0) + \varphi(1)\,\psi(1). \qquad (7)$$
Let us consider the function $\Phi : H^1 \to \mathbb{R}$ defined by
$$\Phi(\varphi) = \mu\,\langle s, |\tilde\varphi|\rangle_{H^{-1},H^1_0} - \langle f, \tilde\varphi\rangle_{H^{-1},H^1_0} - u_0^p\,\varphi(0) - u_1^p\,\varphi(1).$$
This definition is meaningful since it was noted in the proof of Theorem 1 that $\operatorname{supp} s \subset [\alpha,\beta] \subset\,]0,1[$. The function $\Phi$ is clearly convex and continuous on $H^1$.
With these notations, one can rewrite the evolution inequality as follows:
$$\forall \varphi \in H^1, \quad \langle u'', \tilde\varphi - \tilde{\dot u}\rangle_{H^{-1},H^1_0} - u(0)\,\bigl(\varphi(0) - \dot u(0)\bigr) - u(1)\,\bigl(\varphi(1) - \dot u(1)\bigr) \le \Phi(\varphi) - \Phi(\dot u),$$
that is, since $\langle u'', \psi\rangle_{H^{-1},H^1_0} = -\int_0^1 \tilde u'\,\psi'$ for every $\psi \in H^1_0$:
$$\forall \varphi \in H^1, \quad -\int_0^1 \tilde u'\,\bigl(\tilde\varphi - \tilde{\dot u}\bigr)' - u(0)\,\bigl(\varphi(0) - \dot u(0)\bigr) - u(1)\,\bigl(\varphi(1) - \dot u(1)\bigr) \le \Phi(\varphi) - \Phi(\dot u),$$
which, in terms of the subdifferential of the function $\Phi$, simply amounts to
$$-u(t) \in \partial\Phi\bigl[\dot u(t)\bigr],$$
where the subdifferential is understood in the sense of the scalar product (7). Since $\Phi$ is positively homogeneous of degree 1, the conjugate function $\Phi^*$ is the indicatrix function (in the sense of convex analysis) of some closed convex set $-C(t)$. It can then be easily calculated that
$$
\begin{aligned}
C(t) &= \Bigl\{ u \in H^1 \Bigm| \forall \varphi \in H^1,\ \int_0^1 \tilde u'\,\tilde\varphi' + u(0)\,\varphi(0) + u(1)\,\varphi(1) + \Phi(\varphi) \ge 0 \Bigr\}\\
&= \Bigl\{ u \in H^1 \Bigm| u(x{=}0) = u_0^p,\ u(x{=}1) = u_1^p,\ \text{and } \forall \varphi \in H^1_0,\ \langle u'' + f, \varphi\rangle_{H^{-1},H^1_0} \le \langle \mu s, |\varphi|\rangle_{H^{-1},H^1_0} \Bigr\},
\end{aligned}
$$
and the problem to be solved is equivalent to that of finding u : [t 0 , T ] → H 1 such that
• $u(t_0) = u_0$,
• $-\dot u(t) \in \partial I_{C(t)}\bigl[u(t)\bigr]$, for a.a. $t \in [t_0,T]$,
where the subdifferential should be understood with respect to the scalar product (7). The tangential problem, therefore, obeys a sweeping process (see Appendix B) in the Hilbert space $H^1$.
Existence and uniqueness of strong solutions
In this section, it is established that the sweeping process that governs the tangential problem can be solved, in some restrictive circumstances, using the results obtained by Moreau (cf. [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] or Appendix B).
Theorem 2. Let f, s : [t 0 , T ] → H -1 , and u p 0 , u p 1 : [t 0 , T ] → R.
Let us assume that for every t ∈ [t 0 , T ], s(t) is a non-negative measure with support contained in some fixed compact interval [α, β] ⊂]0, 1[, and let us consider the set-valued mapping defined by
$$C(t) = \bigl\{ u \in H^1 \bigm| u(x{=}0) = u_0^p,\ u(x{=}1) = u_1^p, \ \text{and } \forall \varphi \in H^1_0,\ \langle u'' + f, \varphi\rangle_{H^{-1},H^1_0} \le \langle \mu s, |\varphi|\rangle_{H^{-1},H^1_0} \bigr\}.$$
Some initial condition $u_0 \in C(t_0)$ is also given. If the functions $u_0^p, u_1^p : [t_0,T] \to \mathbb{R}$, $f : [t_0,T] \to H^{-1}$, $s : [t_0,T] \to M$ have bounded variation and are right-continuous at every $t \in [t_0,T[$, then the set-valued mapping $C(t)$ has bounded retraction, and there exists a unique weak solution $u \in BV([t_0,T]; H^1)$ of the sweeping process based on $C(t)$ which agrees with the initial condition $u_0$. This weak solution is also the unique solution in the sense of "differential measures" which is right-continuous at every $t \in [t_0,T[$ (see Appendix B).
If, in addition, the functions $u_0^p, u_1^p : [t_0,T] \to \mathbb{R}$, $f : [t_0,T] \to H^{-1}$, $s : [t_0,T] \to M$ are absolutely continuous (respectively Lipschitz-continuous), then the set-valued mapping $C(t)$ has absolutely continuous (respectively Lipschitz-continuous) retraction, the solution $u : [t_0,T] \to H^1$ is absolutely continuous (respectively Lipschitz-continuous) and is the unique strong solution of the sweeping process in the sense that
• $u(t_0) = u_0$,
• $-\dot u(t) \in \partial I_{C(t)}\bigl[u(t)\bigr]$, for a.a. $t \in [t_0,T]$.
Proof. Taking e(•, •) to denote the "excess" (see Appendix A) associated with the scalar product ( 7) on H 1 , in order to prove all the claims about the retraction of C(t), one can prove that at all
$t_1 \le t_2 \in [t_0,T]$:
$$e\bigl(C(t_1), C(t_2)\bigr) \le C\Bigl( \bigl|u_0^p(t_2)-u_0^p(t_1)\bigr| + \bigl|u_1^p(t_2)-u_1^p(t_1)\bigr| + \bigl\|f(t_2)-f(t_1)\bigr\|_{H^{-1}} + \bigl\|\mu s(t_2)-\mu s(t_1)\bigr\|_M \Bigr),$$
for some real constant $C$ independent of $t_1, t_2$. We take $w_i$ ($i = 1, 2$) to denote the unique solution in $H^1$ of the linear problem
$$w_i'' + f(t_i) = 0, \qquad w_i(0) = u_0^p(t_i), \quad w_i(1) = u_1^p(t_i),$$
set $\bar s_i = \mu s(t_i)$, and
$$\bar C_i = \bigl\{ u \in H^1_0 \bigm| \forall \varphi \in H^1_0,\ \langle u'', \varphi\rangle_{H^{-1},H^1_0} \le \langle \bar s_i, |\varphi|\rangle_{H^{-1},H^1_0} \bigr\}
$$
, so that, according to these notations
$$C(t_i) = w_i + \bar C_i.$$
Since the "excess" obeys a triangle inequality (see Proposition 2 in Appendix A):
$$e\bigl(C(t_1), C(t_2)\bigr) \le \|w_2 - w_1\|_{H^1} + e\bigl(\bar C_1, \bar C_2\bigr).$$
The desired inequality will therefore be proved provided that
$$e\bigl(\bar C_1, \bar C_2\bigr) \le C\,\|\bar s_2 - \bar s_1\|_M,$$
that is, arbitrarily choosing some $u_1 \in \bar C_1$:
$$d\bigl(u_1, \bar C_2\bigr) \le C\,\|\bar s_2 - \bar s_1\|_M, \qquad\text{or}\qquad \inf_{v \in \bar C_2} \|u_1' - v'\|_{L^2} \le C\,\|\bar s_2 - \bar s_1\|_M. \qquad (8)$$
Since
$u_1 \in \bar C_1$, $u_1''$ is a measure with support contained in $[\alpha,\beta]$, and we take $v_0$ to denote the unique function in $H^1_0$ such that
$$v_0'' = \inf\bigl\{\sup\{u_1'', 0\},\ \bar s_2\bigr\} + \sup\bigl\{\inf\{u_1'', 0\},\ -\bar s_2\bigr\},$$
where the "inf" and "sup" should be understood with respect to the partial order in the space of measures. From $-\bar s_2 \le v_0'' \le \bar s_2$, we get $v_0 \in \bar C_2$ and $-|\bar s_2 - \bar s_1| \le v_0'' - u_1'' \le |\bar s_2 - \bar s_1|$, which yields
$$\|v_0'' - u_1''\|_M \le \|\bar s_2 - \bar s_1\|_M.$$
Since the imbedding of M in H -1 is continuous (in dimension one),
$$\|u_1' - v_0'\|_{L^2} \le C\,\|\bar s_2 - \bar s_1\|_M,$$
for a constant C which is independent of v 0 and u 1 . The desired conclusion (8) has now been reached. Theorem 2 is now a straightforward consequence of Moreau's results (Theorems 8 and 10) as regards the solvability of sweeping processes based on set-valued mappings with bounded retraction.
An example of an explicit solution
Let us consider the case of the evolution of a string above a fixed rigid wedge-shaped obstacle.
At instant t = 0, the middle of the string undergoes grazing contact with the top of the obstacle. Between instants t = 0 and t = 1, a "vertical" displacement of amplitude y = -1/4 is imposed on both ends of the string. Then, between instants t = 1 and t = 2, a right "horizontal" displacement of the extremities of the string is prescribed at a constant speed (see Fig. 1).
More specifically, this amounts to studying the quasi-static evolution problem for the string associated with the data ψ(x) = -|x -1/2|, and
$$u_0^p(t) = 0, \quad v_0^p(t) = -\tfrac{t}{4}, \quad u_1^p(t) = 0, \quad v_1^p(t) = -\tfrac{t}{4}, \qquad \text{for } 0 \le t \le 1,$$
$$u_0^p(t) = \tfrac{t-1}{4}, \quad v_0^p(t) = -\tfrac14, \quad u_1^p(t) = \tfrac{t-1}{4}, \quad v_1^p(t) = -\tfrac14, \qquad \text{for } 1 \le t \le 2.$$
It is easily checked that the unique solution of this evolution problem is given by
$$v(x,t) = -\tfrac{t}{2}\,\bigl|x - \tfrac12\bigr|, \quad u(x,t) = 0, \quad s = t\,\delta_{x=1/2}, \quad r = 0, \qquad \text{for } 0 \le t \le 1,$$
$$v(x,t) = -\tfrac12\,\bigl|x - \tfrac12\bigr|, \quad u(x,t) = \tfrac{t-1}{2}\,\bigl|x - \tfrac12\bigr|, \quad s = \delta_{x=1/2}, \quad r = (1-t)\,\delta_{x=1/2},$$
for $1 \le t \le \min(2,\ 1+\mu)$, and in the case $\mu < 1$:
$$v(x,t) = -\tfrac12\,\bigl|x - \tfrac12\bigr|, \quad u(x,t) = \tfrac14(t-1-\mu) + \tfrac{\mu}{2}\,\bigl|x - \tfrac12\bigr|, \quad s = \delta_{x=1/2}, \quad r = -\mu\,\delta_{x=1/2}, \qquad \text{for } 1+\mu \le t \le 2.$$
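For the reader's convenience, here is the verification of the stick phase and of the slip threshold: with $u(x,t) = \tfrac{t-1}{2}\,|x - \tfrac12|$, one has, in the sense of distributions,
$$u'' = (t-1)\,\delta_{x=1/2}, \qquad\text{so that}\qquad u'' + r = (t-1)\,\delta_{x=1/2} + (1-t)\,\delta_{x=1/2} = 0,$$
and the Coulomb threshold $|r| \le \mu s = \mu\,\delta_{x=1/2}$ is satisfied exactly as long as $t - 1 \le \mu$; this is why, when $\mu < 1$, slipping starts at $t = 1 + \mu$.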
Thanks to Theorem 2, the underlying set-valued mapping C(t) has absolutely continuous (and even Lipschitz-continuous) retraction, and u is a strong solution of the underlying sweeping process.
Since dry friction is rate-independent, it is natural to attempt to concentrate the episodes of motion prescribed on the extremities of the string at the isolated instants $t = 0, 1$. Setting $u_0^p(0) = u_1^p(0) = v_0^p(0) = v_1^p(0) = 0$, one considers the following data:
$$u_0^p(t) = 0, \quad v_0^p(t) = -\tfrac14, \quad u_1^p(t) = 0, \quad v_1^p(t) = -\tfrac14, \qquad \text{for } 0 < t < 1,$$
$$u_0^p(t) = \tfrac14, \quad v_0^p(t) = -\tfrac14, \quad u_1^p(t) = \tfrac14, \quad v_1^p(t) = -\tfrac14, \qquad \text{for } 1 \le t \le 2.$$
The motion of the string is now given by
$$v(x,t) = -\tfrac12\,\bigl|x - \tfrac12\bigr|, \quad u(x,t) = 0, \quad s = \delta_{x=1/2}, \quad r = 0,$$
for $0 < t < 1$, then, in the case where $\mu \ge 1$, by
$$v(x,t) = -\tfrac12\,\bigl|x - \tfrac12\bigr|, \quad u(x,t) = \tfrac12\,\bigl|x - \tfrac12\bigr|, \quad s = \delta_{x=1/2}, \quad r = -\delta_{x=1/2},$$
for $1 \le t \le 2$, and in the case where $\mu \le 1$ by
$$v(x,t) = -\tfrac12\,\bigl|x - \tfrac12\bigr|, \quad u(x,t) = \tfrac14(1-\mu) + \tfrac{\mu}{2}\,\bigl|x - \tfrac12\bigr|, \quad s = \delta_{x=1/2}, \quad r = -\mu\,\delta_{x=1/2},$$
for 1 t 2. In this situation, the moving set C(t) moves only by translation, but this translation involves two steps. The set-valued mapping C(t) has right-continuous retraction, the retraction is no longer absolutely continuous, and the function u is a solution of the sweeping process only in the sense of differential measures (see Definition 10).
Another example that eludes the theory
Let us consider the example of a string tightly stretched just above a rigid rectilinear ground. First, a punctual downward force of unit amplitude is applied to the middle of the string. Assuming that the friction coefficient is large (greater than 2), a right displacement of unit amplitude is prescribed on the right extremity of the string. The punctual force then starts to move to the left at a constant speed (see Fig. 2).
More specifically, this amounts to studying the quasi-static evolution problem for the string associated with the following data: $\psi \equiv 0$, $u_0^p = v_0^p = v_1^p \equiv 0$, and $u_1^p$ the function which takes the value 0 at $t = 0$ and the value 1 at every $t > 0$. In addition, the transverse body force
$$g = -\delta_{x=1/2-t}, \qquad f \equiv 0,$$
has to be taken into account. The unique solution of the transverse problem is given by $v \equiv 0$, which entails $s \equiv -g$. Since at all $t_1 < t_2 \in\,]0,1[$:
$$\|\delta_{t_2} - \delta_{t_1}\|_M = 2, \qquad \|\delta_{t_2} - \delta_{t_1}\|_{H^{-1}} = \sqrt{t_2 - t_1}\,\sqrt{1 - (t_2 - t_1)},$$
we have the following regularity for s:
$$s \notin BV\bigl([0,1/3]; M\bigr), \qquad s \notin BV\bigl([0,1/3]; H^{-1}\bigr), \qquad s \notin C^0\bigl([0,1/3]; M\bigr), \qquad s \in C^0\bigl([0,1/3]; H^{-1}\bigr).$$
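For completeness, the $H^{-1}$ value displayed above can be recomputed from the Green's function $G(x,a) = \min(x,a)\,(1-\max(x,a))$ of $-d^2/dx^2$ on $]0,1[$ with homogeneous Dirichlet conditions: if $w$ solves $-w'' = \delta_{t_2} - \delta_{t_1}$, $w(0) = w(1) = 0$, then
$$\|\delta_{t_2} - \delta_{t_1}\|_{H^{-1}}^2 = \int_0^1 w'^{\,2} = w(t_2) - w(t_1) = G(t_2,t_2) + G(t_1,t_1) - 2\,G(t_1,t_2) = (t_2 - t_1)\bigl(1 - (t_2 - t_1)\bigr).$$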
This regularity is not sufficiently strong to be able to use Theorem 2 to solve the underlying sweeping process by means of Moreau's results. However, one can consider subdividing [t 0 , T ], performing the successive projections of the catching-up algorithm, and then attempting to take a limit as the size of the largest interval of the subdivision tends to zero. In the example under consideration, strong convergence in H 1 occurring uniformly with respect to time is obtained, giving the following weak solution (in line with Definition 9) of the sweeping process:
$$u(x,t) = \begin{cases} 0, & \text{if } 0 \le x \le 1/2 - t,\\[4pt] \dfrac{x + t - 1/2}{t + 1/2}, & \text{if } 1/2 - t \le x \le 1. \end{cases}$$
However, the associated velocity,
$$\dot u(x,t) = \begin{cases} 0, & \text{if } 0 \le x < 1/2 - t,\\[4pt] \dfrac{1-x}{(t+1/2)^2}, & \text{if } 1/2 - t < x \le 1, \end{cases}$$
shows spatial discontinuity just below the load (see Fig. 3). Therefore, this weak solution does not belong to $BV([0,1/3]; H^1)$, and the underlying set-valued mapping $C(t)$ cannot have bounded retraction in the Hilbert space $H^1$ (see Theorem 8). Note, incidentally, that the value of the velocity just below the load is not defined, so that the pointwise formulation of the Coulomb law cannot be checked in this problem. The concept of the weak solution corresponds to subdividing the time interval and introducing the discrete locations of the load associated with the subdivisions. Another way of proceeding would be to "spread out" the load a little bit, by means of a spatial convolution with an approximation of the identity. This is enough to make the underlying set-valued mapping have absolutely continuous (and even Lipschitz-continuous) retraction, and thus to ensure the existence of a strong solution, with a spatially continuous velocity field, in particular. This naturally raises the question as to the existence of a limit, as the regularization tends to the identity, and the possibility that this limit may coincide with the weak solution, that is, the limit of the solutions of the time-discretized problems.
As an example, let us look at the load which is homogeneous over the spatial interval $[1/2 - t - \varepsilon,\ 1/2 - t + \varepsilon]$ and of amplitude $1/(2\varepsilon)$, where $0 < \varepsilon < 1/6$. It can be easily confirmed that the strong solution of the underlying sweeping process is
$$u_\varepsilon(x,t) = \begin{cases} 0, & \text{if } 0 \le x \le x_\varepsilon(t),\\[4pt] \dfrac{\mu}{4\varepsilon}\,\bigl(x - x_\varepsilon(t)\bigr)^2, & \text{if } x_\varepsilon(t) \le x \le \tfrac12 - t + \varepsilon,\\[4pt] 1 + \dfrac{\mu}{2\varepsilon}\,\Bigl(\tfrac12 - t + \varepsilon - x_\varepsilon(t)\Bigr)\,(x - 1), & \text{if } \tfrac12 - t + \varepsilon \le x \le 1, \end{cases}$$
where
$$x_\varepsilon(t) = 1 - \sqrt{\Bigl(\tfrac12 + t - \varepsilon\Bigr)^2 + \frac{4\varepsilon}{\mu}} \;\in\; \Bigl]\tfrac12 - t - \varepsilon,\ \tfrac12 - t + \varepsilon\Bigr[.$$
It is worth noting in this example that u ε converges towards u as ε tends to 0, in a strong sense: strong convergence in H 1 , uniformly with respect to t ∈ [0, 1/3]. The solution u ε provides an explanation of a surprising feature of the solution u of the non-regularized problem. Although the friction coefficient chosen was large enough to prevent any slipping, the elastic energy associated with u decreases with respect to time. This fact can be explained as follows. The solution u ε of the regularized problem always shows some slipping, and it can be checked that the accumulated dissipation (the time integral of the power of the friction force) tends, as ε → 0, not towards zero, but towards some finite value. It is, therefore, logical that the weak solution u of the "limit" problem should keep some memory of this dissipation, although showing no slipping itself.
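As a purely numerical complement (an editorial sketch; the grid, the finite-difference $H^1$ norm and all names are choices made here, not taken from the paper), one can tabulate $u_\varepsilon$ and $u$ from the closed-form expressions above and watch the $H^1$ distance decrease with $\varepsilon$, uniformly in $t$:

```python
import numpy as np

mu = 3.0  # any friction coefficient > 2, as assumed in the example

def u_weak(x, t):
    return np.where(x <= 0.5 - t, 0.0, (x + t - 0.5) / (t + 0.5))

def u_reg(x, t, eps):
    a = 0.5 + t - eps
    xe = 1.0 - np.sqrt(a * a + 4.0 * eps / mu)   # x_eps(t)
    u = np.zeros_like(x)
    mid = (x >= xe) & (x <= 0.5 - t + eps)
    u[mid] = mu / (4.0 * eps) * (x[mid] - xe) ** 2
    right = x > 0.5 - t + eps
    u[right] = 1.0 + mu / (2.0 * eps) * (0.5 - t + eps - xe) * (x[right] - 1.0)
    return u

x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
for eps in (1e-1, 1e-2, 1e-3):
    err = 0.0
    for t in np.linspace(0.0, 1.0 / 3.0, 7):
        d = u_reg(x, t, eps) - u_weak(x, t)
        h1 = np.sqrt(np.sum(np.diff(d) ** 2 / dx) + np.sum(d ** 2) * dx)
        err = max(err, h1)
    print(eps, err)  # the sup over t of the H^1 distance decreases with eps
```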
Weak solutions
In this section it is proved, after adopting some fairly general regularity hypotheses about the data involved in the frictional problem, that the set-valued mapping of the underlying sweeping process is Wijsman-regulated. This enables us to state the problem of the possible existence of a weak solution of the frictional contact problem. However, the question of existence of such a weak solution is left open at the moment.
More specifically, we propose to prove that the regularity obtained for the function s(t) by solving the normal problem yields a Wijsman-regulated set-valued mapping C(t).
Proposition 1. Let $f, s : [t_0,T] \to H^{-1}$, as well as $u_0^p, u_1^p : [t_0,T] \to \mathbb{R}$. Let us assume that for every $t \in [t_0,T]$, $s(t)$ is a non-negative measure having a support contained in a fixed compact interval $[\alpha,\beta] \subset\,]0,1[$, and a total mass bounded independently of $t$. Let us consider the set-valued mapping defined by
$$C(t) = \bigl\{ u \in H^1 \bigm| u(x{=}0) = u_0^p,\ u(x{=}1) = u_1^p, \ \text{and } \forall \varphi \in H^1_0,\ \langle u'' + f, \varphi\rangle_{H^{-1},H^1_0} \le \langle \mu s, |\varphi|\rangle_{H^{-1},H^1_0} \bigr\}.$$
If the functions $f, s : [t_0,T] \to H^{-1}$ and $u_0^p, u_1^p : [t_0,T] \to \mathbb{R}$ are regulated, then the set-valued mapping $C(t)$ is Wijsman-regulated.
Proof. As in the proof of Theorem 2, w(t) is defined as the unique solution (at fixed t) of the linear problem
$$w'' + f(t) = 0, \qquad w(0) = u_0^p(t), \quad w(1) = u_1^p(t),$$
and
$$\bar C(t) = \Bigl\{ u \in H^1_0 \Bigm| \forall \varphi \in H^1_0,\ \int_0^1 u'\,\varphi' \le \bigl\langle \mu s(t), |\varphi|\bigr\rangle_{H^{-1},H^1_0} \Bigr\}.$$
According to these notations
$$C(t) = w(t) + \bar C(t).$$
It should be clear that if the three functions f : [t 0 , T ] → H -1 , u p 0 , u p 1 : [t 0 , T ] → R are regulated, then the same will be true of the function w : [t 0 , T ] → H 1 . Setting
$$\bar C_n = \Bigl\{ u \in H^1_0 \Bigm| \forall \varphi \in H^1_0,\ \int_0^1 u'\,\varphi' \le \langle s_n, |\varphi|\rangle_{H^{-1},H^1_0} \Bigr\}, \qquad \bar C = \Bigl\{ u \in H^1_0 \Bigm| \forall \varphi \in H^1_0,\ \int_0^1 u'\,\varphi' \le \langle s, |\varphi|\rangle_{H^{-1},H^1_0} \Bigr\},$$
where $(s_n)$ converges strongly in $H^{-1}$ towards $s$, and given $u \in \bar C$, we take $u_n$ to denote the unique function in $H^1_0$ such that
$$u_n'' = \inf\bigl\{\sup\{u'', 0\},\ s_n\bigr\} + \sup\bigl\{\inf\{u'', 0\},\ -s_n\bigr\},$$
where the infimum and supremum should be understood in terms of the partial ordering in the space of measures. As $-s_n \le u_n'' \le s_n$, we obtain $u_n \in \bar C_n$. Now, remember that a sequence $(f_n)$ in the dual space $X'$ of a Banach space $X$ converges weakly-star towards $f$ if and only if $\|f_n\|$ is bounded, and if $\langle f_n, x\rangle \to \langle f, x\rangle$ for every $x$ in a dense subset of $X$ (see [START_REF] Yosida | Functional Analysis[END_REF], theorem 10, p. 125). Since the total mass of $s_n$ is bounded and since the restrictions of functions in $H^1_0$ to the interval $[\alpha,\beta]$ are dense in $C^0([\alpha,\beta])$, it is deduced that the strong convergence of $s_n$ towards $s$ in $H^{-1}$ entails the weak-star convergence of $s_n$ towards $s$ in $M([\alpha,\beta])$. From the definition of $u_n''$ in terms of $u''$ with $u \in \bar C$, we then have the weak-star convergence of $u_n''$ towards $u''$ in $M([\alpha,\beta])$. First, this entails pointwise convergence almost everywhere of $u_n'$ towards $u'$, and then, by dominated convergence, strong convergence in $L^2$ of $u_n'$ towards $u'$; hence $u \in \liminf_{n\to\infty} \bar C_n$.
Upon combining all these elements, we obtain
$$\limsup_{n\to\infty} \bar C_n \;\subset\; \bar C \;\subset\; \liminf_{n\to\infty} \bar C_n,$$
which is the conclusion we were looking for.
Replacing the string by a beam
Let us consider a straight beam which is simply supported at both ends and has as its initial configuration the segment [0, 1] × {0}. The linearized equations that govern the equilibrium of the beam, which is assumed to be elastic, read as follows: The equations governing the quasi-static evolution of the beam above a fixed rigid obstacle of equation y = ψ(x) with Coulomb dry friction of coefficient denoted by μ, can be written as follows:
$$k\,u'' + f = 0, \ \text{in } ]0,1[\,, \quad u(0) = u_0^p,\ u(1) = u_1^p; \qquad l\,v'''' - g = 0, \ \text{in } ]0,1[\,, \quad v(0) = v_0^p,\ v(1) = v_1^p,\ v''(0) = v''(1) = 0.$$
The equations governing the quasi-static evolution of the beam above a fixed rigid obstacle of equation $y = \psi(x)$, with Coulomb dry friction of coefficient denoted by $\mu$, can be written as follows:
$$
\begin{aligned}
&u'' + f + r = 0, && \text{in } ]0,1[\ \times [t_0,T],\\
&r\,(\hat u - \dot u) + \mu s\,\bigl(|\hat u| - |\dot u|\bigr) \ge 0, \quad \forall \hat u \in \mathbb{R}, && \text{in } ]0,1[\ \times [t_0,T],\\
&u(0) = u_0^p, \quad u(1) = u_1^p, && \text{on } [t_0,T],\\
&v'''' - g - s = 0, && \text{in } ]0,1[\,,\\
&v - \psi \ge 0, \quad s \ge 0, \quad s\,(v - \psi) \equiv 0, && \text{in } ]0,1[\ \times [t_0,T],\\
&v(0) = v_0^p, \quad v(1) = v_1^p, && \text{on } [t_0,T],\\
&v''(0) = v''(1) = 0, && \text{on } [t_0,T].
\end{aligned}
$$
The equations governing the normal component v of the displacement are still uncoupled with those governing the tangential component.
Another example
It could seem at first sight that the case of the beam brings nothing more than that of the string, except that the order of the differential operator in the variational inequality that governs the normal displacement is 4 instead of 2, whereas the problem governing the tangential displacement remains formally the same in both cases. This is true, but the fact that the operator governing the normal displacement is now of order 4 has some important effects. In particular, one can expect the solutions of the underlying sweeping process to be merely weak solutions, even when arbitrarily smooth data are available. This can be confirmed by analysing the problem with the geometry shown in Fig. 4 (frictional contact of a simply supported beam). In the initial configuration, the beam undergoes grazing contact with a smooth obstacle. The amplitude of the force is made to increase gradually with time $t$. It can easily be checked that the contact zone in the solution reduces to a single point, provided the amplitude of the force is small enough, and that this punctual contact zone is associated with a point on the obstacle that moves to the left of the figure with time. Consequently, the normal reaction $s$ is a Dirac measure whose support moves with time, as in the example given in Fig. 2. This fact remains true even in cases where the external force is "spread out" a little so as to be as smooth as desired. Therefore, one cannot expect to obtain $s \in BV([t_0,T]; M)$ by requiring the data to be smooth. The tangential problem will, therefore, generally have only weak solutions, even with smooth data.
About weak solutions
In this section, the regularity that can be expected of the function $s(t)$, and therefore of the set-valued mapping $C(t)$, is analysed in the case of beams, where the variational inequality is associated with the biharmonic operator instead of the harmonic one. It is worth noting that, under the same regularity assumptions about the data, the function $s(t)$ shows the same regularity here as in Theorem 1. This is stated in the following theorem, which combines several regularity results that are known for variational inequalities associated with the biharmonic operator.
Once Theorem 3 has been proved, Proposition 1 ensures that the underlying set-valued mapping C(t) is Wijsman-regulated, provided the data f, g
$: [t_0,T] \to H^{-1}$, $u_0^p, v_0^p, u_1^p, v_1^p : [t_0,T] \to \mathbb{R}$ are regulated functions. Theorem 3. Let us assume that $\psi \in H^3(0,1;\mathbb{R})$, $g : [t_0,T] \to H^{-1}$, and that the functions $v_0^p, v_1^p : [t_0,T] \to \mathbb{R}$ satisfy the strong compatibility condition
$$\inf_{t\in[t_0,T]} v_0^p(t) > \psi(0), \qquad \inf_{t\in[t_0,T]} v_1^p(t) > \psi(1),$$
setting:
$$K(t) = \bigl\{ v \in H^2(0,1;\mathbb{R}) \bigm| v(0) = v_0^p(t),\ v(1) = v_1^p(t),\ \forall x \in\,]0,1[\,,\ v(x) \ge \psi(x) \bigr\},$$
then there exists a unique function $v : [t_0,T] \to H^2(0,1;\mathbb{R})$ such that
• $\forall t \in [t_0,T]$, $v(t) \in K(t)$,
• $\forall t \in [t_0,T]$, $\forall \hat v \in K(t)$, $\displaystyle\int_0^1 v''\,(\hat v'' - v'') \ge \langle g, \hat v - v\rangle_{H^{-1},H^1_0}$.
Moreover, if $v_0^p, v_1^p : [t_0,T] \to \mathbb{R}$ and $g : [t_0,T] \to H^{-1}$ are regulated, then the same will be true of the function $v : [t_0,T] \to H^3$, and therefore of the function $s \stackrel{\text{def}}{=} v'''' - g : [t_0,T] \to H^{-1}$. Also, for every $t \in [t_0,T]$, $s(t)$ is a non-negative measure with support contained in $[\alpha,\beta] \subset\,]0,1[$ ($\alpha$, $\beta$ independent of $t$), whose total mass is a bounded function of $t$.
Proof. This additional regularity (H 3 instead of H 2 ) shown by the solutions of the obstacle problem associated with the biharmonic operator is a well-known fact.
Here we reproduce the proof by penalization displayed in [8, p. 270] (the reader will find there the bibliographical references on the subject), because it can readily be transposed to higher space dimensions and, in particular, to the case of the plate. To prove that the mapping v : [t 0 , T ] → H 3 thus defined is regulated, we shall use the fact that a mapping with values in a complete metric space is regulated if and only if it admits a left limit and a right limit at every point. Thus, the problem is made to focus on the stability of the solution to the biharmonic obstacle problem with respect to the data. This stability problem was studied by Adams [START_REF] Adams | The biharmonic obstacle problem with varying obstacles and a related maximal operator[END_REF], whose results are very similar to those needed here. Our method of proof is on very similar lines to those used in [START_REF] Adams | The biharmonic obstacle problem with varying obstacles and a related maximal operator[END_REF].
Step 1. Existence and uniqueness of the function v
: [t 0 , T ] → H 2 .
At every t ∈ [t 0 , T ], we take w(•, t) ∈ H 2 (0, 1; R) to denote the solution of the linear problem
$$w'''' - g = 0, \ \text{in } ]0,1[\,, \qquad w(0) = v_0^p, \quad w(1) = v_1^p, \quad w''(0) = w''(1) = 0.$$
It should be clear that w(•, t) ∈ H 3 (0, 1; R) and that the linear mapping
$$\mathbb{R} \times \mathbb{R} \times H^{-1} \to H^3, \qquad \bigl(v_0^p(t),\ v_1^p(t),\ g(t)\bigr) \mapsto w(t)$$
is continuous. In particular, if the data are regulated functions of the variable t, then the same will be true of the function w : [t 0 , T ] → H 3 . Next, we proceed with changing the unknown function
$$\bar v(x,t) = v(x,t) - w(x,t),$$
and set
$$\bar K(t) = \bigl\{ v \in H^1_0 \cap H^2 \bigm| \forall x \in\,]0,1[\,,\ v(x) \ge \psi(x) - w(x,t) \bigr\}.$$
By the Lions–Stampacchia theorem, there exists a unique $\bar v(t) \in \bar K(t)$ such that
$$\forall \hat v \in \bar K(t), \quad \int_0^1 \bar v''\,\bigl(\hat v'' - \bar v''\bigr) \ge 0, \qquad (9)$$
provided that the bilinear form $(v,w) \mapsto \int_0^1 v''\,w''$ is coercive on the Hilbert space $H^1_0 \cap H^2$ equipped with the norm $\|v\|_{H^1_0 \cap H^2} = \bigl(\|v'\|_{L^2}^2 + \|v''\|_{L^2}^2\bigr)^{1/2}$. Take $v \in H^1_0 \cap H^2 \subset C^1$. There exists $x_0 \in\,]0,1[$ such that $v'(x_0) = 0$. We obtain
$$v'(x)^2 = 2\int_{x_0}^x v'\,v'' \le 2\Bigl(\int_0^1 v'^{\,2}\Bigr)^{1/2}\Bigl(\int_0^1 v''^{\,2}\Bigr)^{1/2},$$
which entails
$$\int_0^1 v'^{\,2} \le 4\int_0^1 v''^{\,2}, \qquad (10)$$
(this is, in fact, the desired coerciveness); therefore, the existence of a unique $\bar v(t) \in \bar K(t)$ solving the variational inequality.
It is now proposed to prove that it is always possible to reduce the problem to the case where the obstacle is described by a function which vanishes at the extremities $x = 0, 1$. The function $\tilde\psi(x,t)$ will be constructed as in the proof of Theorem 1. Since $w : [t_0,T] \to H^3$ is regulated, by the conditions pertaining in (4), one can find $\alpha, \beta \in\,]0,1[$ such that
$$\forall x \in [0,\alpha],\ \forall t \in [t_0,T], \quad \psi(x) - w(x,t) < 0; \qquad \forall x \in [\beta,1],\ \forall t \in [t_0,T], \quad \psi(x) - w(x,t) < 0.$$
The function $\tilde\psi(x,t)$ can then be defined by
$$
\begin{aligned}
\tilde\psi(\lambda\alpha, t) ={}& \bigl(\lambda^3 - 3\lambda^2 + 3\lambda\bigr)\bigl[\psi(\alpha) - w(\alpha,t)\bigr] - \bigl(\lambda^3 - 3\lambda^2 + 2\lambda\bigr)\Bigl[\psi'(\alpha) - \tfrac{\partial w}{\partial x}(\alpha,t)\Bigr]\alpha\\
&+ \bigl(\lambda^3 - 2\lambda^2 + \lambda\bigr)\Bigl[\psi''(\alpha) - \tfrac{\partial^2 w}{\partial x^2}(\alpha,t)\Bigr]\tfrac{\alpha^2}{2},\\
\tilde\psi\bigl(\lambda\alpha + (1-\lambda)\beta,\ t\bigr) ={}& \psi\bigl(\lambda\alpha + (1-\lambda)\beta\bigr) - w\bigl(\lambda\alpha + (1-\lambda)\beta,\ t\bigr),\\
\tilde\psi\bigl(\lambda\beta + (1-\lambda),\ t\bigr) ={}& \bigl(\lambda^3 - 3\lambda^2 + 3\lambda\bigr)\bigl[\psi(\beta) - w(\beta,t)\bigr] - \bigl(\lambda^3 - 3\lambda^2 + 2\lambda\bigr)\Bigl[\psi'(\beta) - \tfrac{\partial w}{\partial x}(\beta,t)\Bigr](1-\beta)\\
&+ \bigl(\lambda^3 - 2\lambda^2 + \lambda\bigr)\Bigl[\psi''(\beta) - \tfrac{\partial^2 w}{\partial x^2}(\beta,t)\Bigr]\tfrac{(1-\beta)^2}{2},
\end{aligned}
$$
for every $\lambda \in [0,1]$. It can be readily checked that $\tilde\psi(t) \in H^1_0 \cap H^3$ and that
$$\|\tilde\psi(t)\|_{H^3} \le C\,\|\psi - w(t)\|_{H^3}, \qquad (11)$$
for a real constant $C$ which depends only on $\alpha$ and $\beta$ (and is therefore independent of $t$ and $w(t)$). Moreover, $\bar v''$ is convex and vanishes at the extremities $x = 0, 1$. It is therefore non-positive, and $\bar v(t)$ is a concave function of $x$. Hence, it is non-negative. Since the function $\tilde\psi(\bullet,t)$ differs from $\psi(\bullet) - w(\bullet,t)$ only at those values of $x$ where the latter is negative, this entails that $\bar v$, which solves the obstacle problem associated with $\psi - w$, also solves the obstacle problem associated with $\tilde\psi$.
Step 2. H 3 regularity of the solution at every instant.
In step 2, an arbitrary $t$ in $[t_0,T]$ is fixed once for all. Define $\breve g = -\tilde\psi'''' \in H^{-1}$ to be able to proceed with changing the unknown function:
$$\breve v = \bar v - \tilde\psi, \quad\text{so that, setting}\quad \breve K = \bigl\{ v \in H^1_0 \cap H^2 \bigm| \forall x \in\,]0,1[\,,\ v(x) \ge 0 \bigr\},$$
one obtains $\breve v \in \breve K$ and
$$\forall \hat v \in \breve K, \quad \int_0^1 \breve v''\,\bigl(\hat v'' - \breve v''\bigr) \ge \langle \breve g, \hat v - \breve v\rangle_{H^{-1},H^1_0}.$$
As in [8, p. 270], for every ε > 0, the penalized function p ε is defined as the unique solution in H 2 (0, 1; R) of the linear boundary problem
$$p_\varepsilon - \varepsilon\, p_\varepsilon'' = \breve v, \ \text{in } ]0,1[\,, \qquad p_\varepsilon(0) = p_\varepsilon(1) = 0.$$
It can be readily seen that
• $p_\varepsilon \in H^4(0,1;\mathbb{R})$,
• $p_\varepsilon''(0) = p_\varepsilon''(1) = 0$.
Moreover, if $p_\varepsilon(x_0) = \min_{[0,1]} p_\varepsilon$ for some $x_0 \in\,]0,1[$, then $p_\varepsilon''(x_0) \ge 0$; therefore, $p_\varepsilon(x_0) \ge \breve v(x_0) \ge 0$. This entails $\forall \varepsilon > 0$, $p_\varepsilon \in \breve K$. But, for all $\hat v \in \breve K$,
$$\int_0^1 \breve v''\,\bigl(\hat v'' - \breve v''\bigr) \ge \langle \breve g, \hat v - \breve v\rangle_{H^{-1},H^1_0}.$$
Applying this inequality to the case $\hat v = p_\varepsilon$, using $p_\varepsilon - \breve v = \varepsilon\,p_\varepsilon''$ and dropping the non-negative term $\varepsilon \int_0^1 (p_\varepsilon'''')^2$, one gets
$$\int_0^1 p_\varepsilon''\, p_\varepsilon'''' \ge \langle \breve g, p_\varepsilon''\rangle_{H^{-1},H^1_0}.$$
But $\breve g = -G'$ for some $G \in L^2$, and one obtains
$$-\int_0^1 (p_\varepsilon''')^2 \ge \int_0^1 G\, p_\varepsilon''', \quad\text{that is,}\quad \int_0^1 (p_\varepsilon''')^2 \le -\int_0^1 G\, p_\varepsilon''',$$
and as a result:
$$\|p_\varepsilon'''\|_{L^2} \le \|G\|_{L^2} = \|\breve g\|_{H^{-1}}.$$
By the Poincaré inequality (recall that $p_\varepsilon''$ vanishes at both ends),
$$\|p_\varepsilon''\|_{L^2} \le C\,\|p_\varepsilon'''\|_{L^2},$$
for a constant $C$ independent of $\varepsilon$. Recalling $p_\varepsilon \in H^1_0 \cap H^2$ and inequality (10), one obtains
$$\forall \varepsilon > 0, \quad \|p_\varepsilon\|_{H^3} \le C\,\|\breve g\|_{H^{-1}}, \qquad (12)$$
for a constant $C$ independent of $\varepsilon$, as well as of $\breve g$. This inequality yields $\|p_\varepsilon''\|_{L^2} \le C\,\|\breve g\|_{H^{-1}}$ first, and then $\|p_\varepsilon - \breve v\|_{L^2} \le C\varepsilon\,\|\breve g\|_{H^{-1}}$, which shows that $p_\varepsilon$ tends towards $\breve v$ strongly in $L^2$ as $\varepsilon$ tends to $0{+}$. Also, by virtue of (12), there exists a subsequence converging weakly in $H^3$. Since weak convergence in $H^3$ entails, in particular, strong convergence in $L^2$, the limit must be $\breve v$, which therefore belongs to $H^3$.
Step 3. Regularity of the dependence of the solution on time.
Since a function with values in a complete metric space is regulated if and only if it admits a left limit and a right limit at every point, it suffices to prove the following stability result:
$$\lim_{n\to+\infty} \|w - w_n\|_{H^3} = 0 \ \Longrightarrow\ \lim_{n\to+\infty} \|\bar v - \bar v_n\|_{H^3} = 0,$$
where $\bar v$ (respectively $\bar v_n$) is the solution of inequality (9) involving the data $w$ (respectively $w_n$). The proof of this stability result is largely inspired by Adams' technique [START_REF] Adams | The biharmonic obstacle problem with varying obstacles and a related maximal operator[END_REF]. Denote $s_n = \bar v_n''''$ (respectively, $s = \bar v''''$). These distributions are non-negative (that is, they take non-negative values at every $C^\infty$ test function with compact support), and they are, therefore, measures. A double integration by parts yields
$$\int_0^1 \bigl(\bar v'' - \bar v_n''\bigr)^2 = \int_0^1 (\bar v - \bar v_n)\,d(s - s_n) \le \int_0^1 (w_n - w)\,d(s - s_n),$$
since $\bar v_n = \psi - w_n$ on $\operatorname{supp} s_n$ ($\bar v = \psi - w$ on $\operatorname{supp} s$) and $\bar v_n \ge \psi - w_n$ on $[0,1]$ ($\bar v \ge \psi - w$ on $[0,1]$). This entails
$$\lim_{n\to+\infty} \|\bar v - \bar v_n\|_{H^2} = 0, \qquad (13)$$
provided the total mass of the non-negative measure $s_n = \bar v_n''''$ is bounded independently of $n$. To prove this, take $[\alpha,\beta] \subset\,]0,1[$ such that $\psi - w_n < 0$ on $]0,1[\ \setminus [\alpha,\beta]$. Since $\bar v_n \ge 0$, $\operatorname{supp} s_n \subset [\alpha,\beta]$.
Moreover, for every compact set $K \subset\,]0,1[$, one can find a non-negative function $\xi \in C^\infty_0(]0,1[)$ which equals 1 identically on $K$. This entails
$$s_n(K) \le \int \xi\, ds_n \le \|\xi''\|_{L^2}\,\|\bar v_n''\|_{L^2}. \quad\text{Since}\quad \int_0^1 \bar v_n''^{\,2} = \int_{[0,1]} (\psi - w_n)\,ds_n \le \bigl\|(\psi - w_n)^+\bigr\|_{L^\infty}\; s_n\bigl(\operatorname{supp}\,(\psi - w_n)^+\bigr),$$
where $x^+ = \max\{x,0\}$, the choice $K_1 = \operatorname{supp}\,(\psi - w_n)^+$ yields
$$\int_0^1 \bar v_n''^{\,2} \le \bigl\|(\psi - w_n)^+\bigr\|_{L^\infty}\,\|\xi_1''\|_{L^2}\,\|\bar v_n''\|_{L^2}, \quad\text{that is,}\quad \|\bar v_n''\|_{L^2} \le \bigl\|(\psi - w_n)^+\bigr\|_{L^\infty}\,\|\xi_1''\|_{L^2}.$$
It then suffices to set $K_2 = [\alpha,\beta]$
to obtain the desired estimate of the total mass of the non-negative measure s n :
$$s_n(]0,1[) = s_n([\alpha,\beta]) \le \|\xi_1''\|_{L^2}\,\|\xi_2''\|_{L^2}\,\bigl\|(\psi - w_n)^+\bigr\|_{L^\infty}. \qquad (14)$$
Next, from inequalities (11) and (14), we find that
$$\|\bar v_n\|_{H^3} \le C,$$
for some real constant $C$ independent of $n$. Consequently, there exists a subsequence of $(\bar v_n)$ converging weakly in $H^3$. But in view of (13), this weak limit must be $\bar v$. Recalling that the weak topology of a closed ball in a separable Hilbert space is metrizable and that a sequence with values in a compact metric space having a unique cluster value must converge towards it, one can deduce that the whole sequence $\bar v_n$ converges weakly towards $\bar v$ in $H^3$. We now propose to prove that this convergence is actually strong. One has
$$\int_0^1 \bigl(\bar v_n''' - \bar v'''\bigr)^2 = -\int_0^1 \bigl(\bar v_n'' - \bar v''\bigr)\,(ds_n - ds).$$
But, since $\bar v_n''(0) = \bar v''(0) = 0$ and the sequence $\bar v_n$ converges weakly towards $\bar v$ in $H^3$, the sequence $\bar v_n'' - \bar v''$ must converge pointwise towards 0 and be bounded by a constant $C$ which is independent of $x$ and $n$. By Egoroff's theorem, there exists a measurable subset $M$ of $[0,1]$ such that the sequence $\bar v_n'' - \bar v''$ converges towards 0 uniformly on $[0,1] \setminus M$, where $s(M)$ is as small as desired. Thus
$$\int_{[0,1]\setminus M} \bigl|\bar v_n'' - \bar v''\bigr|\,(ds_n + ds) \le \varepsilon\,\bigl[s([0,1]) + s_n([0,1])\bigr],$$
which is controlled by estimate (14). Moreover
$$\int_M \bigl|\bar v_n'' - \bar v''\bigr|\,(ds_n + ds) \le \bigl(\|\bar v_n''\|_{L^\infty} + \|\bar v''\|_{L^\infty}\bigr)\,\bigl[s(M) + s_n(M)\bigr].$$
Since $\|\bar v_n''\|_{L^\infty}$ is bounded, the desired conclusion will be reached as soon as it is checked that $s_n(M)$ is controlled as well; since, for every $\xi \in C^\infty_0(]0,1[)$,
$$\int \xi\, ds_n = -\int_0^1 \xi'\,\bar v_n''',$$
this is a consequence of the weak convergence in $H^3$ of $\bar v_n$ towards $\bar v$.
Existence of weak solutions and related open problems
The following example is presented to show that, with the regularity that was proved above of the friction threshold s (Theorems 1 and 3), there may exist no weak solution to the frictional quasi-static problem. Incidentally, this example shows that a sweeping process associated with an arbitrary Wijsman-regulated set-valued mapping need not have any weak solution.
Example. Let us consider the initial condition defined by u 0 (x) = 1 -2|x -1/2| with x ∈]0, 1[. The displacements prescribed at the extremities, as well as the body forces, are assumed to vanish identically u p 0 ≡ u p 1 ≡ 0, f ≡ 0. Assuming that the friction coefficient is larger than 2 in order to prevent any slipping, one assumes the measure s(t) to be a "moving Dirac measure" δ p(t) at position x = p(t). The position p(t) will be an oscillating function around x = 1/2, which is continuous but shows unbounded variation. To define the function p(t), take a sequence α n in ]0, 1/4[ converging towards 0 such that ∞ n=0 α n = ∞. Then set
$$p(0) = \tfrac12, \qquad p(t) = \begin{cases} \tfrac12 + (-1)^n\, 2^{2n+2}\,\alpha_n\,\Bigl(t - \dfrac{1}{2^{2n+2}}\Bigr), & \text{if } t \in \Bigl[\dfrac{1}{2^{2n+2}},\ \dfrac{1}{2^{2n+1}}\Bigr],\\[8pt] \tfrac12 + (-1)^n\, 2^{2n+1}\,\alpha_n\,\Bigl(\dfrac{1}{2^{2n}} - t\Bigr), & \text{if } t \in \Bigl[\dfrac{1}{2^{2n+1}},\ \dfrac{1}{2^{2n}}\Bigr].\end{cases}$$
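To substantiate the claim that $p$ is continuous with unbounded variation (a short computation spelled out here): on each period $[2^{-(2n+2)},\ 2^{-2n}]$, $p$ travels from $1/2$ to $1/2 + (-1)^n \alpha_n$ and back, so that
$$\operatorname{Var}\bigl(p;\ [2^{-(2n+2)},\ 2^{-2n}]\bigr) = 2\,\alpha_n, \qquad \operatorname{Var}\bigl(p;\ ]0,1]\bigr) = 2\sum_{n=0}^{\infty} \alpha_n = \infty,$$
while $|p(t) - 1/2| \le \sup_{m \ge n} \alpha_m$ on $[0,\ 2^{-2n}]$, together with $\alpha_n \to 0$, guarantees continuity at $t = 0$.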
It can be readily checked that the support of the measure $\delta_{p(t)}$ is contained in $[1/4, 3/4]$, its total mass equals 1, and $\delta_{p(t)} \in C^0([0,1]; H^{-1})$. From Proposition 1, it follows that the set-valued mapping $C(t)$ associated with the underlying sweeping process is Wijsman-regulated.

When these two sets equal a set $L$ (necessarily closed), it will be said that the sequence $C_n$ converges in the sense of Kuratowski towards $L$, which will be written:
lim n→∞ C n = L .
Definition 4. A sequence C n : N → P(E) of subsets of E will be said to converge in the sense of Wijsman towards a closed set
$L \subset E$ if
$$\forall x \in E, \quad \lim_{n\to\infty} d(x, C_n) = d(x, L).$$
The interest of convergence in the sense of Wijsman is that it is induced by a natural topology in the class of all nonempty closed subsets of E: the weak topology generated by the family of functions d(x, •), when x covers E, which is called Wijsman's topology. Theorem 4. [START_REF] Beer | A Polish topology for the closed subsets of a Polish space[END_REF] Let (E, d) be a complete separable metric space. Then the class of nonempty closed subsets of E equipped with Wijsman's topology is separable, and there is a complete metric compatible with the topology.
A link between convergence in the sense of Hausdorff and convergence in the sense of Kuratowski is provided by the following proposition (a proof of which can be found in [START_REF] Moreau | Multiapplications à rétraction finie[END_REF]).
$$\lim_{n\to\infty} h(C_n, L) = 0 \ \Longrightarrow\ \lim_{n\to\infty} C_n = L.$$
If all the C n are contained in a fixed compact set K ⊂ E (∀n ∈ N, C n ⊂ K ), then the converse is true.
A link between convergence in the sense of Kuratowski and convergence in the sense of Wijsman is provided by the following proposition. Proof. This is a straightforward consequence of the following two simple statements:
$$\Bigl(\forall x \in E,\ d(x,L) \ge \limsup_{n\to\infty} d(x,C_n)\Bigr) \Rightarrow L \subset \liminf_{n\to\infty} C_n, \qquad \Bigl(\forall x \in E,\ d(x,L) \le \liminf_{n\to\infty} d(x,C_n)\Bigr) \Rightarrow L \supset \limsup_{n\to\infty} C_n.$$
Definition 5. [START_REF] Moreau | Multiapplications à rétraction finie[END_REF] A set-valued mapping $C : [t_0,T] \to P(E)$ will be said to have bounded retraction if
$$\operatorname{ret}(C; t_0, T) \stackrel{\text{def}}{=} \sup \sum_{i=1}^n e\bigl(C(t_{i-1}), C(t_i)\bigr) < \infty,$$
where the supremum is taken over all the finite sequences $t_0 \le t_1 \le t_2 \le \cdots \le t_n = T$. The function $t \mapsto \operatorname{ret}(C; t_0, t)$ thus defined is non-decreasing. Theorem 5. [START_REF] Moreau | Multiapplications à rétraction finie[END_REF] Let $C : [t_0,T] \to P(E)$ be a set-valued mapping with bounded retraction. Then, $C(t)$ admits a left limit $C(t-)$ in the sense of Kuratowski at every $t \in\,]t_0,T]$, and a right limit $C(t+)$ at every $t \in [t_0,T[$. Definition 6. A set-valued mapping $C : [t_0,T] \to P(E)$ will be said to have absolutely continuous retraction if, for all $\varepsilon > 0$, some $\eta > 0$ can be found such that for every finite collection $]\sigma_i, \tau_i[\ \subset [t_0,T]$ of non-overlapping open intervals, the following statement:
$$\sum_i (\tau_i - \sigma_i) < \eta \ \Longrightarrow\ \sum_i e\bigl(C(\sigma_i), C(\tau_i)\bigr) < \varepsilon,$$
holds true, and to have Lipschitz-continuous retraction if there exists $L \ge 0$ such that
$$\forall s \le t \in [t_0,T], \quad e\bigl(C(s), C(t)\bigr) \le L\,(t - s).$$
The following proposition accounts for the terminology used here.
Proposition 5. [START_REF] Moreau | Multiapplications à rétraction finie[END_REF] Let C : [t 0 , T ] → P(E) be a set-valued mapping. The following two claims are then equivalent:
(i) C has absolutely continuous (respectively Lipschitz-continuous) retraction.
(ii) C has bounded retraction and the non-decreasing real-valued function τ → ret (C; t 0 , τ ) is absolutely continuous (respectively Lipschitz-continuous).
On similar lines, we have the following proposition: Proposition 6. [START_REF] Moreau | Multiapplications à rétraction finie[END_REF] Let C : [t 0 , T ] → P(E) be a set-valued mapping with bounded retraction. The following three claims are then equivalent:
(i) C has right-continuous retraction at t ∈ [t 0 , T [ (that is, the real-valued function τ → ret (C; t 0 , τ ) is right-continuous at t). (ii) lim τ →t+ e(C(t), C(τ )) = 0. (iii) C(t) ⊂ C(t+).
Classically, a function f : [t 0 , T ] → E is said to be regulated if there is a sequence of step functions converging towards f uniformly with regard to t ∈ [t 0 , T ]. In the specific case where the metric space is complete, a function f : [t 0 , T ] → E is regulated if and only if it admits a left limit f (t-) at every t ∈ ]t 0 , T ] and a right limit f (t+) at every t ∈ [t 0 , T [. Definition 7. A set-valued mapping C : [t 0 , T ] → P(E) with non-empty closed values, will be said to be Wijsman-regulated if it is regulated as a mapping with values in the class of nonempty closed subsets of E equipped with Wijsman's topology.
In what follows, only the specific case where the metric space (E, d) is a separable Hilbert space H will be considered. The scalar product will be denoted by (• | •), the norm by • and the closed ball with center c and radius r by B(c, r ). The notation C(H ) will stand for the class consisting of the non-empty closed convex subsets of H . Theorem 6. Let C n : N → C(H ) be a sequence of nonempty closed convex subsets of H . If this sequence has a non-empty limit L in the sense of Kuratowski, then L is convex, and the following statement holds true:
∀x ∈ H, lim n→∞ proj [x, C n ] = proj [x, L] .
Proof. Fix x ∈ H arbitrary and set
x n = proj [x, C n ] , l = proj [x, L] .
It has to be proved that the sequence (x n ) converges strongly towards l. Let c ∈ L be arbitrary. The definition of lim n→∞ C n (convergence in the sense of Kuratowski) gives:
∀m ∈ N, ∃N_{c,m} ∈ N, ∀n ≥ N_{c,m}, d(c, C_n) < 1/(m + 1).  (15)
Setting c = l, m = 0 and removing finitely many terms of the sequence, if necessary, we obtain
d (l, C n ) < 1.
Hence, the sequence (x_n) takes values in the closed ball having center x and radius 1 + 2‖l − x‖. Therefore, a subsequence, still denoted by (x_n), converges weakly towards some l̄ ∈ B(x, 1 + 2‖l − x‖).
Next, fix c ∈ L and m ∈ N arbitrarily. From statement (15), we can find N ∈ N such that

∀n ≥ N, ∃b_n ∈ B(0, 1), c + b_n/(m + 1) ∈ C_n.

With n ≥ N, we obtain

(x − x_n | c + b_n/(m + 1) − x_n) ≤ 0; therefore, (x − x_n | c − x_n) ≤ (1 + 2‖l − x‖)/(m + 1).

Taking the infimum limit n → ∞ in this inequality, one obtains

(x − l̄ | c) − (x | l̄) + lim inf_{n→∞} ‖x_n‖² ≤ (1 + 2‖l − x‖)/(m + 1);

therefore, since lim inf_{n→∞} ‖x_n‖² ≥ ‖l̄‖²,

∀m ∈ N, ∀c ∈ L, (x − l̄ | c − l̄)_H ≤ (1 + 2‖l − x‖)/(m + 1),

which yields l̄ = l, because of the uniqueness of the projection of a point onto a closed convex subset of a Hilbert space. Remembering that the weak topology in a closed ball of a separable Hilbert space is metrizable and that a sequence in a compact metric space that has a unique cluster value converges towards it, it has actually been proved that the whole sequence converges weakly towards l (with no need to extract any subsequences). Finally, since ‖x − x_n‖ = d(x, C_n), setting c = l in statement (15) yields

∀m ∈ N, ∃N_m ∈ N, ∀n ≥ N_m, ‖x − x_n‖ ≤ ‖x − l‖ + 1/(m + 1),

so that lim sup_{n→∞} ‖x − x_n‖ ≤ ‖x − l‖. Combined with the weak convergence of (x_n) towards l, this classically entails the strong convergence of (x_n) towards l.
Corollary 1. Let C_n : N → C(H) be a sequence of non-empty closed convex subsets of H, and L ∈ C(H). The following three claims are then equivalent: (i) lim_{n→∞} C_n = L (in the sense of Kuratowski), (ii) ∀x ∈ H, lim_{n→∞} d(x, C_n) = d(x, L), (iii) ∀x ∈ H, lim_{n→∞} proj[x, C_n] = proj[x, L].
Proof. The identity:
d(x, C n ) = d (x, proj [x, C n ]) ,
gives (iii) ⇒ (ii); Proposition 4 gives (ii) ⇒ (i); and, finally, Theorem 6 is exactly (i) ⇒ (iii).
In particular, with sequences of non-empty closed, convex subsets in a separable Hilbert space, convergence in the sense of Kuratowski and in the sense of Wijsman is the same. In the specific case of finite-dimensional Hilbert spaces, this fact was first proved by Wijsman in 1966 (see [START_REF] Rockfellar | Variational Analysis[END_REF]) for sequences of non-empty closed subsets which are not necessarily convex. Corollary 1 is simply a particular case of more general extensions of Wijsman's theorem to infinite dimensions which were reviewed in [START_REF] Beer | Wijsman convergence: a survey[END_REF] in 1994. The aim of the following example is to show that in an infinite-dimensional Hilbert space, the additional assumption of convexity cannot be relaxed.
Example. Take e_n to denote the vectors of the canonical basis of l². For all n ∈ N, set C_n = {2e_0, e_n}, L = {2e_0}. It can be readily checked that lim_{n→∞} C_n = L in the sense of Kuratowski, but d(0, C_n) = 1, whereas d(0, L) = 2.

Proposition 7. Let C : [t0, T] → C(H) be an arbitrary set-valued mapping with non-empty closed convex values. The following three statements are then equivalent:
(i) C is Wijsman-regulated.
(ii) C admits a non-empty left limit in the sense of Kuratowski (notation C(t−)) at every t ∈ ]t0, T] and a non-empty right limit (notation C(t+)) at every t ∈ [t0, T[.
(iii) For all x ∈ H, the mapping [t0, T] → H, t ↦ proj[x, C(t)] is regulated.
Proof. This is a straightforward consequence of Theorem 4 and Corollary 1.
We are now able to list some classes of Wijsman-regulated set-valued mappings.

Proposition 8. Every set-valued mapping C : [t0, T] → C(H) with non-empty closed convex values and bounded retraction is Wijsman-regulated.

Proof. This is a straightforward consequence of Theorem 5 and Proposition 7.
Hence, in the case of set-valued mappings with non-empty closed convex values in a Hilbert space, the class consisting of the Wijsman-regulated set-valued mappings contains the class consisting of the set-valued mappings with bounded retraction. Another important class of Wijsman-regulated set-valued mappings is that of the set-valued mappings that are regulated in the sense of the Hausdorff distance (Theorem 7 below).
u_0^p e_x + v_0^p e_y and u_1^p e_x + v_1^p e_y are the prescribed displacements at both ends x = 0 and x = 1.

Fig. 1. Elastic string in frictional contact with a wedge-shaped obstacle.

Fig. 2. Frictional contact between an elastic string and a rigid floor.

Fig. 3. Longitudinal displacement and velocity at the initial instant as well as at some later instant (dashed lines).

… where the traction stiffness k and the flexion stiffness l will equal 1 in what follows by choosing the units appropriately, and u_0^p e_x + v_0^p e_y and u_1^p e_x + v_1^p e_y are the prescribed displacements at extremities x = 0 and x = 1, respectively.
lim_{n→+∞} s_n(M) = s(M) has been proved. It then suffices to establish that, for all functions ξ ∈ C_0^∞, …
Definition 3. Let C_n : N → P(E) be a sequence of subsets of E. The two closed sets (possibly empty) defined by

lim inf_{n→∞} C_n = { x ∈ E : lim sup_{n→∞} d(x, C_n) = 0 },
lim sup_{n→∞} C_n = { x ∈ E : lim inf_{n→∞} d(x, C_n) = 0 },

are called, respectively, the infimum limit and the supremum limit of the sequence (C_n).

Proposition 3. Let C_n : N → P(E) be a sequence of subsets of E, and L a closed set. If C_n converges towards L in the sense of Hausdorff, then C_n converges towards L in the sense of Kuratowski.
Theorem 7. A set-valued mapping C : [t0, T] → C(H) is said to be regulated in the sense of the Hausdorff distance if there exists a sequence C_n : [t0, T] → P(H) of piecewise constant set-valued mappings such that the sequence of real-valued functions t ↦ h(C_n(t), C(t)) converges uniformly towards 0. Any set-valued mapping C : [t0, T] → C(H) which is regulated in the sense of the Hausdorff distance is Wijsman-regulated. Moreover, in those cases where the values of C are contained in a fixed compact subset K ⊂ H (∀t ∈ [t0, T], C(t) ⊂ K), the converse is true.

Proof. Necessary condition. Let us consider a set-valued mapping C : [t0, T] → C(H) which is regulated in the sense of the Hausdorff distance. Based on Proposition 7, the conclusion targeted will be reached if, at an arbitrary t ∈ [t0, T[, it can be proved that lim inf_{τ→t+} C(τ) ≠ ∅ and lim inf_{τ→t+} C(τ) = lim sup_{τ→t+} C(τ).

• First let us prove that the infimum limit is non-empty. There exists a piecewise constant set-valued mapping C_{n0} such that ∀t ∈ [t0, T], h(C_{n0}(t), C(t)) ≤ 1/2, and a finite collection {a_k} of elements of H such that all the C_{n0}(t) contain at least one of the a_k. Let B be a closed ball with center a_0 and a radius larger than 2 plus the maximum of the distance from a_0 to one of the a_k. Then, for all n ∈ N, there exists a piecewise constant set-valued mapping C_n : [t0, T] → P(H) such that ∀t ∈ [t0, T], h(C_n(t), C(t)) ≤ 1/(n + 1) and B ∩ C_n(t) ≠ ∅. This entails

∀n ∈ N, ∃x_n ∈ B, ∃η_n > 0, ∀τ ∈ ]t, t + η_n[, d(x_n, C(τ)) < 1/(n + 1).

One can then extract a subsequence, which is still written (x_n), that converges weakly towards l ∈ B. It is now proposed to prove that l ∈ lim inf_{τ→t+} C(τ). Fix m ∈ N. Based on Mazur's theorem, there exists a convex combination c_m of the x_n such that d(l, c_m) < 1/(m + 1). Since all the x_n in that convex combination can be chosen with arbitrarily large ranks, one can assume

∃η > 0, ∀τ ∈ ]t, t + η[, d(x_n, C(τ)) < 1/(m + 1),

for all the x_n in that convex combination. In addition, the convexity of C(τ) + B(0, 1/(m + 1)) entails

∀τ ∈ ]t, t + η[, d(c_m, C(τ)) ≤ 1/(m + 1),

and the conclusion targeted is reached, since d(l, C(τ)) ≤ d(l, c_m) + d(c_m, C(τ)); that is,

lim_{τ→t+} d(l, C(τ)) = 0.

• It still remains to be proved that the infimum limit equals the supremum limit. Let h ∈ lim sup_{τ→t+} C(τ), and ε > 0. Since C(t) is regulated in the sense of the Hausdorff distance, one can find a set C_m ⊂ H and a real number η > 0 such that

∀τ ∈ ]t, t + η[, h(C_m, C(τ)) < ε/3.

Since h ∈ lim sup_{τ→t+} C(τ), ∃τ̃ ∈ ]t, t + η[, d(h, C(τ̃)) < ε/3. Therefore, for all τ ∈ ]t, t + η[,

d(h, C(τ)) ≤ d(h, C(τ̃)) + h(C(τ̃), C_m) + h(C_m, C(τ)) < ε,

which proves that h ∈ lim inf_{τ→t+} C(τ).

Sufficient condition. Let C(t) be a Wijsman-regulated set-valued mapping with values contained in a fixed compact set. By using both Propositions 7 and 3, this set-valued mapping admits left and right limits in the sense of Hausdorff at every t. Therefore, choosing n ∈ N and t ∈ [t0, T] arbitrarily, one obtains

∃η_t > 0, ∀τ ∈ ]t − η_t, t[, h(C(τ), C(t−)) < 1/(n + 1), ∀τ ∈ ]t, t + η_t[, h(C(τ), C(t+)) < 1/(n + 1).

From the open sets ]t − η_t, t + η_t[ defining a covering of the compact [t0, T], a finite subcovering defined by t0 < t1 < t2 < ⋯ < tn = T can be extracted. Let us define a piecewise constant set-valued mapping C_n by

∀i, C_n(t_i) = C(t_i), and ∀i, ∀τ ∈ ]t_{i−1}, t_i[, C_n(τ) = C((t_{i−1} + t_i)/2).

From this definition, for all τ ∈ ]t_{i−1}, t_i[, one obtains

h(C_n(τ), C(τ)) ≤ h(C(τ), C(t_i−)) + h(C(t_i−), C_n(t_i−)) + h(C_n(t_i−), C_n(τ)) < 1/(n + 1) + 1/(n + 1) + 0 = 2/(n + 1),

which shows that C(t) is regulated in the sense of the Hausdorff distance.

Corollary 2. Every set-valued mapping C : [t0, T] → C(H) which is continuous in the sense of the Hausdorff distance, that is,

∀ε > 0, ∃η > 0, ∀τ ∈ ]t − η, t + η[, h(C(τ), C(t)) < ε,

is Wijsman-regulated.

Another class of Wijsman-regulated set-valued mappings is provided by the class of non-increasing set-valued mappings with non-empty closed convex values.

Proposition 9. Let C : [t0, T] → C(H) be a set-valued mapping with non-empty closed convex values, which is assumed to be non-increasing in the sense that

∀t1, t2 ∈ [t0, T], t1 ≤ t2 ⇒ C(t2) ⊂ C(t1).

Then C(t) is Wijsman-regulated.

Proof. It can be readily checked that

lim sup_{τ→t−} C(τ) ⊂ ∩_{τ∈[t0,t[} C(τ) ⊂ lim inf_{τ→t−} C(τ), lim sup_{τ→t+} C(τ) ⊂ ∩_{τ∈]t,T]} C(τ) ⊂ lim inf_{τ→t+} C(τ),

which shows that C admits the left and right limits

C(t−) = ∩_{τ∈[t0,t[} C(τ), C(t+) = ∩_{τ∈]t,T]} C(τ),

which are non-empty since they contain C(T). Proposition 7 now yields the conclusion targeted.

In kinematic terms, C(t) is a moving convex set and u(t) a point in that set (u(t) ∈ C(t), since ∂I_{C(t)}[•] is empty at any point which does not belong to C(t)). The evolution problem under consideration, therefore, has a geometrical interpretation, which is especially clear if C(t) has a non-empty interior. Indeed, whenever u(t) is an interior point, ∂I_{C(t)}[u(t)] reduces to {0} and the point u(t) must remain at rest until meeting the boundary of C(t). It then proceeds in an inward normal direction, as if it were pushed by the boundary so as to go on belonging to C(t). The name "sweeping process", which was coined by Jean Jacques Moreau, refers to this vivid mechanical interpretation.

(where s_n and s are non-negative measures with their support in [α, β], having a total mass which is bounded independently of n) and taking into account Theorem 4, one must now prove that if the sequence (s_n) converges strongly towards s in H^{−1}, then lim_{n→∞} C_n = C, in the sense of Kuratowski. Choosing u ∈ lim sup_{n→∞} C_n arbitrarily, one finds that there exists a subsequence of (s_n), which is still denoted by (s_n), and a sequence (u_n) in H^1_0 such that (u_n) converges strongly towards u in L² and

∀ϕ ∈ H^1_0, ∀n ∈ N, ∫_0^1 u_n′ ϕ′ ≤ ⟨s_n, |ϕ|⟩_{H^{−1}, H^1_0}.

If n tends to infinity, it can be seen that u ∈ C; hence, lim sup_{n→∞} C_n ⊂ C. Now let us take arbitrary u ∈ C. Noting that u″ is a measure with support in [α, β], set …
Any function u ∈ BV([t0, T], H) is classically associated with its differential measure or Stieltjes measure du ∈ M([t0, T], H). It satisfies, in particular, ∫_{]s,t]} du = u(t+) − u(s+).

Definition 10. [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] Let C : [t0, T] → C(H) be a set-valued mapping whose values are non-empty closed and convex. The function u ∈ BV([t0, T], H) will be said to be a solution of the sweeping process in the sense of "differential measures" if there exists (non uniquely) a non-negative real measure μ, as well as a function u′ ∈ L¹_loc([t0, T]; H), such that du = u′μ and

∀t ∈ [t0, T], −u′(t) ∈ ∂I_{C(t)}[u(t)].

Proposition 15. [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] Let C : [t0, T] → C(H) be a set-valued mapping with non-empty closed convex values, and u1, u2 ∈ BV([t0, T], H) be two solutions in the sense of differential measures of the associated sweeping process. These two solutions are assumed to be both right-continuous, and to agree with the same initial condition u1(t0) = u2(t0) = a. Then ∀t ∈ [t0, T], u1(t) = u2(t).

Theorem 10. [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] Let C : [t0, T] → C(H) be a set-valued mapping with non-empty closed convex values and which is assumed to have bounded right-continuous retraction. Then every weak solution of the associated sweeping process (which is a function with bounded variation by virtue of Theorem 8 and right-continuous by virtue of Propositions 6 and 11) will also be a solution in the sense of differential measures. If, in addition, C : [t0, T] → C(H) is assumed to show absolutely continuous retraction, then every weak solution will be a strong solution in the sense that

for a.a. t ∈ [t0, T], −u′(t) ∈ ∂I_{C(t)}[u(t)].
Next, set …, so that the sweeping process based on the associated C_n(t) admits a weak solution u_n(t), which can be explicitly computed. It can easily be checked that, for all m ≥ n, …
This estimate entails lim_{n→∞} u_n(t) = 0 at all t ∈ ]0, 1]. If we go back to the sweeping process based on C(t), taking u_P(t) to denote the piecewise constant function associated with a given subdivision P by use of the catching-up algorithm, it can be readily checked that the net u_P(t) converges pointwise towards the following function: …
The convergence cannot be uniform on [0, 1], because otherwise the limit would be right-continuous at 0, in view of Proposition 11. The sweeping process based on C(t), which was found above to be Wijsman-regulated, therefore does not have any weak solution in the sense of Definition 9.
It might seem that pointwise convergence of the net u_P(t) could be allowed by weakening the definition of a weak solution. However, one can model a rigid motion of a segment C(t) in R² such that C(t) is Wijsman-regulated and the corresponding net u_P(t) does not converge, even pointwise. Our Definition 9 of weak solutions of sweeping processes based on Wijsman-regulated set-valued mappings therefore seems to be appropriate. However, since a weak solution does not necessarily exist, some problems still remain to be solved.
Open problem 1.
Find regularity assumptions about s(t) compatible with a "moving Dirac measure", where the existence of a weak solution to the underlying sweeping process could be proved. Of course, the regularity assumptions will have to be weak enough to be ensured by requiring that the data involved in the "normal problem" show some regularity.
Open problem 2.
In cases where regularizing s(t) by performing spatial convolution with a mollifier gives a set-valued mapping C(t) with bounded retraction, is it true that the corresponding solutions of the associated sweeping processes converge uniformly with t towards a limit? If so, and a weak solution of the sweeping process based on C(t) does exist, are both limits necessarily equal?

Open problem 3. In cases where the sweeping process based on C(t) admits a weak solution u(t), is it true that u is a function of bounded variation of x at every t?

Appendix A

The excess of a subset A of E over a subset B is defined by e(A, B) = sup_{x∈A} d(x, B), where the supremum should be understood with respect to the order on [0, +∞], so that, in particular, e(∅, B) = 0.
The Hausdorff "distance" between the two subsets A and B of E is defined by h(A, B) = max(e(A, B), e(B, A)).
A key fact, which is recalled in the following proposition, is that the excess gives rise to a triangular inequality.
Proposition 2. For all A, B, C ⊂ E, we have (i) e(A, B) = 0 if and only if A is contained in the closure of B, (ii) h(A, B) = 0 if and only if A and B have the same closure, (iii) e(A, C) ≤ e(A, B) + e(B, C), (iv) h(A, C) ≤ h(A, B) + h(B, C).
The class of all non-empty closed bounded subsets of E equipped with the Hausdorff distance is a metric space. Hence, the Hausdorff distance defines a notion of limit for sequences C_n : N → P(E) of subsets of E. Definition 2. A sequence C_n : N → P(E) of subsets of E will be said to converge in the sense of Hausdorff towards a closed subset L of E if lim_{n→∞} h(C_n, L) = 0.
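To make the preceding definitions concrete, here is a small Python sketch (an illustration, not part of the paper) computing the excess and the Hausdorff distance between finite subsets of the plane.

```python
# Illustrative sketch (not from the paper): excess and Hausdorff distance
# between finite subsets of the plane, following the definitions above.
import math

def excess(A, B):
    # e(A, B) = sup_{x in A} d(x, B); the sup over the empty set is 0 in [0, +inf].
    if not A:
        return 0.0
    if not B:
        return math.inf
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    # h(A, B) = max(e(A, B), e(B, A)).
    return max(excess(A, B), excess(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0)]
print(excess(A, B), excess(B, A), hausdorff(A, B))  # 1.0 0.0 1.0
# Note the asymmetry of the excess: e(B, A) = 0 because B is contained
# in A, while e(A, B) = 1.
```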
In practice, convergence in the sense of Hausdorff is often too strong, as seen in the following example.
Example. In Euclidean R², let us consider the sequence C_n : N → P(R²) defined by …, and take Π₊ to denote the closed half-space y ≥ 0. As …, the sequence C_n does not converge in the sense of Hausdorff towards Π₊.
Appendix B: Weak solutions of sweeping processes
In this appendix, H is a separable Hilbert space, and all the set-valued mappings C : [t 0 , T ] → C(H ) will be assumed to take only non-empty closed convex values.
Given a closed convex subset K of H, ∂I_K will denote the subdifferential of the indicatrix function (in the sense of convex analysis) of K. Hence, ∂I_K(x) is the cone of all the outward normals to K at x. It will be empty if x ∉ K and reduces to {0} at any interior point x of K. Given a set-valued mapping C : [t0, T] → C(H) with non-empty closed convex values, we will use the term "sweeping process" to refer to the evolution problem consisting of finding a function u : [t0, T] → H such that u(t0) = u_0 and −u′(t) ∈ ∂I_{C(t)}[u(t)],
where u 0 denotes a given initial condition. This evolution problem has a clear geometrical interpretation in kinematic terms when C(t) has a non-empty interior. As long as the point u(t) is an interior point in the moving convex set C(t), it will remain at rest. When, by the evolution of C(t), the point u(t) meets the boundary of C(t) at some instant t, it proceeds in an inward normal direction, so as to go on belonging to C(t), exactly as if it were being pushed by the boundary of the moving convex set. A definition of weak solutions of sweeping processes was first proposed by Moreau [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] in the case of set-valued mappings with bounded retraction. He proved their existence before showing that they are actually strong solutions in some sense. In the problems analysed in the present paper, some sweeping processes appear that have weak solutions that are not strong solutions. Of course, the underlying set-valued mappings do not have bounded retraction. Thus, one is led to extend Moreau's definition of weak solutions of sweeping processes to a larger class of set-valued mappings than that showing bounded retraction. Since these set-valued mappings must have a right limit C(t+) in the sense of Kuratowski, at every t, one is naturally led to consider the larger class consisting of all the Wijsman-regulated set-valued mappings.
In this appendix, we first define weak solutions of sweeping processes based on Wijsman-regulated set-valued mappings, and these weak solutions, when they exist, are proved to enjoy the same general properties as those of the weak solutions of sweeping processes based on set-valued mappings with bounded retraction. Moreau's [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] existence results obtained in the case of set-valued mappings with bounded retraction are then briefly recalled without going into the proofs. Definition 8. We define P as a subdivision of the real interval [t 0 , T ] (notation P ∈ subd([t 0 , T ])) if it is a finite partition of [t 0 , T ] into intervals of any sort (some of them possibly reduced to single points).
A P′ ∈ subd([t0, T]) will be said to be a refinement of P ∈ subd([t0, T]) (notation P′ ≽ P) if every interval of P′ is contained in an interval of P. A mapping defined on [t0, T] will be said to be piecewise constant if it is constant in every interval of some P ∈ subd([t0, T]). Definition 9. Let C : [t0, T] → C(H) be a Wijsman-regulated set-valued mapping taking non-empty closed convex values. For P ∈ subd([t0, T]), I_0, I_1, I_2, … will denote the ordered sequence of the corresponding intervals, and t_i the origin (left extremity) of I_i. We will also take C_P to denote the piecewise constant set-valued mapping with non-empty closed convex values defined by

∀i, ∀t ∈ I_i, C_P(t) = C(t_i).
Given the initial value a ∈ C(t0), set inductively ("catching-up" algorithm):

u_0 = a, and, for i ≥ 1, u_i = proj[u_{i−1}, C(t_i)],
to define the piecewise constant function u_P : [t0, T] → H by ∀i, ∀t ∈ I_i, u_P(t) = u_i.
When the net (u_P) converges uniformly in [t0, T] towards some limit u : [t0, T] → H, in the sense that

∀ε > 0, ∃P_ε ∈ subd([t0, T]), ∀P ≽ P_ε, sup_{t∈[t0,T]} ‖u_P(t) − u(t)‖ ≤ ε,
the function u : [t 0 , T ] → H will be said to be a weak solution of the sweeping process based on the set-valued mapping C(t), starting at initial condition a.
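A minimal numerical sketch of the catching-up algorithm of Definition 9 may help; it is an illustration, not part of the paper. Here H = R and the moving convex set is the hypothetical interval C(t) = [c(t) − r, c(t) + r]: the point stays at rest until the boundary of C(t) reaches it, and is then swept along, exactly as in the mechanical interpretation recalled above.

```python
# Illustrative sketch (not from the paper): the catching-up algorithm of
# Definition 9 in the scalar case H = R, with the moving convex set
# C(t) = [c(t) - r, c(t) + r]. The hypothetical center c(t) = t makes the
# set sweep the point to the right once its left boundary reaches it.
def proj_interval(x, lo, hi):
    # Projection of x onto the closed interval [lo, hi].
    return min(max(x, lo), hi)

def catching_up(a, t0, T, n, c=lambda t: t, r=1.0):
    # Uniform subdivision t0 = s_0 < s_1 < ... < s_n = T; each step projects
    # the previous value onto C(s_i), as in u_i = proj[u_{i-1}, C(s_i)].
    u = a
    values = [u]
    for i in range(1, n + 1):
        s = t0 + (T - t0) * i / n
        u = proj_interval(u, c(s) - r, c(s) + r)
        values.append(u)
    return values

# Start at the center of C(0) = [-1, 1]; the point stays at rest until the
# left boundary c(t) - r reaches it, then it is pushed rightward.
vals = catching_up(a=0.0, t0=0.0, T=3.0, n=6)
print(vals)  # [0.0, 0.0, 0.0, 0.5, 1.0, 1.5, 2.0]
```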
Proposition 10. Let C : [t0, T] → C(H) be a Wijsman-regulated set-valued mapping, and u, u′ be two weak solutions of the associated sweeping process. Then, the real-valued function t ↦ ‖u(t) − u′(t)‖ is non-increasing.
Proof. If u and u′ start at initial values a and a′, these functions are the limits of (generalized) sequences u_P and u′_P of the piecewise constant functions inductively defined from these initial data. As the successive values of u_P and u′_P are obtained by performing projections onto the convex sets C_i, the contraction property of such projections entails that ∀P ∈ subd([t0, T]), ∀s ≤ t, ‖u_P(t) − u′_P(t)‖ ≤ ‖u_P(s) − u′_P(s)‖.
It then suffices to go to the limit of the two members of this inequality to obtain the conclusion required.
Proposition 11. Let u : [t 0 , T ] → H be a weak solution of the sweeping process based on the set-valued mapping C(t), which is assumed to be Wijsman-regulated. Then u admits a left limit u(t-) and a right limit u(t+) at every t ∈ [t 0 , T ] (with appropriate adjustments at t 0 and T ) and
Proof. The existence of u(t-) and u(t+) is ensured by the fact that u is regulated. At an arbitrary t ∈ [t 0 , T ], we take P to denote the set of all subdivisions in subd([t 0 , T ]) containing {t}. Based on the definition of C P ,
and therefore, based on the definition of u P :
which entails
Taking a limit with respect to P ∈ P, one can readily see that u(t) ∈ C(t).
As the convergence of the net u_P, P ∈ P, is uniform with t, the following commutation of limits holds: …

Proposition 13. Let I_0, I_1, I_2, … be a subdivision of [t0, T] into intervals containing their respective origins t_0, t_1, t_2, …, and u : [t0, T] → H a function such that (i) for all i, u|_{I_i} is a weak solution of the sweeping process based on C|_{I_i} (which entails the existence of u(t_i−) for i > 0); (ii) for i > 0: …
Then u is a weak solution of the sweeping process based on C in [t 0 , T ].
The following theorem is due to Moreau. It claims that provided the set-valued mapping has bounded retraction, the corresponding sweeping process admits a weak solution starting from any arbitrary initial condition. Theorem 8. [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] Let C : [t0, T] → C(H) be a set-valued mapping with non-empty closed convex values, which is assumed to have bounded retraction. Then there exists a weak solution u of the sweeping process starting at any given initial condition a ∈ C(t0). This weak solution is such that var(u; t0, t) ≤ ret(C; t0, t), for all t ∈ [t0, T].
In particular, the function u has bounded variation. If, in addition, C(t) has rightcontinuous (respectively absolutely continuous, respectively Lipschitz-continuous) retraction, then the weak solution u is right-continuous (respectively, absolutely continuous,respectively Lipschitz-continuous).
This weak solution depends continuously on the data (the set-valued mapping C(t) and the initial condition) involved in the sweeping process in the sense displayed by the following theorem. Theorem 9. [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] Let C, C : [t 0 , T ] → C(H ) be two set-valued mappings with non-empty closed convex values and bounded retraction. Then every pair (u, u ) of weak solutions of the associated sweeping processes will satisfy the following estimate:
∀t ∈ [t0, T], ‖u(t) − u′(t)‖² ≤ ‖u(t0) − u′(t0)‖² + 2 sup_{τ∈[t0,t]} h(C(τ), C′(τ)) (ret(C; t0, t) + ret(C′; t0, t)).
Theorem 9 can be used to obtain an estimate of the error occurring when the catching-up algorithm is used to approximate the weak solution of a sweeping process with bounded retraction. Proposition 14. [START_REF] Moreau | Evolution problem associated with a moving convex set in a Hilbert space[END_REF] Let C : [t0, T] → C(H) be a set-valued mapping with non-empty closed convex values and bounded retraction. Consider an arbitrary subdivision P ∈ subd([t0, T]) of the interval [t0, T], let I_0, I_1, I_2, … be the corresponding finite sequence of intervals, and μ be some majorant of ret(C; s, t) for arbitrary [s, t] ⊂ I_i. Still denoting by u_P the piecewise constant function provided by the catching-up algorithm, one has ‖u(t) − u_P(t)‖² ≤ 2μ ret(C; t0, t).
hal-01767925 | en | spi.tron, info.info-ar, info.info-it, info.info-ts | 1997 | https://hal.science/hal-01767925/file/article_sympo97.pdf

Michel Jézéquel
email: [email protected]
Claude Berrou
email: [email protected]
Catherine Douillard
email: [email protected]
Pierre Pénard
email: [email protected]
CHARACTERISTICS OF A SIXTEEN-STATE TURBO-ENCODER/DECODER (TURBO4)
This paper presents the characteristics of an integrated circuit called "turbo4" which can be used as a turbo-encoder or as a turbo-decoder. The turbo-encoder is built using a parallel concatenation of two recursive systematic convolutional codes with constraint length K=5. The turbo-decoder is cascadable, each circuit processing one iteration of the turbo-decoding algorithm. It is designed around 2 sixteen-state modified Viterbi decoders and 2 matrices of 64 x 32 bits for interleaving and deinterleaving. Some measurements for Gaussian and Rayleigh channels and for different coding rates are presented.
INTRODUCTION
Turbo codes are a new family of error correcting codes introduced by C. Berrou et al. [START_REF] Berrou | Near Shannon limit error-correcting coding and decoding: turbo-codes[END_REF][START_REF] Berrou | Near optimum error correcting coding and decoding : turbo-codes[END_REF]. They implement a parallel concatenation of recursive and systematic convolutional codes, possibly punctured. The decoding process is iterative. Therefore, the turbo-decoder can be implemented in a modular pipelined structure, in which each module is associated with one iteration. Then, performance in Bit Error Rate (BER) terms is a function of the number of chained modules. Turbo codes show results which are very close to the theoretical channel limit.
Turbo codes have been implemented in two different integrated circuits. The first one, called "CAS5093" and distributed by COMATLAS, is built around 5 eight-state modified Viterbi decoders [START_REF] Berrou | A low complexity soft-output Viterbi decoder architecture[END_REF] and 4 matrices of 32 x 32 bits for interleaving and deinterleaving. This circuit contains 2.5 modules. The second one, called turbo4, includes one module and is cascadable, so the user can create a decoder consisting of several modules.
This paper presents the characteristics of turbo4.
It is organised as follows: the next section gives the main technical characteristics of turbo4. Section 3 is dedicated to the architecture of the circuit. Finally, we conclude by presenting some results of simulations and tests.
MAIN CHARACTERISTICS OF TURBO4
Turbo4 can be used as an encoder or as a decoder. It is designed with a 2-metal, 0.8 µm CMOS technology. The chip, with a size of 78 mm², contains 0.6 M transistors. Features of the circuit are shown below. Coding gain with 4 iterations for the decoding process (4 circuits for the decoder):

• 9 dB @ BER 10⁻⁷, R = 1/2
• 8 dB @ BER 10⁻⁷, R = 2/3

Turbo4 is built around 4 blocks: the encoder, the decoder, the interleaver/deinterleaver and the synchronisation/supervision block.
TURBO-ENCODER
The turbo-encoder is built using a parallel concatenation of two recursive systematic convolutional codes (figure 1). The incoming data (XI) is fed into a first encoder that produces redundancy Y1 while the second encoder receives interleaved data and produces redundancy Y2.
The required coding rate is obtained by puncturing Y1, Y2 and, possibly XI. For a 1/2 coding rate, the puncturing function is included in the circuit. In this case, composite redundancy is output following the sequence: Y2 Y1 Y1 Y1. For other coding rates, the puncturing function has to be designed on the board. A synchronisation block is added in order to make synchronisation between the encoder and the decoder possible.
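A sketch of this encoding scheme is given below. The composite redundancy sequence Y2 Y1 Y1 Y1 for R = 1/2 is taken from the text; the RSC feedback/feedforward taps and the toy interleaver, by contrast, are hypothetical placeholders, since the actual turbo4 polynomials and interleaving rules are not given here.

```python
# Illustrative sketch of the parallel concatenation described above. The
# K = 5 recursive systematic convolutional (RSC) encoder uses hypothetical
# taps (the actual turbo4 polynomials are not given in this excerpt); the
# R = 1/2 puncturing follows the stated composite sequence Y2 Y1 Y1 Y1.

def rsc_encode(bits, feedback=0b1001, feedforward=0b1101):
    # 4 memory cells for constraint length K = 5; taps are placeholders.
    state = 0
    redundancy = []
    for x in bits:
        fb = x ^ (bin(state & feedback).count("1") & 1)     # recursive input
        y = fb ^ (bin(state & feedforward).count("1") & 1)  # parity output
        redundancy.append(y)
        state = ((state << 1) | fb) & 0b1111
    return redundancy

def turbo_encode_r12(bits, interleave):
    y1 = rsc_encode(bits)
    y2 = rsc_encode([bits[interleave(i)] for i in range(len(bits))])
    # Composite redundancy punctured as Y2 Y1 Y1 Y1 (one parity bit per
    # data bit, so the overall rate is 4 / (4 + 4) = 1/2).
    parity = [y2[i] if i % 4 == 0 else y1[i] for i in range(len(bits))]
    return list(bits), parity  # systematic stream X and punctured parity

# Toy usage with a hypothetical length-8 permutation (3-bit bit reversal):
perm = [0, 4, 2, 6, 1, 5, 3, 7]
x, p = turbo_encode_r12([1, 0, 1, 1, 0, 0, 1, 0], lambda i: perm[i])
print(x, p)
```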
TURBO-DECODER
The decoder processes one turbo-decoding iteration. It is designed (figure 2) around 2 sixteen-state SOVAs (Soft Output Viterbi Algorithm, an acronym proposed by J. Hagenauer [START_REF] Hagenauer | A Viterbi algorithm with soft-decision outputs and its applications[END_REF]) and 2 matrices of 64 x 32 bits for interleaving and deinterleaving. Some delay lines are added in order to compensate for latency of other blocks like SOVAs.
The decoder is cascadable, so the user can choose the number of iterations, each circuit corresponding to one iteration. Its programmability makes it possible to adapt some coefficients to the number of the iteration. Programmability is ensured by some control inputs like: MY2[1:0], E7, MXZ, ... The decoder receives noisy symbols X (data), Y1 (redundancy produced by the first encoder) and Y2 (redundancy produced by the second encoder) from the channel. It receives Z (extrinsic information) computed by the previous circuit. These incoming data are coded on 4 bits in 2's complement.
SOVA1 works on redundancy Y1 with noisy data X + Z. SOVA2 processes Y2 and the interleaved output of SOVA1 from which incoming data Z has been subtracted. The X input of SOVA2 is subtracted from its output and after deinterleaving, ZO may be used by the subsequent module as input Z.
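The following sketch reproduces this extrinsic-information dataflow for one iteration. The SOVA blocks are replaced by a crude placeholder function (a real soft-output Viterbi decoder would be used instead); only the add/subtract/interleave wiring described above is meaningful.

```python
# Structural sketch of one turbo-decoding iteration as described above,
# with a crude stand-in for the SOVA blocks. Signal names follow the text:
# X, Y1, Y2 come from the channel, Z is the extrinsic input, ZO the output.

def sova_stub(data, redundancy):
    # Placeholder soft-output decoder: returns one soft value per symbol.
    # Only the dataflow around it is meaningful here.
    return [d + 0.5 * r for d, r in zip(data, redundancy)]

def turbo_iteration(X, Y1, Y2, Z, perm):
    n = len(X)
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i  # inverse permutation used for deinterleaving

    # SOVA1 works on redundancy Y1 with noisy data X + Z.
    s1 = sova_stub([x + z for x, z in zip(X, Z)], Y1)
    # Interleave SOVA1's output after subtracting the incoming Z.
    to_s2 = [s1[perm[i]] - Z[perm[i]] for i in range(n)]
    # SOVA2 processes Y2 with the interleaved sequence.
    s2 = sova_stub(to_s2, Y2)
    # Subtract SOVA2's input, deinterleave, and output the new extrinsic ZO.
    zo_inter = [s2[i] - to_s2[i] for i in range(n)]
    return [zo_inter[inv[i]] for i in range(n)]

# Toy usage: 4 soft symbols, a toy permutation, and zero extrinsic input,
# as for the first iteration of a cascaded decoder.
perm = [2, 0, 3, 1]
ZO = turbo_iteration([0.9, -1.1, 0.8, -0.7], [1.0, -1.0, 1.0, -1.0],
                     [1.0, 1.0, -1.0, -1.0], [0.0] * 4, perm)
print(ZO)
```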
INTERLEAVING / DEINTERLEAVING
Interleaving and deinterleaving are convolutional, each function using one memory (a matrix of 64 rows by 32 columns). Data are written row by row, then read following specific rules which ensure non-uniformity of the interleaving process. Global latency due to one interleaver and deinterleaver is 2048.
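As an illustration of the memory organization (and only of that), the following sketch writes a 2048-symbol frame row by row into a 64 x 32 matrix and reads it back column by column. This column-wise reading is a simplified stand-in: the actual turbo4 reading rules are non-uniform and are not specified in this paper.

```python
# Simplified stand-in for the 64 x 32 interleaver memory: data are written
# row by row and, in this sketch, read column by column. The actual turbo4
# reading rules are non-uniform and not given in this excerpt; only the
# write/read organization around one 2048-symbol matrix is illustrated.
ROWS, COLS = 64, 32

def interleave_block(symbols):
    assert len(symbols) == ROWS * COLS  # one full matrix (2048 symbols)
    matrix = [symbols[r * COLS:(r + 1) * COLS] for r in range(ROWS)]
    return [matrix[r][c] for c in range(COLS) for r in range(ROWS)]

def deinterleave_block(symbols):
    assert len(symbols) == ROWS * COLS
    out = [None] * (ROWS * COLS)
    k = 0
    for c in range(COLS):
        for r in range(ROWS):
            out[r * COLS + c] = symbols[k]
            k += 1
    return out

data = list(range(ROWS * COLS))
assert deinterleave_block(interleave_block(data)) == data  # round trip
```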
SYNCHRONISATION / SUPERVISION
Synchronisation of the circuit can be divided into two phases: the synchronisation search phase and the tracking phase, also called supervising phase. No synchronisation word is required for the synchronisation search phase, thus there is no loss in the coding rate. This phase uses a controlled inversion of symbols coming from the turbo-encoder.
The supervising phase uses the pseudo-syndrome procedure [START_REF] Berrou | Pseudo-syndrome method for supervising Viterbi decoders at any coding rate[END_REF]. This method is decorrelated from the one implemented for the synchronisation search phase. It deals with out-of-synchronisation detection and forces the c ircuit to return automatically to the synchronisation search phase. It also gives an estimation of the channel quality.
Tests on turbo4 showed an error in the design of the synchronisation block. A new version of the circuit is in process in order to overcome this problem.
PERFORMANCE
This section presents results of simulations and tests [START_REF] Jézéquel | Test of a Turbo-Encoder/Decoder[END_REF]. In addition, figure 5 presents a comparison between turbo4 performance and a classical concatenation of a convolutional code K=7 and an 8-bit (255,223) Reed-Solomon code with an infinite interleaver. This comparison shows that the coding gain given by 4 or 5 modules is better than that given by the classical concatenation for a bit error rate higher than 10⁻⁸. This comparison is made without taking into account the interleaver size, which would decrease classical concatenation performance. Moreover, the coding rate is more efficient for turbo4 (R=1/2) than for the classical concatenation (R=0.437).
Figures 5, 6 and 7 show a flattening degradation for low bit error rates. This degradation is not due to turbo codes but to the internal accuracy of turbo4, which works with 4 bits [START_REF] Jézéquel | Test of a Turbo-Encoder/Decoder[END_REF].
CONCLUSION
The coding gain of turbo codes was verified by tests on turbo4. Except for the Big Viterbi Decoder (using the constraint length 15), turbo4 is at the moment the circuit with the best coding gain, on both Gaussian and Rayleigh channels.
The error detected in the design of the synchronisation block will be overcome in a new version of the circuit.
Figure 1: Turbo-encoder

Figure 2: Turbo-decoder

Figure 3: Gaussian channel, R=1/3

Figures 3 and 4 present simulation results for a Gaussian channel. Figure 3 shows results for 1 to 4 modules with a global coding rate R=1/3, each code working with a coding rate equal to one half. In figure 4, simulations are made with 3 modules (3 iterations) for BPSK or QPSK modulations. These curves are given for different coding rates. If R is the global coding rate, R1 the coding rate associated with the first code and R2 the coding rate associated with the second code, we have:
- R = 1/2: R1 = 4/7, R2 = 4/5;
- R = 2/3: R1 = 4/5, R2 = 4/5;
- R = 3/4: R1 = 6/7, R2 = 6/7;
- R = 4/5: R1 = 8/9, R2 = 8/9.

Figure 5: Gaussian channel, R=1/2

Figure 6: Gaussian channel, R=2/3

Figure 6 presents measured results for a Gaussian channel. Measurements are made with 1 to 5 modules for a global coding rate R=2/3 (R1 = 4/5, R2 = 4/5).

Figure 7 presents measured results for a Rayleigh channel with optimum channel interleaving and weighting. Measurements are made with 1 to 5 modules for a global coding rate R = 1/2 (R1 = 4/7, R2 = 4/5). Results are most satisfactory, as the slope is the same as with the Gaussian channel with a gap of 2.5 dB.

Figure 7: Rayleigh channel, R=1/2
Acknowledgements.
The authors would like to thank P. Ferry and J.R. Inisan for their help.
hal-01166699 | en | shs.socio, shs.scipo | 2014 | https://enpc.hal.science/hal-01166699/file/transfer%20version%20r%C3%A9vis%C3%A9e.pdf

Gilles Jeannot
Austerity and social dialogue in French local government
Keywords: France, Saint-Etienne, civil service, municipalities, bargaining, austerity
This article investigates whether and how social dialogue has influenced austerity policies in French local government, with a particular focus on municipalities. With regard to local government staff, social dialogue takes place at two levels, with wage and general rules being discussed at national level and working conditions and individual career issues at local level.
National-level measures, as in many countries, have included unilateral wage freezes. However, though staff reductions have already occurred in the state administration, they have not (yet) affected municipalities. As seen in our case study, when it comes to local-level austerity measures such as cuts in services and restructuring measures, we are witnessing a situation of real bargaining -including conflict and formalized agreements. Even if not leading to official collective agreements, this strengthens the role of formal committees and suggests the potential resilience of social dialogue linked to the proximity of decision-making authorities and affected citizens.
Pascal, F-77455 Marne-la-Vallée cedex 2
Email: [email protected]

As in many European countries, the French central government has finally responded to austerity by choosing a policy of decentralizing public services to the local level, though in combination with cutting budgets for providing services. Local authorities are thus now faced with having to implement budget cuts, restructure services and/or make staff redundant. This article focuses on when and how these reforms have taken place and to what extent social dialogue has contributed to the adjustment process. In doing so, one must keep three French peculiarities in mind: the multi-level characteristics of social dialogue, the lateness of crisis responses compared to other countries, and a lack of academic knowledge. Looking at the first aspect, social dialogue occurs simultaneously at national and local level. Local government employees are public servants whose status and wages are determined to a great extent by central government, while being actually hired by local authorities responsible for defining job profiles and tasks. As regards the second aspect, the French state has reacted to the crisis somewhat later than other countries, with even recent growth in public sector employment. Finally, the issue of local government social dialogue has been largely neglected in French research. This has led us to focus, after a general presentation, on a case study of Saint Etienne, a city confronted earlier than others with severe budget cuts.
1) Context of local government administration
Local government
Local government responsibilities are subject to a geographical hierarchy. At the top, the 22 régions are responsible for general questions of economic development, transport (regional trains), vocational training and the maintenance of lycée (high school) buildings. The 100 départements are responsible for the majority of roads, secondary school maintenance and social support (social workers, minimum income distribution). This welfare role represents a growing proportion of spending by the départements, although the criteria for allocating welfare are subject to national rules. The 36,000 municipalities (now mostly members of one of the 2600 municipal district groupings) are responsible for urban planning, the environment and the upkeep of public spaces and primary schools (including subsidies for childcare facilities) and can also choose to provide numerous local services (crèches, libraries, sports facilities). There is a longstanding tradition of municipalities delegating public utilities (water, waste collection, urban transportation) to private companies.
The 1983 decentralization represented a sharp break in France's centralist tradition. Though local government is partly reliant on central government for setting service levels (e.g. the number of school buildings to maintain, the level of social service provision) and execution conditions (e.g. the status of staff employed), financial transfers from central government and their own tax-raising powers gave local administrations extensive freedom. This was used to expand services in general, as evidenced by the increase in local government payrolls in recent years. However, recent restrictions on their powers to tax businesses and the downward trend in central government transfers are gradually reducing this freedom.
The local civil service
The public service workforce in local government consists of the regional civil service, set up in 1983 as part of the process of consolidating the status of a variety of employment levels within a single civil service. Its creation was a product of compromise [START_REF] Biland | La fonction publique territoriale[END_REF]. On the one side, the plan for a single integrated civil service was supported by the Communist Public Service Minister, Anicet Le Pors, leading to the creation of corps and strict obligations to recruit through competitive examination. On the other side, the plan to increase local government decision-making autonomy was supported by Interior Minister Defferre. One of the aims of this plan was to use civil servant status as a means of changing employment conditions, in line with the aspirations of the big cities. As a result, a variety of measures were available that would have the effect of increasing the independence of local employers.¹ Although the laws on civil servants apply to both local and central government employees, local employers have considerable room for manoeuvre. This includes the substantial proportion of contract staff without public service status, special employment status for executive staff chosen on political criteria (functional positions), or significant flexibility in allocating bonuses or applying promotion criteria. This means that municipal government heads have considerable autonomy as employers. This flexibility and autonomy, long criticized as encouraging unprofessional practices or cronyism, can now be reinterpreted as a sign of managerial modernity. In contrast to the rigidity of staff management in central government, certain local government practices (such as the cadres d'emploi with their broader scope than the civil service corps), have been upheld as a model for policies aimed at merging civil service corps.
Social dialogue institutions
Social dialogue takes place at both national and local levels. At national level, since the recent social dialogue reform, the negotiating body has been the joint higher council for local government service (Conseil supérieur de la fonction publique territoriale). The body dedicated to local government is made up of representatives from the unions and local employers (elected from electoral lists by three 'colleges', which correspond to the municipalities and municipal districts, the départements and the régions). However, because the provisions applicable to local government are often transcriptions of rules primarily designed for central government, a significant proportion of preliminary discussions takes place without representatives of local employers and employees.
For each entity, local social dialogue is in the hands of official bodies: technical committees for discussing organization questions, joint administrative committees focusing on individual cases and particularly promotions, and health and safety committees for health-related matters. The union representatives are elected at regional, departmental and municipal level.
For municipalities with fewer than 50 public employees and for larger municipalities that so choose, the organization of joint administrative committees is delegated to management centres, with a view to preventing conflicts of interest, guaranteeing the legal validity of actions relating to individuals, and providing flexibility in the case of rules involving quotas on the number of people promoted. This division of social dialogue roles offers no forum for social dialogue when new district services are created and employees from individual municipalities transferred to them. A report on this subject notes that these major measures restructuring local services have taken place without the creation of any genuine discussion forums in the new entities comparable to those existing in the former entities (INET, 2011). At municipal level, the only aspect subject to social dialogue was the preservation of each individual's acquired benefits following transfer to the new district entity.
An industrial relations reform was undertaken at national level between 2008 and 2010 (Bercy agreements). Its aim was to introduce an element of negotiation into a system governed by statute and unilateral decision-making through changing the conditions of union representation to restore union legitimacy and through outlining conditions for binding contracts between employer and employee [START_REF] Bezes | The development and current features of the French civil service system[END_REF]. The origins of this reform predate the pressure of austerity, and were the result of the longstanding criticism of overly formalistic industrial relations in the public sector and the idea of adapting equivalent reforms applying to the private sector. However, this reform will not be implemented in local government until after the next trade union elections in 2014, meaning that as yet it has had no consequences at this level.
Trade unions representing the public sector are organized at national level as sections of the big national trade unions. As the majority of local public servants are 'blue-collar', the CGT (Confédération Générale du Travail) with its previous links to the Communist Party is particularly well represented (32.8 per cent), as is the FO (Force Ouvrière) with its focus on occupational corporatism (18.6 per cent), while the CFDT (Confédération Française Démocratique du Travail) is more important for white-collar workers (21.6 per cent) (DGAFP, 2011: 520). The level of unionization is low in local government (10 per cent according to [START_REF] Garabige | Modernisation du service public et évolution des relations professionnelles dans la fonction publique territoriale[END_REF]).
The ambiguities of social dialogue in local government
At national level, neither government nor unions attach any great significance to local government social dialogue. In the negotiations on civil service status, central government can be assumed to be primarily concerned with its function as a civil service employer, with rules primarily designed from this perspective and then simply applied to local government employees. However, the unions, at two levels, are also a cause of the subordinate nature of local government social dialogue. First, for a long time certain unions, although more present in the public than in the private sector, attached greater value to the latter.² Secondly, in a sort of mirror process, union members working as central government civil servants sometimes represent local public service workers.
The fact that little significance is attached to local government social dialogue is also partly linked to the difficulty in finding a legitimate national-level local government representative.
On the one hand, there are powerful non-partisan associations representing elected officials (association of mayors of France, association of municipal council chairs…). On the other hand, there are also representatives of different political bodies within the higher council of the civil service. Although these two entities are legally separate, in reality they overlap, as seen by the fact that the only electoral lists presented for the higher council are those presented by these big associations (certain elected members of the higher council are also non-executive directors of these associations). The pre-reform discussions between the administration and the politicians' representatives fluctuated between a formal relationship with the elected members of the higher council and direct contacts with the associations and their technical staff for the preparation of projects.
Stakeholders at local level (for example mayors) are more clearly identified. However, the roles of other local politicians (who may be responsible for overseeing a particular department) and those of municipal department heads (in particular human resources managers in charge of day-to-day staff relations (INET, 2007)) are less clearly identified. The case study below highlight the great vitality of this social dialogue at municipal level.
A neglected subject
The topic of social dialogue in French local government has never really been put on the agenda. Professional bodies have little to say, the issue is not discussed at specialist conferences, and the leading public sector journal, the Gazette des communes, confines itself to reporting on occasional conflicts. The section in social audits covering social dialogue is often only summarily filled in or left empty, as if all such matters are best dealt with in private. Though the prefects representing state administration at local level are obliged to transmit information on local government practices to the Ministry of Interior, they are in no hurry to report industrial conflicts as they are assessed on their capacity to maintain social harmony and good relations with local government.
The question of local government social dialogue has also been largely neglected in French research [START_REF] Garabige | Modernisation du service public et évolution des relations professionnelles dans la fonction publique territoriale[END_REF][START_REF] Guillot | Faire vivre le dialogue social dans la fonction publique territoriale[END_REF]. The few studies of unionism and industrial relations in the public sector have focused on the central civil service and public companies, with a particular emphasis on collective action driven by such high-profile groups as railway workers, nurses or teachers. As regards municipal government, we can cite outdated research by Stéphane [START_REF] Dion | La politisation des mairies[END_REF], who emphasized the fact that it was in the interest of mayors to be flexible with employees forming part of their electoral base. In a similar vein, Gérard Adam (2000: 120) highlights local capacities for adjustment: 'We are in the world of the "deal" where union members close to the base have achieved miracles over the years in grabbing substantial advantages that are never touted at the forefront of the "big" industrial relations triumphs.' For his part, Dominique Lorrain (1990) describes a pragmatic and peaceful modernization. However, this description may no longer hold true at a time of rising tensions in municipal management and the emergence of embryonic conflicts [START_REF] Garabige | La logique du compromis belliqueux. Chronique d'une négociation sur le régime indemnitaire dans une mairie française[END_REF].
Based on research and institutional characteristics, we can hypothesize the main features of social dialogue in municipalities: local employers have on the one hand significant room for manoeuvre with regard to modernization, while on the other hand they are to a great extent accountable to local citizens and sensitive to strikes affecting service delivery. In the current context of the financial crisis the opportunity to respond to conflict by offering substantial benefits to workers will vanish and we can speculate that social dialogue on such pragmatic questions as how service delivery is organized could emerge, requiring a balancing of citizens' and workers' interests. The dual proximity of a mayor to local citizens and to local civil servants could play in favour of genuine social dialogue on restructuring, in great contrast to the status-focused dialogue in state administrations.
Austerity factors and restrictive measures in local government services
Given the dual-national and local-regulation of local government, austerity should be approached from two angles: (i) the consequences of national austerity measures aimed at controlling wages, and (ii) local adjustments aimed at cutting overall spending including staff cuts and restructuring measures. While the first is already underway, the second is just starting.
The impact of national measures on local staff wages
Whilst there seems to be a clear shift towards a strict management of France's public services, this cannot be directly attributed to a response to the financial crisis, as the shift had already taken place with the election of Nicolas Sarkozy in 2007 and his announcements of government reform [START_REF] Jeannot | Changer la fonction publique[END_REF]. The new economic conditions after 2008 only helped consolidate the earlier decision, reframing it in a pro-austerity context [START_REF] Mccann | Reforming public services after the crash: The roles of framing and hoping[END_REF], meaning that it is not easy to measure the specific impact of the crisis. By contrast, the policy of cutting public sector jobs (excluding education and justice) adopted by François Hollande's government is officially linked to the financial difficulties that have arisen in the meantime and to its efforts to balance the national budget.
Austerity measures have concentrated on two aspects: general wage controls and constraints on recruitment (non-replacement of 50 per cent of retirees) in larger organizations, though only the first aspect concerns local governments.
A French civil servant's wages are based on three criteria. The first is his or her position in a career scale, itself determined by two criteria: membership of a corps and position within the corps. Civil servants therefore advance within a corps by changing grade (with a minimum automatic progression and the possibility of faster advancement based on rankings) or by moving into a corps with a higher grade. The second criterion is wage adjustment to inflation (index-linking). Each career position referred to above corresponds to a number of 'points', and these points are multiplied by an index, 'the point value', to calculate wages. On top of this, there are bonuses essentially dependent on the person's corps and level within that corps.
These bonuses are primarily awarded to executive grades in certain corps (Ministry of Finance, engineers), and their attribution is not very transparent.
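A short numerical sketch of this wage mechanism may be useful; the figures used (index points, point value, bonus) are hypothetical, not official French pay-scale values, while the real-terms computation uses the 2.8 per cent and 4.4 per cent figures quoted in the next paragraph.

```python
# Illustrative sketch of the wage mechanism described above; the figures
# below (index points, point value, bonus) are hypothetical, not official
# French pay-scale values.
def monthly_wage(index_points, point_value, bonuses=0.0):
    # Career position gives the number of points; points times the point
    # value gives the base wage, to which corps-dependent bonuses are added.
    return index_points * point_value + bonuses

print(monthly_wage(index_points=450, point_value=4.63, bonuses=150.0))

# Real-terms effect of the 2008-2011 freeze reported in the text: the point
# value grew by 2.8 per cent while prices rose by 4.4 per cent.
real_change = (1 + 0.028) / (1 + 0.044) - 1
print(f"{real_change:.1%}")  # about -1.5%, consistent with the cited cut
```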
Pay discussions traditionally revolve around a single criterion: the point value. These discussions take place at central level and their outcome applies to all public employees (central and local government and public hospitals). There is major divergence between trade union and government views on how to weigh up wage gains and losses. For unions, if the point value increases above inflation, civil servants have received a pay increase; if not, their purchasing power has fallen. For the government the data they refer to is the total amount spent on public servants' wages including individual wage growth based on individual advancement (seniority or promotion) [START_REF] Bezes | The Hidden Politics of Administrative Reform: Cutting French Civil Service Wages With a Low-Profile Instrument[END_REF]. The first pay measure introduced by Nicolas Sarkozy's government was to stop indexing the point value to the retail price index.
Point value grew by 2.8 per cent between 2008 and 2011, at a time when inflation was 4.4 per cent, meaning a wage cut of 1.6 per cent in real terms. This measure was partially offset by other initiatives, with the government for instance promising that this reduction in point value would not lead to a fall in individual purchasing power, as a consequence of which an offsetting adjustment was introduced for employees who had not advanced in their careers in the meantime (this individual standard-of-living guarantee was awarded to 56,000 civil servants in 2010, at an average of €800 each). In addition, sectoral negotiations improved career prospects within certain corps, a number of bonuses were increased, and opportunities provided for overtime (especially in education). After including all income components, the Civil Service Ministry made the following announcement on wage restraint: 'average net wages are €2377 a month, up 2 per cent in constant terms in 2009 (compared with 0.9 per cent the previous year)' [START_REF] Gonzalez-Demichel | Les rémunérations dans les trois versants de la fonction publique en 2009[END_REF].
The impact of these measures on local wages needs to be qualified for two reasons. If wages determined by the salary grid are below the national minimum wage (SMIC), then the national minimum wage applies. And since the SMIC has increased much faster than point value since 2008, entry wages at the lowest skilled level have increased faster than entry wages at medium level. In addition, with regard to the highest levels, numerous top managers are recruited on contracts and can receive substantial bonuses. All of this basically undermines national pay policy.
Future budget cuts
The situation of local government in France would seem to offer a counter-example to the rise of austerity in European public organizations. Public employment has increased in local government both over the long term and in recent years (see Table 1), corresponding in particular to the emergence of new district entities. The development of these larger groupings has resulted in a greater range of services (buses, sports facilities…) outside town centres and therefore an increase in staff requirements. They are also supposed to generate economies of scale, though these have not yet materialized, as shown by figures indicating an expansion of the workforce (see Table 1). However, part of the increase is due to staff being transferred from the state administration to local authorities (mainly for roads and school maintenance).
A total of 117,000 people were transferred out of the state administration to local authorities (mainly départements and régions) between 2006 and 2008. A further aspect concerns the ability to contract debt. This differs between central and local government, with the latter only able to contract debt to finance investment but not to offset a deficit in operating costs. For these reasons, the question of austerity was not discussed in recent years at local government level. Given that budget cuts are now inevitable, it is instructive to look at a French city already subject to public spending cuts for many years. Our case study can thus be seen as a laboratory for such austerity measures.
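The headline divergence between central and local staffing can be checked directly against Table 1. A minimal Python sketch (headcounts transcribed from Table 1; the code and variable names are ours, not part of the original study):

    # Headcount by sector, 2006 and 2011, transcribed from Table 1.
    headcount = {
        "state administration": (2_649_857, 2_398_672),
        "local government":     (1_610_925, 1_830_663),
        "hospitals":            (1_055_821, 1_129_438),
    }

    for sector, (y2006, y2011) in headcount.items():
        growth = (y2011 / y2006 - 1) * 100
        print(f"{sector:20s} 2006 -> 2011: {growth:+.1f}%")

This prints roughly -9.5 per cent for the state administration, +13.6 per cent for local government and +7.0 per cent for hospitals, consistent with the contrast described above.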
3) Social dialogue responding to the economic crisis: the Saint Etienne laboratory
Background
Saint Etienne would seem to be one of the towns in France that has experienced the greatest economic problems, for reasons that are both historical and circumstantial. A former industrial and mining town (known for arms and bicycle manufacturing and for its soccer team), it has undergone a long process of de-industrialization persisting to this day. Moreover, its population is one of the poorest in terms of income. 6 A second feature is population decline: between 1990 and 2010, the city's population dropped from 240,000 to 175,000, linked mainly to de-industrialization but also to the fact that the city's wealthiest inhabitants choose to settle in the surrounding countryside. A falling population is a difficult situation for a municipality to manage, as it means diminishing resources, whereas the costs associated with the provision of existing amenities are hard to adjust. Over time, this has resulted in Saint Etienne becoming one of the country's most indebted cities. 7 This was exacerbated by the fact that the previous municipal government opted to take out investment loans that halved interest rates over three or four years, in exchange for taking a risk on financial markets and other factors over which the town had no control (e.g. dollar-yen parity). The risk became a loss with the 2008 crisis. The town had contracted 70 per cent of its debt in the form of such financial products, i.e. €260m, and had accumulated a potential loss of €150m out of a total annual budget of €350m.
There were approximately 3,500 employees in the city in 2011, 10 per cent of whom were not civil servants. As in other cities, the largest share was made up of the lowest-level employees (2,500 in category C), with 1,700 in the technical branch (gardens, roads, rubbish, etc.). The CGT union holds a majority in the technical committees, followed by FO, while the CFDT represents 50 per cent of category-A employees. Compared to other equivalent cities, Saint Etienne has maintained in-house service provision, such as homes for the elderly.
Policies and reforms
On replacing the previous administration in 2008, the new socialist administration made restoring the city's finances the top priority of its term of office. As wages are set at central level, action focused on management and restructuring.
As in many towns, the municipal administration was organized around isolated departments run by elected deputy mayors who were more inclined to defend the size and budget of their departments than to contribute to overall cost-saving measures. The multi-year process of modernizing municipal management [START_REF] Lorrain | Les mairies urbaines et leur personnel[END_REF]) is still underway. It has seen power concentrated around the mayor and the senior manager responsible for service provision, with power taken away from deputy mayors and department heads. One direct consequence of this concentration was a reduction in layers of management, with a number of middle managers being replaced and organizational reforms implemented.
The restructuring of technical services around local hubs working directly with neighbourhood committees is a further management change implemented by many towns.
One of the primary aims of restructuring parks, gardens and cleaning services has been to create district-based teams, with the goal of achieving efficiency gains by removing the need for staff to travel before starting work (though restricted by the fact that certain technical facilities are shared by several districts), enhancing local proximity with people seeing the same staff day after day, and improving local democracy through closer links between the maintenance teams and neighbourhood committees. This latter aspect is part of a wider move to reorganize municipal services on a neighbourhood basis, begun in 2000. 8 A second associated priority was to improve the versatility of the maintenance teams. The aim of the initial project was to integrate the work of the gardeners and maintenance staff, and in addition to give them responsibility for combating vandalism. A third issue was weekend working, since a previous team that only worked at weekends had proved difficult to manage.
However, certain reforms would seem to be part of a longer history of professionalizing municipal management. Efforts to apply fully the 35-hour week in the municipal police force or to increase control over the largely autonomous weekend cleaning teams, for example, seem to have less to do with new public management principles than with standard professionalization of management practice.
The margin for manoeuvre with regard to wages is very limited. As described previously, wage rises are for the most part defined at national level, and municipal organizations have benefited from national wage containment. Moreover, municipality managers have chosen not to change promotion policy. Restructuring is thus the main cost-cutting lever, and has resulted in a workforce reduction limited to around 100 people on a like-for-like basis and a wage bill reduction of 1 per cent per year, against an upward age and job-skill drift of 1.5 to 2 per cent (wage bill of around €130m).
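The implied gross savings from restructuring can be sketched with a simple decomposition (our own reading, assuming the quoted drift and the net change both apply to the full wage bill):

    \[ \text{savings} = \text{drift} - \Delta_{\text{net}} \in [1.5 - (-1),\ 2 - (-1)] = [2.5,\ 3]\ \text{per cent per year} \]

that is, roughly 2.5 to 3 per cent of a €130m wage bill, or about €3.3m to €3.9m a year, achieved largely through the 100-post reduction and the associated restructuring.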
Social dialogue
For the municipality, these reforms are linked with a process of developing and formalizing social dialogue. This aspect is the particular responsibility of the deputy mayor in charge of human resources, a former union activist. She began by establishing a formal system for planning social dialogue, the 'mille bornes'. Under this procedure, any restructuring plan is accompanied by a succession of preliminary, intermediate and follow-up meetings with the unions, and by associated meetings with the staff concerned (attended by the unions) and with the union organizations alone. Other initiatives have since been introduced to foster this dialogue, such as the specific recruitment of a person to facilitate relations with the unions.
Another important factor is the growing role of occupational health and safety issues. A specific health and safety system has been established, pointing the way to leveraging health issues to tackle organizational questions. For example, in 2011 the GPS (gérer, prévoir, servir: manage, plan ahead, serve) group sought to clarify the role of supervisors in safety management, thereby contributing to an inter-departmental discussion network and mobilizing reception staff to organize reception areas and establish joint training programmes.
The priorities and procedures for discussion and debate on restructuring measures vary from one service to another, but in most cases result in amendments to the initial plans. Concerning the restructuring of parks, gardens and cleaning services, the objective was in line with a CGT aspiration but encountered opposition from the CFDT, though it was finally accepted. The proposal on gardener versatility met with a heated reaction from staff and was completely abandoned. Gardeners viewed the planned versatility as disrupting professional identities, and in the end restructuring was limited to one section of this cohort. All these changes were accompanied by additional recruitment, and the previously outsourced tram maintenance services were brought back under municipal control.
The highly formal nature of the dialogue and such significant adjustments to initial plans do not mean that there was unanimous recognition of the quality of social dialogue. The two unions we spoke to clearly expressed their dissatisfaction on this point, while municipal officials were disappointed that the adjustments to the initial projects went unacknowledged. This negative view of the social dialogue led to a six-month boycott of the official structures and a well-attended demonstration that resulted in the symbolic burial of the dialogue process. This malaise partly reflects the fact that the adjustments to the plans often entailed a period of conflict. In 2011, warnings were issued to 1,978 strikers regarding local action. Most of these actions were short-lived. Warnings are also sometimes a way of avoiding conflict. It would seem that this disappointment also reflects difficulties arising less from the decision to restructure services and front offices than from tensions within management. In particular, the decision to concentrate power around central management and to replace certain senior executives put pressure on middle managers, requiring them to implement the new structures and achieve new targets. This created bad feeling amongst these managers. In a 2011 survey of 254 managers, 18 per cent judged the industrial relations climate as bad and 60 per cent as very bad, while 60 per cent felt that rules were being applied inequitably within the municipality. This bad feeling also crystallized into a social movement around the plan to reduce certain benefits relating to the organization of working time.
As a conclusion to this case study, it appears that in this city, confronted by financial difficulties earlier than others, cost-saving policies largely excluded wages, since wage levels are mostly determined at central level. Action was focused on restructuring and related staff reductions. Reviewing the three aspects of restructuring, the social dialogue record is uneven. As regards the core priority, the restructuring of services provided to citizens, the components of the debate seem clear, the organizational choices relatively clear and in certain cases shared by the unions. In addition, the plans seem in no way to have run aground, even though the basic amendments and compromises seem to have entailed periods of conflict. Turning to the second priority, bringing the administration under control, there is relative agreement over the broad record, though things were trickier in detail, with more personal factors coming into play. The unions cannot object to the general principles of regulatory equality and transparency, but they find themselves in a tricky position when wanting to avoid appearing to be simply toeing the management line. However, it is the final priority, the reform of governance, where the biggest problems arise. The new principles of concentrating decision-making around municipal services and resource departments cannot be justified on the general grounds of transparency, or by visible improvements for citizens, making them both more questionable and more questioned.
Conclusion
Seen from an international perspective, austerity policies started in France with their own peculiar rhythm. Although the Bercy agreements aimed to introduce more formalism, they have not yet been applied. Similarly, there are factors relating to negotiation which are not about signing an agreement within a joint technical committee, but about managing these social movements. In certain areas, management avoids intervening, as there is too great a risk of social upheaval.
In certain cases, the administration provides openings for dialogue and accords unions a certain status with a view to preventing managers being overwhelmed by unorganised staff opposition. In other cases, the administration collects its own information to identify the causes of conflict. Finally, in this marked area of power relations and within the formal frameworks of joint technical committees or official social monitoring schemes, there is often implicit understanding of what will or will not give rise to conflict. In these circumstances of potential tension, amendments to projects are not insignificant, even though this does not necessarily produce satisfaction. Direct and indirect participation [START_REF] Denis | France: From direct to indirect participation to where? In: Farnham D, Hondghem A and Horton S Staff participation and public management reform, some international comparisons[END_REF], social dialogue and participatory democracy are intertwined. Even if this does not lead to official collective agreements, it boosts the role of formal committees. This kind of bargaining would seem to be closer to the practices found in northern European countries.
The differences we observe between service delivery and back-office reforms also support this hypothesis. We observe open conflict and real negotiations in service delivery, but no discussion and general unease among back-office staff and middle management. The difference appears to be linked to the difficulty of directly linking reforms and citizen satisfaction. The question of pressure on services seems to take different forms when it comes to the provision of public services as compared to support or management functions. Defining citizen needs provides a starting point for social dialogue. Combining participatory democracy and social dialogue provides a useful way of framing this kind of shared quality-of-service objective. Reducing outsourcing practices or bringing services back under municipal control can also provide a basis for agreements. The unions or staff working groups are able not only to specify what they see as unacceptable in terms of working conditions or wages, but also to highlight failings in the restructuring proposals or challenge certain assumptions in them, pointing to the history of organizations, often one of fluctuations between different (and always imperfect) priorities. The example of the management malaise in Saint Etienne suggests that internal changes can be more problematic.
Finally, the two-tier character of French local government bargaining raises the question of the proximity dimension of social dialogue in a crisis context. Decisions on general wage cuts are taken far from local organizations, though their impact is felt across the board. As in most countries the French wage freeze has been imposed unilaterally [START_REF] Glassner | The crisis and social policy: The role of collective agreements[END_REF].
In such a situation trade union strategy could tend to obtain separate offsetting benefits (e.g. with regard to retirement plans). Restructuring measures in municipalities are decided locally and have an impact on specific workers (and not others) and on specific citizens. Whereas the job consultation system on state restructuring measures is largely formal, it appears more dynamic at municipal level. The case study points to the possibility, even in conflict, of a joint problem-solving attitude conducted in a transparent manner and including the direct effect of reforms on service levels and quality. This feature, if confirmed in other case studies, could suggest a relative resilience [START_REF] Bach | Social Dialogue and the Public Service in the Aftermath of Economic Crisis: Strenghtening Patnership in an era of Auterity[END_REF] of local social dialogue in the face of crisis.
Funding
Financial support of the European Commission is acknowledged (VS/2011/0412, 'Social dialogue and the public services in the aftermath of the economic crisis: strengthening social dialogue in an era of austerity').
Table 1. French public sector employment by headcount.

           State administration   Local government   Hospitals     Total
    2006        2,649,857            1,610,925       1,055,821   5,316,603
    2007        2,587,956            1,703,058       1,073,238   5,364,253
    2008        2,509,247            1,769,845       1,084,827   5,363,919
    2009        2,483,722            1,806,483       1,095,801   5,386,006
    2010        2,458,070            1,811,025       1,110,554   5,379,649
    2011        2,398,672            1,830,663       1,129,438   5,358,773

Municipal spending (billion euro)

                      2006    2007    2008    2009    2010
    Municipalities    85.8    90.1    89.9    91.8    91.1
    Districts         30.0    32.5    33.3    34.6    36.1
    Total            115.8   122.6   123.2   126.4   127.2

Staff budget variations (%)

                      2000-2005   2005-2008   2008-2009   2009-2010
    Municipalities       3.6         3.9         2.2         2.1
    Districts           17.0         8.9        10.8         7.6

Note: local government = municipalities, districts and two levels of regional government (départements and régions).
Source: Civil Service Ministry, DGAFP, Rapport annuel sur l'Etat de la fonction publique 2011-2012, Faits et chiffres.
The current President has some room for manoeuvre, as he is supported by a left-wing majority in the Senate, attributable to the major dissatisfaction of local politicians with local government reform. Nonetheless, commenting on the 2012 State budget, he conveyed the same message, though in different terms, about the need to control local budgets and to reduce subsidies from State to local government. Moreover, in October 2013 the Cour des Comptes (the French national audit office) criticized staff growth in local government.
Policies for public spending cuts were launched in 2007 as a late new public management orientation just prior to the outbreak of the crisis, and were then moderated as a Keynesian reaction to the crisis. Moreover, local administrations were only partially affected by these public spending cuts, and local government headcount continued to rise after 2008. Reform plans inspired by new public management have sought to alter this model by introducing more contracts (Bercy agreements), by reorganizing administrative departments, and by changing wage-setting conditions, though without transforming the statute's underlying economic rationale [START_REF] Bezes | The development and current features of the French civil service system[END_REF] [START_REF] Jeannot | Changer la fonction publique[END_REF]. National dialogue seems formal and limited in impact [START_REF] Rehfeldt | Négocier dans les services publics, dimensions procédurales et stratégiques[END_REF], making France comparable with other southern European countries [START_REF] Bach | Varieties of new public management or alternative models? The reform of public service employment relations in industrialized democracies[END_REF]. Wage freezes have been accepted without much discussion. Trade unions have, under left-wing governments, been able more effectively to limit changes to the advantageous retirement plan for civil servants. At local level, the Saint Etienne case study shows ongoing negotiations on public service restructuring and working conditions. The hypothesis of pragmatic discussions on service delivery is broadly confirmed, with us witnessing a true bargaining situation including conflict and formalized agreements. Power relations remain very tense, in particular around service provision. Alexandra Garabige (2010) speaks of 'bellicose compromises' in describing negotiations on working time observed by her in one municipality. This same tone would seem to exist in Saint Etienne, where power struggles are still a reality.
This French peculiarity is now slowly dissipating, with national policies becoming more clearly reactive to economic circumstances. On top of the tax increases announced during the presidential elections, the current French socialist government has launched a competitiveness package including plans to reduce the public deficit, and it is clear that the overall economic context has guided budget policies since 2013. On the other hand, these austerity measures now also apply to local government, bringing France broadly into line with other European countries.
Social dialogue concerning local government staff is organized at two levels. Wages and general rules are discussed at the national level under the state umbrella, while working conditions and individual career issues are discussed at local level.
Local government employees are largely dependent on the outcome of national-level social dialogue concerning state employees, whereby the state acts simultaneously as employer and regulator. The employment conditions of all public servants are laid down by statute rather than contract, placing the government in a sovereign position [START_REF] Bach | HRM and the new public management[END_REF], since the negotiation options set out in the 1983 statute are limited.
The Galland Law of 13 July 1987 replaced the branch system by the more flexible system of job cadres and facilitated the recruitment of contract workers, while the more recent law of 19 February 2007 abolished promotion quotas and allowed politicians to promote people once they met the minimum seniority conditions.
However, this tendency has recently been corrected within the CGT and the CFDT, with the latter organizing public meetings to draw more attention to public sector conflicts.
Doligé Eric, Jeannerot Claude, Transfert de personnels de l'Etat vers les collectivités territoriales, un pari réussi, des perspectives financières tendues, Sénat, rapport d'information n° 117, 2010-2011.
A Randstat survey in May-June 2010 noted that … per cent of local managers questioned expected local government employment to remain stable in 2011, a quarter expected an increase and 15 per cent a reduction (cited in Guillot and Michel, 2011: 44). Another sign: the national local personnel centre, whose budget is proportional to the local government wage bill, expected, after many years of rising budgets, resources to remain flat in future years. 5 'A 40 per cent increase in local government employees over 10 years is a trend that needs to be stopped': Nicolas Sarkozy, message to civil servants in Lille on 12 January 2012.
In 2004, average annual income per tax household was €14,082, compared to €17,314 in the Rhône-Alpes Region.
In 2011 the Journal du dimanche (8 October) provided a ranking of France's most indebted cities. Saint Etienne was top of the list, ahead of Marseille and Lille.
Moderniser le service public des villes, territoire et modernité, Rencontre des acteurs de la ville, 24 to 25 February 2000, Montreuil. |
01768073 | en | [
"shs.gestion",
"shs.archi",
"shs.hisphilso",
"shs.hist"
] | 2024/03/05 22:32:16 | 2014 | https://hal.science/hal-01768073/file/IntroductionMaterialityTime_finalversion%20%281%29.pdf | François-Xavier De Vaujany
Nathalie Mitev
Pierre Laniray
Emmanuelle Vaast
Time and Materiality: What is at Stake in the Materialization of Time and Time as a Materialization?
The stream of research related to sociomaterial practices is influenced primarily by [START_REF] Latour | Reassembling the Social. An Introduction to Actor-Network-Theory[END_REF], [START_REF] Suchman | Plans and Situated Actions: the problem of human-machine communication[END_REF], [START_REF] Pickering | The Mangle of Practice: Time, Agency, and Science[END_REF] and [START_REF] Orlikowski | Material works: exploring the situated entanglement of technological performativity and human agency[END_REF][START_REF] Orlikowski | Sociomaterial practices: Exploring technology at work[END_REF] and has attempted to overcome the dichotomy between social and material worlds by concentrating on practices within organizations. These practices are constituted by, but also produce, material and social dynamics. This movement is currently having an important impact in the field of management and organization studies. Business historians study the historical evolution of business systems, entrepreneurs and firms, as well as their interaction with their political, economic, and social environment. In anthropology, Johannes Fabian's classic study Time and the Other (1983) changed the way anthropologists relate to the "here and now" and the "there and then" of their objects of study. Researchers in management and organizational history have started bringing similar changes to organization studies; however, they tend to publish in academic journals and this new field has not fully integrated the importance of the material into its remit. Our book therefore endeavors to bring together these various strands of existing research into a unique outlook on organizations, materiality and time.
The relationship between time and the materiality of everyday practices is by itself an old and well-trodden theme in the analysis of societies and organizations. Among the founding fathers of social sciences, Karl Marx, through what he called 'historical materialism' (see Marx, Engels and Lenin, 1974 or Giddens, 1985 2 ), considered that the History of our societies and their material underpinnings should be the key focus of social studies. Unsurprisingly, to develop an alternative post-marxist vision of societies and their structuration, [START_REF] Giddens | The constitution of society: introduction of the theory of structuration[END_REF][START_REF] Giddens | A Contemporary Critique of Historical Materialism: The nation-state and violence[END_REF] has also reconceptualized the relationship between the materiality of our societies (e.g. facilities and rules) and human agency to think about their constitution in the context of different spatio-temporal dynamics.
From a more methodological perspective, some social sciences, in particular History, have considered that analyzing traces of materiality provides strong hints of past times and historical contexts. Thus, exploring the way time is materialized (whether to 'show' the time it is or the broader historical time) is a key issue in the analysis of societies and organizations [START_REF] Giddens | A Contemporary Critique of Historical Materialism: The nation-state and violence[END_REF][START_REF] Goff | Au Moyen Âge: Temps de l'Église et temps du marchand[END_REF][START_REF] Bluedorn | Time and Organizations[END_REF]. In his illustrious research on mechanical clocks, medievalist Jacques Le Goff has shown the social and symbolic dimensions of the materialization of time. According to Le Goff (1960, 2011 3), the invention of the mechanical clock between the 13th and 15th century is a defining event of the Middle Ages. Two different times, and two related materializations, were then in opposition with each other: that of the country (rung by Church bells) and that of towns (rung by mechanical clocks). From this point and this new accountability of time, societies and organizations experienced major changes.
Employees' labor and wages began to be counted in hours. People realized what was at stake (e.g., their autonomy, the visibility of their work) and in some cases revolted against the use of mechanical clocks (perceived as a control tool) and went on strike (e.g. in the vineyards of Burgundy in the 14th century) 4.
Time and its relationship with organizational materiality have also been a key focus in management and organization studies, in particular for the exploration of organizational change [START_REF] Bluedorn | Time and Organizations[END_REF][START_REF] Lee | Time in Organizational Studies: Towards a New Research Direction[END_REF][START_REF] Goodman | Introduction to the Special Issue: Time in Organizations[END_REF][START_REF] Ancona | Taking time to integrate temporal research[END_REF][START_REF] Child | Development of organizations over time[END_REF]. Time can be seen as a grid or scheme to make sense of organizations and their dynamics (see e.g. [START_REF] Child | Development of organizations over time[END_REF]). It is then an abstraction required to make sense of movement and organizational change. Since the late 1990s, time has also been conceptualized as something which needs to be performed or materialized in organizations [START_REF] Ancona | Taking time to integrate temporal research[END_REF][START_REF] Orlikowski | It's about time: Temporal structuring in organizations[END_REF]. Time does not appear any more as an ontological construct, exterior to collective action and organizational dynamics. It is no longer a 'variable'. As a materialization, or performance, it is organization and its dynamics. Paradoxically, time (often thought of as abstract only) has or can have a 'matter' for organizational stakeholders (see [START_REF] Bergson | Time and Free Will: An Essay on the Immediate Data of Consciousness[END_REF], on the issue of "duration"). It also becomes a key strategic stake. For instance, the schedule, planning and temporal orientation of projects are seen as a matter of control and power in organizations [START_REF] Gersick | Time and transition in work teams: Toward a new model of group development[END_REF][START_REF] Gersick | Marking time: Predictable transitions in task groups[END_REF]. For some researchers, this possible plurality of time and its materialization introduces the possibility of conflicts of temporalities, what Norbert Alter (2000) calls "organizational dyschronies".

3 See also [START_REF] Dohrn-Van Rossum | L'histoire de l'heure: l'horlogerie et l'organisation moderne du temps[END_REF] on this issue.
4 Le [START_REF] Goff | Interview sur le Moyen Age[END_REF] even sees in mechanical clocks a more important invention than printing techniques: "JL: I think that the mechanical clock has been much more important than printing. The former has had immediate consequences on everyday life, which has not been the case of the latter. Firstly, because many works had been disseminated through the form of manuscripts for a long period. Then, because what was printed, let's say till the mid-16th century, most books were Bibles or religious books: their diffusion targets an elite and is not at all a breakthrough for everyday life. That is why I see the mechanical clock as a major invention in the History of Mankind."
Management historians have also influenced organization research by adopting longer-term perspectives related to societies and their material dynamics, inspired by Braudel's "longue durée" [START_REF] Kieser | Organizational, institutional, and societal evolution: Medieval craft guilds and the genesis of formal organizations[END_REF][START_REF] Kieser | Why organization theory needs historical analyses-and how this should be performed[END_REF](Usdiken and Kieser, 2004;[START_REF] Mitev | Seizing the opportunity: towards a historiography of information systems[END_REF]). Üsdiken and Kieser [START_REF] Üsdiken | Introduction: History in Organization Studies[END_REF] distinguished three possible approaches for the incorporation of History and longue durée in management and organization studies:
-Supplementarist approach, where the historical "context" is simply added and is only a complement to common positivist approaches still focusing on variables, although with a longer time span than usual. It "adheres to the view of organization theory as social scientistic and merely adds History as another contextual variable, alongside other variables such as national cultures" (Booth and Rowlinson, 2006: 8);
-Integrationism, or a full consideration of History with new or stronger links between organization theory and history. The aim is "to enrich organization theory by developing links with the humanities, including history, literary theory and philosophy, without completely abandoning a social scientistic orientation" (Ibid: 8);
-Reorientationist or post-positivist approach, that examines and repositions dominant discourses including our own (such as progress or efficiency), and produces a critique and renewal of organization theory itself, on the basis of history. This "involves a thoroughgoing critique of existing theories of organization for their ahistorical orientation" (Ibid: 8).
Both integrationism and re-orientationism have interesting implications for the study of materiality and materialization in organizations. They imply a more subjective investigation of time, for example on the nature of collective representations of people in the past, and their evolution. They also imply a deeper understanding of materiality and materialization. Studying the institutional evolution of an organization implies long time spans, which show more clearly through the inclusion and comparison of material traces of past actions. Materiality and the matter of organizational and collective actions, i.e. the organizational space and its specificity, appear more clearly from a longue durée perspective (see [START_REF] Vaujany | If these walls could talk: The mutual construction of organizational space and legitimacy[END_REF]).

In the context of this edited book, we propose a re-orientationist stance on three key topics related to time and materiality. We first need (Part I) to understand how time is materialized and performed in organizations, i.e. how IT artefacts, standards and material space perform time and temporal dynamics in organizations (ontological vision of time); this is necessary (Part II) to then explore how organizations and organizational members are constituted by and constitutive of material artifacts (subjective and sociomaterial visions of time); this leads us (Part III) to finally reflect on what a historical perspective on these materializations can bring to the study of organizations (historical, longue durée, vision of time). In more detail:
In the context of this edited book, we propose a re-orientationist stance on three key topics related to time and materiality. We first need (Part I) to understand how time is materialized and performed in organizations, i.e. how IT artefacts, standards and material space perform time and temporal dynamics in organizations (ontological vision of time). This is necessary -Topic I: how is everyday time materialized and performed in organizations? What is at stake in its materialization through time schedules, time-oriented managerial techniques, vestiges, enactment of old artefacts, etc.? This topic will be the concern of the first three chapters of our book;
-Topic II: how are organizations and organizational members constituted through time by material artefacts? In turn, how are material traces of past actions used and incorporated into present dynamics? How are material and social dimensions (e.g. at the level of agency) imbricated through time? Chapters 4, 5 and 6 will deal with these issues.
-Topic III: how can we make sense specifically of longue durée (in terms of organizational and societal histories), long term processes, and their materialization in organizations? This third focus will be at the core of chapters 7, 8, 9 and 10.
Our book is organized according to these three topics in management and organization studies, as shown below (Figure 1).

Part I. Time and history in organizations: what is at stake?

In Chapter 1 ("Time, History, and Materiality"), JoAnne Yates suggests that a discussion about time, history, and materiality brings together three themes that have been central to her research, both historical and contemporary. Throughout her career, she has studied change, whether over short (a couple of months) or long (e.g. 150 years) periods of time. Much of her work is explicitly historical and addresses long stretches of time with a historical eye. It has also been mindful of the material implications of information and communication technologies. In this chapter she explains her vision of the relationship between time, longue durée and materiality, in particular in the context of the history of voluntary consensus standard setting activities. Using examples related to standardization, she illustrates her three themes and speculates about their implications, both individually and as they interact with each other.
In Chapter 2 ("Iconography, materiality and legitimation practices: A tale of the former NATO command room"), François-Xavier de Vaujany and Emmanuelle Vaast suggest that organizing is highly iconographical and related to historical symbolic imageries. Their chapter draws on Baschet (2008)'s historical distinction between "object-images" and "screen-images", originally applied to the religious iconography of the Middle Ages, and argues for their relevance and critical significance in organizations' legitimation practices. By means of an ethnographic case study of a former NATO command room repurposed as a meeting room by a French university, the authors reveal how these two iconographies co-exist and interrelate in legitimation practices. Various historical periods (NATO, post-May 68 or the more recent 'corporate' momentum) are frequently reenacted and performed during meetings in front of external stakeholders to guide their interpretations of events and space. Materializing the past therefore contributes to legitimating organizational activities. The authors also reveal the extent to which the relationships between these iconographies (which rely on different foundations in terms of materiality and visibility) legitimate the organization in complementary ways.
In Chapter 3 ("Evolution of non-technical standards: The case of Fair Trade"), Nadine Arnold suggests examining non-technical standards as artefacts that evolve in accordance with their contextual and historical setting. In the past, organization scholars studying standards and standardization tended to ignore material aspects of the phenomenon. An analysis of the Fair Trade standardization system shows how its underlying rules changed from guiding criteria to escalating standards. Taking an evolutionary perspective Arnold outlines four major trends in the written standards and relates them to the history of Fair Trade. Detected modifications were either meant to legitimate the standardization system through the content of standards or to enable legitimate standardization practices. Overall the exuberant growth of non-technical standards leads to critical reflections on the future development of Fair Trade and its standards.
Part II. Temporal dynamics of artifacts and materiality in organizations: the importance of material traces
In Chapter 4 ("Making Organizational Facts, Standards and Routines: Tracing materialities and materializing traces"), Chris McLean discusses how we can become sensitive to the many different connections and traces of action that emerge and engage with other actions. She studies how these intensive forces become foregrounded while others may appear to fade away.
In contrast to a process of linearity and continuity where traces exist 'out there' in some simple cause and effect form, the cauldron of becoming is an entangled mesh of complex foldings, relations and discontinuous links with connections emerging from diverse and heterogeneous forms.
In Chapter 5 ("Management control artefacts: an enabling or constraining tool for action?"), Emilie Bérard questions the definition and uses of the concept of affordances from a perspective on management control. Sociomateriality research studies organizational practices by examining how the material and the social mutually and constitutively shape each other. In this perspective, the concept of affordance is of particular interest when studying the role of management tools in the transformation of an organization's activities. Objects offer affordances for action, which depend both on its materiality and on the actors' perception for action. However, the concept of affordances has not spread far beyond the field of information systems and technology studies, and its interpretative scope is still open to discussion for studying the role of objects in organizations. The aim of this chapter, therefore, is to examine the definition of the concept as well as its potential use and benefit in the field of management control.
Lastly, in Chapter 6 ("Sociomateriality and reputation damages, when the Omerta is broken"), Hélène Lambrix uses the story of student hazing in French universities to illustrate how material traces of unethical behavior are used to construct and destroy corporate reputation. The study focuses on multiple stakeholders' perceptions and practices over time. Using data collected through semi-structured interviews, observation and archives on the Web, she explains the different processes used by a university and its key constituents (students and alumni) to protect and maintain the organizational reputation, and by external observers to attack it. This chapter contributes to the recent literature on reputational crises by providing empirical results and exploring the relationship between reputational crisis and legitimacy costs.
Part III. Stretching out time and materiality in organizations: from presentism to longue durée
In Chapter 7 ("The historian's present"), François Hartog highlights that conditions of the historian's craft have changed over the last thirty years or so, and that they continue to change in front of our eyes. For Marc Bloch (1997:65), history is a "science of men in time," that "needs to unite continuously the study of the dead with that of the living." Today, should the historian practice his craft uniquely within the confines of the present? Hartog focuses on the extended present, that is, the new field of Memory. In order to be admitted into the public sphere, recognized by civil society, must the historian make himself "relevant" to this present, so to speak to make himself present to the present? François Hartog enters into the debates by suggesting elements of answers related to his vision of a regime of historicity.
In Chapter 8 ("Dance of sociomaterial becoming: an ontological performance"), Dick Boland's explains how we can get behind language and explore sociomateriality not by trying to define it, but by trying to do it. It is only the performative nature of language that affords the very possibility for us to experience the world. Only where there is language there is a world. And only where the world predominates, there is history. So, because of language, man can existwoman alsohistorically. Further, the tool invents the human, or the human invents himself by inventing the tool through a techno-logical exteriorization. But this exteriorization is in fact the co-constitution of the interior and the exterior through this dance, through this movement.
Boland explores palaeo-anthropology and the earliest discussion of tools, technologies and the becoming human of the homo genus to analyze the co-constituted dance of interior and exterior as a theme.
In Chapter 9, Caroline Scotto explores the Campus Paris-Saclay project in France. She questions what is included in the notion of campus by examining the hypothesis that a historical approach can generate knowledge through the link between a context, academic and planning principles, functions, actors, planning tools, spatial organizations and geographical situations.
Caroline Scotto proposes to focus on the principles of campus development in order to establish a morphological and functional genealogy of this object. The idea is to represent the relationship between the different models and the campus under construction by using genealogy as a comparative tool, in order to question the link between institutional changes and spatial organization.
As a conclusion, we summarize the key contributions of all authors with regards to the materialization of time (e.g. historical time) and the material dynamic of organizations. We also suggest a set of avenues for further research in the field of management and organizations studies.
This book concentrates on the materiality of artefacts, practices and organizations, and on their historical dimension. It combines recent scholarly interest in sociomateriality with a deep fascination with time and a temporal perspective. It adds a time dimension that complements the spatial focus of a first book, on 'Materiality and Space', published by Palgrave Macmillan in 2013.
This second book is based on the 3rd Organizations, Artefacts and Practices workshop 1 that took place at the London School of Economics in June 2013, organized jointly by the Information Systems and Innovation Group in the Department of Management, the Department of Accounting, and Paris Dauphine University. The workshop encompassed themes related to: Historical perspectives on materiality; Historiographies, data and materiality; Social and material entanglements across time; Information technology, information and materiality in organizations; Measuring and accounting for time in organizations; Space and time in organizations; Theoretical and methodological perspectives on time in organizations; Identity and materiality in organizations; Accounting, time and materiality; Institutions, institutionalization and materiality in organizations; Critical perspectives on time and materiality; Artefacts, organizations and time. The event gathered 120 participants from the UK, Europe, the US and elsewhere, to present and discuss 42 papers. Track chairs and the workshop organizers selected the best papers. Additionally, two keynote speakers have provided a chapter, and two senior figures who attended the workshop have offered a preface and a postface. There are many books on materiality in social sciences, going back to Arjun Appadurai (1986)'s "The Social Life of Things: Commodities in Cultural Perspective", which examines how things are sold and traded in a variety of social and cultural settings and bridges the disciplines of social history, cultural anthropology, and economics. Another example in social anthropology is Daniel Miller (2005)'s "Materiality (Politics, History, and Culture)", which explores the expression of the immaterial through material forms and aims to de-centre the social to make room for the material. More recent examples drawing on cultural anthropology are Tilley, Keane, Kuchler, Rowlands and Spyer (2013)'s Handbook of Material Culture, which is concerned with the relationship between persons and things in the past and in the present, in urban, industrialized and small-scale societies across the globe, and Harvey, Casella, Evans, Knox, McLean, Silva, Thoburn and Woodward (2013)'s Objects and Materials, which focuses on object-mediated relations and investigates the capacity of objects to shape, unsettle and fashion social worlds.
A classic example can be Inventing the Electronic Century (Harvard Studies in Business History) by Alfred Chandler (2005), which traces the origins and worldwide development of consumer electronics and computer technology companies. Organizational and management history is a more recent movement in management studies that draws on philosophical and sociological conceptualizations of time, such as George Herbert Mead (The Philosophy of the Present, 1932)'s seminal work on the structure of temporality and consciousness and the character of both the present and the past. Scholars in this vein include Hayden White (Narrative Discourse and Historical Representation, 1987), who examines the production, distribution and consumption of meaning in different historical epochs; Alex Callinicos (Theories and Narratives: Reflections on the Philosophy of History, 1995), with his exploration of the relationships between social theory and historical writing; and David Carr (Time, Narrative, and History, 1991), with his work on narrative configurations of everyday life and their practical and social character. Another example is Robert Hassan and Ronald Purser's edited book 24/7: Time and Temporality in the Network Society (2007). They examine how the regimes, firstly of the clock and then of the networked society, have changed individuals and organizations. Adding a time perspective has benefited other disciplines, in particular anthropology, as illustrated in Time and the Other: how Anthropology makes its Object, by Johannes Fabian (1983).
Figure 1: Logical structure of the book.
For more information: http://workshopoap.dauphine.fr/fr.html
2 According to [START_REF] Giddens | A Contemporary Critique of Historical Materialism: The nation-state and violence[END_REF] (1985: 2): "Historical materialism connects the emergence of both traditional and modern states with the development of material production (or what I call "allocative resources"). But equally significant, and very often the main means whereby such material wealth is generated, is the collection and storage of information, used to coordinate subject populations." |
01768184 | en | [
"sdv.bbm.bs"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01768184/file/J.%20Biol.%20Chem.-2018-Weinha%CC%88upl-jbc.RA118.002251.pdf | Katharina Weinhäupl
Martha Brennich
Uli Kazmaier
Joël Lelièvre
Lluis Ballell
Alfred Goldberg
Paul Schanda
email: [email protected]
Hugo Fraga
email: [email protected]
Joel Lelievre
The antibiotic cyclomarin blocks arginine-phosphate-induced millisecond dynamics in the N-terminal domain of ClpC1 from Mycobacterium tuberculosis
Keywords: nuclear magnetic resonance, Mycobacterium tuberculosis, natural product, antibiotic action, antibiotic resistance, small angle X-ray scattering, protease, chaperone
Tuberculosis (TB) is a major public health problem, with ten million people infected and two million dying each year [START_REF] Zumla | Advances in the development of new tuberculosis drugs and treatment regimens[END_REF]. The main challenge in TB treatment is the long duration of therapy required for a cure, as the resistance of TB results from its ability to stay dormant for long periods in the host. Most antibiotics require bacterial replication for their action, and this dormant state renders Mycobacterium tuberculosis (Mtb) resistant to bactericidal antibiotics. Aggravating this problem, Mtb has become increasingly resistant to existing antibiotics, and multidrug-resistant TB (MDR-TB) is now widespread [START_REF] Zumla | Advances in the development of new tuberculosis drugs and treatment regimens[END_REF].
The proteolytic complex formed by the proteins MtbClpP1 and MtbClpP2 and its hexameric regulatory ATPases, MtbClpX and MtbClpC1, is essential in mycobacteria and has emerged as an attractive target for anti-TB drug development. The Clp ATP-dependent protease complex is formed by two heptameric rings of protease subunits (MtbClpP1 and MtbClpP2) enclosing a central degradation chamber, and a hexameric ATPase complex, MtbClpC1 or MtbClpX [START_REF] Akopian | The active ClpP protease from M. tuberculosis is a complex composed of a heptameric ClpP1 and a ClpP2 ring[END_REF]. The ClpC1/ClpX ATPases recognize, unfold and translocate specific protein substrates into the MtbClpP1P2 proteolytic chamber, where degradation occurs. MtbClpC1 is a member of the class II AAA+ family of proteins, which contain an N-terminal domain (NTD) and two distinct ATP-binding modules, D1 and D2 (Fig. 1a). The active form of ClpC is a homohexamer, and for MtbClpC1 and Synechococcus elongatus ClpC, ATP alone is essential and sufficient for efficient protein degradation in association with ClpP (3). However, B. subtilis ClpC (BsClpC) requires the binding of both ATP and the adaptor protein MecA for formation of the active hexamer. No homologous adaptor protein has been described in Mtb (4), but it remains to be tested whether MtbClpC1 can associate with a MecA-like protein.
Recently, Clausen and coworkers demonstrated that BsClpC specifically recognizes proteins phosphorylated on arginine residues by the arginine kinase McsB [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF]. These phosphorylation sites are often found in secondary structure elements and thus are accessible only when the protein is unfolded or misfolded. This innovative work revealed a new pathway for the selective degradation of misfolded proteins in bacteria, but the structural consequences of arginine-phosphate (ArgP) binding to ClpC remain unclear. Indeed, although the crystal structure of the BsClpC NTD shows two ArgP molecules bound to the protein, no significant structural changes were observed, which is quite surprising since arginine-phosphorylated substrates (e.g. casein) can stimulate BsClpC ATPase activity and can promote complex oligomerization [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF].
The potential importance of MtbClpC1 as a novel drug target against TB has been emphasized by the recent findings by two independent groups that pyrazinamide (PZA)-resistant strains contain mutations (Fig. 1b) in ClpC1 [START_REF] Yee | Missense Mutations in the Unfoldase ClpC1 of the Caseinolytic Protease Complex Are Associated with Pyrazinamide Resistance in Mycobacterium tuberculosis[END_REF][START_REF] Zhang | Mutation in clpC1 encoding an ATP-dependent ATPase involved in protein degradation is associated with pyrazinamide resistance in Mycobacterium tuberculosis[END_REF]. PZA is a critical first-line TB drug used with isoniazid, ethambutol and rifampicin for the treatment of TB, and is also frequently used to treat MDR-TB [START_REF] Zumla | Advances in the development of new tuberculosis drugs and treatment regimens[END_REF]. In addition to PZA, three natural product antibiotics that specifically target MtbClpC1 have been discovered recently: cyclomarin [START_REF] Vasudevan | Structural basis of mycobacterial inhibition by cyclomarin A[END_REF][START_REF] Schmitt | The Natural Product Cyclomarin Kills Mycobacterium[END_REF], ecumicin [START_REF] Gao | The cyclic peptide ecumicin targeting ClpC1 is active against Mycobacterium tuberculosis in vivo[END_REF] and lassomycin [START_REF] Gavrish | Lassomycin, a Ribosomally Synthesized Cyclic Peptide, Kills Mycobacterium tuberculosis by Targeting the ATP-Dependent Protease ClpC1P1P2[END_REF]. Cyclomarin, which has since been synthesized by independent groups, is bactericidal against TB and is able to kill non-replicating bacteria [START_REF] Schmitt | The Natural Product Cyclomarin Kills Mycobacterium[END_REF]. Despite the absence of resistance mutations, the ClpC1 NTD was identified as the drug target using affinity chromatography with cyclomarin conjugated to sepharose [START_REF] Schmitt | The Natural Product Cyclomarin Kills Mycobacterium[END_REF]. While the crystal structure of the NTD was identical with or without cyclomarin bound, observations in Mycobacterium smegmatis suggested that cyclomarin can increase proteolysis by the ClpC1P1P2 machine [START_REF] Vasudevan | Structural basis of mycobacterial inhibition by cyclomarin A[END_REF][START_REF] Schmitt | The Natural Product Cyclomarin Kills Mycobacterium[END_REF]. Ecumicin is another potent natural antibiotic that efficiently kills Mtb persisters, and resistance mutations in ClpC fall within its NTD (Fig. 1b). When tested in vitro, ecumicin increases ClpC1 ATPase activity several-fold, while simultaneously compromising degradation of ClpC1P1P2 substrates [START_REF] Gao | The cyclic peptide ecumicin targeting ClpC1 is active against Mycobacterium tuberculosis in vivo[END_REF][START_REF] Jung | Mutation analysis of the interactions between Mycobacterium tuberculosis caseinolytic protease C1 (ClpC1) and ecumicin[END_REF].
Lassomycin, a ribosomally encoded cyclic peptide from actinomycetes, is yet another natural antibiotic able to kill Mtb persisters efficiently [START_REF] Gavrish | Lassomycin, a Ribosomally Synthesized Cyclic Peptide, Kills Mycobacterium tuberculosis by Targeting the ATP-Dependent Protease ClpC1P1P2[END_REF]. Despite differing structurally from ecumicin, lassomycin also activates ATP hydrolysis by the ClpC1 ATPase, and resistant mutants map to a basic region in the protein's NTD [START_REF] Gavrish | Lassomycin, a Ribosomally Synthesized Cyclic Peptide, Kills Mycobacterium tuberculosis by Targeting the ATP-Dependent Protease ClpC1P1P2[END_REF].
Due to their high molecular weights and structural complexity, these natural products are challenging for structure-activity relationship studies, but compounds with similar modes of action may be very attractive as drug candidates. Understanding the mechanism of action of these compounds will provide valuable insights for the development of more effective TB drugs. Unfortunately, the intrinsic flexibility of ClpC1 and the exchange dynamics between its different oligomeric states have so far impeded structural studies. For example, crystallization of BsClpC was only possible upon removal of flexible loop regions, rendering the protein non-functional and at the same time underlining the importance of dynamics for the function of these complexes [START_REF] Wang | Structure and mechanism of the hexameric MecA-ClpC molecular machine[END_REF]. For this purpose, nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) offer important advantages for investigating protein conformation in solution and testing the effects of ligands.
Here we study the interaction of these potent new antibiotics with ClpC1 using state-of-the-art NMR and SAXS in order to elucidate their modes of action. A proper understanding of how these drugs influence the ClpC1 mechanism may also clarify how this family of AAA+ ATPases functions upon substrate binding.
Results:
Drug Binding to ClpC1 NTD
It is now widely accepted that conformational heterogeneity of proteins can be an important factor in ligand binding and drug mechanism of action [START_REF] Hart | Modelling proteins' hidden conformations to predict antibiotic resistance[END_REF]. Indeed, in the case of cyclomarin, no significant changes of the ClpC1 NTD X-ray structure were observed upon ligand binding. Therefore, it was proposed that hidden unexplored conformations could be the basis for the compound's specific actions [START_REF] Vasudevan | Structural basis of mycobacterial inhibition by cyclomarin A[END_REF]. Although linking conformational heterogeneity or dynamics to protein function is a difficult task, NMR is a powerful method to elucidate such phenomena.
Full-length MtbClpC1 contains 849 residues that form a functional hexamer of 561 kDa in the presence of ATP. The large size of this complex, together with its low solubility and expression levels, prohibits NMR studies of the full-length protein, even when perdeuterated and specifically methyl-labelled samples are used. To circumvent this problem, we cloned and expressed the MtbClpC1 domains separately: NTD (residues 1-145), D1 (165-493), D2 (494-849), NTD-D1 (1-493) and D1-D2 (165-849). With the exception of the NTD, NTD-D1 and D2 constructs, all the others yielded insoluble protein when expressed. Moreover, the purified D2 domain did not show any detectable ATPase activity and did not form oligomers in the presence of ATP, while the NTD-D1 construct was soluble when expressed in ArcticExpress cells at 4 °C but precipitated when ATP was added.
Mutations in the NTD of MtbClpC1 have been shown to confer resistance to cyclomarin, ecumicin, lassomycin and PZA, indicating a pivotal role for this domain (Fig. 1b). This 16 kDa domain is easily accessible to solution-state NMR even without perdeuteration. Therefore, we focused our work on the MtbClpC1 NTD and tested the effects of the different antibiotics on its structure in solution. MtbClpC1 NTD behaved as a homogeneous monomeric protein upon size exclusion chromatography and dynamic light scattering (DLS, Fig. S1a), and yielded high-quality NMR spectra with excellent peak dispersion, an indication of a well-folded, globular protein (Fig. 2a). Two sets of assignment experiments were performed. While at pH 7.5 we were unable to assign loop regions due to amide exchange, at pH 6.0 we were able to assign 95% of all residues, allowing almost the entire protein structure to be mapped. ClpC1 NTD consists of eight helices that fold as two repeats of a four-helix motif sharing 58% identity (Fig. 2b). A 14 amino acid loop between helices 4 and 5 connects the two motifs [START_REF] Vasudevan | Structural basis of mycobacterial inhibition by cyclomarin A[END_REF]. Analysis of the backbone ¹H, ¹³C and ¹⁵N chemical shifts with TALOS+ [START_REF] Shen | TALOS+: a hybrid method for predicting protein backbone torsion angles from NMR chemical shifts[END_REF] shows that the predicted secondary structure elements of the NTD in solution are highly similar to those in the crystal structure (Fig. 2b).
We next tested the effect of cyclomarin binding on the MtbClpC1 NTD spectra. When cyclomarin was added to the MtbClpC1 NTD, large changes in the spectrum were observed (Fig. 2a). Similar changes were observed with the analogue desoxycyclomarin (Fig. S1b) [START_REF] Barbie | Total synthesis of desoxycyclomarin C and the cyclomarazines A and B[END_REF]. In fact, given the amplitude of the chemical shift perturbations (CSP), a new set of assignment experiments was required to identify the shifted residues. The large magnitude of the CSP observed (Fig. 2c) is likely a consequence of the rich content of aromatic residues in the cyclomarin molecule and the corresponding ring-current effects. As the chemical shift is very sensitive to changes in the local chemical environment, we were able to map the compound binding site to the region between helices 1 and 5 (Fig. 2d), in agreement with the X-ray structure of this domain with cyclomarin bound [START_REF] Vasudevan | Structural basis of mycobacterial inhibition by cyclomarin A[END_REF]. Analysis of the backbone amide, CO, Cα and Cβ chemical shifts with TALOS+ [START_REF] Shen | TALOS+: a hybrid method for predicting protein backbone torsion angles from NMR chemical shifts[END_REF] shows that the predicted secondary structure elements of the cyclomarin-bound NTD in solution are highly similar to those in the crystal structure (Fig. S1d), excluding major changes in the domain secondary structure.
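The combined CSP plotted in Fig. 2c can be computed from the amide ¹H and ¹⁵N shift differences. The exact weighting used here is not stated in the text; the sketch below uses one widely adopted convention in which the ¹⁵N difference is scaled down to account for its larger shift range (the scaling factor is an assumption, not a value from this work):

```python
import numpy as np

def combined_csp(d_h, d_n, alpha=0.14):
    """Combined 1H/15N chemical shift perturbation in ppm.

    d_h, d_n : shift differences (apo vs. bound) in ppm.
    alpha    : scaling for the 15N dimension; 0.14 is a common
               convention, assumed here rather than taken from the paper.
    """
    return np.sqrt(d_h**2 + (alpha * d_n)**2)

# Hypothetical residue shifting by 0.05 ppm (1H) and 0.4 ppm (15N):
print(combined_csp(0.05, 0.4))  # ~0.075 ppm
```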
Based on the location of the resistance mutations (Fig. 1b), ecumicin was proposed to target the MtbClpC1 NTD as well [START_REF] Gao | The cyclic peptide ecumicin targeting ClpC1 is active against Mycobacterium tuberculosis in vivo[END_REF]. For this reason, we tested by NMR whether ecumicin produced effects similar to those of cyclomarin. Compared to cyclomarin, ecumicin caused only modest spectral changes (Fig. S2). These small perturbations did not indicate strong binding of ecumicin to the MtbClpC1 NTD and were inconsistent with previous biochemical data in which ecumicin was found to be a potent inhibitor of casein degradation by MtbClpC1 but a stimulator of MtbClpC1 ATPase activity (a result confirmed here, Fig. S2d) [START_REF] Gao | The cyclic peptide ecumicin targeting ClpC1 is active against Mycobacterium tuberculosis in vivo[END_REF]. Consequently, we tested ecumicin binding to the NTD using isothermal titration calorimetry (ITC). While we were able to confirm the nanomolar Kd for cyclomarin (8) (Fig. S3), we were unable to obtain a saturation curve with ecumicin under the conditions used (up to 100 µM ecumicin, Fig. S3). In contrast to cyclomarin and ecumicin, the addition of PZA did not result in any changes in the NTD NMR spectra; furthermore, no binding of PZA was observed by ITC (Fig. S3).
NTD dynamics
Analysis of ¹⁵N backbone relaxation rates can provide detailed information about protein dynamics on different time scales. In particular, motions on the μs-ms timescale, associated with conformational changes between different states, often occur in sites important for protein function [START_REF] Kleckner | An introduction to NMR-based approaches for measuring protein dynamics[END_REF][START_REF] Eisenmesser | Intrinsic dynamics of an enzyme underlies catalysis[END_REF]. Studying the dynamics of the MtbClpC1 NTD is particularly interesting, since it acts as a ligand recognition site that has to convey information about the bound state to the D1 and D2 rings for ATP-driven translocation and unfolding. Given the very modest structural differences between the apo and cyclomarin-bound states of the NTD, we reasoned that dynamics may be important in the allosteric process.
In order to investigate whether such μs-ms motions are indeed relevant for the NTD, we performed Carr-Purcell-Meiboom-Gill (CPMG) relaxation-dispersion (RD) NMR experiments. Briefly, RD profiles monitor the effective spin relaxation rate constant (R₂,eff) as a function of a variable repetition rate, νCPMG, of refocusing pulses applied during a relaxation delay. The presence of conformational dynamics manifests as a dependence of R₂,eff on νCPMG, i.e. "non-flat" RD profiles. Such dispersions arise when the local environment around the atom under consideration fluctuates on a μs-ms time scale, either because of motion of the considered atom(s) or of neighboring atoms, e.g. through binding events. When we applied this technique to the apo NTD, we observed flat RD curves for all backbone amides. Thus, this domain alone does not exhibit significant μs-ms motions.
Furthermore, cyclomarin binding does not induce ms dynamics in the NTD.
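For intuition, in the fast-exchange limit the shape of an RD profile can be written in a closed form (the Luz-Meiboom expression); the quantitative fits reported below rely on numerical integration of the Bloch-McConnell equations instead, so this equation is illustrative only:

$R_{2,\mathrm{eff}}(\nu_\mathrm{CPMG}) = R_2^0 + \dfrac{p_A\,p_B\,\Delta\omega^2}{k_\mathrm{ex}}\left[1 - \dfrac{4\,\nu_\mathrm{CPMG}}{k_\mathrm{ex}}\tanh\!\left(\dfrac{k_\mathrm{ex}}{4\,\nu_\mathrm{CPMG}}\right)\right]$

where $p_A$ and $p_B$ are the populations of the exchanging states, $\Delta\omega$ their chemical shift difference and $k_\mathrm{ex}$ the exchange rate. A flat profile therefore implies that the product $p_A p_B \Delta\omega^2$ is negligible on this timescale.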
Arginine phosphate and arginine-phosphorylated proteins bind to MtbClpC1 NTD
In B. subtilis, phosphorylation of arginines targets certain proteins for ClpCP-mediated degradation [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF]. Phosphorylated proteins are first detected and bound by the NTD of ClpC and then transferred to the D1 domain for subsequent unfolding and degradation [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF]. Analyzing the MtbClpC1 NTD sequence, we found that the ArgP binding site of B. subtilis is strictly conserved in Mtb (Fig. 3a). Furthermore, comparing the overall structures of the two NTDs makes clear that they are essentially identical, with an RMSD of 1.7 Å (Fig. S3b). This striking similarity motivated us to test whether the MtbClpC1 NTD could also bind ArgP. Indeed, using ITC we confirmed that the NTD binds ArgP with a Kd of 5.2 µM (Fig. 3b), a value similar to that reported for the BsClpC NTD [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF]. In addition, a pull-down assay showed that McsB-phosphorylated lysozyme and casein bind to the ClpC1 NTD, while no binding was observed with the control substrate (Fig. 3c). Furthermore, as previously reported for BsClpC, a large excess of free ArgP was required to inhibit this association, likely reflecting stronger binding of the NTD to the protein substrate (Fig. 3c). Despite some residual binding of non-phosphorylated lysozyme, a similar result was observed when full-length ClpC1 was used instead of the isolated NTD (Fig. S3c). Addition of ArgP to the MtbClpC1 NTD resulted in strong perturbations of its backbone amide NMR spectrum (Fig. 3d). This effect was specific to ArgP, since no significant changes were observed with unmodified arginine or phosphoserine (Fig. S4). In contrast to the rather small changes in peak intensity observed upon cyclomarin or ecumicin addition, ArgP caused large changes in peak intensity, with 45% of the peaks falling below the level of detection into the background noise (Fig. 3e). As shown below, this decrease in intensity can indicate exchange events at the affected residues. Interestingly, these residues do not map exclusively to the putative ArgP binding site but are localized over a large part of the core of the structure, with helices 2, 3, 6 and 7 most affected (Fig. 3e, 3f). Using CPMG RD experiments, we could show that the observed decrease in intensity is indeed caused by millisecond dynamics (Fig. 4a, 4b). Surprisingly, these dynamics can be seen throughout the domain. In Fig. 4a the affected residues and the ΔR₂,eff are plotted on the structure of the MtbClpC1 NTD. Furthermore, we assume that the parts of the protein showing the highest degree of motion are the three helices with disappearing resonances (Fig. 3f). Although CPMG RD experiments provide residue-resolved direct evidence of the motions, they do not actually reveal what the underlying motion corresponds to. In the case of low-affinity binding, ms dynamics can also be caused by on/off binding of the compound. Considering the ArgP Kd, we can calculate the population of free NTD at the ArgP concentration used (2 mM), which corresponds to full domain saturation (calculated free NTD 0.3 %). Fitting the CPMG RD data with the software ChemEx provides the exchange rate between the ground and excited states, the population of the excited state and the chemical shift difference between the two states.
The fitted population of the excited state (Fig. S5d) is not in agreement with the calculated free fraction of the NTD (0.3 %). Moreover, theoretical chemical shift differences of the excited state derived from fitting the CPMG RD data with ChemEx do not correlate with the experimental chemical shift differences between apo and ArgP-bound NTD (Fig. S5a). We therefore conclude that the excited state does not correspond to the free state of the NTD, but most likely to a different conformation. Arginine has previously been reported to reduce the melting point of some proteins, but has paradoxically also been suggested as a stabilizer for protein preparations [START_REF] Ishibashi | Is arginine a protein-denaturant?[END_REF][START_REF] Golovanov | A simple method for improving protein solubility and long-term stability[END_REF][START_REF] Shukla | Interaction of arginine with proteins and the mechanism by which it inhibits aggregation[END_REF][START_REF] Vagenende | Protein-associated cation clusters in aqueous arginine solutions and their effects on protein stability and size[END_REF]. To rule out the possibility that the observed dynamics result from ArgP-promoted unfolding, we recorded far-UV circular dichroism (CD) spectra of MtbClpC1 NTD with and without ArgP. In both cases, the spectra were typical of α-helical proteins, with characteristic minima at ≈210 nm and ≈222 nm (Fig. 5a). Additionally, we compared the chemical shift differences from the ChemEx fitting of the CPMG RD data with the differences between the folded NTD shifts and random coil shifts calculated with ncIDP (23) (Fig. S5b). Clearly, the fitted chemical shift differences do not correlate with the calculated random coil values. Thus, ArgP does not appear to unfold the NTD and does not perturb the secondary structure of the domain. Alternatively, ArgP might induce domain oligomerization, which could explain the disappearing peaks through transient interactions between two ClpC1 subunits. Analysis of the protein by DLS (Fig. S1) and Diffusion Ordered Spectroscopy (DOSY, Fig. S5e), however, excluded this hypothesis: neither ArgP nor cyclomarin induced any change in MtbClpC1 NTD oligomerization. Intrinsic aromatic fluorescence can be used to probe conformational changes upon ligand binding. While MtbClpC1 NTD does not contain any tryptophan residues, it contains 3 tyrosine residues (Y27, Y102 and Y145), which can be used as probes for domain conformational changes (Fig. S5f). When we tested the effect of ArgP on NTD fluorescence, we observed an increase in tyrosine fluorescence (Fig. S5g) associated with a stabilization of the protein. The NTD displayed cooperative unfolding with an apparent Tm of 69 °C; this value increased to 79 °C in the presence of 1 mM ArgP (Fig. 5b). An alternative approach to probe conformational changes is the use of fluorescent probes. 1-Anilinonaphthalene-8-sulfonic acid (ANS) binds to hydrophobic regions in proteins, and ANS fluorescence increases substantially when proteins undergo changes that expose hydrophobic surfaces, as normally occurs during protein unfolding. ANS can, however, also be used to detect subtle conformational changes, and we tested whether ArgP binding leads to changes in ANS fluorescence.
While ANS binding to MtbClpC1 NTD resulted in an increase in fluorescence, indicating the presence of exposed hydrophobic surface, ArgP binding did not significantly alter the fluorescence (Fig. 5c). Because cyclomarin has intense intrinsic fluorescence, it could not be used in these fluorescence studies.
Cyclomarin restricts ArgP-induced dynamics
The fact that cyclomarin binding does not affect MtbClpC1 NTD dynamics is inconsistent with the prior proposal that cyclomarin acts by causing conformational changes in this domain [START_REF] Vasudevan | Structural basis of mycobacterial inhibition by cyclomarin A[END_REF]. By contrast, ArgP leads to a significant increase in domain dynamics. We therefore tested whether cyclomarin binding could block either ArgP recognition or ArgP-induced dynamics. Adding ArgP to MtbClpC1 NTD prebound to cyclomarin, we observed significant changes in the ¹H-¹⁵N correlated HSQC spectrum in the ArgP binding site (Fig. S6). This is consistent with our pull-down results, where no effect of cyclomarin on the binding of arginine-phosphorylated proteins to the NTD was observed (Fig. 3c). However, the behavior of ArgP-bound and ArgP/cyclomarin-bound NTD was clearly different. Several peaks that disappeared upon ArgP binding reappeared when cyclomarin was added: instead of 45%, only 25% of all peaks disappeared when cyclomarin was added together with ArgP. Beyond the spectral changes, the most striking difference between ArgP and ArgP/cyclomarin binding lies in the dynamic properties of the NTD. When ArgP is bound to the NTD, most observed residues exhibit µs-ms dynamics. These dynamics can, however, be completely abolished by the addition of cyclomarin: in ArgP/cyclomarin-bound NTD, not a single residue showed µs-ms dynamics, exactly as in apo MtbClpC1 NTD (Fig. 4). Cyclomarin binds to a hydrophobic "bed" formed by two phenylalanines on the symmetry axis of the domain [START_REF] Vasudevan | Structural basis of mycobacterial inhibition by cyclomarin A[END_REF], which, by stabilizing the core of the domain, presumably explains how it can completely block NTD dynamics.
Cyclomarin prevents ArgP inhibition of FITC-casein degradation
The primary function of ClpC1 is to recognize certain cellular proteins and to unfold and translocate them into ClpP1P2 for degradation, although it is also able, at least in vitro, to catalyze the refolding of some proteins [START_REF] Akopian | The active ClpP protease from M. tuberculosis is a complex composed of a heptameric ClpP1 and a ClpP2 ring[END_REF][START_REF] Kar | Mycobacterium tuberculosis ClpC1[END_REF]. Until the discovery of ArgP in BsClpC, no recognition signal for ClpC was known. In the presence of the cofactor MecA, BsClpC can catalyze the degradation of unfolded proteins such as casein. However, ArgP (1 mM) is able to block MecA-dependent proteolysis, apparently because the binding site of ArgP overlaps with the contact site of MecA on the BsClpC NTD [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF]. This observation clearly suggests that ClpC has two alternative mechanisms for substrate selection: one for unfolded proteins containing no specific tag, dependent on the MecA adaptor for efficient ClpC oligomerization and activation, and a second, MecA-independent pathway that depends specifically on the presence of protein sequences containing phosphorylated arginines [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF][START_REF] Kirstein | The tyrosine kinase McsB is a regulated adaptor protein for ClpCP[END_REF]. In contrast to BsClpC, MtbClpC1 does not require any cofactor for activity: in the presence of ATP it can, in association with MtbClpP1P2, efficiently degrade unfolded proteins like casein [START_REF] Akopian | The active ClpP protease from M. tuberculosis is a complex composed of a heptameric ClpP1 and a ClpP2 ring[END_REF]. Despite these differences, the mechanism of these homologous enzymes seems to be conserved between species, and studying the effects of cyclomarin and ArgP on protein degradation by MtbClpC1P1P2 could give important mechanistic insights. One interesting question that arises is whether ArgP binding is able to block casein degradation as it does for BsClpC. We therefore compared the degradation of FITC-casein in the presence and absence of ArgP. ArgP caused a significant inhibition of FITC-casein degradation (up to 55%, Fig. 5d) but did not completely block this process, even at very high ArgP concentrations. By contrast, in the presence of cyclomarin (20 µM), no inhibition of proteolysis was observed (Fig. 5d).
ClpC1 forms high oligomeric species in solution
We used SAXS as a complementary method to obtain information on the effect of drug binding on ClpC1 structure. Compared to X-ray diffraction, SAXS has modest resolution but can provide information on several global parameters: the radius of gyration, the largest intraparticle distance, the particle shape, and the degree of folding, denaturation or disorder [START_REF] Svergun | Small-angle scattering studies of biological macromolecules in solution[END_REF]. All these parameters can be good reporters of significant structural changes promoted by drug binding. In addition, SAXS does not require the preparation of highly concentrated deuterated samples, allowing the study of ClpC1 structure under native-like solution conditions. Our first SAXS measurements in batch format in the presence of ATP revealed the presence of very high molecular weight species incompatible with a ClpC1 hexamer. As MtbClpC1 is inherently prone to aggregation, known to seriously affect SAXS data interpretation, we concluded that part of the MtbClpC1 sample could be aggregated. To overcome this problem, we turned to size-exclusion chromatography coupled to SAXS (SEC-SAXS). In this setup, the sample is separated according to size and shape before the SAXS measurement, thereby removing protein aggregates from the sample. Indeed, consistent with the aggregation hypothesis, when ClpC1 was loaded on a Superose 6 10/300 GL column, the chromatograms showed two distinct peaks: a small peak directly after the void volume followed by a second, broad peak (Fig. 6a). The scattering signal across the second peak was relatively stable, with a radius of gyration in the range of 8 nm (Table S1), but decreased significantly to 7.6 nm at the end of the peak, indicating either structural flexibility or overlapping oligomeric states. Surprisingly, this radius of gyration was again clearly inconsistent with a ClpC1 hexamer (radius of gyration 5.53 nm), representing instead bigger complexes. Apparently, under the conditions used (50 mM Hepes pH 7.5, 100 mM KCl, 10% glycerol, 4 mM MgCl₂, 1 mM ATP, 1 mg/ml ClpC1), the hexamer is not the dominant species, and ClpC1 appears to exist in a rather distinct molecular organization. Recently, Carroni et al., using cryo-electron microscopy (cryo-EM) and mutagenesis, showed that S. aureus ClpC (SaClpC) can exist in a decameric resting state formed through ClpC middle domains establishing intermolecular head-to-head contacts [START_REF] Carroni | Regulatory coiled-coil domains promote head-to-head assemblies of AAA+ chaperones essential for tunable activity control[END_REF]. These head-to-head contacts allow the docking of two layers of ClpC molecules arranged in a helical conformation. Despite their oligomerization, these structures are highly dynamic, as the peripheral subunits are likely in exchange, suggesting that higher-order species can exist in equilibrium.
As the middle domain of MtbClpC1 is conserved compared to SaClpC, and to better understand how our SAXS data relate to the resting-state cryo-EM structure, we averaged frames from the center of the peak and from its trailing end. SAXS estimates the molecular weight at the center of the peak between 1100 and 1400 kDa, compatible with 12- or 14-mers (Table S1). For the tail, the estimated weight between 860 and 1200 kDa is in better agreement with 10- or 12-mers. Bead modeling based on both curves resulted in curling-stone-shaped objects whose main body matches the reported EM data in size and shape (Fig. 6b). An artifact resulting from the presence of several oligomer populations in the sample is observed in the form of an appendix. Direct comparison of the SAXS curve from the tail with the atomistic model gives a surprisingly good fit (χ² = 3.4, Fig. 6c), given that the SAXS curve represents an ensemble of states. Considering the similarities between our data and the previously reported cryo-EM structure, it is likely that MtbClpC1 can form a resting state, with a large part of the population representing even higher oligomers than decamers. This difference could derive from a concentration-dependent oligomeric equilibrium, which would explain why MtbClpC1 appears larger in our study compared to the SEC-MALS data presented by Carroni et al. [START_REF] Carroni | Regulatory coiled-coil domains promote head-to-head assemblies of AAA+ chaperones essential for tunable activity control[END_REF]. In fact, crosslinking data from the same study already suggested the presence of complexes bigger than the decameric EM structure [START_REF] Carroni | Regulatory coiled-coil domains promote head-to-head assemblies of AAA+ chaperones essential for tunable activity control[END_REF]. While we could not detect the hexamer in solution, the fact that under the same conditions ClpC1 is active, catalyzing the degradation of GFP-ssrA and casein in association with ClpP1P2, suggests that part of the population exists as a hexamer. We proceeded to test whether the natural product antibiotics or ArgP targeting ClpC1 could affect the distribution of ClpC1 between the resting state and the active hexameric form. The addition of ecumicin (20 µM), cyclomarin (20 µM) or ArgP (200 µM) to the SEC buffer led only to small changes in the averaged SAXS curve (Fig. S8 and Table S1). As the curve is not stable, the differences are too small to allow any statement about local structural rearrangements. It appears, however, that the natural antibiotics tested do not significantly affect the ClpC1 oligomer equilibrium.
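As a check on the oligomer assignment, the stoichiometry follows directly from the SAXS mass estimates and the hexamer mass quoted above (561 kDa for six subunits, i.e. roughly 93.5 kDa per monomer); a minimal sketch:

```python
MONOMER_KDA = 561 / 6  # from the 561 kDa hexamer quoted above (~93.5 kDa)

for mw in (860, 1100, 1200, 1400):  # SAXS mass estimates in kDa (Table S1)
    print(f"{mw} kDa -> ~{mw / MONOMER_KDA:.1f} subunits")
# 860 -> ~9.2, 1100 -> ~11.8, 1200 -> ~12.8, 1400 -> ~15.0 subunits,
# consistent with 10-/12-mers at the tail and 12-/14-mers at the peak center.
```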
Discussion:
The structural characterization of AAA+ proteins involved in molecular recognition and unfolding is usually a complex task. Whereas their intrinsic heterogeneity normally makes crystallization difficult, their oligomeric organization and large size make their study by NMR challenging. So far, no X-ray structure is available for MtbClpC1 and, as reported here, the full-length protein and its domains have intrinsically low solubility. Adding further complexity to the study of the system, we show here that MtbClpC1 can exist in an equilibrium between different oligomeric states. The existence of a resting decameric state formed by head-to-head contacts between the coiled-coil middle domains of ClpC was recently described by Carroni et al. [START_REF] Carroni | Regulatory coiled-coil domains promote head-to-head assemblies of AAA+ chaperones essential for tunable activity control[END_REF]. The middle domains were proposed to repress the activity of ClpC by forming a highly dynamic resting state that can block substrate binding or ClpP interaction. Quite striking is the observation that a single point mutation in the middle domain can disrupt the resting state and result in the formation of an active hexamer even in the absence of the MecA adaptor [START_REF] Carroni | Regulatory coiled-coil domains promote head-to-head assemblies of AAA+ chaperones essential for tunable activity control[END_REF]. Consistent with the conservation of the key residues between MtbClpC1 and SaClpC [START_REF] Carroni | Regulatory coiled-coil domains promote head-to-head assemblies of AAA+ chaperones essential for tunable activity control[END_REF], our SAXS data suggest that a similar structure predominates over the active hexameric form of MtbClpC1. However, contrary to the case of S. aureus or B. subtilis, where MecA was proposed to modulate this equilibrium, in the case of Mtb it is not clear how the distribution between the resting state and the hexameric form is set. N-terminal domains are packed between middle domains in this resting state and were suggested to play a role in complex stability by fluctuating from a hidden position to an exposed one, available to adaptors or substrates. One hypothesis is that the equilibrium could be modulated by substrates or natural product antibiotics. However, we could show that the addition of cyclomarin, ecumicin or ArgP appears not to shift the ClpC1 distribution. With the current data, we cannot exclude that binding of a bulkier substrate can shift the equilibrium towards a state where the NTDs are not constrained, thus activating the unfoldase.
Whereas the natural product antibiotics appear not to influence the ClpC1 oligomerization equilibrium, we were unable to obtain convincing evidence for ecumicin or PZA binding to the isolated MtbClpC1 NTD. This result is intriguing, as mutations in the NTD have been associated with resistance to ecumicin and PZA [START_REF] Gao | The cyclic peptide ecumicin targeting ClpC1 is active against Mycobacterium tuberculosis in vivo[END_REF]. With regard to ecumicin, we and others [START_REF] Gao | The cyclic peptide ecumicin targeting ClpC1 is active against Mycobacterium tuberculosis in vivo[END_REF] were able to demonstrate a biochemical effect, namely stimulation of ATPase activity. Most likely, ecumicin requires full-length ClpC1 or an oligomeric structure for binding and ATPase activation. Possibly the binding site is located at the interface of the NTD and the D1 domain, which could explain the ATPase activation by ecumicin. In this case, disrupting part of the binding site, the N-terminal interface, in the resistant mutants might be sufficient to reduce binding affinity in vivo. Small CSPs were evident in the NTD NMR spectrum upon ecumicin addition in the regions where resistance mutations are located, suggesting that this region could be part of the putative binding site (Fig. S2). In the case of PZA, an efficient "dirty drug" with multiple reported cellular targets, we cannot exclude that resistance derives from the modulation of protein homeostasis for any of the other targets, for example by preventing or increasing substrate degradation [START_REF] Zhang | Mechanisms of Pyrazinamide Action and Resistance[END_REF].
Using NMR, we could show that, although cyclomarin binds with high affinity to the ClpC1 NTD, the domain dynamics are not modified. MtbClpC1 NTD is a rather rigid domain, and showed no millisecond dynamics in the apo state. Also, cyclomarin binding does not result in peak broadening or loss of intensity that might indicate the presence of an alternative state. These observations rule out the existence of hidden conformations not captured by previous X-ray studies [START_REF] Vasudevan | Structural basis of mycobacterial inhibition by cyclomarin A[END_REF].
Finally, our finding that ArgP binding induces millisecond dynamics in the MtbClpC1 NTD is a new and important clue about the ClpC1 mechanism, particularly since no alternative conformations were reported in the X-ray structure of ArgP-bound BsClpC, and since no structural changes were observed with arginine or phosphoserine. While we excluded unfolding and transient binding as potential explanations for the observed dynamics, we were unable to pursue structural determination, since approximately half of the residues in the ArgP-bound NTD are NMR invisible. ArgP binding results in a significant increase in tyrosine fluorescence and a dramatic change in the stability of this domain. While the increased fluorescence could result from subtle changes in tyrosine side chains in a region densely packed with aromatic residues (three Phe and two Tyr, Fig. S5), the increased stability could derive from ArgP binding preferentially to the folded state. The exact relationship between ArgP-induced dynamics and the functional cycle of MtbClpC1 is currently not clear. Do dynamics promote target binding through ArgP recognition by allowing multiple transient interaction sites with the incoming substrate? Conformational heterogeneity and dynamics in substrate binding sites have been proposed to increase substrate recognition efficiency and, at the same time, to facilitate substrate handover to downstream elements by making a multitude of transient weak interactions with the substrate that can easily be broken [START_REF] He | A molecular mechanism of chaperone-client recognition[END_REF]. In fact, a single phosphorylated arginine in a protein has been shown to be sufficient for efficient ClpC-mediated degradation. Thus ClpC's molecular recognition mechanism must be highly efficient; for example, 1 mM of free ArgP does not completely block protein binding, which may appear inconsistent with the micromolar Kd that we and others report for ArgP binding to the isolated NTD [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF]. Despite this, the fact that cyclomarin cannot block the binding of arginine-phosphorylated proteins appears to contradict the hypothesis that dynamics are fundamental for substrate association to the NTD, without excluding, however, that they are relevant for subsequent steps, for example substrate release to the D1 pore (Fig. 7). Another possibility is that the observed conformational dynamics could modulate the positioning of the NTD relative to the D1 pore. Indeed, studies with the type II eukaryotic homologue p97/Cdc48 ATPase complex, which shares the NTD-D1-D2 architecture with ClpC1, have stressed the mechanistic relevance of the NTD/D1 interface and the so-called up/down equilibrium of the highly mobile NTDs [START_REF] Schuetz | A Dynamic molecular basis for malfunction in disease mutants of p97/VCP[END_REF]. In p97/Cdc48, NTDs have been shown to adopt, depending on the nucleotide bound, either a coplanar (down) or an elevated (up) position with respect to the D1 domain [START_REF] Schuetz | A Dynamic molecular basis for malfunction in disease mutants of p97/VCP[END_REF]. Increasing the lifetime of the NTD "up state", thereby holding the substrate next to the D1 pore, would promote substrate recognition, while favoring the down state would prevent substrate recognition while nevertheless exposing the D1 domain pore.
In E. coli ClpA, a close bacterial homologue of ClpC, removal of the NTD is known to seriously impair recognition of substrates bearing the SsrA-targeting sequence, but to have only a modest effect on the degradation of unfolded proteins [START_REF] Lo | Characterization of the N-terminal repeat domain of Escherichia coli ClpA-A class I Clp/HSP100 ATPase[END_REF]. In other words, NTDs may work as recognition domains for certain substrates while at the same time blocking the D1 domain and preventing free access for unfolded proteins. This differential effect on certain substrates was the basis for the suggested role of the NTDs as an "entropic brush" that prevents nonspecific degradation of proteins by blocking access to the D1 ring [START_REF] Ishikawa | The N-terminal substrate-binding domain of ClpA unfoldase is highly mobile and extends axially from the distal surface of ClpAP protease[END_REF]. In our view, the competition between recognition of ArgP-labelled and unfolded proteins can simultaneously explain the inhibition of casein degradation by ArgP and the effect of cyclomarin, which is able to completely abolish this inhibition of casein hydrolysis.
Taken together, our work shows that ArgP binding to the MtbClpC1 NTD leads to widespread millisecond domain dynamics and that cyclomarin is able to block this process. While it is surely not the absence of ClpC1 NTD dynamics itself that kills TB, but rather its downstream functional consequences, for example blocking the ArgP pathway, this work sheds light on the ClpC mechanism. Rather than a static interaction, the binding of ArgP-labelled proteins to ClpC must be understood as a highly dynamic process. Cyclomarin is therefore a unique example of a drug whose mode of action relies on the restriction of protein dynamics induced by substrate binding.
Experimental Procedures:
ArgP was obtained from Sigma. Cyclomarin and desoxycyclomarin were synthesized as previously described [START_REF] Barbie | Total Synthesis of Cyclomarin A, a Marine Cycloheptapeptide with Anti-Tuberculosis and Anti-Malaria Activity[END_REF]. MtbClpC1, MtbClpP1 and MtbClpP2 were expressed and purified as previously described [START_REF] Akopian | The active ClpP protease from M. tuberculosis is a complex composed of a heptameric ClpP1 and a ClpP2 ring[END_REF]. Bacillus stearothermophilus mcsB was cloned into a pET28a(+) vector and expressed and purified as previously described [START_REF] Trentini | Arginine phosphorylation marks proteins for degradation by a Clp protease[END_REF]. The MtbClpC1 domains, NTD corresponding to residues 1-145, D1 to residues 165-493, D2 to residues 494-849, NTD-D1 to residues 1-493 and D1-D2 to residues 165-849, were cloned into a pET28a(+) vector by Genscript. Unless otherwise noted, the purification protocol consisted of an initial Ni-NTA affinity chromatography step taking advantage of the histidine tag, followed by a size exclusion step using a HiLoad 16/600 Superdex 200 pg column. FITC-casein and GFP-ssrA degradation and ATP hydrolysis were measured as previously described [START_REF] Akopian | The active ClpP protease from M. tuberculosis is a complex composed of a heptameric ClpP1 and a ClpP2 ring[END_REF]. For DLS measurements, 200 µl of a 1.1 mg/ml ClpC1 NTD solution with and without ArgP (1 mM) was used. Far-UV CD spectra were acquired on a Jasco J-810 spectropolarimeter continuously purged with nitrogen and thermostated at 20 °C. Briefly, a solution of ClpC1 NTD (5 µM) in Tris pH 7.5, 150 mM NaCl, with or without ArgP (1 mM), was used to obtain CD spectra between 205 and 250 nm.
Intrinsic tyrosine fluorescence was measured in a Varian Cary Eclipse spectrofluorimeter using a 60 µM solution of ClpC1 NTD. Samples were excited at 280 nm, and fluorescence spectra were measured from 290 to 350 nm. Samples in the presence of ANS (50 µM) were excited at 370 nm, and fluorescence spectra were measured from 400 to 600 nm.
Degree of saturation of ClpC1 NTD
The degree of saturation of ClpC1 NTD with ArgP was calculated using the equation:
!" ! = 1 2 1 + " ' ! ' + ( ) ! ' - 1 + ["] ' [!] ' + ( ) [!] ' - -4 ["] ' [!] 0
where [L] 0 is initial ligand, [P] 0 is initial protein and PL is protein ligand complex.
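As a sanity check, evaluating this expression with the ITC-derived Kd of 5.2 µM and the NMR sample conditions (0.2 mM NTD, 2 mM ArgP) reproduces the saturation quoted in the Results; a minimal sketch:

```python
from math import sqrt

def fraction_bound(L0, P0, Kd):
    """Exact fraction of protein in a 1:1 complex (all concentrations in M)."""
    a = 1 + L0 / P0 + Kd / P0
    return 0.5 * (a - sqrt(a**2 - 4 * L0 / P0))

fb = fraction_bound(L0=2e-3, P0=0.2e-3, Kd=5.2e-6)
print(f"bound: {100*fb:.1f} %, free NTD: {100*(1-fb):.1f} %")
# bound: 99.7 %, free NTD: 0.3 %  (the value cited in the Results)
```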
SAXS data collection and analysis
SEC-SAXS data were collected at ESRF BM29 [START_REF] Pernot | Upgraded ESRF BM29 beamline for SAXS on macromolecules in solution[END_REF][START_REF] Brennich | Online Size-exclusion and Ion-exchange Chromatography on a SAXS Beamline[END_REF]. The HPLC system (Shimadzu, France) was directly coupled to the 1.8 mm flow-through capillary of the SAXS exposure unit. The flow rate for all online experiments was 0.3 mL/min. SAXS data collection was performed continuously throughout the chromatography run at a frame rate of 1 Hz with a Pilatus 1M detector (Dectris) at a distance of 2.876 m from the capillary. The scattering of pure water was used to calibrate the intensity to absolute units [START_REF] Orthaber | SAXS experiments on absolute scale with Kratky systems using water as a secondary standard[END_REF]. The X-ray energy was 12.5 keV and the accessible q-range 0.07 nm⁻¹ to 4.9 nm⁻¹. The incoming flux at the sample position was on the order of 10¹² photons/s in 700 × 700 µm². A summary of the acquisition parameters is given in Table S1. All images were automatically azimuthally averaged with pyFAI [START_REF] Ashiotis | The fast azimuthal integration Python library: pyFAI[END_REF] and corrected for background scattering by the online processing pipeline [START_REF] Brennich | Online data analysis at the ESRF bioSAXS beamline[END_REF]. For each frame, the forward scattering intensity and the radius of gyration were determined according to the Guinier approximation (https://github.com/kif/freesas). For each run, regions of 20 to 80 frames were averaged for further characterization. Data at small angles before the Guinier region were removed prior to further analysis to avoid experimental artefacts. Pair distribution functions were calculated using GNOM [START_REF] Svergun | Determination of the regularization parameter in indirect-transform methods using perceptual criteria[END_REF]. For each dataset, 20 ab initio models were calculated in C1 symmetry using DAMMIF [START_REF] Franke | DAMMIF, a program for rapid ab-initio shape determination in small-angle scattering[END_REF] and then averaged, aligned and compared using DAMAVER [START_REF] Volkov | Uniqueness of ab initioshape determination in small-angle scattering[END_REF]. The scattering curve of the ClpC decamer [START_REF] Carroni | Regulatory coiled-coil domains promote head-to-head assemblies of AAA+ chaperones essential for tunable activity control[END_REF] was predicted and fitted to the experimental data using CRYSOL 3 [START_REF] Franke | ATSAS 2.8: a comprehensive data analysis suite for small-angle scattering from macromolecular solutions[END_REF].
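The Guinier analysis mentioned above amounts to a linear fit of ln I(q) versus q² at small angles, ln I(q) ≈ ln I(0) - q²Rg²/3, conventionally restricted to qRg ≲ 1.3. A minimal sketch of how Rg and I(0) can be extracted from a 1D curve (array names, the fit window and the initial guess are illustrative choices, not values from this work):

```python
import numpy as np

def guinier_fit(q, I, qmax_rg=1.3, rg_guess=8.0):
    """Estimate Rg and I(0) from ln I(q) = ln I(0) - (q*Rg)**2 / 3.

    q in nm^-1, I in arbitrary units. In practice the fit is iterated,
    updating rg_guess until the window qmax_rg/Rg is self-consistent.
    """
    mask = q * rg_guess < qmax_rg                       # Guinier region only
    slope, intercept = np.polyfit(q[mask]**2, np.log(I[mask]), 1)
    rg = np.sqrt(-3 * slope)                            # slope = -Rg**2 / 3
    return rg, np.exp(intercept)
```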
NMR experiments
All NMR experiments were performed on Bruker Avance III spectrometers equipped with cryogenically cooled TCI probeheads, operating at magnetic field strengths corresponding to ¹H Larmor frequencies of 850, 700 and 600 MHz. The sample temperature was set to 37 °C unless stated otherwise.
Sequence-specific resonance assignments of ClpC1 NTD
Apo ClpC1 NTD was assigned in NMR buffer pH 6 (50 mM MES, 100 mM NaCl, 5% D₂O) and in NMR buffer pH 7.5 (50 mM Tris, 50 mM NaCl, 5% D₂O) at a protein concentration of 0.8 mM and a ¹H Larmor frequency of 600 MHz. The following experiments were performed: 2D ¹⁵N-¹H BEST HSQC, 3D BEST HNCO, 3D BEST-TROSY HNcaCO, 3D BEST HNCA, 3D BEST HNcoCA, 3D BEST HNcoCACB and 3D BEST HNCACB [START_REF] Favier | Recovering lost magnetization: polarization enhancement in biomolecular NMR[END_REF]. The same experimental conditions were used for the assignment of ClpC1 NTD in the presence of cyclomarin, except for a lower protein concentration due to the low solubility of cyclomarin (0.2 mM ClpC1 NTD, 0.22 mM cyclomarin). DMSO controls were also measured. For the assignment of ClpC1 NTD in the presence of ArgP, and of ArgP plus cyclomarin, ¹⁵N-¹H BEST HSQC, BEST HNCO and BEST HNCA spectra were recorded at a ¹H Larmor frequency of 850 MHz. Assignment was performed by following the chemical shifts of the backbone amide, Cα and CO peaks in the apo and ligand-bound spectra. The sample conditions used were 0.2 mM ClpC1 NTD and 2 mM ArgP, or 0.2 mM ClpC1 NTD, 2 mM ArgP and 0.22 mM cyclomarin, in NMR buffer pH 6.
Data processing and analysis were performed using the NMRPipe software package [START_REF] Delaglio | NMRPipe: A multidimensional spectral processing system based on UNIX pipes[END_REF] and CCPN software [START_REF] Vranken | The CCPN data model for NMR spectroscopy: development of a software pipeline[END_REF].
Titration of cyclomarin into MtbClpC1 NTD:
For the cyclomarin titration, four titration points were measured, each with 0.2 mM MtbClpC1 NTD. The DMSO content in all samples was 2.2 %. The measured NTD:cyclomarin ratios were 1:0, 1:1.1, 1:1.5 and 1:2. No chemical shift changes were observed beyond a ratio of 1:1.1 MtbClpC1 NTD:cyclomarin; a ratio of 1:1.1 was used for subsequent experiments.
Titration of ArgP into MtbClpC1 NTD:
Five titration points were measured, each with 0.3 mM MtbClpC1 NTD. The DMSO content in all samples was 0.6 %. The measured NTD:ArgP ratios were 1:0, 1:0.5, 1:1, 1:2 and 1:10. No intensity changes were observed beyond a ratio of 1:2 MtbClpC1 NTD:ArgP; a ratio of 1:10 was used for subsequent experiments.
Titration of cyclomarin and ArgP into MtbClpC1 NTD:
Four different samples were measured for the titration of MtbClpC1 NTD with cyclomarin and ArgP. All samples contained 4.2 % DMSO and 0.2 mM MtbClpC1 NTD. We measured one reference sample, one sample with 0.22 mM cyclomarin, one sample with 0.22 mM cyclomarin and 0.2 mM ArgP, and one sample with 0.22 mM cyclomarin and 2 mM ArgP.
Titration of Ecumicin into MtbClpC1 NTD:
Three titration points were measured, each with 0.2 mM MtbClpC1 NTD. The DMSO content in all samples was 4 %. The measured NTD:ecumicin ratios were 1:0, 1:1 and 1:2. Due to the limited solubility of ecumicin, higher concentrations could not be measured.
Titration of Pyrazinamide into MtbClpC1 NTD:
Three titration points were measured, each with 0.2 mM MtbClpC1 NTD. The DMSO content in all samples was 2 %. The measured NTD:pyrazinamide ratios were 1:0, 1:1 and 1:10. No intensity or chemical shift changes were observed at any pyrazinamide concentration measured.
BEST-TROSY type (43) ¹⁵N CPMG relaxation dispersion experiments were performed with the pulse scheme described by Franco et al. [START_REF] Franco | Probing Conformational Exchange Dynamics in a Short-Lived Protein Folding Intermediate by Real-Time Relaxation-Dispersion NMR[END_REF] at static magnetic field strengths of 700 and 850 MHz and a sample temperature of 37 °C. Effective relaxation rate constants, R₂,eff, were measured at 11 (700 MHz) and 13 (850 MHz) different CPMG frequencies and derived from the commonly employed two-point measurement scheme [START_REF] Mulder | Studying excited states of proteins by NMR spectroscopy[END_REF], R₂,eff = -(1/T)·ln(I/I₀), where I is the peak intensity in the experiment with the CPMG pulse train and I₀ that in a reference experiment without relaxation delay. T is the total relaxation delay, chosen as 60 ms and 40 ms in the experiments performed at 700 and 850 MHz, respectively. Peak heights and error margins were extracted with the software NMRView (One Moon Scientific). A two-state exchange model was fitted jointly to the dispersion data of 24 residues using the program ChemEx [START_REF] Bouvignies | Measurement of proton chemical shifts in invisible states of[END_REF].
Briefly, the program involves the integration of the Bloch-McConnell equations throughout the explicit train of CPMG pulses, taking into account offset effects and finite pulse lengths. Error estimates were obtained from Monte Carlo simulations. The fit curves from the joint fit are shown in Fig S7, and a table of residue-wise chemical-shift differences is provided in Fig S5c.
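For reference, the two-point relaxation rates described above reduce to a one-line computation per CPMG frequency; a minimal sketch (array names are illustrative):

```python
import numpy as np

def r2_eff(I, I0, T):
    """R2,eff = -(1/T) * ln(I / I0) for the two-point CPMG scheme.

    I  : peak intensities with the CPMG pulse train (one per nu_CPMG value)
    I0 : peak intensity in the reference experiment without relaxation delay
    T  : total relaxation delay in s (0.060 at 700 MHz, 0.040 at 850 MHz)
    """
    return -np.log(np.asarray(I) / I0) / T

# A flat profile (no ms dynamics) gives the same R2,eff at every nu_CPMG;
# exchange shows up as R2,eff decreasing with increasing nu_CPMG.
```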
Figure Legends:
Figure 1. Structural model of MtbClpC1 showing drug-resistant mutations in the N-terminal domain. (a) Side and top view of the MtbClpC1 structural model based on BsClpC in complex with MecA (pdb 3j3s). In gray, the NTD and linker region; in blue, the D1 domain; and in green, the D2 domain. The NTD and linker are predicted to be mobile. (b) MtbClpC1 NTD (pdb 3wdb) is the assumed target of the natural product antibiotics ecumicin, cyclomarin and lassomycin. Mutations in pyrazinamide-resistant strains have also been mapped to this domain. Depicted as pink spheres are ecumicin-resistant mutations (L92S, L96P, L92F) [START_REF] Gao | The cyclic peptide ecumicin targeting ClpC1 is active against Mycobacterium tuberculosis in vivo[END_REF]; in yellow, lassomycin-resistant mutations (Q17R, Q17H, R21S, P79T) [START_REF] Gavrish | Lassomycin, a Ribosomally Synthesized Cyclic Peptide, Kills Mycobacterium tuberculosis by Targeting the ATP-Dependent Protease ClpC1P1P2[END_REF]; and in green, pyrazinamide-resistant mutations (G99D (7), and L88V, G99D, I108T and R114L (6)). The crystal structure of cyclomarin-bound MtbClpC1 NTD is shown in Figure 2d. (c) Binding site of cyclomarin (yellow) and ArgP (red) on the MtbClpC1 NTD.
Figure 2. NMR assignment and cyclomarin-binding site of MtbClpC1 NTD. (a) ¹H-¹⁵N correlated backbone amide spectrum of apo (black) and cyclomarin-bound (blue) MtbClpC1 NTD. 95% of the amide resonances of the apo and 79% of those of the cyclomarin-bound protein spectrum have been assigned. (b) TALOS+ predicted helix propensity derived from the NMR assignments of apo MtbClpC1 NTD. In dark blue, predicted helix; in white, predicted loop; and as a gray background, helical parts in the X-ray structure of MtbClpC1 NTD (PDB: 3wdb). The secondary structures in solution and in the crystal appear to be identical. (c) Combined chemical shift difference between the apo and cyclomarin-bound MtbClpC1 NTD ¹H-¹⁵N HSQC spectra. The chemical shift differences are mapped on the structure in Figure 2d. (d) Chemical shift differences from Figure 2c plotted on the structure of cyclomarin-bound MtbClpC1 NTD (PDB: 3wdc). Assigned backbone amides are shown as spheres and unassigned residues as gray cartoon. Chemical shift differences are plotted on a spectrum from blue to white, where blue indicates a strong effect. Cyclomarin is shown as yellow sticks.
Figure 3. The effect of ArgP binding on MtbClpC1 NTD. (a) Sequence alignment of MtbClpC1 NTD and BsClpC NTD. Identical residues are highlighted in black and similar residues in grey. The binding site of ArgP in BsClpC NTD is circled in red. (b) Representative isothermal titration calorimetry of ArgP binding to MtbClpC1 NTD (N = 1.99 ± 0.02; Kd = 5.2 ± 0.5 µM; ΔH = -4066 ± 59 cal/mol; ΔS = 10.3 cal/mol/deg). (c) The NTD is able to pull down lysozyme phosphorylated by the McsB kinase but not untreated lysozyme (lanes 1 and 4). Cyclomarin (50 µM) is unable to block substrate binding, but a reduction is observed with ArgP (1 mM). (d) ¹H-¹⁵N correlated backbone amide spectra of apo (black), ArgP-bound (green), cyclomarin-bound (blue) and ArgP/cyclomarin-bound (red) MtbClpC1 NTD. 95% of apo, 55% of ArgP-bound, 85% of cyclomarin-bound and 75% of cyclomarin/ArgP-bound MtbClpC1 NTD amide resonances are NMR visible. (e) Loss in peak intensity of resonances in ¹H-¹⁵N HSQC spectra upon ArgP binding with (red) or without (green) cyclomarin added. (f) Peak height ratio of ArgP-bound MtbClpC1 NTD (Figure 3d, left) plotted on its structure (PDB: 3wdb). Assigned residues are shown as spheres, unassigned residues as gray cartoon, and residues that disappear upon ArgP binding as white cartoon. The peak height ratio is drawn on a spectrum from green to white, where white indicates loss of intensity. Two arginine-phosphate molecules (red sticks) are placed at the putative ArgP binding site, identical to the X-ray structure of BsClpC NTD (PDB: 5hbn).
Figure 4. ArgP induces millisecond dynamics in MtbClpC1 NTD. (a) Residues that exhibit millisecond dynamics plotted on the MtbClpC1 NTD structure (PDB: 3wdb). All assigned residues are shown as spheres. Residues with a ΔR₂,eff of 5 are shown in yellow, of 15 in orange, and of 30 or more in red. (b) Examples of CPMG curves for residues with a ΔR₂,eff of 5, 15 or 30. Apo MtbClpC1 NTD shows no millisecond dynamics (lower row); 63% of all NMR-visible residues experience ms dynamics when ArgP is bound (upper row); if cyclomarin is added before ArgP, no ms dynamics can be observed, resulting in flat dispersion curves (middle row).
Figure 5. Effect of ArgP on MtbClpC1 NTD secondary structure and MtbClpC1 substrate degradation. (a) CD spectra of apo (black) and ArgP-bound (green) MtbClpC1 NTD. ArgP binding does not influence the secondary structure of MtbClpC1 NTD. (b) ArgP-bound (green) MtbClpC1 NTD is more stable than the apo form. Intrinsic tyrosine fluorescence was measured as a function of temperature. (c) ArgP (green) does not change the exposure of hydrophobic regions in MtbClpC1 NTD (black). Shown in gray is the fluorescence of ANS in buffer. (d) ArgP inhibits FITC-casein degradation by MtbClpC1P1P2 (black curve). Cyclomarin (20 µM) is able to block this inhibition (blue curve).
Figure 6. ClpC1 forms high oligomeric species in solution. (a) SEC-SAXS chromatogram of MtbClpC1, showing the radius of gyration of the eluted species (pink) with the respective absorbance at 280 nm (black). The first peak immediately after the column void volume corresponds to protein aggregates. (b) DAMMIF models obtained from the MtbClpC1 scattering curve in the presence of the different ligands ecumicin (pink), cyclomarin (blue) and ArgP (green). With the exception of the appendix, an artifact derived from sample heterogeneity, the obtained model fits well to the structure obtained previously by cryo-EM (pdb 6em9). (c) The scattering curve of the resting-state SaClpC decamer was predicted and fitted with CRYSOL 3 to the experimental curve obtained from the tail of the apo MtbClpC1 peak.
Figure 7. MtbClpC1 exists in equilibrium between a resting state and a functional hexamer. MtbClpC1 can form a resting state in equilibrium with the active hexameric form. Phosphorylation of arginines marks proteins for degradation by the ClpCP machinery. Phosphorylated arginines bind to the NTD, where they induce millisecond dynamics that could facilitate either contact between different NTDs or the transfer of the substrate to the D1 domain pore. Although cyclomarin binding does not change the structure of the NTD or substrate binding, it restricts ArgP-induced dynamics.
Acknowledgments:
Hugo Fraga is a COFUND fellowship recipient co-funded by the European Union and the Tres Cantos Open Lab Foundation (TC189). This work used the platforms of the Grenoble Instruct center (ISBG; UMS 3518 CNRS-CEA-UJF-EMBL) with support from FRISBI (ANR-10-INSB-05-02) and GRAL (ANR-10-LABX-49-01) within the Grenoble Partnership for Structural Biology (PSB). Special thanks to Dr Caroline Mas for valuable advice. Dr. Goldberg's lab has received grants from the Tres Cantos Open Lab Foundation and the National Institute of General Medical Sciences. We thank the ESRF for beamtime at BM29.
Author contributions:
KW, MB and HF conceived and performed experiments, KW, MB, UK, JL, LB, AG, LB, PS and HF analyzed the data, KW and HF wrote the manuscript.
Declaration of Interests:
The authors declare no competing interests. |
01768190 | en | [
"info",
"info.info-ds",
"info.info-dc"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01768190/file/UTXO.pdf | Emmanuelle Anceaume
email: [email protected]
Antoine Guellier
email: [email protected]
Romaric Ludinard
email: [email protected]
UTXOs as a proof of membership for Byzantine Agreement based Cryptocurrencies
The presence of forks in permissionless blockchains is a recurrent issue. So far this has been handled either a posteriori, through local arbitration rules (e.g., "keep the branch which has required the most computational power"), applied once a fork is present in the blockchain, or a priori, via a Byzantine-resilient agreement protocol periodically invoked by a committee of well-identified and online nodes. In the former case, the local arbitration rules guarantee that, if they are correctly applied by a majority of the users of the system, then with high probability forks are progressively resolved, while in the latter case, the sequence of Byzantine-resilient agreements decides on the unique sequence of blocks to be appended to the blockchain. The question we may legitimately ask is the following: to avoid the period of uncertainty inherent to optimistic solutions, are we doomed to rely on the decisions made by a unique committee whose members are already actively involved in the creation of blocks? We answer this question negatively by presenting a solution that combines the best features of the optimistic and pessimistic approaches: we leverage the presence of users and the "public key as identity" principle to make users self-organize into small Byzantine-resilient committees "around" each new object (i.e., block or transaction) to decide on its validity. Once validated, objects can be pushed into the network and appended to the blockchain without fear of forks or double-spending attacks: we guarantee a "0"-confirmation delay. Additionally, our solution mitigates selfish mining attacks. We are not aware of any other solution enjoying such features.
I. INTRODUCTION
Blockchains, also called distributed ledgers, initially appeared as the technological solution for the deployment of the Bitcoin digital cryptocurrency system: a secure system, usable by anyone, in a peer-to-peer way, with no trusted third party whatsoever. Blockchains achieve the impressive result of constructing a persistent, distributed, append-only log of transactions, publicly auditable and writable by anyone in the case of permissionless (i.e., public) blockchains. The construction of distributed ledgers typically relies on a sophisticated orchestration of cryptographic primitives, agreement algorithms, and broadcast communication primitives.
To face recurrent double-spending attacks, which are inherent to digital cash systems, blockchains are built so that their records (i.e., blocks of transactions) are totally and securely ordered. So far, two main designs exist to totally and securely order blocks: the optimistic and the pessimistic approaches. The optimistic approach mainly consists in regularly running elections among a subset of the nodes of the system (i.e., miners), at the outcome of which leaders (i.e., successful miners) properly and securely gather transactions into blocks, with the hope that no concurrent blocks already exist in the system. Transient inconsistencies (i.e., the presence of concurrent forks) are locally handled, but at the expense of a substantially long block confirmation time. For instance, Bitcoin guarantees that if a block has been stored for more than one hour in any local copy of the ledger, then it will remain there forever, and at the very same position in the local copies maintained at all the nodes of the system. This holds with very high probability even if up to 10% of the miners are malicious. More recently, Ethereum [START_REF] Wood | Ethereum: A secure decentralised generalised transaction ledger[END_REF] and Spectre [START_REF] Sompolinsky | SPECTRE: A fast and scalable cryptocurrency protocol[END_REF] have succeeded in decreasing the block confirmation time, but at the expense of more involved arbitration rules. The second approach, which we call pessimistic, aims at preventing forks from happening so that once recorded in the ledger, a block will never be pruned. This is achieved by relying on Byzantine resilient agreement algorithms (e.g. [START_REF] Castro | Practical Byzantine Fault Tolerance[END_REF], [START_REF] Kotla | Zyzzyva: Speculative Byzantine Fault Tolerance[END_REF]) fed with all the currently submitted transactions. An already impressive amount of work has focused on the properties of those algorithms to securely and totally order transactions in distributed ledgers, but the foremost difference that exists among all these works is related to the essential notion of identity. In consortium blockchains, including RedBelly [START_REF] Crain | Leader/Randomization/Signature)-free Byzantine Consensus for Consortium Blockchains[END_REF] and HyperLedger [START_REF] Androulaki | Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains[END_REF], participants (those who control and manage copies of the blockchain) form a clique of carefully selected institutions with appropriate permissions. In Byzantine-based permissionless blockchains, such as PeerCensus [START_REF] Decker | Bitcoin Meets Strong Consistency[END_REF], Bizcoin [START_REF] Kogias | Enhancing bitcoin security and performance with strong consistency via collective signing[END_REF], or BitcoinNG [START_REF] Eyal | Bitcoin-NG: A scalable blockchain protocol[END_REF], those in charge of executing the Byzantine resilient agreement algorithms are selected among all the successful miners. Note that care must be taken in the way miners are selected to form the Byzantine algorithm committees, so as not to violate the system safety [START_REF] Anceaume | Safety Analysis of Bitcoin Improvement Proposals[END_REF].
To summarize, the pessimistic approach achieves an irrevocable decision on the next block to be appended to the permissionless ledger at the cost of running a Byzantine-tolerant algorithm among several hundred nodes for each created block. The optimistic approach guarantees that in the presence of forks on the ledger, the probability that a given branch will remain in the ledger increases exponentially with the number of blocks appended after it.
Very recently, an elegant pessimistic-based approach to mitigate the presence of blockchain forks has appeared with Algorand [START_REF] Gilad | Algorand: Scaling byzantine agreements for cryptocurrencies[END_REF]. In Algorand, members of the Byzantine-tolerant algorithm are selected no longer proportionally to their computational power, but proportionally to their stake. Among them, a leader, the one with the largest stake, is elected and handles all the currently submitted transactions. Algorand guarantees that in periods of strong synchrony, the blockchain correctly grows (absence of forks and double-spending), while in presence of variable communication delays, growth is not guaranteed.
In this paper, we propose a solution that borrows ingredients from both approaches to guarantee that once a block, mined in isolation, is declared valid by the system, it cannot be confronted with any other conflicting block, and thus will irremediably be registered in the ledger. Our solution also guarantees that double-spending attacks are detected and prevented as soon as any conflicting transaction is submitted to the peer-to-peer network. Hence, once a transaction is declared valid by the system, it cannot be confronted with any other conflicting transaction, relieving sellers of any fraud risk.
The block validation protocol and the transaction validation one are essential in our design. The core of both protocols relies on Byzantine agreements (BA). BA allows the block validation protocol to reach an agreement on the unique block that can reference an earlier block in the blockchain, and allows the transaction validation protocol to reach an agreement on the unique transaction that can redeem all the unspent transaction outputs (UTXOs) referenced as its inputs. In contrast to all the pessimistic approaches described above (and, to the best of our knowledge, to all of them), BAs are executed "around" the objects to be validated. This briefly means that, when a newly created transaction is submitted to the peer-to-peer network for validation, the transaction validation protocol is executed by a subset of users randomly chosen among those that are (logically) close to the UTXOs referenced in the input of the transaction. Participation of those randomly chosen users is publicly verifiable by anyone. Similarly, when a newly created block is submitted to the network, the block validation protocol is executed by a subset of users randomly chosen among those that are (logically) close to the predecessor block referenced by this block. Participation of those users is also publicly verifiable. As an additional consequence of our design, selfish attacks are mitigated. Since a selfish miner cannot create a block without disclosing it, no one is able to build a private sequence of blocks in order to prune the tail of the blockchain. Finally, beyond giving users an incentive to behave correctly, relying on their participation to execute Byzantine agreement protocols allows for greater equality in the management of the blockchain. By the "locality" principle of the validation protocols, participation in the execution of a Byzantine agreement protocol is de facto temporary and infrequent.
The remainder of the paper is organized as follows. Section II describes the computational and system model adopted in this work and then presents the main features of blocks and transactions in Bitcoin. Section III presents a brief survey of some of the attempts that have been made at solving Bitcoin issues. Section IV presents the key elements that we leverage to derive our solution. Section V describes the orchestration of these elements to validate blocks and transactions. Section VI and Section VII detail implementations of both protocols. Finally, Section VIII concludes.
II. MODEL, TRANSACTIONS AND BLOCKS
A. Model
We assume a large, finite yet unbounded set Π of nodes whose composition may change over time. Nodes communicate with each other through unreliable channels, meaning that messages can be lost, altered, duplicated or reordered. We assume the existence of a finite but unknown upper-bound on message propagation time, which fits the partial synchrony model [START_REF] Dwork | Consensus in the Presence of Partial Synchrony[END_REF].
We suppose that a bounded proportion µ, with µ ≤ 1/3, of the nodes in Π are Byzantine (i.e., behave arbitrarily, either in collusion or on their own, to maximize some utility function of their choice). All the other nodes are said to be correct or honest. We assume that nodes have access to basic cryptographic functions, including a cryptographic hash function h (modeled as a random oracle) and an asymmetric signature scheme that allows nodes to generate public and secret key pairs (p_r, s_r), to compute signatures σ_{r,h(d)} on messages d, and to verify the authenticity of a signature. These primitives are assumed to be safe, i.e., forging signatures and finding hash collisions, pre-images or second pre-images is impossible. By these properties, each object o of the system, i.e., UTXO (for unspent transaction output), transaction and block, is assumed to be uniquely identified.
We assume that correct nodes use their cryptographic keys in a safe way, i.e., they do not disclose, share or drop their secret keys. As a consequence, their identity cannot be spoofed and their received coins cannot be stolen. We do not suppose the existence of any trusted public key infrastructure (PKI) to establish node identities. Finally, we assume that each object o is well-formed. For example, a transaction can be rejected (by a correct node) only if that transaction tries to double-spend inputs, and not, e.g., because a script is not correctly written.
B. Transactions and blocks
Prior to describing our solution, let us first recall some background on transactions and blocks manipulated in most of the cryptosystems that derive from Bitcoin. A transaction is made of two sets, the input set denoted by I and the output one denoted by O. Set I contains the set of outputs, credited by previous transactions, that the creator of the transaction wishes to spend, together with the proof that she is allowed to redeem each of those outputs. The output set O contains transferred coins, together with the challenges that will allow their owners to redeem those coins.
Transaction outputs are locked with a challenge and redeemed in subsequent transactions by providing the appropriate response. Different types of verification scripts exist in Bitcoin, but the most common one is the PAY-TO-PUBKEY-HASH script; some of the others do not rely on cryptography (ANYONE-CAN-SPEND) or do not ensure a unique recipient (ANYONE-CAN-SPEND, TRANSACTION PUZZLE), and in the following we consider only PAY-TO-PUBKEY-HASH scripts. In that script, the challenge embeds the hash value h(p_r) of a public key p_r, and the response to the challenge contains the public key p_r together with a signature σ_r produced with the secret key associated with p_r. Thus the only user able to provide the appropriate values of σ_r and p_r is the effective owner of the transaction output, that is, the owner of the UTXO. As a consequence, double-spending attacks can only be launched by (malicious) users that create distinct transactions redeeming the same transaction output.
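To fix ideas, the following toy sketch (in Python; not part of the protocol specification) illustrates the challenge/response mechanism. Bitcoin actually hashes the public key with SHA-256 followed by RIPEMD-160 and uses ECDSA signatures; here the hash is simplified to a single SHA-256 and signature verification is abstracted behind a caller-supplied verify_sig callable.

```python
import hashlib

def make_challenge(pubkey: bytes) -> bytes:
    """Lock an output with the hash of its future owner's public key."""
    return hashlib.sha256(pubkey).digest()

def redeem(challenge: bytes, pubkey: bytes, signature: bytes,
           spending_tx: bytes, verify_sig) -> bool:
    """An output is redeemable iff the revealed public key hashes to the
    challenge AND the signature over the spending transaction verifies
    under that public key."""
    if hashlib.sha256(pubkey).digest() != challenge:
        return False
    return verify_sig(pubkey, spending_tx, signature)
```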
Once created, a transaction is submitted to the peer-to-peer network. Each node of the network should check the validity of the transaction prior to propagating it to its neighborhood. Informally, a transaction T = (I, O) is locally valid at node p if p has received all the transactions that have credited all the inputs in I and for all i ∈ I, i is not in a double-spending situation.
Input i ∈ I is in a double-spending situation if p is aware of a transaction T' = (I', O') such that i ∈ I ∩ I'.
Transaction T = (I, O) is conflict-free if none of the inputs of T is involved in a double-spending situation and all of the transactions that credited T's inputs are conflict-free. By construction, the induction is finite, at least in Bitcoin, because money is created only through coinbase transactions, which are by definition conflict-free [START_REF] Anceaume | Safety Analysis of Bitcoin Improvement Proposals[END_REF].
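These definitions translate directly into local bookkeeping. The sketch below (hypothetical data structures, Python) checks local validity against the set of already received transactions and a map of already redeemed outpoints; the recursive conflict-free property is left out for brevity.

```python
from typing import Dict, List, NamedTuple, Set, Tuple

Outpoint = Tuple[str, int]   # (id of the crediting transaction, output index)

class Tx(NamedTuple):
    txid: str
    inputs: List[Outpoint]

def locally_valid(tx: Tx, received: Set[str], spent: Dict[Outpoint, str]) -> bool:
    """Locally valid at node p: every crediting transaction has been received
    and no input is already redeemed by a *different* transaction."""
    if any(credit_txid not in received for (credit_txid, _) in tx.inputs):
        return False
    return all(spent.get(i, tx.txid) == tx.txid for i in tx.inputs)

def accept(tx: Tx, spent: Dict[Outpoint, str]) -> None:
    """Record tx's inputs as redeemed; a later conflicting transaction will
    fail the double-spending test above."""
    for i in tx.inputs:
        spent[i] = tx.txid
```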
Blocks are created by successful miners, a subset of the nodes involved in the proof-of-work competition. The incentive to participate in such a competition is provided by a reward given to each successful miner. This reward is made of a fixed amount of coins (in Bitcoin, the reward is currently equal to 12.5 bitcoins) and a fee associated to each transaction contained in the newly created block. This reward is inserted in the output of a particular transaction, called the coinbase transaction. Note that coinbase transactions do not have inputs.
A block is made of two parts: the header and the payload. The payload contains a unique coinbase transaction and a list of valid transactions. The header of the block contains several fields, among which the reference to its parent block (hence the blockchain), a proof-of-work, that is, a nonce such that the hash of the block matches a given target (in Bitcoin, this target is calibrated so that the mean generation time of a block is equal to 10 minutes), and the fingerprint of the payload. In the following, we denote by b' = (h(b), c(b')) a block with parent block reference h(b) and payload c(b').
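The header structure and the proof-of-work condition can be sketched as follows (Python; the field names and JSON encoding are illustrative choices, not Bitcoin's actual serialization).

```python
import hashlib
import json

def block_id(parent_hash: str, payload: list, nonce: int) -> str:
    """Hash of the header: parent reference, payload fingerprint and nonce."""
    fingerprint = hashlib.sha256(json.dumps(payload).encode()).hexdigest()
    header = json.dumps({"parent": parent_hash, "payload": fingerprint,
                         "nonce": nonce})
    return hashlib.sha256(header.encode()).hexdigest()

def meets_target(block_hash: str, difficulty_bits: int) -> bool:
    """Proof-of-work: the hash, read as a 256-bit integer, must fall below
    the target; the target calibrates the mean block generation time."""
    return int(block_hash, 16) < 2 ** (256 - difficulty_bits)
```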
When a transaction T is included in a block b, it is said to be confirmed by all the peers that accept that block in their local copy of the blockchain. The level of confirmation of transaction T is the number of blocks included in the blockchain starting from b; by extension, a 0 confirmation level means that the transaction has not yet been included in the blockchain. To limit double-spending attacks, Bitcoin recommends that sellers do not provide their goods in exchange for a transaction before it becomes deeply confirmed. Actually, Nakamoto [START_REF] Nakamoto | Bitcoin: A peer-to-peer electronic cash system[END_REF] as well as subsequent studies [START_REF] Garay | The Bitcoin Backbone Protocol: Analysis and Applications[END_REF], [START_REF] Karame | Misbehavior in Bitcoin: A Study of Double-Spending and Accountability[END_REF], [START_REF] Miller | Anonymous byzantine consensus from moderately-hard puzzles: A model for bitcoin[END_REF] have shown that if the computational power of malicious miners is equal to 10% of the whole computational power, then the probability that a transaction is rejected once its level of confirmation in a local copy of the blockchain reaches 5 is less than 0.1%.
In the following, we say that a transaction is deeply confirmed once it reaches such a confirmation level.
Most of the permissionless blockchain-based cryptosystems guarantee the following two properties:
• Safety: If a transaction T is deeply confirmed by some correct node, then no transaction conflicting with T will ever be deeply confirmed by any correct node.
• Liveness: A conflict-free transaction will eventually be deeply confirmed in the blockchain of all correct nodes, at the same height in the blockchain.

In case of a blockchain fork, some blocks can be invalidated and the level of confirmation of their transactions can decrease, especially if the conflicting branch contains a conflicting transaction. This deters the use of Bitcoin for fast payments, as the expected time for a deep confirmation is approximately one hour. Fast payments are used in most everyday life situations, where the time between buying and consuming the goods is in the order of minutes. This impracticality motivates this work.
In the present paper, by preventing double-spending attacks and blockchain forks we aim at strengthening the safety property as follows:
• Strong Safety: If a transaction T is confirmed by some correct node, then no transaction conflicting with T will ever be confirmed by any correct node.

The strong safety property ensures that whenever a transaction T has been included in a block, no other block will ever contain a transaction conflicting with T. An immediate and important consequence of this property is the capability of blockchain-based cryptosystems to safely handle fast payments. The remainder of the paper is devoted to the implementation of this property.
III. RELATED WORK
Bitcoin [START_REF] Nakamoto | Bitcoin: A peer-to-peer electronic cash system[END_REF] is seen as the pioneer of cryptocurrencies. Since its inception, several altcoins [START_REF] Ahamad | A Survey on Crypto Currencies[END_REF] have emerged. The GHOST protocol [START_REF] Sompolinsky | Accelerating Bitcoin's Transaction Processing[END_REF] proposes a different rule to solve blockchain forks, based on the number of blocks contained in each blockchain subtree (in case of consecutive forks). Recent works have focused on Bitcoin modeling and evaluation. Authors of [START_REF] Miller | Anonymous byzantine consensus from moderately-hard puzzles: A model for bitcoin[END_REF] prove that the Bitcoin protocol achieves consensus with high probability, while [START_REF] Garay | The Bitcoin Backbone Protocol: Analysis and Applications[END_REF] show that peers participating in the Bitcoin network agree on a common prefix for the transaction history, both in failure-free environments. In contrast, authors of [START_REF] Karame | Double-spending Fast Payments in Bitcoin[END_REF], [START_REF] Karame | Misbehavior in Bitcoin: A Study of Double-Spending and Accountability[END_REF] focused on adversarial environments. These works study the feasibility of double-spending attacks and their detection. Several studies have shown that Bitcoin behaves quite well in failure-free environments [START_REF] Miller | Anonymous byzantine consensus from moderately-hard puzzles: A model for bitcoin[END_REF] but is vulnerable to some attacks such as the double-spending one [START_REF] Karame | Misbehavior in Bitcoin: A Study of Double-Spending and Accountability[END_REF]. Several attempts to fix it have been published, using a leader [START_REF] Decker | Bitcoin Meets Strong Consistency[END_REF], [START_REF] Eyal | Bitcoin-NG: A scalable blockchain protocol[END_REF], [START_REF] Kogias | Enhancing bitcoin security and performance with strong consistency via collective signing[END_REF], or forming local committees to run consensus algorithms at the local level [START_REF] Luu | SCP: a computationally-scalable Byzantine consensus protocol for blockchains[END_REF], but these proposals encounter various scalability or security issues which make them unusable. Specifically, Bitcoin-NG [START_REF] Eyal | Bitcoin-NG: A scalable blockchain protocol[END_REF], PeerCensus [START_REF] Decker | Bitcoin Meets Strong Consistency[END_REF], and BizCoin [START_REF] Kogias | Enhancing bitcoin security and performance with strong consistency via collective signing[END_REF] have proposed to rely exclusively on miners to take charge of the full process of validation and confirmation, to guarantee that all the operations triggered on the transactions are atomically consistent. Atomic consistency guarantees that all the updates on shared objects are perceived in the same order by all entities of the system. In all these protocols, time is divided into epochs. An epoch ends when a miner successfully generates a new block. This miner becomes the leader of the subsequent epoch. Each of these solutions relies on a dedicated set E_ℓ, with ℓ ∈ {1, w, ∞}. This set is built along consecutive epochs as follows. At epoch k, if |E_ℓ| < ℓ, the new leader is added to E_ℓ. Otherwise, the leader of epoch k - ℓ + 1 is removed from E_ℓ and the new leader is added. Once set E_ℓ reaches size ℓ, it remains at constant size ℓ. Strong consistency is implemented in these protocols by different means. In Bitcoin-NG, it is achieved by delegating the validation process to E_1, i.e., the leader of the current epoch. In PeerCensus it is implemented by relying on Byzantine Fault Tolerant consensus protocols (e.g. [START_REF] Castro | Practical Byzantine Fault Tolerance[END_REF], [START_REF] Guerraoui | The Next 700 BFT Protocols[END_REF], [START_REF] Kotla | Zyzzyva: Speculative Byzantine Fault Tolerance[END_REF]) run by E_∞ (recall that it contains all the miners that successfully generated a block). Finally, BizCoin leverages both ideas, using the leader and a consensus run by E_w. In all these protocols, members of E_ℓ, with ℓ ∈ {1, w, ∞}, are entitled to validate and confirm issued transactions and blocks and to disseminate them so that each peer integrates them in its local blockchain. It has been shown in a previous paper [START_REF] Anceaume | Safety Analysis of Bitcoin Improvement Proposals[END_REF] that none of the studied solutions enhances Bitcoin's behavior. Beyond the complexity introduced by the consensus executions, the main issue comes from the fact that all important decisions of Bitcoin are solely under the responsibility of (a quorum of) miners, and the membership of the quorum is decided by the quorum members. This magnifies the power of malicious miners.
Very recently, an elegant pessimistic-based approach to mitigate the presence of blockchain forks has appeared with Algorand [START_REF] Gilad | Algorand: Scaling byzantine agreements for cryptocurrencies[END_REF]. In Algorand, members of the Byzantine-tolerant algorithm (BA*) are selected no longer proportionally to their computational power, but proportionally to their stake. Among them, a leader, the one with the largest stake, is elected and handles all the currently submitted transactions. Algorand guarantees that in periods of strong synchrony, the blockchain correctly grows (absence of forks and double-spending), while in presence of variable communication delays, growth is not guaranteed.
The current paper improves upon a previous work in which the idea of validating transactions and blocks as early as possible was introduced [START_REF] Lajoie-Mazenc | Handling bitcoin conflicts through a glimpse of structure[END_REF]. To cope with the risk of Sybil attacks, participants in BA committees in [START_REF] Lajoie-Mazenc | Handling bitcoin conflicts through a glimpse of structure[END_REF] have to solve a computational puzzle to create their current identities [START_REF] Luu | SCP: a computationally-scalable Byzantine consensus protocol for blockchains[END_REF], which in expectation makes the number of identities per node proportional to its computational power. In the present solution, we rely on UTXO owners to participate in BA committees for the following reasons. First, by relying on the public-key-as-identity principle, anyone can easily verify BA committee membership. Second, by relying on the fact that UTXOs are one-shot objects (i.e., once debited, an UTXO does not exist anymore), an induced churn is generated, allowing honest participants to escape poisoning attacks by moving to a new region of the system, and preventing malicious nodes from staying indefinitely long in the same region of the system, healing the system from eclipse attacks.
IV. A SET OF INGREDIENTS
The main objectives of our solution are twofold: (i) the guarantee that only non-conflicting objects (blocks and transactions) are validated and propagated in the system, in order to prevent arbitration rules from being applied a posteriori, and (ii) the execution of the block and transaction validation process over distinct committees, to mitigate adversarial behaviors (collusion and eclipse attacks) and to improve the system scalability. Our solution relies on the orchestration of the following ingredients.
Byzantine agreement (BA).
The first ingredient we use to implement the validation process is Byzantine agreement. Informally, Byzantine agreement (BA) is a communication protocol enabling a set of committee members, each of which holds a possibly different initial value, to agree on a single value v. Such an agreement is reached by all honest members, that is, by those who scrupulously follow the protocol despite the fact that a minority of the members are malicious and can deviate from the protocol in an arbitrary and coordinated manner.
Distributed Hash Table (DHT).
The second ingredient of our solution is a distributed hash table. Recall that DHTs build their topology according to structured graphs, and for most of them, the following principles hold: the identifier space, e.g., the set of 256-bit strings, is partitioned among all the nodes of the system, and nodes self-organize within the graph according to a distance function D based on node identifiers (e.g., two nodes are neighbors if their identifiers share some common prefix), plus possibly other criteria such as geographical distance. Examples of DHTs are [START_REF] Ratnasamy | A Scalable Content-addressable Network[END_REF], [START_REF] Stoica | Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications[END_REF], [START_REF] Maymounkov | Kademlia: A peer-to-peer information system based on the xor metric[END_REF], [START_REF] Rowstron | Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems[END_REF]. For resiliency reasons, each vertex of the graph can be a set or a cluster of nodes. Basically, nodes sharing a common prefix gather together into clusters, and clusters self-organize into a graph topology, for instance a hypercube. By running distributed algorithms inside each cluster, cluster-based DHTs can be made robust to high churn [START_REF] Heilman | Eclipse Attacks on Bitcoin's Peer-to-Peer Network[END_REF] and adversarial attacks [START_REF] Anceaume | PeerCube: A Hypercube-Based P2P Overlay Robust against Collusion and Churn[END_REF].
Cluster-based DHT. In the following we use PeerCube, a cluster-based DHT, to implement the validation protocols [START_REF] Anceaume | PeerCube: A Hypercube-Based P2P Overlay Robust against Collusion and Churn[END_REF]. Briefly, PeerCube is a DHT that conforms to a hypercube. Vertices of the hypercube are clusters of nodes. Each cluster is dynamically formed by gathering nodes that are close to each other according to a distance function D applied on the bit-string identifier space. Distance D consists in computing the numerical value of the "exclusive or" (XOR) of bit strings.
Thus identifiers that have a longer prefix in common are closer to each other, and for any point p and distance ∆ there is exactly one point q such that D(p, q) = ∆ (which does not hold for the Hamming distance). Nodes whose identities share a common prefix gather together within the same cluster. Each cluster is uniquely identified with a label that characterizes the position of the cluster in the overall hypercubic topology. The label of a cluster is defined as the shortest common prefix shared by all the users of that cluster such that the non-inclusion property is satisfied. The non-inclusion property guarantees that a cluster label never matches the prefix of another cluster label, and thus ensures that each identifier belongs to at most one cluster. The length of a cluster label, i.e., the number of bits of that label, is called the dimension of the cluster. In the following, the notation d-cluster denotes a cluster of dimension d. The dimension determines an upper bound on the number of links a cluster has with other clusters of the DHT, i.e., the number of its neighbors. Clusters self-organize into a hypercubic topology, such that the position of a cluster in the hypercube is determined by its label. Ideally the dimension of each cluster C should be equal to some value d to conform to a perfect d-hypercube. However, because nodes join and leave the system at any time and their identifiers are random bit strings, cluster membership evolves, and thus some clusters may grow or shrink more rapidly than others. In the meantime, cluster sizes are bounded. Whenever the size of C exceeds a given value S_max, C splits into clusters of higher dimensions, and whenever the size of C falls under a given size of S_min nodes, C merges with other clusters into a single new cluster of lower dimension. Members of each cluster run Byzantine agreement protocols to guarantee that the functioning of the DHT is correct despite targeted attacks [START_REF] Anceaume | Performance evaluation of large-scale dynamic systems[END_REF]. This is achieved by partitioning each cluster into two sets, core members and spare members. The number of core members is at any time kept constant to handle a proportion µ of malicious nodes, while the spare set gathers all the other nodes of the cluster, and by doing so handles the churn without impacting the topology of the hypercube [START_REF] Anceaume | PeerCube: A Hypercube-Based P2P Overlay Robust against Collusion and Churn[END_REF].
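A minimal sketch of the XOR metric and of cluster membership by label prefix follows (Python; identifiers are modeled as integers over an assumed 256-bit space).

```python
def xor_distance(a: int, b: int) -> int:
    """D(a, b) is the numerical value of the bitwise XOR of the identifiers:
    the longer the common prefix, the smaller the distance."""
    return a ^ b

def in_cluster(node_id: int, label: int, label_len: int,
               id_bits: int = 256) -> bool:
    """A node belongs to the cluster whose label equals the first label_len
    bits of its identifier (unique by the non-inclusion property)."""
    return (node_id >> (id_bits - label_len)) == label
```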
"Public keys as identities" principle. The third ingredient of our solution is to use (verification) public keys as user identities. This means that users can use their public keys as a reference to them. Digital signatures enables this because one has the ability to verify the validity of an information based on the public key, information, and signature. This principle is at the core of challenges present in transactions to redeem coins of UTXOs, and as detailed below we deeply use the unique association UTXO/identity as a proof of membership for BA.
V. DISTRIBUTED HASH TABLE AS A SUPPORT FOR RUNNING DISTINCT INSTANCES OF BYZANTINE AGREEMENTS
We now describe how the above ingredients are orchestrated to validate transactions and blocks.
BA committee members are the owners of UTXOs. BA committee members are the owners of UTXOs, that is, the users of the cryptocurrency system; this clearly differs from most of the BA-based blockchains, in which BA protocols are executed by the successful miners. Note that as each user may own a multitude of UTXOs (i.e., may have a multitude of identities), a user may belong to several distinct BA committees. As will be described in the following, any committee member may prove its right to belong to a given committee by exhibiting a digital signature that verifies an UTXO that has never been redeemed so far.
BA committees as vertices of the DHT. To cope with the thousands of transactions to be validated per day, a multitude of BA protocols are run in parallel, each one sitting at a vertex of PeerCube. Thus a BA committee is identified by a unique label, which is the shortest common prefix shared by all the users (i.e., UTXO owners) of that BA committee. Validation of each transaction T is handled by a specific BA committee, the one whose label is a prefix of T's identifier (the identifier of a transaction is equal to its hash). Recall that by construction, PeerCube guarantees that any identifier belongs to a single cluster. In the following, the BA committee to which T is assigned is called T's referee. Similarly, validation of a block is handled by a unique BA committee. However, in contrast to transactions, the referee of a block B is the BA committee whose label is a prefix of B's predecessor (recall that blocks form a chain by pointing to a predecessor).
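The referee lookup can be sketched as follows (Python; the cluster directory is a hypothetical local view, whereas in PeerCube the lookup is performed by routing through the overlay). The non-inclusion property guarantees at most one match.

```python
def referee_of(object_id: int, clusters: dict, id_bits: int = 256):
    """clusters maps (label, label_len) -> committee handle. Return the
    unique committee whose label prefixes object_id; the non-inclusion
    property guarantees at most one match."""
    for (label, length), committee in clusters.items():
        if (object_id >> (id_bits - length)) == label:
            return committee
    raise KeyError("no cluster covers this identifier")
```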
UTXOs to prevent targeted attacks. It is very important to understand that, since UTXOs are one-shot objects (UTXOs are debited once and then disappear), the presence of users in committees is verifiable. Anyone in the system can check that some user is allowed to participate in the execution of BA in a given cluster by just checking in her blockchain that the identity of that user, that is, its public key, and thus its UTXO, has never been redeemed so far. This feature is important as it is a very efficient way to prevent targeted attacks, attacks in which collusions of malicious nodes devise strategies to progressively take the leadership of a targeted region by staying longer than honest nodes [START_REF] Awerbuch | Group spreading: A protocol for provably secure distributed name service[END_REF], [START_REF] Anceaume | Performance evaluation of large-scale dynamic systems[END_REF]. By the second pre-image property of public keys, it is also very difficult for malicious nodes to generate identities that allow them to choose their positions so as to form collusions inside committees.
VI. TRANSACTION VALIDATION PROTOCOL
The purpose of the transaction validation protocol is to prevent double-spending attacks by ensuring that concurrent transactions do not try to use common inputs. Said differently, its objective is to guarantee that at any time at most one transaction can redeem all the UTXOs referenced in its input set.
If we make an analogy between transaction inputs and objects, and an analogy between using an input and writing an object, then we can refer to database systems, in which exclusive access to objects is obtained by asking each transaction to explicitly lock objects it accesses using some single object locking mechanism. Yet, unless care is taken, locking objects one by one may cause deadlocks. As the application we consider involves different nodes spread over a large area, it is not advisable to rely on having all of them conform to the same locking strategies. Moreover, from a performance viewpoint, it may be impossible to run deadlock detection and prevention protocols assuming independent object locking. In the following we propose a transaction validation protocol that provides the equivalent of an atomic locking mechanism for all of the inputs of each issued transaction.
Formally, our transaction validation protocol implements two methods, grantInputs and release, that both accept a transaction T = (I, O) as parameter. The grantInputs method returns with GRANTED or DENIED. When an invocation returns with GRANTED, we say that the method exclusively grants the inputs in I to T or, in short, that T has been GRANTED. Once T has been GRANTED, for any subsequent transaction T' conflicting with T, the service returns DENIED. The release method can be invoked for T only if T has not been granted; otherwise it has no effect.
The transaction validation protocol prevents double-spending attacks if the following three properties are met:

• Safety: If a transaction T = (I, O) is exclusively granted the inputs in I, then no other transaction T' = (I', O') with I ∩ I' ≠ ∅ is exclusively granted the inputs in I'.
• Liveness: Each invocation of the grantInputs method eventually returns.
• Non-triviality: If there exists an invocation of the grantInputs method with T = (I, O), and no other transaction T' = (I', O') with I ∩ I' ≠ ∅ is exclusively granted the inputs in I', then T is exclusively granted all the inputs in I.

As evoked above, the referee of each transaction T = (I, O) is the BA committee whose label shares a common prefix with T. Let us call it π_T. π_T is in charge of invoking the grantInputs method for T, and possibly the release method if T has not been granted. Granting a transaction means granting all the inputs of the transaction. Similarly, each input i ∈ I of T has a referee, called the UTXO referee, which is the BA committee whose label is a prefix of i. Let us call it π_i.
Therefore, when a user creates a transaction T = (I, O), she submits T to PeerCube, which routes T to its referee π_T. For each input i ∈ I of T, π_T requests an exclusive lock from the referee π_i of that input, following the lexicographical order of the input IDs. If the lock is DENIED for at least one of these inputs, π_T releases all previously obtained locks (by proving to each of these referees that a conflicting transaction T' has already been GRANTED). Otherwise, after obtaining all locks, a GRANTED status is returned to π_T. Thus, similarly to transaction referees, UTXO referees are characterized by the following properties:

• Safety: If an UTXO u is spent by a transaction T = (I, O), u ∈ I, then no other transaction T' = (I', O') with u ∈ I' can spend u, i.e., T' is considered as invalid.
• Liveness: Each invocation of the grantUTXO method eventually returns.
• Non-triviality: If there exists an invocation of the grantUTXO method for an UTXO u ∈ I with T = (I, O), and no other transaction T' = (I', O') with u ∈ I ∩ I' is exclusively granted the inputs in I', then u is exclusively granted for T.

Protocol 1 is the pseudo-code run by the referee of a transaction to orchestrate the grant requests on the involved UTXO referees, as described in the pseudo-code of Protocol 2 (the UTXO conflict handling protocol).

The correctness and, in particular, the absence of deadlocks result from the fact that locks are always acquired in lexicographical order. A lock can be implemented using a combination of Test-and-Set and Reset primitives. The referee π_i that wishes to lock input i ∈ I first checks the value of a binary register: when this value is 0, it sets the register to 1 and holds the lock. Releasing a lock is done by resetting the register value to 0. The fact that T has been granted the lock on each input i ∈ I is proven by π_i's signature, and each signature is bundled with the identity of the signer. Note that Bitcoin transactions can easily be extended to accommodate this process: the referee π_T of transaction T = (I, O) computes a group signature S (e.g. [START_REF] Boldyreva | Threshold Signatures, Multisignatures and Blind Signatures Based on the Gap-Diffie-Hellman-Group Signature Scheme[END_REF]) using the signatures of the input referees (π_i), i ∈ I, and its own signature, and appends it, along with everything needed to verify this group signature, to a specific validation output o added to the set of outputs O of transaction T. Any node can then verify that transaction T has been GRANTED by checking the signature S added by referee π_T.
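A minimal sketch of this test-and-set locking discipline is given below (Python, sequential toy model; in the actual protocol each register is maintained by a BA committee and every update goes through Byzantine agreement).

```python
class UTXORegister:
    """Binary register held by the UTXO referee (Test-and-Set / Reset)."""
    def __init__(self):
        self.holder = None                    # txid currently holding the lock

    def test_and_set(self, txid: str) -> bool:
        if self.holder in (None, txid):
            self.holder = txid
            return True                       # lock GRANTED
        return False                          # DENIED: a conflicting tx holds it

    def reset(self, txid: str) -> None:
        if self.holder == txid:
            self.holder = None

def grant_inputs(txid: str, inputs, registers) -> bool:
    """Acquire the locks in lexicographical order of the input IDs; on the
    first denial, release everything acquired so far (no deadlock possible)."""
    acquired = []
    for i in sorted(inputs):
        if registers[i].test_and_set(txid):
            acquired.append(i)
        else:
            for j in acquired:
                registers[j].reset(txid)
            return False                      # DENIED
    return True                               # GRANTED
```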
Referees are incentivized by introducing a validation fee. A fair and easy way to share the validation output is to randomly pick one of the referees and give it the entire reward. This requires seeding a random number generator in a publicly verifiable way, for example with information that can only be published after the transaction validation protocol has returned, such as the hash of the block in which the transaction is included.
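For instance, the following sketch (Python; the function name is illustrative) derives the rewarded referee deterministically from the hash of the confirming block, so that any node can recompute the same winner.

```python
import hashlib
import random

def rewarded_referee(referee_ids, confirming_block_hash: str) -> str:
    """Seed a PRNG with the hash of the block that includes the transaction:
    this value is public and only known after validation, so any node can
    re-derive the same winner."""
    seed = int(hashlib.sha256(confirming_block_hash.encode()).hexdigest(), 16)
    return random.Random(seed).choice(sorted(referee_ids))
```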
VII. BLOCK VALIDATION PROTOCOL
By following exactly the same principle as for transaction validation, BA committees are exploited to prevent blockchain forks, that is, to ensure that any validated block has at most one valid block as immediate successor. This is achieved by providing a method grantBlock that accepts a block b' = (h(b), c(b')) as parameter. This method returns with GRANTED or DENIED. When an invocation returns with GRANTED, we say that the method validates block b' as the unique successor of block b, i.e., block b' is granted for h(b). This method has to satisfy the three following properties:
• Safety: If a block b' = (h(b), c(b')) is granted for h(b), then no other block b'' = (h(b), c(b'')) is granted for h(b).
• Liveness: Each invocation of the grantBlock method eventually returns.
• Non-triviality: If there exists an invocation of the grantBlock method with b' = (h(b), c(b')), and no other block b'' = (h(b), c(b'')) is granted for h(b), then b' is granted for h(b).
Therefore, when a miner creates a block B, it submits a grantBlock request for B to PeerCube, which routes it to its referee π_B. The request is granted if π_B has never granted such a request for the same predecessor before. Protocol 3 is the pseudo-code of the block validation protocol.
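In its simplest form, the referee-side state reduces to a map from predecessor hash to the unique granted successor, as in this toy sketch (Python; the committee-internal Byzantine agreement is abstracted away).

```python
class BlockReferee:
    """Referee of block b: grants at most one successor for h(b)."""
    def __init__(self):
        self.successor = {}                   # parent hash -> granted block hash

    def grant_block(self, parent_hash: str, block_hash: str) -> bool:
        winner = self.successor.setdefault(parent_hash, block_hash)
        return winner == block_hash           # GRANTED iff first asked (or same)
```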
To summarize, the validation protocols guarantee, via BA committees tessellated at the vertices of a DHT, that a transaction is validated only if it is the only one to redeem each of its inputs, and that a block is validated only if it is the unique successor of its predecessor. This is achieved by relying on the ephemeral participation of UTXO owners as BA committee members. Indeed, once an UTXO has been granted by a committee, that UTXO does not exist anymore; its owner thus leaves the BA committee and joins the BA committees whose labels prefix the newly created UTXOs. This is very important to prevent eclipse attacks, that is, strategies which allow the adversary to stay forever at the same position in order to progressively eclipse honest nodes around it.
VIII. CONCLUSION
In this paper we have presented a new idea to prevent both blockchain forks and double-spending attacks, without relying on successful miners to impose a total ordering on the blocks they have mined. Our design relies on the ephemeral participation of UTXO owners, and exploits both the scalability and robustness properties of cluster-based DHTs and the "public key as identity" principle. These ingredients allow us to introduce a small amount of synchronization, soon enough in the validation process, to guarantee that sellers can deliver their goods as soon as the transaction has been validated by the system. Fast payment transactions are thus no longer an issue. The same level of local synchronization protects the Bitcoin system from blockchain forks and selfish mining. We are currently evaluating the performance of our design through an implementation that we have deployed over several hundred nodes. Preliminary results are very promising.
01768231 | en | [
"phys.meca.mefl"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01768231/file/IJIE_submitted_6_juin.pdf | Sarah Hank
email: [email protected]
S L Gavrilyuk
email: [email protected]
Nicolas Favrie
email: [email protected]
Jacques Massoni
email: [email protected]
Impact simulation by an Eulerian model for interaction of multiple elastic-plastic solids and fluids
Sarah Hank, Sergey Gavrilyuk, Nicolas Favrie, Jacques Massoni
Introduction
Solid-fluid interactions in the case of extreme deformations appear in many industrial applications (blast effects on structures, hypervelocity impacts, ...). This kind of problem may involve high pressures and strain rates as well as high density ratios. The hyperelastic models [START_REF] Godunov | Elements of continuum mechanics[END_REF][START_REF] Kluth | Perfect plasticity and hyperelastic models for isotropic materials[END_REF][START_REF] Godunov | Elements of continuum mechanics and conservation laws[END_REF][START_REF] Miller | A high-order Eulerian Godunov method for elastic-plastic flow in solids[END_REF][START_REF] Plohr | A conservative formulation for plasticity[END_REF][START_REF] Merzhievsky | The role of numerical simulation in the study of highvelocity impact[END_REF][START_REF] Favrie | Mathematical and numerical model for nonlinear viscoplasticity[END_REF], for which the stress tensor is defined in terms of a stored energy function, are well adapted to treat such problems accurately. The hyperelastic models are conservative by construction. They are also objective and thermodynamically consistent. In this paper, a multi-component hyperelastic Eulerian formulation is used to compute several impact test cases [START_REF] Hank | Modeling hyperelasticity in non equilibrium multiphase flows[END_REF]. The modelling is based on a 'diffuse interfaces method' which was developed for multi-component fluids [START_REF] Abgrall | Discrete equations for physical and numerical compressible multiphase mixtures[END_REF][START_REF] Saurel | A multiphase model for compressible flows with interfaces, shocks, detonation waves and cavitation[END_REF][START_REF] Saurel | A multiphase model for compressible flows with interfaces, shocks, detonation waves and cavitation[END_REF] and generalized to the case of interaction of multiple solids and fluids [START_REF] Favrie | Diffuse interface model for compressible fluid-compressible elasticplastic solid interaction[END_REF][START_REF] Ndanou | Multi-solid and multi-fluid diffuse interface model: Applications to dynamic fracture and fragmentation[END_REF]. Relaxation terms, proposed in [START_REF] Favrie | Mathematical and numerical model for nonlinear viscoplasticity[END_REF] for an accurate description of plastic transformations in solids, have been added. No hardening parameter is used to deal with the evolution of the yield strength.
The paper is organized as follows. In Section 2, the mathematical model is presented. In Section 3, the numerical method is briefly described. Two test cases are studied in Section 4. In particular, a symmetric copper rod impact is computed and compared to the experimental data provided in [START_REF] Forde | Symmetrical Taylor impact studies of copper[END_REF]. Then, a low velocity clay suspension impact is studied and compared to the experimental results obtained in [START_REF] Luu | Drop impact of yield-stress fluids[END_REF].
Viscoplastic model

Eulerian multi-component formulation of hyperelasticity
Hyperelasticity models have been intensively studied in the past few years [START_REF] Godunov | Elements of continuum mechanics[END_REF][START_REF] Godunov | Elements of continuum mechanics and conservation laws[END_REF][START_REF] Godunov | Thermodynamically consistent nonlinear model of elastoplastic Maxwell medium[END_REF][START_REF] Barton | An Eulerian finite-volume scheme for large elastoplastic deformations in solids[END_REF][START_REF] Miller | A high-order Eulerian Godunov method for elastic-plastic flow in solids[END_REF][START_REF] Kluth | Perfect plasticity and hyperelastic models for isotropic materials[END_REF][START_REF] Merzhievsky | The role of numerical simulation in the study of highvelocity impact[END_REF][START_REF] Plohr | A conservative formulation for plasticity[END_REF][START_REF] Ghaisas | High-order Eulerian methods for elastic-plastic flow in solids and coupling with fluid flows[END_REF][START_REF] Ortega | Numerical simulation of elasticplastic solid mechanics using an Eulerian stretch tensor approach and HLLD Riemann solver[END_REF][START_REF] Brauer | A Cartesian scheme for compressible multimaterial models in 3D[END_REF]. In this paper, we consider a modified conservative formulation adapted to the case of isotropic solids. The Eulerian formulation of the multi-component hyperelasticity proposed in [START_REF] Favrie | Solid-fluid diffuse interface model in cases of extreme deformations[END_REF][START_REF] Favrie | Diffuse interface model for compressible fluid-compressible elasticplastic solid interaction[END_REF][START_REF] Ndanou | Multi-solid and multi-fluid diffuse interface model: Applications to dynamic fracture and fragmentation[END_REF]] is considered. The numerical algorithm for solving this model is based on the generalization of the discrete equations method developed earlier for multi-component fluids in [START_REF] Abgrall | Discrete equations for physical and numerical compressible multiphase mixtures[END_REF][START_REF] Saurel | A multiphase Godunov method for compressible multifluid and multiphase flows[END_REF][START_REF] Saurel | A multiphase model for compressible flows with interfaces, shocks, detonation waves and cavitation[END_REF] and multi-component solids [START_REF] Hank | Modeling hyperelasticity in non equilibrium multiphase flows[END_REF].
As we deal with non-equilibrium flows, each component admits its own equation of state with its own stress tensor. This approach allows us to treat configurations involving several solids and fluids. The discrete equations are obtained by integrating the conservation laws over a multiphase control volume. The general model is written hereafter for the phase k (in the 1D case, for the sake of simplicity).
$$
\begin{aligned}
&\frac{\partial \alpha_k}{\partial t} + u_I \frac{\partial \alpha_k}{\partial x} = 0,\\
&\frac{\partial (\alpha\rho)_k}{\partial t} + \frac{\partial (\alpha\rho u)_k}{\partial x} = 0,\\
&\frac{\partial (\alpha\rho u)_k}{\partial t} + \frac{\partial (\alpha\rho u^2 - \alpha\sigma_{11})_k}{\partial x} = -\sigma_{11,I}\,\frac{\partial \alpha_k}{\partial x},\\
&\frac{\partial (\alpha\rho v)_k}{\partial t} + \frac{\partial (\alpha\rho u v)_k}{\partial x} + \frac{\partial (-\alpha\sigma_{12})_k}{\partial x} = -\sigma_{12,I}\,\frac{\partial \alpha_k}{\partial x},\\
&\frac{\partial (\alpha\rho w)_k}{\partial t} + \frac{\partial (\alpha\rho u w)_k}{\partial x} + \frac{\partial (-\alpha\sigma_{13})_k}{\partial x} = -\sigma_{13,I}\,\frac{\partial \alpha_k}{\partial x},\\
&\frac{\partial (\alpha\rho E)_k}{\partial t} + \frac{\partial (\alpha\rho E u - \alpha\sigma_{11} u - \alpha\sigma_{12} v - \alpha\sigma_{13} w)_k}{\partial x} = -(\sigma_{11,I} u_I + \sigma_{12,I} v_I + \sigma_{13,I} w_I)\,\frac{\partial \alpha_k}{\partial x},\\
&\frac{\partial (\alpha a^\beta)_k}{\partial t} + \frac{\partial (\alpha a^\beta u)_k}{\partial x} + (\alpha b^\beta)_k\,\frac{\partial v_k}{\partial x} + (\alpha c^\beta)_k\,\frac{\partial w_k}{\partial x} = 0, \quad \beta = 1,2,3,\\
&\frac{\partial b^\beta_k}{\partial t} + u_k\,\frac{\partial b^\beta_k}{\partial x} = 0, \quad \beta = 1,2,3,\\
&\frac{\partial c^\beta_k}{\partial t} + u_k\,\frac{\partial c^\beta_k}{\partial x} = 0, \quad \beta = 1,2,3.
\end{aligned}
\tag{1}
$$
Here, for the k-th phase, $\alpha_k$ is the volume fraction, $\rho_k$ is the phase density, $u_k = (u_k, v_k, w_k)^T$ is the velocity field, and $\sigma_k$ is the stress tensor:
$$\sigma_k = S_k - p_k I, \tag{2}$$
where $S_k$ is the deviatoric part of the stress tensor and $p_k$ is the thermodynamic pressure. As the model belongs to the class of hyperelastic models, the stress tensor can be expressed as the variation of the internal energy (the exact expressions are given in the following subsection). The evolution equations of hyperelasticity are written for deformation measures (in particular, for the Finger tensor defined below). $E_k$ is the total energy associated with the phase k, given by the following expression:
$$E_k = \frac{\|u_k\|^2}{2} + e_k(\eta_k, G_k), \tag{3}$$
where $\eta_k$ is the entropy of the phase k and $G_k$ is the Finger tensor. The exact expression of $e_k(\eta_k, G_k)$ in (3) will be given in the next subsection.
The variables with subscript 'I' are the 'interface' variables. They are obtained directly when solving the Riemann problem. The model is thermodynamically consistent and satisfies the second principle of thermodynamics. The proof is not straightforward. Nevertheless, the thermodynamic consistency has been verified on numerical test cases (for example, a shock wave propagation in a medium in presence of material interfaces). In the right-hand side of the system (1), non-conservative terms are present: these terms exist if the volume fraction gradient is non-zero.
The geometric variables $a^\beta_k$, $b^\beta_k$, $c^\beta_k$ related to the deformation gradient will now be defined. To simplify the presentation, we will not further use in this section the subscript k for unknowns. Let us define the Finger tensor G as the inverse of the left Cauchy-Green tensor B: $G = B^{-1}$. The Finger tensor can also be expressed in the form:
$$G = \sum_{\beta=1}^{3} e^\beta \otimes e^\beta, \qquad e^\beta = (a^\beta, b^\beta, c^\beta)^T, \qquad e^\beta = \nabla X^\beta, \quad \beta = 1,2,3, \qquad F^{-T} = (e^1, e^2, e^3).$$
Here $X^\beta$ are the Lagrangian coordinates, the gradient is taken with respect to the Eulerian coordinates, and F is the deformation gradient. In the next subsection the equation of state is presented, allowing the system closure. Different relaxation phenomena can easily be added into the model (pressure and velocity relaxation, phase transitions, ...).
System closure
The closure of the system is performed by using an equation of state presented in a separable form [START_REF] Gavrilyuk | Modelling wave dynamics of compressible elastic materials[END_REF]:
$$e(\eta, G) = e^h(\rho, \eta) + e^e(g), \qquad g = \frac{G}{|G|^{1/3}}, \tag{4}$$
where |G| denotes the determinant of the tensor G. This formulation has been used in particular in [START_REF] Favrie | Solid-fluid diffuse interface model in cases of extreme deformations[END_REF][START_REF] Favrie | Diffuse interface model for compressible fluid-compressible elasticplastic solid interaction[END_REF][START_REF] Favrie | A thermodynamically compatible splitting procedure in hyperelasticity[END_REF][START_REF] Ndanou | The piston problem in hyperelasticity with the stored energy in separable form[END_REF]. With such a formulation, the pressure is determined only by the hydrodynamic part of the specific internal energy $e^h(\rho, \eta)$. The deviatoric part of the stress tensor can be expressed using the shear part of the specific internal energy $e^e(g)$. The hydrodynamic part of the energy satisfies the Gibbs identity:
$$\theta\, d\eta = de^h + p\, d\tau,$$
where τ is the specific volume (τ = 1/ρ) and θ is the temperature. The expression of the deviatoric part of the stress tensor S is:
$$S = -2\rho\,\frac{\partial e^e}{\partial G}\, G.$$
The hydrodynamic part of the internal specific energy is taken as the stiffened gas equation of state:
$$e^h(\rho, p) = \frac{p + \gamma p_\infty}{\rho(\gamma - 1)}. \tag{5}$$
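As a sanity check, relation (5) and its inverse can be coded directly (a minimal sketch in Python; parameter values for a given material are not specified here).

```python
def e_h(rho: float, p: float, gamma: float, p_inf: float) -> float:
    """Hydrodynamic specific internal energy, stiffened gas EOS (5)."""
    return (p + gamma * p_inf) / (rho * (gamma - 1.0))

def pressure(rho: float, e: float, gamma: float, p_inf: float) -> float:
    """Inverse relation: p = (gamma - 1) rho e - gamma p_inf."""
    return (gamma - 1.0) * rho * e - gamma * p_inf
```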
In [START_REF] Gavrilyuk | An example of a one-parameter family of rank-one convex stored energies for isotropic compressible solids[END_REF], a family of rank-one convex stored energies for isotropic compressible solids with a single parameter (denoted by $\tilde a$) is proposed:
$$e^e(G) = \frac{\mu}{4\rho_0}\left(\frac{1 - 2\tilde a}{3}\, j_1^2 + \tilde a\, j_2 + 3(\tilde a - 1)\right), \qquad j_m = \mathrm{tr}(g^m), \quad m = 1,2,3. \tag{6}$$
Here, µ is the shear modulus of the considered material and $\rho_0$ is the reference density. Using the criterion proposed in [START_REF] Ndanou | Criterion of hyperbolicity in hyperelasticity in the case of the stored energy in separable form[END_REF][START_REF] Gavrilyuk | An example of a one-parameter family of rank-one convex stored energies for isotropic compressible solids[END_REF], it has been proven that with the equations of state (5) and (6), the equations are hyperbolic for any $\tilde a$ such that $-1 \le \tilde a \le 0.5$. The relation given in [START_REF] Favrie | Dynamics of shock waves in elastic-plastic solids[END_REF] yields the following expression for the deviatoric part:
$$S = -\mu\,\frac{\rho}{\rho_0}\left(\frac{1 - 2\tilde a}{3}\, j_1\left\{ g - \frac{j_1}{3}\, I\right\} + \tilde a\left\{ g^2 - \frac{j_2}{3}\, I\right\}\right). \tag{7}$$
One can notice that for the value $\tilde a = -1$, the equation of state describes neo-Hookean solids. Its expression is the following:
$$e^e(G) = \frac{\mu}{4\rho_0}\left(j_1^2 - j_2 - 6\right). \tag{8}$$
The energy (8) is, in particular, suitable for the description of jelly-type materials. In the case of metals, the value $\tilde a = 0.5$ can be chosen, for which the equation of state becomes:
$$e^e(G) = \frac{\mu}{8\rho_0}\left(j_2 - 3\right). \tag{9}$$
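To illustrate how the deviatoric stress is evaluated in practice, here is a minimal numpy sketch of relation (7); the function name and argument layout are illustrative choices, and no particular material data are implied.

```python
import numpy as np

def deviatoric_stress(G: np.ndarray, rho: float, rho0: float,
                      mu: float, a_tilde: float) -> np.ndarray:
    """Deviatoric stress S from the Finger tensor G, relation (7).
    a_tilde = -1 gives the neo-Hookean case; a_tilde = 0.5 suits metals."""
    g = G / np.linalg.det(G) ** (1.0 / 3.0)   # unimodular part g = G/|G|^(1/3)
    j1, j2 = np.trace(g), np.trace(g @ g)
    I = np.eye(3)
    return -mu * (rho / rho0) * (
        (1.0 - 2.0 * a_tilde) / 3.0 * j1 * (g - j1 / 3.0 * I)
        + a_tilde * (g @ g - j2 / 3.0 * I))
```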
Viscoplasticity modelling
An important class of hyperbolic models describing the plastic behavior of materials under large stresses has been proposed, for example, in [START_REF] Godunov | Elements of continuum mechanics[END_REF][START_REF] Godunov | Elements of continuum mechanics and conservation laws[END_REF][START_REF] Godunov | Thermodynamically consistent nonlinear model of elastoplastic Maxwell medium[END_REF][START_REF] Barton | An Eulerian finite-volume scheme for large elastoplastic deformations in solids[END_REF]. An extension of this approach has been proposed in [START_REF] Favrie | Solid-fluid diffuse interface model in cases of extreme deformations[END_REF][START_REF] Favrie | Mathematical and numerical model for nonlinear viscoplasticity[END_REF][START_REF] Favrie | Dynamics of shock waves in elastic-plastic solids[END_REF][START_REF] Ndanou | Multi-solid and multi-fluid diffuse interface model: Applications to dynamic fracture and fragmentation[END_REF] to include material yield criteria (Von Mises). The relaxation terms are constructed in such a way that they are compatible with the mass conservation law and consistent with the second law of thermodynamics. The Von Mises yield limit is reached at the end of the relaxation step. The resulting model is of Maxwell type, where the intensity of the shear stress decreases during the relaxation.
We use the formulation proposed in [START_REF] Favrie | Mathematical and numerical model for nonlinear viscoplasticity[END_REF]. The governing equations for $e^\beta$ are now written as follows:
$$\frac{De^\beta}{Dt} + \left(\frac{\partial u}{\partial x}\right)^T e^\beta = -\frac{1}{\tau_{rel}}\, R\, e^\beta, \tag{10}$$
where $\tau_{rel}$ corresponds to a relaxation time and R is a symmetric tensor ($R = R^T$). As the Finger tensor G is linked to the local cobasis $e^\beta$, it is possible to write the governing equation for G by using [START_REF] Gavrilyuk | An example of a one-parameter family of rank-one convex stored energies for isotropic compressible solids[END_REF].
$$\frac{DG}{Dt} + \left(\frac{\partial u}{\partial x}\right)^T G + G\left(\frac{\partial u}{\partial x}\right) = -\frac{1}{\tau_{rel}}\,(GR + RG). \tag{11}$$
In [START_REF] Favrie | Mathematical and numerical model for nonlinear viscoplasticity[END_REF], an expression has been proposed for the tensor R that ensures the thermodynamic compatibility of the model: $R = -aS$, with S being the deviatoric part of the stress tensor derived from (6):
$$S = -\mu\,\frac{\rho}{\rho_0}\left(\frac{1 - 2\tilde a}{3}\, j_1\left\{ g - \frac{j_1}{3}\, I\right\} + \tilde a\left\{ g^2 - \frac{j_2}{3}\, I\right\}\right). \tag{12}$$
The relaxation step is performed after the hyperbolic step: there is no space variation during the relaxation process. The derivative D/Dt should be replaced by the partial derivative with respect to time:
$$\frac{D}{Dt} = \frac{\partial}{\partial t}.$$
In the following, we write
$$\frac{\partial}{\partial t} = \frac{d}{dt}.$$
We have to solve the following relaxation equation for each cell:
$$\frac{dG}{dt} = \frac{a}{\tau_{rel}}\,(GS + SG) = \frac{2a}{\tau_{rel}}\,(GS). \tag{13}$$
Von Mises yield criterion
The Von Mises criterion implies that the material starts to yield when the corresponding yield function (denoted f(S)) becomes positive:
$$f(S) = S : S - \frac{2}{3}\,\sigma_Y^2. \tag{14}$$
Here $\sigma_Y$ is the yield strength. When the yield function is negative, the material has an elastic behavior; in this case, the relaxation time $\tau_{rel}$ becomes infinite. If the yield function is positive, we have to relax the deformations in such a way that the yield surface is recovered at the end of the relaxation step.
The value of a is taken as
$$a = \frac{1}{2}\,(S : S)^{1/2}. \tag{15}$$
The following expression for the relaxation time is used [START_REF] Favrie | Mathematical and numerical model for nonlinear viscoplasticity[END_REF]:
$$\frac{1}{\tau_{rel}} =
\begin{cases}
\dfrac{1}{\tau_0}\left(\dfrac{S : S - \frac{2}{3}\sigma_Y^2}{\sigma_Y^2}\right)^{\!n}, & \text{if } \dfrac{S : S - \frac{2}{3}\sigma_Y^2}{\sigma_Y^2} > 0,\\[3mm]
0, & \text{if } \dfrac{S : S - \frac{2}{3}\sigma_Y^2}{\sigma_Y^2} \le 0.
\end{cases}$$
The values of the characteristic time $\tau_0$ and the exponent n will be chosen later. This expression is analogous to the Odquist law [START_REF] Lemaitre | Mécanique des matériaux solides[END_REF].
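A minimal sketch of the yield test and of the resulting relaxation rate, following (14)-(15) and the law above, could read as follows (Python/numpy; function names are illustrative).

```python
import numpy as np

def a_coef(S: np.ndarray) -> float:
    """a = (1/2) (S : S)^(1/2), relation (15)."""
    return 0.5 * np.sqrt(np.tensordot(S, S))

def inv_tau_rel(S: np.ndarray, sigma_y: float, tau0: float, n: float) -> float:
    """1/tau_rel: zero inside the elastic domain (f(S) <= 0), finite and
    increasing with the normalized overstress outside it."""
    overstress = (np.tensordot(S, S) - 2.0 / 3.0 * sigma_y**2) / sigma_y**2
    return overstress**n / tau0 if overstress > 0.0 else 0.0
```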
Numerical treatment
It has been proven that system (1) is hyperbolic when closed by the equations of state (5) and (6) ([START_REF] Ndanou | The piston problem in hyperelasticity with the stored energy in separable form[END_REF], [START_REF] Gavrilyuk | An example of a one-parameter family of rank-one convex stored energies for isotropic compressible solids[END_REF]). The full system admits 7 characteristic eigenfields corresponding to 2 longitudinal waves, 4 shear waves and a contact discontinuity. The resolution of the Riemann problem is not straightforward. In order to simplify it, a numerical splitting is performed on the full system. This method has been proposed in [START_REF] Favrie | A thermodynamically compatible splitting procedure in hyperelasticity[END_REF]. The full system is split into three sub-models, each of which is hyperbolic. The first sub-system deals with the longitudinal waves and the contact discontinuity, while the other sub-systems deal with the shear waves. The numerical splitting simplifies the solution of the Riemann problem at each cell edge: each sub-model admits the propagation of only three waves, so an HLLC-type Riemann solver can be used to compute the fluxes. The details of the splitting for the multi-component and multi-dimensional cases are presented in [START_REF] Hank | Modeling hyperelasticity in non equilibrium multiphase flows[END_REF]. The three sub-models are written hereafter (in the one-dimensional case).
\begin{aligned}
&\frac{\partial \alpha_k}{\partial t} + u_I \frac{\partial \alpha_k}{\partial x} = 0, \\
&\frac{\partial (\alpha\rho)_k}{\partial t} + \frac{\partial (\alpha\rho u)_k}{\partial x} = 0, \\
&\frac{\partial (\alpha\rho u)_k}{\partial t} + \frac{\partial (\alpha\rho u^2 - \alpha\sigma_{11})_k}{\partial x} = -\sigma_{11,I} \frac{\partial \alpha_k}{\partial x}, \\
&\frac{\partial (\alpha\rho v)_k}{\partial t} + \frac{\partial (\alpha\rho u v)_k}{\partial x} = 0, \\
&\frac{\partial (\alpha\rho w)_k}{\partial t} + \frac{\partial (\alpha\rho u w)_k}{\partial x} = 0, \\
&\frac{\partial (\alpha\rho E)_k}{\partial t} + \frac{\partial (\alpha\rho E u - \alpha\sigma_{11} u)_k}{\partial x} = -\sigma_{11,I}\, u_I \frac{\partial \alpha_k}{\partial x}, \\
&\frac{\partial (\alpha a^\beta)_k}{\partial t} + \frac{\partial (\alpha a^\beta u)_k}{\partial x} = 0, \quad \beta = 1, 2, 3, \\
&\frac{\partial b^\beta_k}{\partial t} + u_k \frac{\partial b^\beta_k}{\partial x} = 0, \quad \beta = 1, 2, 3, \\
&\frac{\partial c^\beta_k}{\partial t} + u_k \frac{\partial c^\beta_k}{\partial x} = 0, \quad \beta = 1, 2, 3.
\end{aligned} \qquad (16)
System ( 16) deals with the longitudinal waves and the contact discontinuity. The variables with the subscript 'I' are used to identify the interface variables. These interface quantities are obtained by solving the Riemann problem. The longitudinal sound speed is given by the following expression:
c_k^L = \sqrt{ \left. \frac{\partial p_k}{\partial \rho_k} \right|_{\eta_k} - \frac{\partial S_{11k}}{\partial \rho_k} - \frac{1}{\rho_k} \sum_{\beta=1}^{3} \frac{\partial S_{11k}}{\partial a_k^\beta}\, a_k^\beta }. \qquad (17)
The sub-systems for transverse waves are:
\begin{aligned}
&\frac{\partial \alpha_k}{\partial t} = 0, \qquad
\frac{\partial (\alpha\rho)_k}{\partial t} = 0, \qquad
\frac{\partial (\alpha\rho u)_k}{\partial t} = 0, \\
&\frac{\partial (\alpha\rho v)_k}{\partial t} - \frac{\partial (\alpha\sigma_{12})_k}{\partial x} = -\sigma_{12,I} \frac{\partial \alpha_k}{\partial x}, \qquad
\frac{\partial (\alpha\rho w)_k}{\partial t} = 0, \\
&\frac{\partial (\alpha\rho E)_k}{\partial t} + \frac{\partial (-\alpha\sigma_{12} v)_k}{\partial x} = -\sigma_{12,I}\, v_I \frac{\partial \alpha_k}{\partial x}, \\
&\frac{\partial (\alpha a^\beta)_k}{\partial t} + (\alpha b^\beta)_k \frac{\partial v_k}{\partial x} = 0, \qquad
\frac{\partial b^\beta_k}{\partial t} = 0, \qquad
\frac{\partial c^\beta_k}{\partial t} = 0, \qquad \beta = 1, 2, 3,
\end{aligned}

\begin{aligned}
&\frac{\partial \alpha_k}{\partial t} = 0, \qquad
\frac{\partial (\alpha\rho)_k}{\partial t} = 0, \qquad
\frac{\partial (\alpha\rho u)_k}{\partial t} = 0, \qquad
\frac{\partial (\alpha\rho v)_k}{\partial t} = 0, \\
&\frac{\partial (\alpha\rho w)_k}{\partial t} - \frac{\partial (\alpha\sigma_{13})_k}{\partial x} = -\sigma_{13,I} \frac{\partial \alpha_k}{\partial x}, \\
&\frac{\partial (\alpha\rho E)_k}{\partial t} + \frac{\partial (-\alpha\sigma_{13} w)_k}{\partial x} = -\sigma_{13,I}\, w_I \frac{\partial \alpha_k}{\partial x}, \\
&\frac{\partial (\alpha a^\beta)_k}{\partial t} + (\alpha c^\beta)_k \frac{\partial w_k}{\partial x} = 0, \qquad
\frac{\partial b^\beta_k}{\partial t} = 0, \qquad
\frac{\partial c^\beta_k}{\partial t} = 0, \qquad \beta = 1, 2, 3.
\end{aligned} \qquad (18)
The expressions of the transverse sound speeds are given hereafter:
c_k^{t1} = \sqrt{ -\frac{1}{\rho_k} \sum_{\beta=1}^{3} \frac{\partial S_{12k}}{\partial a_k^\beta}\, b_k^\beta }, \qquad
c_k^{t2} = \sqrt{ -\frac{1}{\rho_k} \sum_{\beta=1}^{3} \frac{\partial S_{13k}}{\partial a_k^\beta}\, c_k^\beta }. \qquad (19)
Both sub-systems (18) deal with the shear waves and can be solved simultaneously. The three sub-models correspond to the continuous limit of the discrete models obtained by integrating the pure solid equations over a multiphase control volume. Each sub-system is hyperbolic (see [START_REF] Hank | Modeling hyperelasticity in non equilibrium multiphase flows[END_REF] for details). The integration scheme is a first-order finite volume Godunov-type scheme. The fluxes must be calculated at each cell edge. To do this, an HLLC-type solver is used [START_REF] Toro | Riemann solvers and numerical methods for fluid dynamics: a practical introduction[END_REF].
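For readers unfamiliar with HLLC solvers, the sketch below illustrates the three-wave flux construction on the 1D Euler equations. It is a generic textbook example (after Toro), not the actual solver used here, which additionally carries the stress σ11 and the geometric variables of (16) and (18).

import numpy as np

def hllc_flux_euler(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
    # generic HLLC flux for the 1D Euler equations (illustrative only)
    def cons_and_flux(rho, u, p):
        E = p / (gamma - 1.0) + 0.5 * rho * u**2
        U = np.array([rho, rho * u, E])
        F = np.array([rho * u, rho * u**2 + p, (E + p) * u])
        return U, F, E

    UL, FL, EL = cons_and_flux(rhoL, uL, pL)
    UR, FR, ER = cons_and_flux(rhoR, uR, pR)
    cL, cR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)

    # Davis-type estimates for the left and right wave speeds
    SL = min(uL - cL, uR - cR)
    SR = max(uL + cL, uR + cR)
    # contact (middle) wave speed
    SM = (pR - pL + rhoL * uL * (SL - uL) - rhoR * uR * (SR - uR)) \
         / (rhoL * (SL - uL) - rhoR * (SR - uR))

    if SL >= 0.0:
        return FL
    if SR <= 0.0:
        return FR

    def star_state(rho, u, p, E, S):
        coef = rho * (S - u) / (S - SM)
        Estar = coef * (E / rho + (SM - u) * (SM + p / (rho * (S - u))))
        return np.array([coef, coef * SM, Estar])

    if SM >= 0.0:
        return FL + SL * (star_state(rhoL, uL, pL, EL, SL) - UL)
    return FR + SR * (star_state(rhoR, uR, pR, ER, SR) - UR)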
After the hyperbolic step solving (16) and (18), the relaxation terms are treated. A fourth-order Runge-Kutta scheme is used because of the stiffness of the right-hand-side terms. Two versions of the code have been developed, a 3D code and a 2D axisymmetric version; both are parallel. The parallelization is performed with the domain decomposition method, using the OpenMPI library (open source Message Passing Interface).
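A minimal sketch of this relaxation sub-step is given below. It is illustrative only: the functions stress, inv_tau and a_coef stand for (12), the Odquist law and (15), and time sub-cycling may be needed given the stiffness.

def relax_step_rk4(G, stress, inv_tau, a_coef, dt):
    # one classical RK4 step for the relaxation ODE (13),
    # dG/dt = (a / tau_rel)(GS + SG), applied cell by cell
    def rhs(G):
        S = stress(G)                       # deviatoric stress from (12)
        return inv_tau(S) * a_coef(S) * (G @ S + S @ G)

    k1 = rhs(G)
    k2 = rhs(G + 0.5 * dt * k1)
    k3 = rhs(G + 0.5 * dt * k2)
    k4 = rhs(G + dt * k3)
    return G + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)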
Validations
The aim of this section is the validation of the elastic-plastic model. Two test cases are considered in this section: high and low velocity impacts.
Impact of a copper rod
The plastic deformation induced by the normal impact of a rod is a classical problem of impact solid dynamics ( [START_REF] Taylor | The testing of materials at high rates of loading[END_REF][START_REF] Taylor | The use of flat-ended projectiles for determining dynamic yield stress. I. Theoretical considerations[END_REF]). We use experimental data provided in [START_REF] Forde | Symmetrical Taylor impact studies of copper[END_REF], where a symmetric rod-on-rod impact has been studied. Symmetric rod-on-rod impact at velocity V is equivalent to a "classical" impact at velocity V /2. Symmetric impact allows us not to consider properties of the impacted surface which can be important in the case of the "classical" Taylor impact.
Initial configuration
The initial configuration is presented in Figure 1. The final physical time is 368 µs. We are interested in the study of the temporal evolution of the shape of the copper rod. The numerical results are then compared to those of [START_REF] Forde | Symmetrical Taylor impact studies of copper[END_REF], wherein the authors measure the copper rod radius as a function of the distance from the impact interface. Two materials are used for these computations: copper and air. The physical characteristics of both components are given in Table 1.
Table 1: stiffened gas parameters and physical characteristics of copper and air.

Material | γ    | P∞ (GPa) | µ (GPa) | σY (MPa) | ρ0 (kg/m³) | τ0 (s) | n
Copper   | 4.54 | 29.9     | 60      | 450      | 8924       | 6·10⁻⁶ | 2
Air      |      |          |         |          |            |        |
The properties of the stiffened gas equation of state are determined by using the Russian Shock wave database www.ficp.ac.ru/rusbank/.
Simulation results
The available experimental data are given up to 68 µs. We chose to run the simulation until a stationary state is reached. This allows us to measure the final dimensions of the rod (final length, final undeformed length) and, especially, to compare with the Taylor theory.
Comparison with the experimental results
The rod profiles obtained at different instants are compared to those given in [START_REF] Forde | Symmetrical Taylor impact studies of copper[END_REF], where the authors present high-speed photographs of the rod-on-rod impact. The rod profiles are extracted at various time instants from the moment of impact, as a function of the distance from the impact interface. In Figure 2, the experimental results are compared to the numerical ones. The numerical profiles are obtained by extracting the contours of the copper volume fraction; the value 0.5 of these contours corresponds to the position of the copper/air interface.
The results presented in Figure 2 show a good agreement, particularly regarding the global shape of the rod. The rod radius at the impact is underestimated by the model, especially during the first instants, when the deformation is mainly located near the impact interface. The gap between the experimental and numerical results decreases with time; it can be explained by the fact that we did not use work-hardening in the elastic-plastic model. Nevertheless, the error is quite small, about 1 mm. In Figure 3, the rod radius at the impact interface is plotted as a function of time, as well as the total rod length. The final radius is reached after 100 µs; its value is 10.74 mm, and the total rod length tends to the value of 73.1 mm. The qualitative evolution of the copper rod profile is presented in order to appreciate the deformation induced by the impact. Figure 4 shows the deformation of the copper rod due to the impact at different instants. It allows us to notice the appearance of 'shoulders' on the rod shape (see the rod shape at instant 128 µs in Figure 4). The rod shape tends to a stationary state after about 200 µs. It is interesting to compare the resulting final dimensions with the Taylor analysis [START_REF] Taylor | The use of flat-ended projectiles for determining dynamic yield stress. I. Theoretical considerations[END_REF]. Let L1 be the final length of the rod and X the undeformed rod length. The Taylor theory links the final rod dimensions to the features of the impact:
\frac{\sigma_Y}{\rho_p V^2} = \frac{L - X}{2 \left(L - L_1\right) \ln\left(L/X\right)} \qquad (20)
The simulation gives the following values: L1 = 73.1 mm and X = 39 mm. This yields

\frac{L - X}{2 \left(L - L_1\right) \ln\left(L/X\right)} \approx 1.20.
The gap between the theoretical value and the calculated one is equal to 6.8 per cent.
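As a quick numerical check (a sketch in Python; the use of the full impact velocity V = 197.5 m/s on the material side of (20) is our assumption), both sides of the relation can be recomputed from the quoted values:

import math

# geometric side of Taylor's relation (20), from the simulated dimensions
L, L1, X = 100.0e-3, 73.1e-3, 39.0e-3             # m
geometric = (L - X) / (2.0 * (L - L1) * math.log(L / X))

# material side sigma_Y / (rho_p V^2), with the copper data of Table 1
sigma_y, rho_p, V = 450.0e6, 8924.0, 197.5
material = sigma_y / (rho_p * V**2)

print(geometric)                                  # ~1.20
print(material)                                   # ~1.29
print(abs(material - geometric) / material)       # ~0.068, i.e. the 6.8 % gap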
Impact of a jelly-like material
Studied configuration
In this simulation, a sample of clay suspension of diameter D normally impacts a flat rigid surface. This kind of impact has been studied, in particular, in [START_REF] Luu | Drop impact of yield-stress fluids[END_REF]. Experiments were made on different surface types (smooth glass surface and super-hydrophobic surface). In this paper, we extract the results associated with the bentonite impacting a smooth glass surface. In particular, we are interested in the final diameter of the impacting drop. The studied configuration is presented in Figure 5. We define L0 as the diameter of the equivalent sphere of the same volume. Two materials are present in this configuration: the clay suspension cylinder (bentonite) and the surrounding air. The corresponding material parameters are given in Table 2. The initial diameter of the cylinder is equal to 14 mm. The associated value of L0 is 15.08 mm. The experimental results show a quasi-linear behavior of the maximal spread factor with respect to the impact velocity. The aim is then to see whether the elastic-plastic model can reproduce this evolution.
Numerical results
Several simulations have been performed with various values of the impact velocity (1 m/s, 2 m/s, 3 m/s, 4 m/s). The results of these computations are summarized in Figure 6, where the numerical results are compared to those of [START_REF] Luu | Drop impact of yield-stress fluids[END_REF]. The numerical results are in good agreement with the experimental ones. The points corresponding to the numerical results follow a straight line.
Let us define the time T0 such that

T_0 = \frac{L_0}{V_0}. \qquad (21)
Here L0 is the characteristic dimension of the droplet, and V0 is the impact velocity. The dimensionless time is then given by t/T0, where t corresponds to the physical time. Figure 7 shows the qualitative comparison of the numerical results with the experimental ones at several time instants (t/T0 = 0.07, 0.3, 0.6, 0.8).
Figure 1: Studied configuration: the impact of a copper rod on a solid wall. The impact velocity is 197.5 m/s. The rod diameter is D = 10 mm and the length L = 100 mm.
Figure 2: The copper rod radius plotted as a function of the distance from the impact interface at several instants.
Figure 3: Time evolution of the rod radius at the impact interface between 0 µs and 368 µs.
Figure 4: The copper rod at several instants from the moment of the impact: 0 µs, 32 µs, 64 µs, 96 µs, 128 µs, 160 µs, 192 µs and 320 µs. The shape of the rod changes during the impact and tends to a stationary state. The formation of 'shoulders' (of a new 'inflection' point in the rod shape) can clearly be observed at the instant 128 µs.
Figure 5: Studied configuration: the impact of a clay suspension cylinder at the velocity V.
Table 2: stiffened gas parameters and features of the clay suspension and air. Columns: Material, γ, P∞ (GPa), µ (Pa), σY (Pa), ρ0 (kg/m³), τ0 (s).
Acknowledgements
We thank Y. Forterre for providing us the experimental data, and P. Le Tallec and P. H. Maire for useful discussion. This work was partially supported by l'Agence Nationale de la Recherche, France (grant numbers ANR-14-ASTR-0016-01, ANR-11-LABX-0092, and ANR-11-IDEX-0001-02).
Figure 6: The maximal spread factor L m /L 0 as a function of the impact velocity in the case of a bentonite drop impacting smooth glass surface (comparison with the experimental data of [START_REF] Luu | Drop impact of yield-stress fluids[END_REF]).
Figure 7: Comparison of the experimental results (top) with the numerical results (bottom) at several time instants t/T0 (0.07, 0.3, 0.6, 0.8). The impact velocity is equal to 2 m/s.
The initial shape of the 'numerical' drop (see Figure 5) is not exactly the same as that of the real drop. The real drop is generated by a syringe driver and then falls freely under gravity. This can explain the 'pointed' crest observable on the real drop. The dynamic behaviour of the bentonite drop is well reproduced by the simulation. A good agreement with the experimental results can be observed, both for the maximal spread factor (Figure 6) and for the shape of the drop (Figure 7).
Conclusion
A visco-plastic Eulerian hyperbolic model is proposed. A simulation tool was developed to model simultaneously an arbitrary number of materials of different nature (fluids and solids). It was validated on impact experiments involving impact velocities varying from 1 m/s to 200 m/s. Very different materials were considered: clay suspension and copper. The numerical solution is in good agreement with the experimental data. The developed numerical model is also able to describe more complex phenomena like crack formation and spallation in materials. These results will be presented in future publications.
01768266 | en | ["phys.phys.phys-optics", "spi.opti", "spi.nano"] | 2024/03/05 22:32:16 | 2015 | https://laas.hal.science/hal-01768266/file/Calvez_ICTON2015_AlOx_for_3D_integration_vsubmit.pdf | S Calvez
email: [email protected]
G Lafleur
A Larrue
P.-F Calmon
A Arnoult
O Gauthier-Lafaye
G Almuneau
AlOx/AlGaAs technology for multi-plane integrated photonic devices
Keywords: III-V semiconductor oxidation, anisotropy, microdisk resonators
The III-V semiconductor /oxide technology has become the standard fabrication technique for Vertical-Cavity Surface-Emitting Lasers. Current research aims to further enhance the performance of these emitters and diversify the range of devices that can be made using this technology.
In this paper, we present a new model of the oxidation process which includes the anisotropic behaviour observed during conventional lateral oxidation. Furthermore, we demonstrate that this technology can be used as an innovative method to make micro-disk resonators with vertically-coupled access waveguides, an approach which can be generalised to fabricate other types of multi-plane photonic devices.
INTRODUCTION
The oxidation of III-V semiconductors is a process which selectively transforms high-aluminium-containing semiconductor alloys of high index of refraction (n AlAs ~2.9) into aluminium oxide (AlOx), an insulator with lower index of refraction (n AlOx ~1.6). Initially considered as a degradation and failure mechanism [START_REF] Dallesasse | III-V Oxidation: Discoveries and Applications in Vertical-Cavity Surface-Emitting Lasers[END_REF], this process has since then gained recognition and wide commercial success for its use in the fabrication of Vertical-Cavity Surface-Emitting Lasers where laterally-oxidized buried layers set the electrical injection profile and define the emission spatial mode content [START_REF] Choquette | Advances in selective wet oxidation of AlGaAs alloys[END_REF]. Recently, the research in this field has been primarily concerned with further expanding the capabilities of these oxide-confined VCSELs by improving their performance, for instance by increasing their modulation bandwidth thanks to multi-oxide layers [START_REF] Chang | Efficient, High-Data-Rate, Tapered Oxide-Aperture Vertical-Cavity Surface-Emitting Lasers[END_REF], or by extending their wavelength coverage to the mid-infrared region [START_REF] Laaroussi | Oxide confinement and high contrast grating mirrors for Mid-infrared VCSELs[END_REF]. However, a secondary strand of research activities on III-V semiconductor/AlOx technology has also emerged with the objective to further exploit the oxidation process [START_REF] Dallesasse | Oxidation of Al-bearing III-V materials: A review of key progress[END_REF] in wider range of devices including nonlinear optical converters [START_REF] Fiore | Phase matching using an isotropic nonlinear optical material[END_REF][7], transistor lasers [START_REF] Walter | Laser operation of a heterojunction bipolar light-emitting transistor[END_REF] or photovoltaic cells [START_REF] Pan | High efficiency group iii-v compound semiconductor solar cell with oxidized window layer[END_REF].
In this article, we present our recent contributions to the latter research strand. In particular, we report the characterisation and the analysis of the anisotropy of the oxide formation in AlGaAs. We also show how this technology can be exploited to create multiple-plane photonic devices and, more specifically, microdisk resonators that vertically-coupled to their access waveguides.
ALGAAS WET OXIDATION: AN ANISOTROPIC PROCESS
To-date, the oxidation of Al-containing III-V semiconductors (and AlGaAs in particular) has mostly been considered as an isotropic process. As a result, the oxidation of a thin (typically <100nm thick) layer is commonly treated as a one-dimensional phenomenon during which the lateral position of the oxide/semiconductor interface evolves as a function of the oxidation time. In essence, the established models are all based on the empirical law established by Deal and Grove [START_REF] Deal | General Relationship for the Thermal Oxidation of Silicon[END_REF] for the planar oxidation of silicon. Refinements have been added to take into account the effect of the finite thickness of the layer to be oxidized [START_REF] Osinski | Temperature and thickness dependence of steam oxidation of AlAs in cylindrical mesa structures[END_REF][12] and, to an extent, to include the first-order modification of the process dynamics resulting from the continuously varying perimeter of the oxide/semiconductor interface as the oxidation progresses [START_REF] Osinski | Temperature and thickness dependence of steam oxidation of AlAs in cylindrical mesa structures[END_REF][12] [START_REF] Alonzo | Effect of cylindrical geometry on the wet thermal oxidation of AlAs[END_REF].
A few reports have however highlighted that the process is actually anisotropic [START_REF] Vaccaro | AlAs oxidation process in GaAs/AlGaAs/AlAs heterostructures grown by molecular beam epitaxy on GaAs (n11) A substrates[END_REF][15] [START_REF] Chouchane | Local stressinduced effects on AlGaAs/AlOx oxidation front shape[END_REF], although evidence of this fact could also be found in earlier work [START_REF] Huffaker | Native-oxide defined ring contact for low threshold vertical-cavity lasers[END_REF] [START_REF] Choquette | Advances in selective wet oxidation of AlGaAs alloys[END_REF]. In particular, P.O. Vaccaro et al. observed that oxidized thin AlGaAs layers on (110) and (311)-oriented GaAs substrates present an in-plane three-fold-symmetry anisotropy [START_REF] Vaccaro | AlAs oxidation process in GaAs/AlGaAs/AlAs heterostructures grown by molecular beam epitaxy on GaAs (n11) A substrates[END_REF] [START_REF] Koizumi | Lateral wet oxidation of AlAs layer in GaAs/AlAs heterostructures grown by MBE on GaAs (n11) A substrates[END_REF]. We have also shown that the oxidation of (~500 nm)-thick layers of AlGaAs on conventional (100)-GaAs substrates leads to tapered vertical profiles [START_REF] Chouchane | Local stressinduced effects on AlGaAs/AlOx oxidation front shape[END_REF] whose taper angle is attributed to the embedded strain resulting from the ~7% volume reduction of AlOx compared to the AlGaAs material.
Here, we draw attention to the in-plane anisotropy observed upon oxidation of thin (68 nm-thick) Al 0.98 Ga 0.02 As layers on (100)-oriented GaAs wafers and propose an extension of the oxidation process model to capture this anisotropic behaviour.
To begin with, Figure 1 presents a sequence of infrared microscope images of the lateral oxidation of a 35 µm-diameter disk from the edges of a dry-etched circular mesa [START_REF] Almuneau | Real-time in situ monitoring of wet thermal oxidation for precise confinement in VCSELs[END_REF]. The sample was oxidised at 400°C in a reduced pressure environment (~0.5 atm.), using a H 2 /N 2 /H 2 O gas steam mixture generated by an evaporator-mixer system operating at 95°C. In the pictures, the [0 -1 1]-oriented sample cleaved edge is set along the horizontal axis, and the oxide part appears in white while the remaining (central) AlAs section is light grey. This change in intensity is induced by the change in the multilayer stack reflectivity at the observation wavelength (790 nm) and is caused by the modification of the optical path in the Al 0.98 Ga 0.02 As/AlOx layer upon oxidation. It can clearly be observed that the oxidation of the circular mesa tends towards a diamond-shaped aperture (t ox > 100 min), highlighting that the process is indeed anisotropic, with a faster reaction rate along the {0 -1 0} directions. To more accurately reproduce this anisotropic oxidation, we simulate the oxidation using a truly bi-dimensional model where the oxidation process is considered to be the combination of an anisotropic reaction, which takes place at the oxide-semiconductor interface, and an isotropic diffusion process, which accounts for the transfer of the reactants (and As-based by-products) from the outer part of the mesa to the reaction interface (and vice-versa).
Figure 2 shows the calculated oxidation contours (with a diffusion coefficient D = 20 µm²/min, a reaction rate coefficient k max = 0.12 µm/min and an anisotropy factor a aniso = 0.1) corresponding to the experimental oxidation process presented in Figure 1. The visual agreement between the experimental data and the numerical simulations confirms the appropriateness of the developed model.
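A minimal numerical sketch of such a reaction-diffusion front model is given below. It is illustrative only (not the authors' exact scheme): the grid, the steam-supply boundary, the |grad φ| ≈ 1 level-set approximation and the cos(4θ) form of the angular modulation are all our assumptions. It couples an isotropic diffusion of the oxidant with a four-fold anisotropic front advance, which is one simple way to evolve a circular mesa towards a diamond-shaped aperture as in Figure 1.

import numpy as np

# square grid around a 35 um-diameter mesa; lengths in um, times in min
n, R = 101, 17.5
x = np.linspace(-1.4 * R, 1.4 * R, n)
X, Y = np.meshgrid(x, x)
h = x[1] - x[0]

D, k_max, a_aniso = 20.0, 0.12, 0.1                  # um^2/min, um/min, -
theta = np.arctan2(Y, X)
k = k_max * (1.0 + a_aniso * np.cos(4.0 * theta))    # four-fold anisotropic rate

phi = np.sqrt(X**2 + Y**2) - R       # signed distance; phi > 0 = oxidized
C = np.zeros_like(phi)               # oxidant concentration
source = np.sqrt(X**2 + Y**2) > 1.2 * R              # steam supplied outside the mesa

dt = h**2 / (5.0 * D)                # explicit-diffusion stability limit
for _ in range(int(120.0 / dt)):     # ~2 h of oxidation
    lap = (np.roll(C, 1, 0) + np.roll(C, -1, 0) +
           np.roll(C, 1, 1) + np.roll(C, -1, 1) - 4.0 * C) / h**2
    C += dt * D * lap                # isotropic reactant diffusion
    C[source] = 1.0
    phi += dt * k * C                # front advance (|grad phi| ~ 1 assumed)
oxide = phi > 0.0
print("unoxidized core area (um^2):", (~oxide).sum() * h**2)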
VERTICALLY-COUPLED MICRODISK RESONATORS
The ability to predict and in-situ monitor the oxidation of III-V compounds opens an avenue to realise photonic devices relying on oxidation patterns of greater complexity than VCSELs [START_REF] Dallesasse | III-V Oxidation: Discoveries and Applications in Vertical-Cavity Surface-Emitting Lasers[END_REF] or straight waveguides [START_REF] Dallesasse | Oxidation of Al-bearing III-V materials: A review of key progress[END_REF]. As an proof of principle, we have recently made microdisk resonators with vertically-coupled access waveguides [START_REF] Calvez | Vertically Coupled Microdisk Resonators Using AlGaAs/AlOx Technology[END_REF]. The AlGaAs multi-layer stack is, in this case, constituted of two coupled GaAs waveguides whose coupling layer vertical structure includes two Al 0.98 Ga 0.02 As layers to be oxidized. The lower layer permits the introduction of the lateral confinement required to establish a buried rib-type access waveguide while the upper layer serves to significantly reduce the vertical coupling between the resonator and the remaining underlying slab waveguide. The transmission characteristics of a 75µm-diameter microdisk (see Figure 2) revealed a Q factor of ~4610 for the fundamental whispering gallery modes of the microdisk, corresponding to an attenuation coefficient in the disk of ~4.8 cm -1 and a coupling factor to the access waveguide of 64.8%.
CONCLUSIONS
We have reported a new model of the oxidation of III-V semiconductor that takes into account the anisotropy of the process and demonstrated that this technology can be used to create multi-plane photonic devices via the fabrication of micro-disk resonators with vertically-coupled access waveguides.
Figure 1: Sequence of infrared images of the anisotropic lateral oxidation of a 35 µm-diameter disk mesa.
Figure 2: Sequence of calculated two-dimensional oxidation profiles (in red) of a 35 µm-diameter disk (blue mesa) for regularly separated oxidation times ranging from 0 to 2 hours.
Figure 3: (a) Composite (IR, visible) image of a 75 µm-diameter microdisk resonator coupled to its access waveguide. (b) Transmission characteristic of this device.
ACKNOWLEDGEMENTS
The authors would like to acknowledge that this work was partly supported by the Centre National d'Etudes Spatial (CNES) and the French RENATECH network of micro-fabrication facilities. |
01768290 | en | ["phys.phys.phys-optics", "spi.opti", "spi.nano"] | 2024/03/05 22:32:16 | 2016 | https://laas.hal.science/hal-01768290/file/Calvez_SPIE2016_submission.pdf | S Calvez
G Lafleur
C Arlotti
A Larrue
P.-F Calmon
A Arnoult
G Almuneau
O Gauthier-Lafaye
III-V-semiconductor vertically-coupled whispering-gallery mode resonators made by selective lateral oxidation
Keywords: resonator, disk, whispering gallery mode, integration, oxidation, AlOx, GaAs
Integrated whispering-gallery mode resonators are attractive devices which have found applications as selective filters, low-threshold lasers, high-speed modulators, high-sensitivity sensors and even as nonlinear converters. Their performance is governed by the level of detrimental (scattering, bulk, bending) loss incurred and the usable loss represented by the coupling rate between the resonator and its access waveguide. Practically, the latter parameter can be more accurately controlled when the resonator lies above the access waveguide, in other words, when the device uses a vertical integration scheme. So far, when using such an integration technique, the process involved a rather technically challenging step being either a planarization or a substrate transfer step.
In this presentation, we propose and demonstrate an alternative method to fabricate vertically-coupled whispering-gallery mode resonators on III-V semiconductor epitaxial structures which has the benefit of being planarization-free and performed as single-side top-down process. The approach relies on a selective lateral thermal oxidation of aluminum-rich AlGaAs layers to define the buried access waveguide and enhance the vertical confinement of the whispering-gallery mode into the resonator. As a first experimental proof-of-principle of this approach, 75 µm-diameter micro-disk devices exhibiting quality factor reaching ~4500 have been successfully made.
INTRODUCTION
Integrated whispering-gallery mode resonators are compact devices which can exhibit greatly enhanced intra-cavity fields at selected wavelengths, the exploitation of which makes them attractive for applications such as narrowbandwidth filters, low-threshold lasers, high-speed modulators, high-sensitivity sensors or low-power nonlinear converters [START_REF] Matsko | Optical resonators with whispering gallery modes I: basics[END_REF][START_REF] Feng | Silicon photonics: from a microresonator perspective[END_REF][START_REF] Ward | WGM microresonators: sensing, lasing and fundamental optics with microspheres[END_REF] . As indicated above, their usefulness is essentially governed by the ability to obtain high quality factors. To do so, it is necessary to simultaneously minimize the amount of detrimental (scattering, bulk, bending) loss in the resonator and obtain the "so-called" critical coupling condition where the latter loss level is equal to the coupling rate between the resonator and its access waveguide(s). Since the resonator in/out coupling is commonly done in the form of an evanescent coupling scheme, it is easier to meet the latter condition with the technological approach which offers the most tolerant and controllable process. It turns out to be when the resonator lies above the access waveguide or, in other words, when the device is made by vertical integration [START_REF] Wen Tee | Fabrication-tolerant active-passive integration scheme for vertically coupled microring resonator[END_REF][START_REF] Ghulinyan | Oscillatory Vertical Coupling between a Whispering-Gallery Resonator and a Bus Waveguide[END_REF] . To implement such geometry in practice, the fabrication of a buried (access) waveguide with a planar top surface is needed. This task happens to be technologically complex, typically requiring a demanding technological step such as a planarization step [START_REF] Suzuki | Integrated-optic ring resonators with two stacked layers of silica waveguide on Si[END_REF][START_REF] Ghulinyan | Monolithic Whispering-Gallery Mode Resonators With Vertically Coupled Integrated Bus Waveguides[END_REF] or a substrate transfer [START_REF] Absil | Vertically coupled microring resonators using polymer wafer bonding[END_REF] .
In this paper, following our recent demonstration [START_REF] Calvez | Vertically Coupled Microdisk Resonators Using AlGaAs/AlOx Technology[END_REF] , we present a technologically simpler method to fabricate verticallycoupled micro-disk resonators in III-V semiconductors based on the use of a selective lateral oxidation step to define the buried access waveguide.
DESIGN
A schematic representation of a vertically-coupled microdisk under study is shown in Figure 1. The layered vertical structure is composed of an air/GaAs/Al 0.3 Ga 0.7 As/Al 0.98 Ga 0.02 As/Al 0.3 Ga 0.7 As resonator waveguide on top of an AlOx-confined buried access waveguide. The access waveguide stack is an asymmetric Al 0.3 Ga 0.7 As/GaAs/Al 0.6 Ga 0.4 As waveguide with an Al 0.98 Ga 0.02 As layer inserted in the Al 0.3 Ga 0.7 As upper cladding. The high-aluminum-containing layers will be selectively transformed into AlOx to form the waveguide aperture and enhance the vertical confinement of the resonator modes. The thicknesses of the GaAs cores (680 nm for the resonator and 400 nm for the access waveguide) were selected to obtain identical slab waveguide effective indexes. The chosen 330 nm Al 0.3 Ga 0.7 As separation leads the resonator supermode of the coupled slab waveguides to present a 1% (respectively 6.2%) intensity overlap with the core of the access waveguide slab when the Al 0.98 Ga 0.02 As layers are considered oxidized (respectively unoxidized).
The position (150 nm) and width of the AlOx aperture were chosen to achieve >3-µm-wide singlemode waveguides, to facilitate cleaved-facet injection and to be compatible with the (~1-µm) spatial resolution of the oxidation furnace monitoring system [START_REF] Almuneau | Realtime in situ monitoring of wet thermal oxidation for precise confinement in VCSELs[END_REF]. Figure 2 shows the effective index contrast and the maximum width of the aperture for singlemode operation as a function of the separation between the oxide layer and the waveguide core. For a separation of 150 nm, further waveguide calculations carried out using the 2×1D effective index method and a vectorial mode solver (WGModes) [START_REF] Fallahkhair | Vector Finite Difference Modesolver for Anisotropic Dielectric Waveguides[END_REF] show that the waveguide can be considered to remain singlemode up to an aperture width of ~3.7 µm. Neglecting the access slab waveguide, the evolution of the resonator mode effective index as a function of the microdisk diameter was also evaluated using a cylindrical geometry and a vectorial finite-difference mode solver (WGMS3D) [START_REF] Krause | Finite-Difference Mode Solver for Curved Waveguides With Angled and Curved Dielectric Interfaces[END_REF]. Figure 4 shows, as expected, that the resonator effective index reduces as the diameter decreases. This variation also influences the coupling between the resonator and the access waveguide (effective index n access ~ 3.267), as the coupling beat length shrinks from 28.7 µm at a diameter of 400 µm down to 6.6 µm for a 50-µm-diameter microdisk.
FABRICATION
The above-described layer structure was grown by molecular beam epitaxy on a standard (001) GaAs wafer. The devices were then fabricated using two lithographic and etching steps permitting the successive definition of the access guides and the resonators. As shown in Figure 5, the subsequent wet oxidation of both Al 0.98 Ga 0.02 As layers was performed at a rate of ~0.12 μm/min in a custom furnace under optical monitoring and was stopped after an oxidation extent of ~14 µm, leading to 3.7-µm-wide-apertured buried access waveguides. The wafer was then thinned down to ~150 µm and cleaved into samples which were subsequently mounted on custom Si-based sub-mounts. An image of a resulting microdisk is shown in Figure 6.
CHARACTERISATION AND ANALYSIS
The mounted devices were characterized using the setup shown in Figure 7. The transmission characteristics were recorded using a step-tunable laser with a central wavelength of 1.6 µm, a 10 pm spectral resolution and a 100 kHz linewidth. Figure 7 (right) shows the measured transmission characteristic of a 75-µm-diameter microdisk over one free spectral range. The intensity contrast (c = (I max - I min )/(I max + I min )) and loaded Q-factor (Q = λ/Δλ) are respectively measured to be 0.403 and 4450. To extract more meaningful parameters, namely the amplitude coupling coefficient (κ) and the intensity absorption coefficient (α), maps of the contrast and Q factors were calculated using the general equation [START_REF] Yariv | Universal relations for coupling of optical power between microresonators and dielectric waveguides[END_REF]:
T = \frac{t_{coupler}^2 + a_{disk}^2 - 2\, t_{coupler}\, a_{disk} \cos\varphi}{1 + t_{coupler}^2\, a_{disk}^2 - 2\, t_{coupler}\, a_{disk} \cos\varphi} \qquad (1)
where t coupler is the amplitude transmission coefficient of the coupling region (t coupler = (1 - κ²)^0.5), a disk stands for the round-trip amplitude transmission coefficient of the disk (a disk = exp(-α π D/2)), and φ corresponds to the phase accumulated over one round-trip along the resonator periphery. As can be readily seen from equation (1) and inferred from Figure 8, two solutions, (t coupler , a disk ) = (0.72, 0.93) and (t coupler , a disk ) = (0.93, 0.72), corresponding respectively to the over-coupled (low-loss) resonator (κ = 0.69, α = 6.2 cm⁻¹) and the under-coupled (high-loss) resonator (κ = 0.37, α = 27.8 cm⁻¹), can provide a match to the above-mentioned characteristic values (Q = 4450, c = 0.403). The measurements carried out to date do not allow discrimination between these two situations, although experimental techniques exist to alleviate this indeterminacy [START_REF] Lee | A General Characterizing Method for Ring Resonators Based on Low Coherence Measurement[END_REF][START_REF] Zhou | Characterisation of microring resonator optical delay and its dependence on coupling gap using modulation phase-shift technique[END_REF]. Nevertheless, a similar analysis was performed on devices with greater diameters (up to 300 µm) and a summary of the performance is provided in Figure 9. With a view to understanding the device performance limiting factors, the bending losses and the substrate-leakage-like losses induced by the presence of the slab access waveguide were evaluated by vectorial finite-difference mode analysis. As shown in Figure 10, the substrate loss dominates, with a value of 9.2 cm⁻¹ (compounded with the bending loss) for a 75-µm-diameter microdisk, rising to 15.0 cm⁻¹ for 300-µm-diameter disks. This also suggests that, in spite of a relatively thin separation layer, the devices operate in the under-coupling regime, possibly because of the unfavorable effective index mismatch between the resonator and the access waveguide.
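Equation (1) is symmetric under the exchange of t coupler and a disk, which is precisely why two (t, a) pairs fit the same measured response. The sketch below (illustrative only, assuming NumPy) evaluates both candidate solutions and converts a disk back to an attenuation coefficient for the 75-µm-diameter disk:

import numpy as np

def transmission(t, a, phi):
    # all-pass resonator transmission, equation (1)
    return (t**2 + a**2 - 2 * t * a * np.cos(phi)) / \
           (1 + (t * a)**2 - 2 * t * a * np.cos(phi))

phi = np.linspace(-np.pi, np.pi, 100001)
for t, a in [(0.72, 0.93), (0.93, 0.72)]:          # the two candidate solutions
    T = transmission(t, a, phi)
    contrast = (T.max() - T.min()) / (T.max() + T.min())
    kappa = np.sqrt(1.0 - t**2)                    # from t = (1 - kappa^2)^0.5
    alpha_cm = -2.0 * np.log(a) / (np.pi * 75e-4)  # a = exp(-alpha*pi*D/2), D in cm
    print(f"t={t}, a={a}: contrast={contrast:.2f}, "
          f"kappa={kappa:.2f}, alpha={alpha_cm:.1f} /cm")

Both runs print the same contrast (~0.41, close to the measured 0.403 given the rounding of t and a), but κ and α differ, reproducing the over-coupled and under-coupled pairs quoted above.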
CONCLUSIONS
A novel method exploiting selective lateral oxidation has been established to fabricate vertically-coupled whispering-gallery-mode resonators on III-V semiconductors. The first fabrication run led to 75-µm-diameter microdisks with a Q factor reaching 4450 at a wavelength of ~1600 nm. The performance of the current set of devices was identified to be limited by the leakage to the slab access waveguide. Future work will focus on the modification of the structure to reach higher Q-factors.
Figure 1. Device schematic (dark blue: GaAs; light blue: Al 0.3 Ga 0.7 As; light purple: Al 0.6 Ga 0.4 As; yellow: AlOx).
Figure 2. Access waveguide cross-section diagram and key characteristics (at a wavelength of 1.55 µm).
Figure 3. Effective indexes and mode lateral widths at a wavelength of 1.55 µm for the first two TE-polarized modes that the oxide-apertured waveguide might support.
Figure 4. Resonator mode effective index at a wavelength of 1.55 µm.
Figure 5. Evolution of the lateral extent of the oxide (white part on the images) as monitored by infrared reflectometry. The width of the mesa is 32 µm.
Figure 6. SEM picture of a 75-µm-diameter microdisk.
Figure 7. Characterization setup and typical close-to-resonance response of a 75-µm-diameter microdisk.
Figure 8. Calculated Q-factor (in log scale) and contrast maps for a 75-µm-diameter microdisk.
Figure 9. Calculated Q-factor and contrast maps for a 75-µm-diameter microdisk.
Figure 10. Calculated Q-factor and contrast maps for a 75-µm-diameter microdisk.
ACKNOWLEDGEMENTS
Clément Arlotti would like to acknowledge the Délégation Générale de l'Armement and the Centre National d'Etudes Spatiales for supporting his PhD studentship. The work was also partly supported by the Centre National d'Etudes Spatiales research Grant R et T: R-S13-LN-0001-025 and the French Renatech Network of cleanroom facilities. |
01768394 | en | ["info.info-ai", "info"] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01768394/file/sharp-paper-preprint.pdf | Alban Gaignard
email: [email protected]
Khalid Belhajjame
Hala Skaf-Molli
SHARP: Harmonizing cross-workflow provenance
Keywords: Reproducibility, Scientific Workflows, Provenance, Prov Constraints
PROV has been adopted by a number of workflow systems for encoding the traces of workflow executions. Exploiting these provenance traces is hampered by two main impediments. Firstly, workflow systems extend PROV differently to cater for system-specific constructs. The difference between the adopted PROV extensions yields heterogeneity in the generated provenance traces. This heterogeneity diminishes the value of such traces, e.g. when combining and querying provenance traces of different workflow systems. Secondly, the provenance recorded by workflow systems tends to be large, and as such difficult to browse and understand by a human user. In this paper, we propose SHARP, a Linked Data approach for harmonizing cross-workflow provenance. The harmonization is performed by chasing tuple-generating and equalitygenerating dependencies defined for workflow provenance. This results in a provenance graph that can be summarized using domain-specific vocabularies. We experimentally evaluate the effectiveness of SHARP using a real-world omic experiment involving workflow traces generated by the Taverna and Galaxy systems.
Introduction
Reproducibility has recently gained momentum in (computational) sciences as a means for promoting the understanding, transparency and, ultimately, the reuse of scientific experiments. This is particularly true in the life sciences, where Next Generation Sequencing (NGS) equipment produces tremendous amounts of omics data and leads to massive computational analyses (aligning, comparing, filtering, etc.). Life scientists urgently need reproducibility and reuse to avoid duplication of storage and computing efforts.
Pivotal to reproducibility is provenance [START_REF] Davidson | Provenance and scientific workflows: challenges and opportunities[END_REF], which documents the experiment, including information about the activities that were conducted, the agents that were involved, the resources and programs that were utilized, as well as the data artifacts that were used and generated. Several researchers have investigated the use of provenance as a means for tracing back the execution of an experiment (see e.g., [START_REF] Zhao | Using semantic web technologies for representing e-science provenance[END_REF][START_REF] Miles | The requirements of using provenance in e-science experiments[END_REF][START_REF] Altintas | Provenance collection support in the kepler scientific workflow system[END_REF][START_REF] Alper | Enhancing and abstracting scientific workflow provenance for data publishing[END_REF]). We note however that experiments may involve multiple scientists, each of whom is responsible for conducting and analyzing the execution of part of the overall experiment using his/her favorite data analysis tool (workflow system, programming or scripting language, etc.), which may be different from those used by the rest of the team. This is particularly the case for interdisciplinary projects involving scientists with different backgrounds and expertise. In order to exploit the provenance generated by the different data analysis tools utilized within the scope of an experiment, there is therefore a need for harmonizing and interlinking the provenance traces such tools recorded and generated. The adoption of the W3C PROV recommendations [START_REF] Missier | The w3c prov family of specifications for modelling provenance metadata[END_REF] (in particular the PROV-O ontology [START_REF] Lebo | Prov-o: The prov ontology[END_REF], given the increasing number of provenance-producing environments adopting semantic web technologies) by a number of data analysis tools has, to a certain extent, lessened the severity of the provenance harmonization problem. Yet, the fact that such environments use extensions that extend PROV differently means that there is a need for aligning the provenance traces generated by those tools. Moreover, the provenance graphs generated by those environments need to be interlinked by identifying the entities that refer to the same real-world entity.
Interlinking and harmonizing provenance data is essential to deliver a global account of what happened during scientific experiments. It is, however, by no means sufficient for promoting the understanding and re-usability of the experiment and its associated results. Indeed, the provenance graphs generated are often large and contain low-level, cumbersome information targeted at the consumption of machines. This calls for abstraction mechanisms that provide a human user with a global view of what happens in the experiment, by deriving from the raw provenance high-level, succinct information that helps users understand the experiment and the results of its execution in its entirety. In this paper, we propose SHARP, which addresses the above issues. We propose the following contributions:
-An approach for interlinking and harmonizing provenance traces recorded by different workflow systems based on PROV inferences.
- An application of provenance harmonization towards Linked Experiment Reports by using domain-specific annotations as in [START_REF] Gaignard | From scientific workflow patterns to 5-star linked open data[END_REF].
- An evaluation with a real-world omic use case illustrating the feasibility of SHARP.
The paper is organized as follows. Section 2 describes motivations and problem statement. Section 3 presents the harmonization of multiple PROV Graphs and its application towards Linked Experiment Reports. Section 4 reports our experimental results. Section 5 summarizes related works. Finally, conclusions and future works are outlined in Section 6.
Due to costly sequencing equipment and massively produced data, DNA sequencing is generally outsourced to third-party facilities. Therefore, part of the scientific experiment is conducted by the sequencing facility, which requires dedicated high-throughput computing infrastructures, and a second part is conducted by the scientists themselves to analyze and interpret the results of sequencing using traditional computing resources. Figure 2.1 illustrates a concrete example of such an experiment, which is composed of two workflows enacted by different workflow systems, namely Galaxy [START_REF] Afgan | The galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2016 update[END_REF] and Taverna [START_REF] Wolstencroft | The taverna workflow suite: designing and executing workflows of web services on the desktop, web or in the cloud[END_REF]. The first workflow (WF1), in blue in Figure 2.1, is implemented in Galaxy and addresses common DNA data pre-processing. This workflow takes as input two DNA sequences from two biological samples s1 and s2, represented in green. For each sample, the sequence data is stored in forward (.R1) and reverse (.R2) files. The first sample has been split by the sequencer into two parts, (.a) and (.b). The very first processing step consists in aligning (Alignment) short sequence reads onto a reference human genome (GRCh37). Then the two parts a and b are merged into a single file. The aligned reads are then sorted prior to genetic variant identification (Variant Calling, performed here with SAMtools mpileup). This primary analysis workflow finally produces a VCF file which lists all known genetic variations compared to the GRCh37 reference genome.
Fig. 2.1. Motivating example: a Galaxy pre-processing workflow run at the sequencing facility (WF1), followed by a Taverna analysis workflow run at the research lab (WF2).
The second workflow (WF2) is implemented with Taverna and highly depends on the scientific questions. It is generally conducted by life scientists, possibly from different research labs and with less computational needs. This workflow proceeds as follows. It first queries a database of known effects to associate a predicted effect (Variant effect prediction). Then all these predictions are filtered to select only those applying to the exon parts of genes (Exon filtering). The results obtained by the executions of such workflows allow the scientists to answer questions such as Q1: "From a set of gene mutations, which are common variants, and which are rare variants?", Q2: "Which alignment algorithm was used when predicting these effects?", or Q3: "A new version of a reference genome is available; which genome was used when predicting these effects?". While Q1 can be answered based on provenance tracking from WF1, Q2 and Q3 require an overall tracking of provenance at the scale of both the WF1 (Galaxy) and WF2 (Taverna) workflows.
While the two workflow environments used in the above experiment (Taverna and Galaxy) track provenance information conforming to the same W3C-standardized PROV vocabulary, there are unfortunately impediments that hinder its exploitation. i) The heterogeneity of the provenance languages used to encode workflow runs, despite the fact that they all extend the same PROV vocabulary, does not allow the user to issue queries that use and combine traces recorded by different workflow languages. ii) Heterogeneity aside, the provenance traces of workflow runs tend to be large, and thus cannot be utilized as they are to document the results of the experiment execution. We show how the above issues can be addressed by i) applying graph saturation techniques and PROV inferences to overcome vocabulary heterogeneity, and ii) summarizing harmonized provenance graphs for life-science experiment reporting purposes.
Harmonizing multiple PROV Graphs
Faced with the heterogeneity of the provenance vocabularies, we could use classical data integration approaches such as peer-to-peer or mediator-based data integration [START_REF] Doan | Principles of Data Integration[END_REF]. Both options are expensive since they require the specification of schema mappings, which often demand heavy human input. In this paper, we explore a third and cheaper approach that exploits the fact that many of the provenance vocabularies used by workflow systems extend the W3C PROV-O ontology. This means that such vocabularies already come with (implicit) mappings between the concepts and relationships they use and those of the W3C PROV-O. Of course, not all the concepts and relationships used by individual systems will be catered for in PROV. Still, this solution remains attractive because it does not require any human input, since the constraints (mappings) are readily available. We show in this section how provenance traces that are encoded using different PROV extensions can be harmonized by capitalizing on such constraints.
Tuple-Generating Dependencies
Central to our approach to harmonizing provenance traces is the saturation operation. Given a possibly disconnected provenance RDF graph G, the saturation process generates a saturated graph G∞ obtained by repeatedly applying some rules to G until no new triple can be inferred. We distinguish between two kinds of rules. OWL entailment rules include, among other things, rules for deriving new RDF statements through the transitivity of class and property relationships. PROV constraints [START_REF] Cheney | Constraints of the provenance data model[END_REF] are of particular interest to us as they encode inferences and constraints that need to be satisfied by provenance traces, and can as such be used for deriving new RDF provenance triples.
In this section, we examine such constraints by identifying those that are of interest when harmonizing the provenance traces of workflow executions, and show (when deemed useful) how they can be translated into SPARQL queries for saturation purposes. It is worth noting that the W3C Provenance Constraints document presents the inferences and constraints assuming a relational-like model with possibly relations of arity greater than 2. We adapt these rules to the context of RDF, where properties (relations) are binary. For space limitations, we do not show all the inference rules that can be implemented in SPARQL; we focus instead on representative ones. We identify three categories of rules with respect to expressiveness: (i) rules that contain only universal variables, (ii) rules that contain existential variables, and (iii) rules making use of n-ary relations (with n ≥ 3). The latter is interesting, since RDF reification is needed to represent such relations. For each exemplary rule, we present the rule as a tuple-generating dependency (TGD) [START_REF] Abiteboul | Foundations of Databases[END_REF], and then show how we encode it in SPARQL. A TGD is a first-order logic formula ∀x̄,ȳ φ(x̄, ȳ) → ∃z̄ ψ(ȳ, z̄), where φ(x̄, ȳ) and ψ(ȳ, z̄) are conjunctions of atomic formulas.
Transitivity of alternateOf. Alternate-Of is a binary relation that associates two entities e1 and e2 to specify that the two entities present aspects of the same thing. The following rule states that this relation is transitive, and it can be encoded as a SPARQL CONSTRUCT query in a straightforward manner: alternateOf(e1, e2), alternateOf(e2, e3) → alternateOf(e1, e3).
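As a sketch of how such a rule can be chased in practice, the query can be run, e.g., with the rdflib Python library (the input file name is a placeholder):

from rdflib import Graph

g = Graph()
g.parse("provenance.ttl", format="turtle")     # hypothetical input file

# transitivity of prov:alternateOf as a SPARQL CONSTRUCT rule
RULE = """
PREFIX prov: <http://www.w3.org/ns/prov#>
CONSTRUCT { ?e1 prov:alternateOf ?e3 }
WHERE { ?e1 prov:alternateOf ?e2 .
        ?e2 prov:alternateOf ?e3 . }
"""
for triple in g.query(RULE):                   # one saturation pass
    g.add(triple)                              # repeat until no new triple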
Inference of Usage and Generation from Derivation
The following rule states that if an entity e 2 was derived from an entity e 1 , then there exists an activity a, such that a used e 1 and generated e 2 .
wasDerivedFrom(e2, e1) → ∃a used(a, e1), wasGeneratedBy(e2, a).
Notice that, unlike the previous rule, the head of the above rule contains an existential variable, namely the activity a. To encode such a rule in SPARQL, we make use of blank nodes for the existential variables, as illustrated below.
CONSTRUCT { ?e_2 prov:wasGeneratedBy _:blank_node .
            _:blank_node prov:used ?e_1 }
WHERE { ?e_2 prov:wasDerivedFrom ?e_1 }

Inference of Usage and Generation from Derivation Using the Qualification Patterns
In the previous rule, derivation, usage and generation are represented using binary relationships, which do not pose any problem to be encoded in RDF. Note, however, that PROV-DM allows such relationships to be augmented with optional attributes; for example, a usage can be associated with a timestamp specifying the time at which the activity used the entity. The presence of extra optional attributes increases the arity of the relations, which can then no longer be represented using an RDF property. As a solution, PROV-O opts for the qualification patterns introduced in [START_REF] Dodds | Linked Data Patterns: A pattern catalogue for modelling, publishing, and consuming Linked Data[END_REF]. To illustrate this, Figure 3.1 shows how a qualified usage can be encoded in RDF. The following rule shows how the inference of usage and generation from derivation can be expressed when such relationships are qualified. It can also be encoded using a SPARQL CONSTRUCT query with blank nodes:

qualifiedDerivation(e2, d), provEntity(d, e1) → ∃a, u, g qualifiedUsage(a, u), provEntity(u, e1), qualifiedGeneration(e2, g), provActivity(g, a).
Equality-Generating Dependencies
As well as the tuple-generating dependencies, we need to consider equalitygenerating dependencies (EGDs), which are induced by uniqueness constraints.
An EGD is a first-order formula ∀x̄ φ(x̄) → (x1 = x2), where φ(x̄) is a conjunction of atomic formulas, and x1 and x2 are among the variables in x̄. We give below an example of an EGD, which is implied by the uniqueness of the generation that associates a given activity a with a given entity e.
wasGeneratedBy(gen1, e, a, attrs1), wasGeneratedBy(gen2, e, a, attrs2) → (gen1 = gen2)
Having defined an example EGD, we need to specify what it means to apply it (or chase it [START_REF] Fagin | Data exchange: semantics and query answering[END_REF]) when we are dealing with RDF data. The application of an EGD has three possible outcomes. To illustrate them, we will work on the above example EGD. Typically, the generations gen 1 and gen 2 will be represented by two RDF resources. We distinguish the following cases:
(i) gen1 is a non-blank RDF resource and gen2 is a blank node. In this case, we add to gen1 the properties that are associated with the blank node gen2, and remove gen2. (ii) gen1 and gen2 are two blank nodes. In this case, we create a single blank node gen to which we associate the properties obtained by unionizing the properties of gen1 and gen2, and we remove the two initial blank nodes. (iii) gen1 and gen2 are non-blank nodes that are different. In this case, the application of the EGD (as well as the whole saturation) fails. In general, we would not encounter this case if the initial workflow runs that we use as input are valid (i.e., they respect the constraints defined in the W3C PROV Constraints recommendation [START_REF] Cheney | Constraints of the provenance data model[END_REF]).
To select the candidate substitutions (line 5 of Algorithm 1), we express the graph patterns illustrated in the previous cases (i) and (ii) as a SPARQL query. This query retrieves candidate substitutions as blank nodes coupled with their substitute, i.e., another blank node or a URI. For each of the found substitutions (line 6), we merge the incoming and outgoing relations between the source node and the target node. This operation is done in two steps. First, we navigate through the incoming relations of the source node (line 9), copy them as incoming relations of the target node (line 10), and finally remove them from the source node (line 11). Second, we repeat this operation for the outgoing relations (lines 12 to 14). We repeat this process until we can't find any candidate substitutions.

Algorithm 1: EGD pseudo-code for merging blank nodes produced by PROV inference rules with existential variables.
Input: G': the provenance graph resulting from the application of the TGDs on G
Output: G'': the provenance graph with substituted blank nodes, when possible.
1  begin
2      G'' ← G'
3      substitutions ← new List
4      repeat
5          substitutions ← candidate substitutions in G'' (SPARQL query)
6          foreach (source, target) ∈ substitutions do
7              // merge the relations of the source node into the target node
8              incoming ← { (s, p, source) ∈ G'' }
9              foreach (s, p, source) ∈ incoming do
10                 G'' ← G'' ∪ { (s, p, target) }
11                 G'' ← G'' \ { (s, p, source) }
12             foreach (source, p, o) ∈ G'' do
13                 G'' ← G'' ∪ { (target, p, o) }
14                 G'' ← G'' \ { (source, p, o) }
15     until substitutions = ∅
16 end
Full provenance harmonization process
Multi-provenance linking. This process starts by linking the traces of the different workflow runs. Typically, the outputs produced by a run of a given workflow are used to feed the execution of a run of another workflow, as depicted in Figure 2.1.
The main idea consists in providing an owl:sameAs property between the PROV entities associated with the same physical files. The production of owl:sameAs statements can be automated as follows: i) generate a fingerprint of the files (SHA-512 is one of the recommended hashing functions); ii) produce the PROV annotations associating the fingerprint with the PROV entities; iii) generate, through a SPARQL CONSTRUCT query, the owl:sameAs relationships when fingerprints match. When applied to our motivating example (Figure 2.1), the PROV entity annotating the VCFFile produced by the Galaxy workflow becomes equivalent to the one given as input to the Taverna workflow. A PROV example associating a file name and its fingerprint is reported below:

<http://fr.symetric#c583bef6-de69-4caa-bc3a-00000000> a prov:Entity ;
    rdfs:label "my-variants.vcf"^^xsd:String ;
    crypto:sha512 "1d305986330304378f82b938d776ea0be48eda8210f7af6c152e8562cf6393b2f5edd452c22ef6fe8c729cb01eb3687ac35f1c5e57ddefc46276e9c60409276a"^^xsd:String .
The following SPARQL CONSTRUCT query can be used to produce the owl:sameAs relationships:

CONSTRUCT { ?x owl:sameAs ?y }
WHERE {
    ?x a prov:Entity .
    ?x crypto:sha512 ?x_sha512 .
    ?y a prov:Entity .
    ?y crypto:sha512 ?y_sha512 .
    FILTER( ?x_sha512 = ?y_sha512 )
}
Multi-provenance reasoning. Once the traces of the workflow runs have been linked, we saturate the obtained graph using OWL entailment rules. This operation can be performed using an existing OWL reasoner (e.g., [START_REF] Carroll | implementing the semantic web recommendations[END_REF][START_REF] Jena | Reasoners and rule engines: Jena inference support[END_REF]). We then repeatedly apply the TGDs and EGDs derived from the W3C PROV Constraints document, as illustrated in sections 3.1 and 3.2. The harmonization process terminates when we can no longer apply any existing TGD or EGD. This raises the question as to whether such a process always terminates. The answer is affirmative. Indeed, it has been shown in the W3C PROV Constraints document that the constraints are weakly acyclic, which guarantees the termination of the chasing process in polynomial time (see Fagin et al. [START_REF] Fagin | Data exchange: semantics and query answering[END_REF] for more details).
Application of provenance harmonization: domain-specific experiment reports
In this section we propose to exploit the previously harmonized provenance graphs by transforming them into Linked Experiment Reports. These reports are no longer machine-only-oriented: they benefit from a humanly tractable size and from domain-specific concepts. Several ontologies and controlled vocabularies have been proposed to capture and organize knowledge associated with in silico experiments.
Domain-specific vocabularies. Workflow annotations. P-Plan13 is an ontology aimed at representing the plans followed during a computational experiment.
Plans can be atomic or composite and are made of a sequence of processing Steps. Each Step represents an executable activity, and involves input and output Variables. P-Plan fits well in the context of multi-site workflows since it allows working at the scale of a site-specific workflow as well as at the scale of the global workflow. Domain-specific concepts and relations. To capture knowledge associated with the data processing steps, we rely on EDAM 14, which is actively developed in the context of the Bio.Tools registry, and which organizes common terms used in the field of bioinformatics. However, these annotations on processing tools do not capture the scientific context in which a workflow takes place. SIO 15, the Semantic science Integrated Ontology, has been proposed as a comprehensive and consistent knowledge representation framework to model and exchange physical, informational and processual entities. Since SIO initially focused on Life Sciences, and is reused in several Linked Data repositories, it provides a way to link the data routinely produced by PROV-enabled workflow environments to major linked open data repositories, such as Bio2RDF.
NanoPublications 16 are minimal sets of information to publish data as citable artifacts while taking into account the attribution and authorship. NanoPublications provide named graphs mechanisms to link Assertion, Provenance, and Publishing statements. In the remainder of this section, we show how fine-grained and machine-oriented provenance graphs can be summarized into NanoPublications.
Linked Experiment Reports. Based on harmonized multi-provenance graphs, we show how to produce NanoPublications as exchangeable and citeable scientific experiment reports. Figure 3.3 sketches how data artifacts and scientific context can be related to each other for the motivating scenario introduced in section 2. The expected Linked Experiment Report would be a NanoPublication as reported below. For the sake of simplicity we omitted the definition of namespaces, and we used the labels of SIO predicates instead of their identifiers.
Table 1: [before,after] metrics characterizing the impact of the provenance harmonization process. wDF refers to wasDerivedFrom properties and wIB refers to wasInfluencedBy.
The processing time of the OWL entailments, TGDs, and EGDs of the provenance harmonization process is close to 5 seconds, as shown in Table 1. This is negligible in the context of scientific workflows, which generally rely on possibly long batch job submissions. With respect to the inferred predicates, Table 1 also shows that the number of wasInfluencedBy (wIB) statements is large. In spite of their loose semantics, these inferred statements can be helpful for tracing data lineage in provenance graphs. Even when not present in the original PROV graph, SHARP was able to produce these common data lineage relations. We can also note that the harmonization process does not infer any wasDerivedFrom (wDF) relations. By design, the PROV inference regime does not allow the inference of new wasDerivedFrom relations, which means that particular attention should be paid to capturing this provenance relation initially.
Usage of semi-automatically produced NanoPublications
We ran the multi-site experiment of section 2 using the Galaxy and Taverna workflow management systems. The Galaxy workflow was designed in the context of the SyMeTRIC systems medicine project, and was run on the production Galaxy instance 18 of the BiRD bioinformatics infrastructure. The Taverna workflow was run on a desktop computer. Provenance graphs were produced by the Taverna built-in PROV feature, and by a dedicated Galaxy provenance capture tool 19.

Table 2: Most prominent predicates when considering the initial two PROV graphs and their harmonization (PROV++)
We executed the summarization query proposed in section 3.4 on the harmonized provenance graph. The resulting NanoPublication (assertion named graph) represents the input DNA sequences aligned to the GRCh37 human reference genome through an sio:is-variant-of predicate. It also links the annotated variants (Taverna WF output) with the preprocessed DNA sequences (Galaxy WF inputs). Related to the Q3 life-science question highlighted in section 2, this NanoPublication can be queried to retrieve, for instance, the reference genome used to select and annotate the resulting genetic variants.
Related Works
Data harmonization (integration) [START_REF] Doan | Principles of Data Integration[END_REF] and summarization [START_REF] Aggarwal | Graph data management and mining: A survey of algorithms and applications[END_REF] have been largely studied in different research domains. Our objective is not to invent yet another technique for integrating and/or summarizing data. Instead, we show how provenance constraint rules, domain annotations, and Semantic Web techniques can be combined to harmonize and summarize provenance data into linked experiment reports.
There have been several proposals and tools that tackle scientific reproducibility 20. For example, Reprozip [START_REF] Chirigati | Reprozip: Using provenance to support computational reproducibility[END_REF] captures operating system events that are then utilized to generate a workflow illustrating the events that happened and their sequences. While valuable, such proposals address neither the harmonization of provenance traces recorded by different analysis tools that utilize different PROV extensions, nor machine- and human-tractable experiment reports, as proposed in SHARP.
The Datanode ontology [START_REF] Daga | Describing semantic web applications through relations between data nodes[END_REF] proposes to harmonize data by describing relationships between data artifacts. Datanode makes it possible to present in a simple way dataflows that focus on the fundamental relationships existing between original, intermediary, and final datasets. Contrary to Datanode, SHARP uses existing PROV vocabularies and constraints to harmonize provenance traces, thereby reducing harmonization efforts.
LabelFlow [START_REF] Alper | Labelflow: Exploiting workflow provenance to surface scientific data provenance[END_REF] proposes a semi-automated approach for labeling data artifacts generated from workflow runs. Compared to LabelFlow, SHARP uses the existing PROV ontology and Semantic Web technology to connect and harmonize the dataflows. Moreover, LabelFlow is confined to single workflows, whereas SHARP targets collections of workflow runs produced by different workflow systems.
In previous work [START_REF] Gaignard | From scientific workflow patterns to 5-star linked open data[END_REF], we proposed PoeM to produce linked in silico experiment reports based on workflow runs. Like SHARP, PoeM leverages Semantic Web technologies and reference vocabularies (PROV-O, P-Plan) to generate provenance mining rules and finally assemble linked scientific experiment reports (Micropublications, Experimental Factor Ontology). SHARP goes steps further by proposing the harmonization of provenance traces recorded by different workflow systems.
Conclusions
In this paper, we presented SHARP, a Linked Data approach for harmonizing cross-workflow provenance. The resulting harmonized provenance graph can be exploited to run cross-workflow queries and to produce provenance summaries, targeting human-oriented interpretation and sharing. Our ongoing work includes deploying SHARP to be used by scientists to process their provenance traces or those associated with provenance repositories, such as ProvStore. For now, we work on multi-site provenance graphs with centralized inferences. Another exciting research direction would be to consider low-cost highly decentralized infrastructure for publishing NanoPublication as proposed in [START_REF]Decentralized provenance-aware publishing with nanopublications[END_REF].
Fig. 2.1: A multi-site genomics workflow, involving Galaxy and Taverna workflow environments.
Fig. 3.1: Example of a qualified relationship.
Fig. 3.2: Inferred statements (dashed arrows) resulting from the application of this rule.
Fig. 3.3: Expected experiment report, linking the most relevant multi-site workflow artifacts with domain-specific statements and scientific context.
DNA sequencers can decode genomic sequences in both forward and reverse directions which improves the accuracy of alignment to reference genomes.
BWA-mem: http://bio-bwa.sourceforge.net
PICARD: https://broadinstitute.github.io/picard/
SAMtools sort: http://www.htslib.org
https://www.w3.org/TR/rdf11-concepts/#dfn-blank-node
http://purl.org/net/p-plan
http://edamontology.org
http://sio.semanticscience.org
https://provenance.ecs.soton.ac.uk/store/
https://galaxy-bird.univ-nantes.fr/galaxy/
https://github.com/albangaignard/sharp-prov-toolbox
:head {
    ex:pub1 a np:Nanopublication .
    ex:pub1 np:hasAssertion :assertion1 ;
            np:hasAssertion :assertion2 .
    ex:pub1 np:hasProvenance :provenance .
    ex:pub1 np:hasPublicationInfo :pubInfo .
}
:assertion1 {
    ex:question a sio:Question ;
        sio:has-value "What are the effects of SNPs located in exons for study-Y samples" ;
        sio:is-supported-by ex:referenceGenome ;
        sio:is-supported-by ex:sample_001 ;
        sio:is-supported-by ex:annotatedVariants .
}
:assertion2 {
    ex:referenceGenome a sio:Genome .
    ex:sample_001 a sio:Sample ;
        sio:is-variant-of ex:referenceGenome ;
        sio:has-phenotype ex:annotatedVariants .
    ex:annotatedVariants sio:is-supported-by ex:referenceGenome .
}
:provenance { :assertion2 prov:wasDerivedFrom :harmonizedProvBundle . }
:pubInfo { ex:pub1 prov:wasAttributedTo ex:MyLab . }
To produce this NanoPublication, we identify a data lineage path in the multiple PROV graphs harmonized beforehand (as proposed in section 3). Since we identified prov:wasInfluencedBy as the most commonly inferred lineage relationship, we search for all data entities connected through this relationship. Then, when connected data entities are identified, we extract the relevant ones so that they can later be incorporated and annotated through new statements in the NanoPublication. The following SPARQL query illustrates how :assertion2 can be assembled from a matched path in the harmonized provenance graphs. The key point consists in relying on the SPARQL property path expression (prov:wasInfluencedBy)+ to identify all paths connecting data artifacts composed of one or more occurrences of the prov:wasInfluencedBy predicate. Such SPARQL queries could be programmatically generated based on P-Plan templates, as proposed in our previous work [START_REF] Gaignard | From scientific workflow patterns to 5-star linked open data[END_REF].
CONSTRUCT {
    GRAPH :assertion {
        ?ref_genome a sio:Genome .
        ?sample a sio:Sample ;
            sio:is-variant-of ?ref_genome ;
            sio:has-phenotype ?out .
        ?out rdfs:label ?out_label .
        ?out sio:is-supported-by ?ref_genome .
    }
}
WHERE {
    ?sample rdfs:label ?sample_label .
    FILTER (contains(lcase(str(?sample_label)), lcase("fastq"))) .
    ?ref_genome rdfs:label ?ref_genome_label .
    FILTER (contains(lcase(str(?ref_genome_label)), lcase("GRCh"))) .
    ?out (prov:wasInfluencedBy)+ ?sample .
    ?out tavernaprov:content ?out_label .
    FILTER (contains(lcase(str(?out_label)), lcase("exons"))) .
}
Experimental results and discussion
As a first evaluation, we ran two experiments. The first one evaluates the performance of harmonization in terms of execution time, number and nature of inferred relations. In a second experiment, we evaluated the ability of the system to answer the domain-specific questions of our motivating scenario.
Harmonization of heterogeneous PROV traces
In this experiment, we used provenance document of ProvStore 17 . Specifically, we selected three documents, namely P A (ID 113207), P B (ID 113206), and P C (ID 113263). These documents have different sizes from 10 to 666 triples and use different concepts and relations of PROV. We ran the provenance harmonization process as described in this paper using such documents on a classical desktop computer (4-cores CPU, 16GB of memory). We computed the mean time and standard deviation based on five executions of the harmonization, as well as the size of the provenance graph before and after the harmonization. based on the Galaxy API, the later transforms a user history of actions into PROV RDF triples. Table 2 presents a sorted count of the top-ten predicates in i) the Galaxy and Taverna provenance traces without harmonization, ii) these provenance traces after the first iteration of the harmonization process: |
01768401 | en | [
"info.info-ai"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01768401/file/sharp-demo-preprint.pdf | Alban Gaignard
email: [email protected]
Khalid Belhajjame
Hala Skaf-Molli
SHARP: Harmonizing Galaxy and Taverna workflow provenance
Keywords: Reproducibility, Scientific Workflows, Provenance, Prov Constraints
SHARP is a Linked Data approach for harmonizing cross-workflow provenance. In this demo, we demonstrate SHARP through a real-world omic experiment involving workflow traces generated by Taverna and Galaxy systems. SHARP starts by interlinking provenance traces generated by Galaxy and Taverna workflows and then harmonizes the interlinked graphs thanks to OWL and PROV inference rules. The resulting provenance graph can be exploited for answering queries across Galaxy and Taverna workflow runs.
Introduction
Imagine a system that allows scientists to answer queries like: Which parameters were used by my colleagues in their workflow that would explain my workflow results? Answering such queries requires the exploitation of provenance traces generated by different workflow systems. PROV has been adopted by a number of workflow systems for encoding the traces of workflow executions. However, workflow systems extend PROV differently, which yields heterogeneity in the generated provenance traces. This heterogeneity diminishes the value of such traces, e.g., when combining and querying provenance traces of different workflow systems.
In this demo, we present SHARP, an approach for interlinking and harmonizing provenance traces of different workflow systems using PROV inferences. We demonstrate SHARP through a real-world omic experiment involving workflow traces generated by the Galaxy and Taverna systems.
SHARP: Harmonizing multi-PROV Graphs
SHARP exploits the fact that the provenance vocabularies used by workflow systems extend the W3C PROV-O ontology. It uses reasoning to harmonize provenance traces thanks to the OWL entailment regime and PROV constraints [START_REF] Cheney | Constraints of the provenance data model[END_REF]. SHARP identifies three categories of PROV-constraint rules with respect to expressiveness: (i) rules that contain only universal variables, (ii) rules that contain existential variables, and (iii) rules making use of n-ary relations (with n ≥ 3). SHARP formalizes the constraints as tuple-generating dependencies (TGDs) and equality-generating dependencies (EGDs) [START_REF] Abiteboul | Foundations of Databases[END_REF], and then implements them as Jena rules.
The following example illustrates the usage and generation from derivation rule. It states that if an entity e2 was derived from an entity e1, then there exists an activity a such that a used e1 and generated e2:

wasDerivedFrom(e2, e1) → ∃a : used(a, e1), wasGeneratedBy(e2, a).

The overall provenance harmonization process consists in i) interlinking data artifacts through the owl:sameAs property, ii) applying OWL inferences, and iii) applying the PROV-constraint TGDs and EGDs as a set of inference rules. This process terminates because PROV constraints are known to be weakly acyclic [START_REF] Fagin | Data exchange: semantics and query answering[END_REF].
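For illustration (a sketch of ours; in the toolbox itself this rule is written in the Jena rule syntax, see footnote 9), the rule above can be encoded as a SPARQL CONSTRUCT query in which a blank node stands for the existentially quantified activity:

CONSTRUCT {
    _:a prov:used ?e1 .
    ?e2 prov:wasGeneratedBy _:a .
}
WHERE {
    ?e2 prov:wasDerivedFrom ?e1 .
}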
SHARP Demo Scenario
We use a real scientific experiment conducted through two bioinformatics workflows. The first workflow, in blue in Figure 3.1, is implemented in Galaxy [START_REF] Afgan | The galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2016 update[END_REF] and addresses DNA data pre-processing, which is loosely coupled to the scientific hypothesis. This workflow takes as input two DNA sequences from two biological samples s1 and s2, represented in green. The second workflow is implemented with Taverna [START_REF] Wolstencroft | The taverna workflow suite: designing and executing workflows of web services on the desktop, web or in the cloud[END_REF], and highly depends on the scientific questions. It is generally conducted by life scientists, possibly from different research labs and with fewer computational needs. In the following section, we explain how SHARP interlinks and harmonizes the provenance graphs produced by the two workflow systems.
Implementation
Galaxy PROV
For now, it is not possible to export provenance from the Galaxy workflow environment 4 through a standard schema and format. To address this issue and to allow the combination of multiple workflow execution traces, we developed a loosely coupled tool aimed at exporting Galaxy user histories as PROV RDF data. This tool consists of a command-line interface and a web application which allows users to list and export the content of their Galaxy workflow histories, as illustrated in Figure 4.1.
Users provide the URL of their Galaxy workflow portal and their private API key. The tool then communicates with the Galaxy API through the HTTP protocol and JSON documents. As a result, it can produce RDF PROV triples in the Turtle syntax, or a graphical representation of the PROV sub-graph. The command-line interface has been implemented in Java and makes use of the Jersey library for connecting to the REST API. The command-line interface is available as open-source 5. The back-end web application was also implemented in Java. Jetty was used as a standalone web server hosting an HTTP REST API, and a mongoDB database was set up to collect basic usage metrics. The frontend was implemented in HTML and JavaScript (BackboneJS, D3.JS). The web application will be available shortly as open-source in the same repository.
Multi PROV harmonization
We implemented the PROV harmonization process in a command line tool available as open-source6 . This tool can be used to infer new PROV statements for a single provenance trace by providing an input trace in the Turtle syntax. More interestingly, it can be used to interlink and harmonize cross-workflow provenance traces by specifying multiple provenance traces as input, accompanied with owl:sameAs statements. Finally, based on inferred prov:wasInfluencedBy predicates, cross-workflow data lineage can be visualized.
This tool has been implemented in Java and is supported by Jena7 for RDF data management and reasoning tasks. PROV Constraints8 inference rules have been implemented in the Jena syntax9 . HTML and JavaScript (D3.JS) code templates have been used to generate harmonized provenance visualization.
The automatic generation of owl:sameAs statements between files based on hashing techniques, as well as a web interface, are still under active development. These features should be available soon in the same repository.
Results
This demonstration shows that SHARP makes it possible to homogenize PROV graphs and to run unified SPARQL queries across multiple provenance traces. For instance, a query can assemble a data influence graph between multiple workflow execution traces in which some data artifacts play two roles: i) output of a given workflow, and ii) input of other workflows. Such a query matches the prov:wasInfluencedBy properties resulting from the harmonization process; these properties were not initially stated in either the Taverna or the Galaxy provenance traces.
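A minimal sketch of such a query (our own illustration, assuming the harmonized graph uses plain PROV-O terms) restricts the inferred influence statements to data artifacts:

CONSTRUCT { ?x prov:wasInfluencedBy ?y }
WHERE {
    ?x a prov:Entity .
    ?y a prov:Entity .
    ?x prov:wasInfluencedBy ?y .
}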
Conclusions
In this paper, we presented SHARP, a Linked Data approach for harmonizing cross-workflow provenance. The resulting harmonized provenance graph can be exploited to run cross-workflow queries. Our ongoing work includes the automation of owl:sameAs property generation as well as providing a unified web interface. As future work, we will address human-centered interpretation of possibly massive PROV graphs through domain-specific summarization techniques.
Fig. 3.1: A multi-site genomics workflow, involving Galaxy and Taverna systems.
Fig. 4.1: A web application to export provenance graphs from Galaxy user histories in PROV.
Figure 5.1 shows the resulting data lineage graph associated with the two workflow traces of our motivating use case (Figure 3.1). While the left part of the graph represents the Galaxy workflow invocation, the right part represents the Taverna one.
Fig. 5.1: prov:wasInfluencedBy properties between Galaxy and Taverna.
https://usegalaxy.org
galaxy-PROV: https://github.com/albangaignard/galaxy-PROV
sharp-prov-toolbox: https://github.com/albangaignard/sharp-prov-toolbox
Jena: https://jena.apache.org
https://www.w3.org/TR/prov-constraints/
https://github.com/albangaignard/sharp-prov-toolbox/blob/master/SharpProvToolbox/src/main/resources/provRules_all.jena
01768451 | en | [
"spi.gciv.geotech",
"spi.gciv.cd"
] | 2024/03/05 22:32:16 | 2005 | https://hal.science/hal-01768451/file/Cuisinier%20et%20Masrouri%20Engineering%20Geology.pdf | Keywords: Expansive Soil, Suction, Compacted Soil, Hydromechanical behaviour, Compressibility, Laboratory Tests
A study of the hydromechanical behaviour of a compacted swelling material in the range of suctions comprised between 0 and 40 MPa was performed. This study required the development of two kinds of suction controlled oedometer devices based on two different suction control techniques. In the range of suctions higher than 8.5 MPa, the saturated salt solutions method was used and a new oedometer using this suction control technique was developed. For suctions lower than 8.5 MPa, an osmotic oedometer was used. Despite the differences between the applied suction components (matric and total), the correlation between the two methods was verified for the tested material. The second part of the paper presents a set of oedometer tests conducted under various suctions. The effect of the applied suction on the hydromechanical parameters was studied. First, two swelling phases were highlighted: a low swelling phase above a suction of 4 MPa and a high swelling one below this value. These phases were considered to be related to the microstructure of compacted swelling clays. Secondly, it was shown that the slopes of the elastic part and of the plastic part of the consolidation curves were not influenced significantly by the applied suctions. In contrast, the preconsolidation pressure is affected by the decrease of the applied suction, even in the low swelling phase. Such a behaviour could be explained by the effects of wetting on the microstructure.
INTRODUCTION
Because of their very low permeability, compacted swelling soils are used for the construction of engineered barriers in waste disposal facilities. During their lifetime, these materials undergo wetting/drying cycles, i.e. suction variations. These soils exhibit large volume variations in response to suction changes. However, the relationship between suction variations and compressibility is not well known as far as compacted swelling materials are concerned. The experimental investigation of the hydromechanical behaviour of unsaturated swelling soils requires the use of suction controlled devices in a comprehensive range of suction, from low to very high suctions. However there are very few suction controlled studies performed over an extensive range of suction in the literature [START_REF] Bernier | Suction controlled experiments on Boom clay[END_REF]Al Mukthar et al. 1999;[START_REF] Villar | Investigation of the behaviour of bentonite by means of suctioncontrolled oedometer tests[END_REF][START_REF] Alonso | Expansive bentonite/sand mixtures in cyclic controlled-suction drying and wetting[END_REF][START_REF] Cui | A model for the volume change behavior of heavily compacted swelling clays[END_REF]. This is mainly related to the fact that the methods commonly used in experimental testing, such as the air overpressure method [START_REF] Richards | Capillary conduction of liquids through porous medium[END_REF] and the osmotic method [START_REF] Kassif | Experimental relationship between swell pressure and suction[END_REF] investigate only a small suction range, from saturation up to a few MPa (Fig. 1). Some developments of these methods have been used, but the maximum suction they are able to reach is about 14 MPa [START_REF] Villar | First results of suction controlled oedometer tests in highly expansive montmorillonite[END_REF][START_REF] Delage | The relationship between suction and the swelling properties in a heavily compacted swelling clay[END_REF]. The only available method to reach several hundred MPa is the control of relative humidity by means of salt solutions. However, this is a complex method to implement because of some uncertainties on the imposed suction due to temperature and pressure variations, and the measurement of the exact value of the relative humidity inside the testing device [START_REF] Delage | The relationship between suction and the swelling properties in a heavily compacted swelling clay[END_REF][START_REF] Cuisinier | Comportement hydromécanique des sols gonflants compactés[END_REF].
This paper presents a study of the hydromechanical behaviour of a compacted swelling soil carried out on a range of suction comprised between 0 and 40 MPa. In the first part, oedometers using the osmotic method and others based on the saturated salt solutions technique are introduced. The feasibility of these devices and the correlation between the two suction control methods is also discussed. In the second part, a set of suction controlled oedometer tests is presented. From these results, the influence of suction on the swelling potential ∆H/H, the preconsolidation pressure p 0 (s), the slopes of the elastic part κ, and of the plastic part λ(s), of the consolidation curves is discussed.
The results are considered with two independent variables: the net vertical stress σ v *, defined as the difference between the total vertical stress and the pore-air pressure, and the suction s, which corresponds to the difference between the pore-air pressure and the pore-water pressure [START_REF] Coleman | Stress-strain relations for partly saturated soils[END_REF][START_REF] Matyas | Volume change characteristics of partially saturated soils[END_REF]).
DESCRIPTION OF THE SUCTION CONTROLLED OEDOMETER DEVICES
Figure 1 shows that at least two complementary suction control techniques are required to perform a study over an extensive suction range. The salt solutions technique for suctions higher than 8.5 MPa and the osmotic method for suctions lower than this value were selected.
Salt solutions method
The basic principle of the saturated salt solutions technique is to introduce a sample inside a hermetic chamber where the relative humidity is maintained constant with a salt solution. The water exchange occurs by vapour transfer. The relative humidity Hr (%) is linked to the suction, through Kelvin's law:
s = - (R T γw / (g M)) ln(Hr)    (1)
with R = universal gas constant (8.31 J.mol -1 .K -1 ); γ w = unit weight of water (9.81 kN.m -3 ); g = gravitational constant (9.81 m.s -2 ); M = molecular weight of water (18 10 -3 kg.mol -1 ); T = absolute temperature (K). It was possible to apply different suctions with this method, depending on the kind of salt solution used and its concentration. In this study, totally saturated salt solutions were selected.
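As an order-of-magnitude illustration (our own calculation from equation (1), with γw taken as 9 810 N.m-3; this value is not part of the original test programme): at T = 293 K, the prefactor R T γw / (g M) = (8.31 × 293 × 9 810) / (9.81 × 0.018) ≈ 1.35 × 10^8 Pa, so an imposed relative humidity of 94 % (Hr = 0.94) corresponds to a total suction s ≈ -1.35 × 10^8 × ln(0.94) ≈ 8.4 MPa, close to the 8.5 MPa lower bound retained for this technique.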
The value of the relative humidity imposed by a given salt is highly dependent on temperature [START_REF] Afnor Nf | Mesure de l'humidité de l'air -Générateurs d'air humide à solutions salines pour l'étalonnage des hygromètres[END_REF]. In our tests, the room temperature was maintained at 20 ± 0.15 °C. The relative humidity imposed by a given saturated salt solution is known with an uncertainty comprised between 1 and 2 % (AFNOR, 1999). [START_REF] Delage | The relationship between suction and the swelling properties in a heavily compacted swelling clay[END_REF] and [START_REF] Cuisinier | Développement d'un appareil oedométrique à succion contrôlée pour l'étude des sols gonflants[END_REF] have demonstrated that these uncertainties limit the use of saturated salt solutions to suctions higher than 8.5 MPa, because below this value the relative uncertainty on the imposed suction is higher than 15 %.
Given thermodynamic considerations, the soil suction is the sum of several components: a matric potential (adsorption and capillary), an osmotic potential (related to solute concentrations), and other components (related to pressure and temperature) which are assumed to be negligible here. A comprehensive review of these considerations is available in [START_REF] Fredlund | Soils mechanics for unsaturated soils[END_REF]. With the saturated salt solutions method, the total suction is imposed during a test.
The new odometer device using saturated salt solutions is shown in Figure 2. This device combines, in the same system, the functions of a basic oedometer (AFNOR, 1997) and of a closed chamber with constant relative humidity (ISO 483, 1998). Based on this principle, two different devices, with two maximum vertical pressures (1 200 and 20 000 kPa) were developed. The range of suction that could be attained was comprised between 8.5 and 292.4 MPa. The diameter of the sample was 7.4 cm in the low vertical stress oedometer and 5 cm in the other one. In both cases, the initial height of the sample was about 1 cm. The different elements of the device in contact with the sample were made of porous steel in order to facilitate the vapour transfer between the sample and the saturated salt solution. Following a change of the applied suction and/or the vertical stress, several weeks are required to reach deformation equilibrium [START_REF] Cuisinier | Comportement hydromécanique d'un sol gonflant sous très fortes succions[END_REF]. Therefore, one test might take several months to be completed. The validation of the efficiency of these new oedometers was presented in [START_REF] Cuisinier | Study of the hydromechanical behaviour of a swelling soil under high suctions[END_REF].
Osmotic method
In this method, a semi-permeable membrane is introduced between a solution of macromolecules and an unsaturated soil sample [START_REF] Zur | Osmotic control of the matric soil-water potential: I. Soil water system[END_REF]. The exchange of water is due to the process of osmosis. The amount of exchanged water, and therefore the suction, is controlled by the macromolecule concentration: the higher the concentration, the higher the suction. The macromolecule commonly in use is polyethyleneglycol (PEG) with a molecular weight of 20 000 or 6 000 Da1. The relationship between PEG concentration and suction (Fig. 3) is known for suctions ranging from 0 to 1.5 MPa and is independent of the PEG molecular weight [START_REF] Williams | An evaluation of polyethyleneglycol (PEG) 6 000 and PEG 20 000 in the osmotic control of soil water matric potential[END_REF]. In this range, [START_REF] Cui | Étude du comportement d'un limon compacté non saturé et de sa modélisation dans un cadre élastoplastique[END_REF] has proposed an empirical calibration relationship between PEG concentration and suction:
s = 11 c²    (2)
where s is the suction in MPa and c the concentration of the PEG solution expressed in g of PEG per g of water. This equation is also reproduced in Figure 3. The temperature influences the relationship between PEG concentration and suction [START_REF] Guiras-Skandaji | Déformabilité des sols argileux non saturés : etude expérimentale et modélisation[END_REF]. In order to limit this effect, the temperature was maintained at 20 ± 1.5 °C.
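As a worked example (our own back-calculation from equation (2), for illustration only): reaching a target suction of s = 8.5 MPa requires a concentration c = √(s / 11) = √(8.5 / 11) ≈ 0.879 g of PEG per g of water, which matches the final concentration measured in the additional calibration test described below.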
With the osmotic suction control method, only the matric suction component of a sample is mastered during a test.
It was necessary to calibrate the PEG solution from a suction of 1.5 MPa up to 8.5 MPa. For such a calibration, a sample of PEG solution, prepared at a known concentration, is enclosed in a hermetic chamber where the relative humidity is kept constant by a saturated salt solution. [START_REF] Delage | The relationship between suction and the swelling properties in a heavily compacted swelling clay[END_REF] have performed such tests and their results, obtained with PEG 6 000, are plotted in Figure 3. An additional calibration test was made with a similar procedure using PEG 6 000 for a suction of 8.5 MPa. The final concentration of the PEG solution was 0.879 g of PEG per g of water. The result of this test was in good agreement with existing data and the empirical relationship. The osmotic method could therefore be used up to 8.5 MPa.
A schematic representation of an osmotic oedometer is presented in Figure 4 [START_REF] Kassif | Experimental relationship between swell pressure and suction[END_REF]. A pump circulates a solution of macromolecules (PEG) at a given concentration. The solution passes through the grooved base of the oedometer, which was designed to allow the fluid to circulate over the whole bottom surface of the sample. Between the sample and the PEG solution, a semi-permeable membrane (Spectra/Por® n°4) was introduced to prevent PEG macromolecules from passing toward the sample. In our device, the maximum vertical pressure accessible was 1 800 kPa. The range of suction was comprised between 0 and 8.5 MPa. The deformability of the semi-permeable membrane and of the overall device was evaluated. Evaporation through the upper face of the sample was minimized by positioning a plastic film all around the oedometer. The diameter of the sample was 7 cm and its initial height was about 1 cm. When a given suction was applied, approximately ten days were required for deformation equilibrium to be reached. The mechanical loading was performed in the same manner as in a typical oedometer test, and approximately 2 days were needed to reach deformation equilibrium for a given stress step [START_REF] Cuisinier | Comportement hydromécanique des sols gonflants compactés[END_REF].
MATERIAL AND SAMPLE PREPARATION
The study was conducted with an artificially prepared mixture (40 % of silt and 60 % of bentonite). The mineralogical composition of the materials was determined by X ray diffractometry. The silt contained 60 % quartz, 20 % montmorillonite, 11 % feldspar, and the remaining part was made up of kaolinite and mica. The bentonite was composed of more than 90 % of calcium montmorillonite. The main physical properties of the materials and of the mixture are shown in Table 1.
The size of the particles used to prepare the samples was less than 400 µm (obtained by sieving). The materials were mixed together and wetted up to a gravimetric water content of 15 % (dry side of optimum). Then the mixture was statically compacted under a vertical pressure of 1 000 kPa. This low gravimetric water content, close to the shrinkage limit, was selected in order to prevent the shrinkage of the sample when very high suctions were imposed. It was not possible to prepare samples at a gravimetric water content lower than 15 % as they would not have been sufficiently cohesive to be handled.
Under these conditions, the initial dry unit weight of the samples was about 12.7 kN.m -3 . The initial matric suction, measured by the Filter Paper Method (ASTM, 1995a), was comprised between 20 and 25 MPa and the osmotic suction was comprised between 1 and 2 MPa. The swelling potential ∆H/H when a sample is fully saturated, and the swelling pressure P s , required to eliminate height variations during wetting, measured by the free swelling method (ASTM, 1995b), were respectively 19 % and 250 kPa. The gravimetric water content determined after full saturation inside an oedometer was 49 %.
STUDY OF HYDROMECHANICAL BEHAVIOUR FROM LOW TO VERY HIGH SUCTIONS
In the remaining part of the paper, the tests are referenced in order to identify the type of suction control method: O for osmotic and S for saturated salt solutions. The subsequent letters indicate qualitatively the stress paths followed: W for wetting phase, D for drying phase and L for the loading/unloading phase.
Test program
The stress paths followed are plotted in Figure 5. All the tests began at point "A" under a low vertical pressure of about 10 kPa, and an initial suction of about 20/25 MPa. During the first phase, a different suction was applied in several steps for each test. In the second phase the samples were loaded in several steps and then unloaded under constant suction. The test OWL4 is not represented in Figure 5 as the applied suction was 0 MPa in this test.
Correlation between two suction control methods
As these two methods do not impose the same suction components, the correlation between the results obtained with them was checked. Two tests were conducted: OWL1 in the osmotic oedometer and SWL1 in the saturated salt solutions oedometer. In both tests, the same stress paths were followed. These paths began by a wetting from the initial suction to a suction of 8.5 MPa under a low vertical pressure (10 kPa). Following equilibrium under a constant suction, the samples were loaded in several steps and then unloaded (Fig. 6). The swelling potential ∆H/H registered at the end of the wetting phase, the slope of the consolidation curves κ, the slope of the virgin compression line λ(s), and the preconsolidation pressure p 0 (s), were determined and summarized in Table 2. In the same table, the initial and final gravimetric water contents of the samples w i and w f , are given. As is shown in Figure 6 and in Table 2, λ(s) and κ are not significantly affected by the suction control method contrary to p 0 (s), which is 80 kPa lower for the test performed in the osmotic oedometer. The swelling potential and the final water content are both higher in the test performed in the osmotic oedometer.
The prepared samples contained a certain amount of dissolved salts. [START_REF] Olsen | Osmosis: a cause of apparent deviations from Darcy's law[END_REF] demonstrated that, when a soil sample is exposed to distilled water, the dissolved salts it contains tend to diffuse out of the sample, while water moves into the sample. This phenomenon occurred in our osmotic tests, as the semi-permeable membrane is permeable to water and dissolved salts. In fact, during a test in the osmotic oedometer, a certain amount of water exchange is controlled by the PEG concentration, but an additional amount of water enters the soil sample because of this solute gradient. This could explain the higher water content and swelling potential ∆H/H in test OWL1 compared to test SWL1, where such a phenomenon was not possible. The larger ∆H/H in test OWL1 tends to soften the soil structure and, consequently, p0(s) is lower in test OWL1. On the other hand, small variations in the calibration of the osmotic suction applied by the PEG solution, or the uncertainty associated with the suction imposed by salt solutions, may explain the small differences observed between both tests.
The effect of suction components on the compressibility parameters, λ(s) and κ, is not significant in our tests. This could be explained by the nature of the clay used. During the vertical loading stage, the salt concentration is probably higher in sample SWL1 than in sample OWL1. [START_REF] Olson | Mechanisms controlling compressibility of clays[END_REF] have also shown that the salt concentration slightly affects the compressibility parameters, λ(s) and κ of calcium montmorillonite.
These tests demonstrate that the influence of the osmotic component of suction is not significant for a suction of 8.5 MPa, under which the correlation between both suction control techniques is checked for this material.
Hydromechanical behaviour from low to high suctions
Figure 7 depicts the relationship between the void ratio and the net vertical stress for all the tests. The hydromechanical parameters determined from these curves are presented in Table 2. The parameters λ(s) and κ are plotted in Figure 8. It seems that the parameter κ was not significantly affected by the applied suction in opposition to the parameter λ(s). These observations are similar to other data available in the literature (e.g. [START_REF] Alonso | Special problem soils. General report[END_REF].
Figure 9 compares the variation of p0(s) and ∆H/H after wetting at the applied suctions. First, it can be seen that the volume of the sample did not decrease significantly when a suction higher than the initial suction was applied (test SDL1, 40 MPa). During wetting, two swelling phases were highlighted: a low swelling phase above 4 MPa and a high swelling phase below this value. It is well known that the swelling of clays occurs in several steps [START_REF] Kassif | Experimental relationship between swell pressure and suction[END_REF][START_REF] Komine | Prediction for swelling characteristics of compacted bentonite[END_REF]. The first step corresponds to the initial hydration of the clay, with the insertion of water molecules between the unit clay layers. This could explain the low swelling phase: the clay particles fill the initial voids of the sample. The high swelling phase started when the major part of the macropores of the sample were already filled by the expanded clay particles.
Figure 9 demonstrates that p 0 (s) decreased continuously from 40 down to 0 MPa. A basic assumption is to link this parameter to the density of the sample: the lower the density, the lower the preconsolidation pressure. But this interpretation is not sufficiently satisfactory to explain such a sharp decrease of preconsolidation pressure. A possible explanation could be the void filling by the clay particles in the low swelling phase. In fact, during the first stage of wetting, the macrostructural density is not changed significantly, but the microstructure of the sample is altered dramatically, decreasing the mechanical properties of the studied material.
However, further microstructural investigations are required before a conclusion can be arrived at on this point. It will also enable the influence of the montmorillonite swelling to be determined more precisely.
CONCLUSION
Two kinds of suction controlled oedometers, one with saturated salt solutions technique (control of total suction) and the other with the osmotic method (control of matric suction)
were used in this study. The correlation between the two methods was checked. It was found that, for the studied soil, the methods used did not significantly affect the values of the hydromechanical parameters determined.
The swelling upon wetting was studied as a function of the applied suction. Two swelling phases have been highlighted: a low swelling phase for suctions higher than 4 MPa and a high swelling phase below 4 MPa. The preconsolidation pressure is strongly affected by the suction decrease between the initial suction and 4 MPa, that is to say during the low swelling phase. This demonstrates that the preconsolidation pressure of a compacted swelling soil is not only a function of its density but it depends also on the microscopic repartition of the clay particles inside the soil.
Our results highlight the extreme sensitivity of the hydromechanical behaviour of a compacted swelling soil to any variation in suction, even in the range of very high suctions.
Figure 2. Schematic of the suction controlled oedometer device using saturated salt solutions.
Figure 3. Calibration curve between suction and PEG concentration.
Figure 4. Schematic of the suction controlled oedometer device using osmotic solutions.
Figure 5. Stress paths followed.
Figure 6. Correlation between the two suction control techniques.
Figure 7. Results of the different suction controlled oedometer tests.
Figure 8. Parameters κ and λ(s) as a function of the applied suction during mechanical loading (white signs: salt solutions technique; black signs: osmotic technique).
Figure 9. Percent heave (∆H/H) after wetting to a given suction and preconsolidation pressure p0(s) as a function of the suction applied during mechanical loading (white signs: salt solutions technique; black signs: osmotic technique).
Table 2. Hydromechanical parameters from s = 0 to 40.0 MPa [O: osmotic; S: saturated salt solutions; W: wetting; D: drying; L: loading].

Test number | Stress paths (Fig. 5) | Suction under which sample is loaded* (MPa) | w_i (%) | w_f (%) | ∆H/H (%) | κ | λ(s) | p0(s) (kPa)
SDL1 | A-I-J-I   | 39.7 | 14.9 | 12.4 | -0.2 | 0.02 | 0.26 | 1 090
SL   | A-B-A     | 20.8 | 14.9 | 15.1 |  0   | 0.03 | 0.28 | 1 000
SWL1 | A-C-D-C   |  8.5 | 15.0 | 16.8 |  0.5 | 0.03 | 0.31 |   450
SWL2 | A-C-D-C   |  8.5 | 15.0 | 17.0 |  0.5 | 0.03 | 0.30 |   490
OWL1 | A-C-D-C   |  8.5 | 14.7 | 17.7 |  1.2 | 0.03 | 0.29 |   370
OWL2 | A-C-E-F-E |  4   | 14.5 | 20.7 |  2.8 | 0.03 | 0.30 |   200
OWL3 | A-G-H-G   |  1.2 | 14.9 | 26.4 | 11.6 | 0.03 | 0.23 |    65
OWL4 | /         |  0   | 14.7 | 42.0 | 16.2 | 0.05 | 0.22 |    50
*: the initial total suction for all the samples is about 20/25 MPa
1 Dalton (Da) = 1.6605 × 10⁻²⁴ g.
Table 1. Range of suction of the main suction control techniques.
01768454 | en | [
"sdv.ba.zv",
"sdv.ee.ieo",
"sde.be",
"sdu.envi",
"sdu.stu"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01768454/file/Hoy%20et%20al.%202017%20JAB%20Final%20version%20for%20repository%20%281%29.pdf | S R Hoy
S J Petty
A Millon
D P Whitfield
M Marquiss
D I K Anderson
M Davison
X Lambin
email: [email protected]
Density-dependent increase in superpredation linked to food limitation in a recovering population of northern goshawks Accipiter gentilis
Keywords: food-stress hypothesis, intraguild predation, mesopredator suppression
de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
ABSTRACT
A better understanding of the mechanisms driving superpredation, the killing of smaller mesopredators by larger apex predators, is important because of the crucial role superpredation can play in structuring communities and because it often involves species of conservation concern. Here we document how the extent of superpredation changed over time, and assessed the impact of such temporal variation on local mesopredator populations using 40 years of dietary data collected from a recovering population of northern goshawks (Accipiter gentilis), an archetypical avian superpredator. We then assessed which mechanisms were driving variation in superpredation, e.g., was it opportunistic, a response to food becoming limited (due to declines in preferred prey) or to reduce competition. Raptors comprised 8% of goshawk diet on average in years when goshawk abundance was high, which is higher than reported elsewhere. Additionally, there was a per capita increase in superpredation as goshawks recovered, with the proportion of goshawk diet comprising raptors increasing from 2% to 8% as the number of goshawk home-ranges increased from ≤14 to ≥25. This increase in superpredation coincided with a population decline in the most commonly killed mesopredator, the Eurasian kestrel (Falco tinnunculus), which may represent the reversal of the "mesopredator release" process (i.e., mesopredator suppression) which occurred after goshawks and other large raptors declined or were extirpated. Food limitation was the most likely driver of superpredation in this system given: 1) the substantial decline of two main prey groups in goshawk diet, the increase in diet diversity and decrease in goshawk reproductive success are all consistent with the goshawk population becoming food-limited; 2) it's unlikely to be purely opportunistic as the increase in superpredation did not reflect changes in the availability of mesopredator species; and 3) the majority of mesopredators killed by goshawks do not compete with goshawks for food or nest sites.
INTRODUCTION
Understanding the mechanisms driving variation in superpredation, the killing of smaller mesopredators by larger apex predators, is an important issue in ecology. This is partly because superpredation can directly impact mesopredator population dynamics which may then cascade to affect lower trophic levels [START_REF] Paine | Food webs: linkage, interaction strength, and community infrastructure[END_REF]), but also because many of the superpredator and mesopredator species involved are of conservation concern [START_REF] Palomares | Interspecific killing among mammalian carnivores[END_REF][START_REF] Caro | The potential for interspecific competition among African carnivores[END_REF][START_REF] Ripple | Wolves and the ecology of fear : can predation risk structure ecosystems ?[END_REF][START_REF] Ritchie | Predator interactions, mesopredator release and biodiversity conservation[END_REF]. However, despite this, and the crucial role superpredation can play in structuring communities, it is still not clear what mechanism (or combination of mechanisms) drives one predator to kill another.
Optimal foraging theory suggests that predators should attempt to kill prey when the energy gained outweighs the energetic cost and potential risk of injury involved [START_REF] Berger-Tal | Look before you leap: Is risk of injury a foraging cost?[END_REF]). However, mesopredators are unlikely to represent a profitable prey source, even when they fall within the preferred size range of the superpredator, because their densities are often relatively low compared to that of other prey species. Furthermore, the risk of injury associated with attacking mesopredators is presumably higher than for other prey types, because mesopredators have evolved to kill other species (Lourenço et al. 2011a). Consequently, several alternative (but not mutually exclusive) hypotheses have been put forward to explain superpredation. The competitor-removal hypothesis suggests that superpredators kill mesopredators to free up shared resources [START_REF] Serrano | Relationship between raptors and rabbits in the diet of eagle owls in southwestern Europe: competition removal or food stress?[END_REF]. This leads to the prediction that superpredation will largely involve mesopredator species which compete with the superpredator for food or other resources, such as nest sites (e.g. intraguild predation). In contrast, the predator-removal hypothesis suggests that superpredation is a pre-emptive tactic to decrease the probability of the superpredator or their offspring being killed (Lourenço et al. 2011b). Under this scenario, the mesopredator species expected to be killed the most frequently are those which pose a threat to the superpredator or their offspring.
Alternatively, rather than being a response to the presence of other predators, the food-limitation hypothesis, also known as the food-stress hypothesis, suggests that mesopredators are killed to make up the shortfall in the superpredator's diet when preferred prey species decline [START_REF] Polis | The ecology and evolution of intraguild predation: Potential Competitors That Eat Each Other[END_REF][START_REF] Serrano | Relationship between raptors and rabbits in the diet of eagle owls in southwestern Europe: competition removal or food stress?[END_REF], Rutz and Bijlsma 2006, Lourenço et al. 2011a, b). Food limitation may also occur if there is an increase in the number of individuals (conspecifics or other species) exploiting preferred prey species, particularly if increasing predator densities elicit anti-predator behaviours in their prey (such as spatial or temporal avoidance of risky areas) which make prey more difficult to find and/or catch. Many populations of large predator species are currently increasing in abundance and recovering their former ranges across both North America and Europe [START_REF] Maehr | Large Mammal Restoration: Ecological and Sociological Challenges in the 21st Century[END_REF][START_REF] Deinet | Wildlife comeback in Europe: The recovery of selected mammal and bird species[END_REF][START_REF] Chapron | Recovery of large carnivores in Europe's modern human-dominated landscapes[END_REF].
Consequently, if superpredation is a response to density dependent food limitation, then the extent of superpredation occurring might be expected to increase during the recolonisation process, even if mesopredators are not a preferred prey species. However, whether such a per capita increase in superpredation has actually occurred or whether it coincides with or follows the colonisation process is as yet unknown.
Here we evaluate support for the food-limitation hypothesis, and other proposed determinants of superpredation, in a recovering population of northern goshawks (Accipiter gentilis), using data collected between 1973 and 2014 over a 964 km² area of Kielder Forest, United Kingdom (55°13′N, 2°33′W). The northern goshawk (hereafter goshawk) is an archetypical avian superpredator known to prey upon a large diversity of both avian and mammalian prey, including other raptors (Rutz et al. 2006[START_REF] Sergio | Intraguild predation in raptor assemblages: a review[END_REF], Lourenço et al. 2011a).
Goshawks are the apex predator in this study system as Kielder Forest lacks other predator species known to prey on goshawks, such as Eurasian eagle owls (Bubo bubo) [START_REF] Chakarov | Mesopredator release by an emergent superpredator: a natural experiment of predation in a three level guild[END_REF].
Goshawks were extirpated from the UK in the late 19th century. However, scattered populations were subsequently re-established in the 1960s and 70s after birds escaped or were released by falconers [START_REF] Marquiss | The goshawk in Britain[END_REF], Petty and Anderson 1995[START_REF] Petty | History of Northern Goshawk Accipiter gentilis in Britain[END_REF]. In Kielder Forest the goshawk population recovered rapidly after the first recorded breeding attempt in 1973, and 25-33 goshawk home-ranges are now occupied (see Appendix S1; Petty & Anderson 1995). Such a large increase in goshawk abundance was presumably concomitant with an increase in intraspecific competition for food and nest sites. However, goshawks may also have become food-limited because of a long-term decline in the abundance of red grouse (Lagopus lagopus) and a substantial decline of feral pigeon (Columba livia) in recent years in England [START_REF] Robinson | BirdTrends 2015: trends in numbers, breeding success and survival for UK breeding birds[END_REF], as both species are important prey for goshawks in our study area (Petty et al. 2003a). The term feral pigeon includes both racing and homing pigeons.
The first aim of this study was to quantify the extent of superpredation and then to test the prediction that there has been a per capita increase in superpredation during the colonisation process by examining goshawk dietary data. Our second aim was to determine whether superpredation was impacting local populations of the most commonly preyed upon mesopredator species, for which local population trends are well characterised. We then assessed whether the goshawk population had become food-limited as the population recovered. It is difficult to directly assess food limitation for generalist predators, such as goshawks, without comprehensive prey abundance surveys (Rutz and Bijlsma 2006).
Consequently, we examined two different lines of evidence to proximately assess food limitation. First, we examined temporal variation in goshawk diet to determine whether there had been any decline in the contribution of certain prey species/groups known to be important for goshawks. We then examined changes in the reproductive success of the goshawk population, because reproductive success is closely associated with food availability in goshawks and other raptor species [START_REF] Newton | Population limitation in birds[END_REF], 1998, Rutz and Bijlsma 2006[START_REF] Millon | Variable but predictable prey availability affects predator breeding success: natural versus experimental evidence[END_REF]. Lastly, we evaluated evidence supporting the alternative hypotheses of superpredation by examining which mesopredator species were being killed by goshawks (e.g. were they known to compete with goshawks for food or nesting sites).
METHODS
Kielder Forest is situated in Northumberland, in the north of England, adjacent to the border with Scotland. For a map of the study area, see Petty et al. (2003b). Each year, active goshawk home-ranges were located by searching suitable nesting habitat within the forest (between the end of February and the end of the breeding season). The locations of active nests were recorded and these sites were then visited at least four times to establish whether a breeding attempt took place, to record the number of chicks that fledged and to collect dietary data.
Quantifying superpredation
To quantify superpredation and to determine whether there had been a per capita increase in superpredation as the goshawk population expanded, we used goshawk dietary data. Specifically, we quantified the proportion of goshawk diet comprised of other raptor species each year. Here we use the term raptor to refer to all diurnal and nocturnal birds of prey.
Goshawk diet was characterised by searching for the remains of prey (feathers, bones and fur) in the area surrounding active nest sites during nest visits between March-August, 1975-2014 (except between 1999-2001), in the same way as described in Petty et al. (2003). When possible, at the end of the breeding season the top layer of nesting material was removed from active nests and searched for additional prey remains. Prey remains were removed or buried to avoid double counting in subsequent searches. We were able to identify 7763 prey items to species level by comparison with reference collections. It was not always possible to differentiate carrion crow (Corvus corone) from rook (C. frugilegus) remains. Therefore crow/rook refers to the abundance of both species in the diet, although rooks were scarce in the study area. We identified and quantified the minimum number of individuals of medium to large prey species by counting skeletal remains, whereas small avian prey (less than 100g) were identified and quantified from plucked feathers. Collecting and quantifying dietary data in this way is likely to underestimate the contribution of small prey items (Ziesemer 1983). However, this should not influence the results of our analyses as such items are relatively unimportant to goshawk diet in terms of biomass.
Once the proportion of goshawk diet comprising raptors had been calculated for each nest site/year, we examined how it varied in relation to the number of occupied goshawk home-ranges (measured as a continuous variable) using generalised linear mixed effect models (GLMM) with a binomial error structure, fitted using the lme4 package [START_REF] Bates | Fitting Linear Mixed-Effects Models Using lme4[END_REF].
Goshawk diet has previously been shown to change with altitude, presumably reflecting changes in the abundance and diversity of prey species at different altitudes (Marquiss and Newton 1982, Toyne 1998). Consequently, we also examined whether the contribution of raptors to diet varied with the altitude of the goshawk's nest site. Goshawk home-ranges were grouped into three altitudinal bands as follows: low, if the nest site was 225m or below; medium, if between 226-354m; and high, if 355m or above. We used these cut-offs because goshawk home-ranges above 355m were generally surrounded by open moorland habitat, whereas home-ranges below 225m were surrounded by forest, pasture and water (streams, rivers and a large reservoir). The identity of home-ranges and year were both fitted as random effects to account for variation in diet between years and between home-ranges. Model selection was based on Akaike's information criterion corrected for small sample size (AICc) and AICc weights (W; [START_REF] Burnham | Model Selection and Multimodel Inference: a Practical Information-theoretic Approach[END_REF]. The best performing model will have a ∆AICc of zero, because ∆AICc is the AICc for the model of interest minus the smallest AICc for the set of models being considered. Models are generally considered inferior if they have a ∆AICc > 2 units.
AICc weights (W) are an estimate of the relative likelihood of a particular model within the set of models being considered. Model assumptions were validated by visually inspecting residual plots; these did not reveal any obvious nonlinear relationships, unless otherwise mentioned.
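As a concrete illustration of this model-selection arithmetic, the minimal Python sketch below (our own illustration, not code used in the study) computes ∆AICc values and Akaike weights from a vector of AICc scores, using the standard definition w_i = exp(-∆_i/2) / Σ_j exp(-∆_j/2) of Burnham and Anderson (2002). The ∆AICc values plugged in are the published ones for the all-raptors response in Table 1; the absolute AICc baseline is arbitrary because only differences matter.

```python
import math

def akaike_weights(aicc_values):
    """Return (delta-AICc, Akaike weights) for a set of candidate models."""
    best = min(aicc_values)
    deltas = [a - best for a in aicc_values]
    rel_lik = [math.exp(-d / 2.0) for d in deltas]  # relative likelihoods
    total = sum(rel_lik)
    return deltas, [r / total for r in rel_lik]

# Published delta-AICc values for the 'all raptors' analysis (Table 1),
# models 1-5; model 5 (abundance x altitude interaction) is the best model.
published_deltas = [26.74, 14.46, 13.20, 4.89, 0.00]
aicc = [100.0 + d for d in published_deltas]  # arbitrary baseline
deltas, weights = akaike_weights(aicc)
for i, (d, w) in enumerate(zip(deltas, weights), start=1):
    print(f"model {i}: dAICc = {d:6.2f}, W = {w:.2f}")
# Prints W ~ 0.92 for model 5 and ~0.08 for model 4, matching Table 1.
```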
Correlograms (with a lag distance up to 10km) were used to check for spatial autocorrelation in the residuals of the best performing model. However, we found no evidence of spatial autocorrelation in this, nor in any other analyses of goshawk diet.
Impact on local mesopredator populations
To assess the impact of changes in goshawk predation on local populations of the three most frequently preyed upon raptor species, Eurasian kestrels (Falco tinnunculus), tawny owls (Strix aluco) and Eurasian sparrowhawks (Accipiter nisus; Petty et al. 2003), we first examined whether the proportion of goshawk diet comprising these three raptor species changed with goshawk abundance (measured as a continuous variable) using the same GLMM approach described above. We then used dietary data to estimate the minimum number of each mesopredator species killed each year by goshawks when ≤14, 15-24 and 25+ goshawk home-ranges were occupied (for methods see Appendix 2). We used these three goshawk abundance categories to keep broadly similar sample sizes despite large variation in the number of prey remains recovered each year (range 10-678). Temporal variation in predation rates on kestrel, tawny owl and sparrowhawk were then compared to changes in the local population dynamics of these species. Annual counts of territorial kestrel pairs in and around the forest have been recorded since 1975 as part of a larger study on merlins (Falco columbarius; [START_REF] Newton | Population and breeding of Northumbrian Merlins[END_REF][START_REF] Little | Merlins Falco columbarius in Kielder Forest: influences of habitat on breeding performance[END_REF]. Breeding tawny owls have been monitored continuously in a subsection of the forest since 1979 [START_REF] Petty | Ecology of the Tawny owl Strix aluco in the spruce forests of Northumberland and Argyll[END_REF][START_REF] Petty | Value of nest boxes for population studies and conservation of owls in coniferous forests in Britain[END_REF][START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. The number of occupied sparrowhawk territories in a subsection of the study area has been recorded since 1974 [START_REF] Petty | Breeding biology of the sparrowhawk in Kielder Forest[END_REF]Petty et al. 1995).
Assessing food limitation
Declines in important prey
To indirectly infer whether the goshawk population had become food-limited we assessed whether there had been any decline in the contribution of important prey species/groups in the diet. We first examined how the dominance of main prey species in the diet changed as the goshawk population expanded. This was done by ranking species from most to least important, firstly in terms of their frequency contribution to diet and then in terms of their biomass, when the number of occupied goshawk home-ranges was ≤14, 15-24 and 25+. A full list of species killed by goshawks and the mean body mass values used in biomass calculations can be found in Appendix 3. Certain taxonomic groups are known to be important to goshawk diet. For example, Columbiformes are an important prey for goshawks across most of Europe, comprising up to 69% of all prey items (reviewed in Rutz et al. 2006), whereas Tetraonidae comprised almost 80% of goshawk diet in some years at northerly latitudes, and in southern Europe lagomorphs were an important prey source (reviewed in Kenward 2006). We therefore categorised prey into taxonomic groups as follows: raptors, pigeons (Columbiformes), corvids (Corvidae), game birds (Tetraonidae and Phasianidae), mammals (mainly Lagomorpha and Sciuridae), and 'other'. This 'other' group largely consists of passerines but also includes other prey species which are only occasionally taken. We then estimated both the frequency and biomass contribution of these groups to goshawk diet and examined how the frequency contribution varied in relation to goshawk abundance (measured as a continuous variable) using the same GLMM approach. We were unable to assess whether variation in the proportion of goshawk diet comprised of these different prey species/groups was related to changes in the abundance of these prey species/groups, as local population trends were not available.
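The distinction between frequency and biomass contributions used in this ranking is simple arithmetic: frequency weights each prey item equally, whereas biomass weights each item by its species' body mass (Appendix 3). The sketch below illustrates this with made-up counts; only the body masses are Appendix 3 values, the counts are hypothetical.

```python
# Hypothetical prey counts; body masses (g) from Appendix 3.
prey = {
    "wood pigeon": (50, 450),
    "kestrel": (5, 208),
    "goldcrest": (10, 6),
}
n_total = sum(c for c, _ in prey.values())
b_total = sum(c * m for c, m in prey.values())
for species, (count, mass) in prey.items():
    freq = 100 * count / n_total
    biomass = 100 * count * mass / b_total
    print(f"{species}: {freq:.1f}% by frequency, {biomass:.1f}% by biomass")
# Small-bodied prey (e.g. goldcrest) rank far lower by biomass than by
# frequency, which is why the two rankings in Table 2 can differ.
```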
Lastly, because diet diversity has generally been observed to increase in other raptor populations as they became food-limited (Rutz and Bijlsma 2006, Lourenço et al. 2011a), we also examined how prey diversity changed with goshawk abundance using estimates of the Shannon-Wiener diversity index when ≤14, 15-24 and 25+ goshawk home-ranges were occupied.
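The Shannon-Wiener index referred to here is the standard H' = -Σ p_i ln(p_i), where p_i is the proportion of the diet contributed by prey species i. A minimal sketch of its computation from prey counts (the counts below are hypothetical, ours):

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln(p_i))."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A more even spread of prey counts gives a higher H', the pattern reported
# here as goshawk diet broadened (H' rising from 2.1 to 2.6).
print(shannon_wiener([50, 5, 10]))   # dominated by one species -> lower H'
print(shannon_wiener([22, 22, 21]))  # more even -> higher H' (~ln(3))
```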
Goshawk reproductive success
We also indirectly assessed whether the goshawk population had become food-limited by examining how goshawk reproductive success varied in relation to the number of occupied home-ranges (measured as a continuous variable). In this analysis, we used two different measures of reproductive success: the average number of chicks fledged per successful breeding attempt and the proportion of breeding attempts which failed. We did not analyse variation in goshawk reproductive success prior to 1977 because goshawks did not reproduce successfully until a few years after the first home-ranges became established. Because the relationship between goshawk abundance and the number of chicks fledged per successful breeding attempt appeared to be non-linear, we used generalised additive models (GAM) to characterise this relationship, fitted using the mgcv package [START_REF] Wood | Package: mgcv 1.8-9 Mixed GAM Computation Vehicle with GCV/AIC/REML Smoothness Estimation[END_REF]. In contrast, the relationship between the proportion of failed goshawk breeding attempts and goshawk abundance could be adequately characterised by generalised linear models (GLM) with a binomial error structure. All analyses were carried out in R version 3.0.3 (R Core Development Team 2015). Descriptive statistics are presented as the mean ± 1SD.
RESULTS
Superpredation increased during the colonisation process
Overall, raptors comprised 6% of all identifiable prey killed by goshawks (N = 7763) and represented 4% of goshawk prey in terms of biomass (Appendix 4). There was a per capita increase in superpredation as goshawks recovered, with the proportion of goshawk diet comprised of raptors increasing from 2% to 8% as the number of goshawk home-ranges increased from ≤14 to ≥25. However, the proportion of raptors in goshawk diet was best modelled by an interaction between goshawk abundance and the altitude of the goshawk home-range (Table 1). The contribution of raptors to goshawk diet increased with goshawk abundance in home-ranges in the two lower elevation bands (≤225m and 226-354m). However, there was no significant change in the proportion of raptors in the diet at higher altitudes (above 355m), where the contribution of raptors to goshawk diet was highest (Fig. 1a).
Impact on local mesopredator populations
Kestrels and tawny owls were both ranked within the 10 most important prey species, both in terms of their frequency and biomass contribution to diet (Table 2). Kestrels were the most commonly predated raptor species, representing almost half (49%) of all raptors killed by goshawks (N = 465; Appendix 5). However, this proportion declined from 55% to 39% as the number of occupied goshawk home-ranges increased from <15 to >24 (Appendix 5). Kestrels contributed most to goshawk diet in high altitude home-ranges (Fig. 2a). The number of kestrels estimated to be killed each year by the goshawk population initially increased with goshawk abundance, from 14 [11-18 95% CI] when fewer than 15 goshawk home-ranges were occupied to 223 [197-248 95% CI] when 15-24 home-ranges were occupied. However, it then declined to 176 [154-198 95% CI] when more than 24 goshawk home-ranges were occupied (Table 3). At the same time, there has been a substantial decline in the number of kestrel pairs recorded breeding in the study site. For example, there were 29 breeding pairs in 1981 compared to only five pairs in 2014.
Tawny owls and then sparrowhawks were the next most commonly preyed upon raptor species, representing 23% and 10% respectively of all raptors killed by goshawks (Appendix 5). The contribution of both tawny owls and sparrowhawks to goshawk diet increased as the number of goshawk home-ranges increased; however, there was no evidence to suggest that it varied with altitude (Table 1; Fig. 2b-c). The rank order importance of tawny owls in goshawk diet also increased from 9 to 7 as the number of occupied goshawk home-ranges increased from 15-24 to ≥25 (Table 2). Our estimates suggested there was a huge increase in the number of tawny owls killed by goshawks each year, from an average of 5 [3-8 95% CI] to 159 owls [141-176 95% CI] as the number of occupied goshawk home-ranges increased from <15 to >24 (Table 3). The number of sparrowhawks killed by goshawks was also estimated to increase, from 1 [1-2 95% CI] to 53 [44-61 95% CI] as the number of occupied goshawk home-ranges increased from <15 to >24 (Table 3). Despite the estimated increase in predation on both tawny owls and sparrowhawks, there was no evidence to suggest that local populations had declined.
That is, there was little interannual variation in the number of occupied tawny owl territories, which averaged 56 ± 4.07 between 1985-2014 [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF], and sparrowhawks were known to occupy 7-14 home-ranges between 1974-1979 [START_REF] Petty | Breeding biology of the sparrowhawk in Kielder Forest[END_REF] and 7-16 home-ranges between 2002-2014 (unpublished data).
Assessing food limitation
Declines in important prey species/groups
Almost half (48%) of all identifiable prey items were pigeons (Appendix 4). Wood pigeon (Columba palumbus) and then feral pigeon were the two commonest prey species, irrespective of the number of goshawk home-ranges occupied (Table 2). The proportion of pigeons in goshawk diet declined over the study period as goshawk abundance increased, irrespective of home-range altitude (Fig. 1b; Table 4). For example, the biomass contribution of pigeons to diet declined from 52% to 40% as the number of home-ranges increased from <15 to >24. This decline of pigeons in goshawk diet appeared to be driven by a decrease in feral rather than wood pigeons. The contribution of pigeons to goshawk diet was lowest in higher altitude home-ranges, where moorland habitat predominated (Fig. 1b; Table 4).
Crow/rook, red grouse and rabbit consistently ranked within the top-5 most important prey species, both in terms of biomass and frequency contribution to diet, irrespective of the number of goshawk home-ranges occupied (Table 2). The proportion of corvids and mammals in the diet increased with goshawk abundance (Fig. 2c-d). For example, corvids and mammals comprised 11% and 4% respectively of diet (in terms of frequency) when <15 home-ranges were occupied, but 19% and 8% of diet respectively when >24 home-ranges were occupied (Appendix 4). In contrast, the contribution of game birds (including red grouse) declined as goshawk abundance increased, in all three altitudinal categories (Table 4; Fig. 2e). Although the dietary contribution of corvids and mammals did not vary with altitude, the proportion of game birds (especially red grouse) was noticeably higher for high altitude home-ranges (above 355m, where moorland habitat is more common). Goshawk diet also became more diverse as goshawk abundance increased, with the Shannon-Wiener diversity index increasing by 24%, from 2.1 to 2.6, when the number of occupied goshawk territories increased from <15 to >24.
Declines in reproductive success
Overall, the reproductive success of goshawks declined as the number of occupied home-ranges increased. This decline appeared to be driven by both a decline in the average number of chicks fledging per successful breeding attempt and an increase in the number of nesting attempts failing (Fig. 3). The number of chicks fledging per successful breeding attempt decreased from an average of 2.90 ± 0.24 chicks to 2.29 ± 0.36 chicks as the number of occupied goshawk home-ranges increased from <15 to >24 (Fig. 3a). The proportion of successful breeding attempts declined from an average of 0.81 ± 0.18 to 0.58 ± 0.14 when the number of occupied home-ranges increased from <15 to >24 (Fig. 3b).
DISCUSSION
Superpredation has increased during the recolonisation process
The amount of superpredation in our study site, particularly in recent years, is noticeably higher than recorded elsewhere. For example, raptors comprised up to 8% of goshawk diet in Kielder Forest when goshawk abundance was high (Appendix 4), which is higher than the average of 2% estimated in a review of goshawk diet in Europe (Rutz et al. 2006, Lourenço et al. 2011a).
Whilst many other studies provide a snapshot indication of the frequency of superpredation in a given system, relatively few have documented temporal variation in the frequency of superpredators killing other predators (but see Serrano 2000, Rutz and Bijlsma 2006), particularly in a recovering superpredator population. That there was a per capita increase in superpredation as the goshawk population recovered (Fig. 1a) has potentially important implications for conservation and management, because similar increases in superpredation may be expected in other superpredator populations currently recolonising former ranges in both North America and Europe [START_REF] Maehr | Large Mammal Restoration: Ecological and Sociological Challenges in the 21st Century[END_REF][START_REF] Deinet | Wildlife comeback in Europe: The recovery of selected mammal and bird species[END_REF][START_REF] Chapron | Recovery of large carnivores in Europe's modern human-dominated landscapes[END_REF].
For example, if increases in superpredation negatively affect the dynamics of mesopredator species which are also of conservation concern, it could lead to management conundrums for conservation projects aimed at restoring apex predator populations. However, it is important to note that when apex predator populations were reduced or extirpated, many previously suppressed mesopredator populations increased dramatically [START_REF] Soulé | Reconstructed of rapid extinctions of dynamics birds in urban habitat islands[END_REF][START_REF] Crooks | Mesopredator release and avifaunal extinctions in a fragmented system[END_REF][START_REF] Ritchie | Predator interactions, mesopredator release and biodiversity conservation[END_REF]. Consequently, any declines in mesopredator populations which accompany the restoration of large predators may represent the reversal of this "mesopredator release" process (i.e., mesopredator suppression), rather than a shift to a new state.
Impact on local mesopredator populations
It is important to note that our calculations of how many kestrels, tawny owls and sparrowhawks were killed by goshawks each year are likely to include not only breeding birds (i.e. the ones on which local population counts were based) but also non-breeders (e.g., "floaters"), individuals migrating through the study site (in the case of kestrels) and immigrants from neighbouring populations. Nevertheless, the large increase in the number of kestrels being killed each year (from an estimated 14 to 176; Table 3) coincided with a decline in the local kestrel population, which is consistent with an increase in goshawk predation having a negative impact upon the local kestrel population. However, the decline in kestrels could also be partly related to other factors, such as habitat changes or a decline in the amplitude of field vole (Microtus agrestis) population cycles in the study area [START_REF] Cornulier | Europe-wide dampening of population cycles in keystone herbivores[END_REF], as voles are the main prey for kestrels in our study site. Nevertheless, our results suggest that goshawks were killing a progressively greater proportion of a declining kestrel population, which may have contributed to the study area becoming a sink habitat, as previously suggested by Petty et al. (2003a).
In contrast, the local breeding populations of tawny owls and sparrowhawks did not decline over the study period, despite the substantial increase in the number killed each year by goshawks (Table 3; Fig. 2). This suggests that goshawk predation on tawny owls and sparrowhawks is compensatory rather than additive. Indeed, the impact of goshawk predation on the local tawny owl population is likely to be mitigated by goshawks selectively killing individuals with low reproductive values (e.g. juveniles and old owls, which have a lower probability of surviving and reproducing than prime-aged adults; [START_REF] Millon | Pulsed resources affect the timing of first breeding and lifetime reproductive success of tawny owls[END_REF][START_REF] Millon | Natal conditions alter age-specific reproduction but not survival or senescence in a long-lived bird of prey[END_REF]), thus reducing the overall impact of predation at the population level [START_REF] Hoy | Age and sex-selective predation as moderators of the overall impact of predation[END_REF]. Another factor which may be compensating for the increase in goshawk predation on tawny owls is the increase in immigrants entering the local population in recent years [START_REF] Millon | Dampening prey cycle overrides the impact of climate change on predator population dynamics: A long-term demographic study on tawny owls[END_REF]. Hence, goshawk predation may also have led to Kielder becoming a sink habitat for tawny owls. Unfortunately, we do not have equivalent data for sparrowhawks to be able to evaluate whether changes in immigration and/or selective predation of individuals with low reproductive values were also mitigating the impact of increased goshawk predation.
Goshawks have become food-limited
The substantial decline of two main prey groups (pigeons and game birds) in goshawk diet, the increase in diet diversity and the decrease in goshawk reproductive success are all consistent with the goshawk population becoming increasingly food-limited as the population increased. The decline of pigeons and game birds in goshawk diet over the study period (Figs. 2b and 2c) presumably reflected a decline in the availability of two important prey species, namely feral pigeon and red grouse. Although we cannot directly compare observed changes in the prevalence of feral pigeon and red grouse in diet to changes in their abundance (because regional population trends are not available), there is indirect evidence to suggest that the availability of these two prey species has declined. At a national level there has been a long-term decline in red grouse populations in England, and feral pigeon populations are also thought to have declined by 26% since 1995 [START_REF] Robinson | BirdTrends 2015: trends in numbers, breeding success and survival for UK breeding birds[END_REF]. The decline of feral pigeons in goshawk diet may also be related to a decline in the number of stray racing pigeons entering the forest, because of a sustained decrease in the number of people participating in pigeon racing since the late 1980s (RPRA 2012). Furthermore, there is also anecdotal evidence of a local decrease in the abundance of red grouse and their main habitat over the study period (M.D., personal observation).
The increase in diet diversity we observed also indirectly supports the notion that there has been a decline in the availability of important prey, because when such food becomes scarce, predators are forced to switch to alternative species to make up the shortfall. Indeed, diet diversity was found to be negatively related to the abundance of important prey species for goshawks (Rutz and Bijlsma 2006), sparrowhawks [START_REF] Millon | Predator-prey relationships in a changing environment: The case of the sparrowhawk and its avian prey community in a rural area[END_REF]) and eagle owls (Bubo bubo; [START_REF] Serrano | Relationship between raptors and rabbits in the diet of eagle owls in southwestern Europe: competition removal or food stress?[END_REF]. Thus, together our results are consistent with goshawks switching to alternative prey species (raptors, corvids and mammals) as the availability of preferred prey species (e.g. feral pigeons and grouse) declined and they became food-limited.
The decline in goshawk reproductive success as goshawk numbers increased (Fig. 3) provided additional and independent evidence that goshawks became food-limited, given that goshawk reproductive success is known to be positively related to food availability (Rutz and Bijlsma 2006). A decline in reproductive success could also arise if goshawks had smaller home-ranges in high density years. However, this seems unlikely given that the average distance between goshawk nest sites has varied little since the mid-1980s (mean distance between nest sites = 3.97 km ± 0.43; coefficient of variation = 0.11). A decline in reproductive success with increasing density could also arise if individuals establishing home-ranges in later years were forced to settle in "poor-quality sites" because all the "good-quality sites" were already occupied [START_REF] Rodenhouse | Site-dependent regulation of population size: a new synthesis[END_REF]. Although the biggest decline in reproductive success was observed in territories established towards the end of the study period, reproductive success also declined in territories established in the early and mid-part of the study period (Appendix 6). One likely reason why the decline in reproductive success, and hence in food availability, was population-wide (rather than restricted to certain home-ranges) may be that goshawk hunting ranges overlap (Kenward 2006), such that individuals nesting in the later established "poor-quality sites" may still forage and deplete prey in areas used for hunting by birds nesting in "good-quality sites".
Mechanisms underlying superpredation
The increase in the proportion of raptors in the diet as goshawk abundance increased (Fig. 1), viewed in combination with the results of our other analyses, is consistent with the predictions of the food-limitation hypothesis of superpredation. That is, as the availability of preferred prey (pigeons and grouse) declined, goshawks appear to have switched to alternative, less profitable prey species, such as raptors. Furthermore, the predictions of alternative hypotheses were not supported by our data. For example, if superpredation was purely opportunistic, then changes in the frequency of mesopredator species in the diet would be expected to reflect changes in mesopredator abundance [START_REF] Polis | The ecology and evolution of intraguild predation: Potential Competitors That Eat Each Other[END_REF]). However, the contribution of kestrels to goshawk diet was higher in the later part of the study, despite kestrels declining. Furthermore, only two buzzards were known to have been killed by goshawks, despite a substantial increase in the abundance of buzzards in the forest since 1995 (more than 80 home-ranges now occupied). Our results also do not provide support for the predator-removal hypothesis, given that the raptor species killed by goshawks were of no or little threat either to adult or juvenile goshawks (Appendix 5). Support for the competitor-removal hypothesis is also lacking because the majority (83%) of raptors killed by goshawks are unlikely to compete with goshawks for food, as they are largely dependent on field voles (e.g., kestrels and tawny owls; Appendix 5), yet voles only make up 0.06% of goshawk diet in terms of biomass. Furthermore, buzzards were seldom preyed upon by goshawks, yet they are known to compete with goshawks for nest sites and to kill some of the same species as goshawks [START_REF] Bijlsma | Voedselkeus van Havik Accipiter gentilis, Sperwer A. nisus en Buizerd Buteo buteo in de Flevopolders[END_REF], Krüger 2002a, b). We therefore conclude that food limitation is the most likely driver of superpredation in this system given:
1) the decline in two main prey groups, the increase in diet diversity and the decrease in goshawk reproductive success suggest that the goshawk population has become food-limited;
2) superpredation does not appear to be purely opportunistic, given that variation in goshawk predation on different raptor species did not mirror local mesopredator population trends; and 3) the species of mesopredator killed offer little support for either the predator- or competitor-removal hypotheses of superpredation.
CONCLUSIONS
Here we have provided evidence to show how superpredation varied over time in a recovering population of an apex predator, the northern goshawk. Our results suggest that increasing rates of superpredation were a response to declining food availability (pigeons and grouse) linked to increasing goshawk numbers. Thus, this study offers insights into the mechanisms driving variation in superpredation. We found evidence suggesting that an increase in goshawk predation may be contributing to a decline in the most frequently predated mesopredator, Eurasian kestrels, a species which is also of conservation concern nationally. Thus, our results indicate that superpredation is likely to be an important factor to consider when developing conservation and management strategies for mesopredator species in the future. However, rather than a shift to a new alternative state, we suggest that the decline in kestrel numbers (and their likely persistence in refuges/areas with lower superpredator abundance) possibly represents a reversal of a mesopredator release process (i.e. mesopredator suppression) following the extirpation of goshawks and the decline of other larger raptor species in the UK.
Thus the results presented here may also offer insights into how other raptor communities will change in areas where goshawks are starting to recover. Lastly, in addition to the direct effect that an increase in predation can have on mesopredator population dynamics by increasing mortality rates, it is also important to consider that recovering superpredator populations may be influencing mesopredator dynamics by negatively affecting mesopredator reproductive success. For example, mesopredators are more likely to abandon breeding attempts when superpredator densities are high [START_REF] Mueller | Intraguild predation leads to cascading effects on habitat choice, behaviour and reproductive performance (J Quinn[END_REF][START_REF] Hoy | Food availability and predation risk, rather than intrinsic attributes, are the main factors shaping the reproductive decisions of a long-lived predator[END_REF].
Table 1. Model selection and parameters for the analysis of variation in the proportion of goshawk diet comprised of all raptor species and also kestrels, tawny owls and sparrowhawks separately. We examined whether diet varied in relation to goshawk abundance (number of home-ranges occupied) and the altitude of the goshawk home-range (e.g., below 225m, between 226-354m and above 355m). The most parsimonious model will have a ∆AICc = 0 and is highlighted in bold. The number of parameters estimated in each model is designated in the np column. AICc weights (W) are an estimate of the relative likelihood of a model.
Figure 1. Changes in the proportion of northern goshawk breeding season diet comprised of:
Figure 2. Changes in the proportion of northern goshawk breeding season diet comprised of:
Figure 3. Inter-annual variation in goshawk reproductive success measured as: a) the number
Table 2. The 10 most important prey species in northern goshawk breeding season diet, ranked in order of decreasing importance in terms of both their frequency in the diet and biomass contribution to diet when the number of occupied goshawk home-ranges was estimated to be less than 14, between 15-24 and 25 or more.
Model np All raptors (Estimate SE ΔAICc W) Kestrel (Estimate SE ΔAICc W) Tawny owl (Estimate SE ΔAICc W) Sparrowhawk (Estimate SE ΔAICc W)
1. Null 2 26.74 <0.01 11.87 <0.01 13.51 <0.01 9.36 0.01
2. Goshawk abundance (GA) 4 0.06 0.02 14.46 <0.01 0.02 0.02 12.53 <0.01 0.11 0.03 0 0.84 0.12 0.04 0 0.61
3. Altitude (226-354m) 5 -0.01 0.19 13.2 <0.01 -0.18 0.26 2.7 0.18 0.15 0.35 17.09 <0.01 -0.2 0.49 9.89 <0.01
Altitude (above 355m) 0.95 0.28 0.96 0.38 0.36 0.52 0.92 0.76
4. Altitude (226-354m) 6 0.03 0.18 4.89 0.08 -0.15 0.25 3.94 0.1 0.22 0.34 3.66 0.13 -0.12 0.47 2.66 0.16
Altitude (above 355m) 0.84 0.27 0.95 0.38 0.14 0.5 0.58 0.72
+ Goshawk abundance 0.05 0.02 0.02 0.02 0.11 0.03 0.11 0.04
5. Altitude (226-354m) 8 -0.87 0.68 0 0.92 -0.98 0.93 0 0.71 -0.72 1.62 6.94 0.03 -4.18 1.94 2.07 0.22
Altitude (above 355m) 2.57 1.01 3.18 1.26 -2.98 3.73 -1.22 3.97
Goshawk abundance 0.03 0.03 0.01 0.04 0.08 0.06 0.01 0.06
Altitude (226-354m) x GA 0.04 0.03 0.04 0.04 0.04 0.06 0.16 0.08
Altitude (above 355m) x GA -0.07 0.04 -0.09 0.05 0.11 0.13 0.07 0.15
Table 3. Estimated number of kestrels, tawny owls and sparrowhawks killed during the breeding season (March-August) each year by the Kielder Forest goshawk population when the number of occupied goshawk home-ranges was estimated to be less than 14, between 15-24 and 25 or more.
Species Occupied goshawk home-ranges Estimated % biomass of goshawk diet Average number killed per pair Mean number killed each year by the entire goshawk population 95% CI lower bound 95% CI upper bound
Kestrel < 14 0.47 2.20 14 11 18
Kestrel 15-24 2.29 10.23 223 197 248
Kestrel > 25 1.63 6.44 176 154 198
Tawny owl < 14 0.34 0.70 5 3 6
Tawny owl 15-24 1.69 3.33 72 62 83
Tawny owl > 25 3.32 5.79 159 141 176
Sparrowhawk < 14 0.04 0.21 1 1 2
Sparrowhawk 15-24 0.37 1.66 36 28 45
Sparrowhawk > 25 0.48 1.92 53 44 61
Table 4. Model selection and parameters for the analysis of variation in the proportion of goshawk diet comprised of different prey groups (pigeons, corvids, game birds, mammals and other). We examined whether diet varied in relation to goshawk abundance (number of home-ranges occupied) and the altitude of the goshawk home-range (e.g., below 225m, between 226-354m and above 355m). The most parsimonious model will have a ∆AICc = 0 and is highlighted in bold. The number of parameters estimated in each model is designated in the np column. AICc weights (W) are an estimate of the relative likelihood of a model.
Model np Pigeons (Estimate SE ΔAICc W) Corvids (Estimate SE ΔAICc W) Game birds (Estimate SE ΔAICc W) Mammals (Estimate SE ΔAICc W) Other (Estimate SE ΔAICc W)
Null 2 16.82 <0.01 7.94 0.02 23.01 <0.01 2.60 0.19 6.77 0.03
Goshawk abundance (GA) 4 -0.02 0.01 10.49 <0.01 0.04 0.01 0 0.8 -0.03 0.01 9.66 0.01 0.03 0.01 0 0.68 0.03 0.01 0 0.77
Altitude (226-354m) 5 0.04 0.10 4.04 0.08 -0.11 0.12 10.96 <0.01 0.32 0.22 22.75 <0.01 -0.05 0.17 6.55 0.03 -0.05 0.13 9.21 0.01
Altitude (above 355m) -0.42 0.15 -0.02 0.19 0.55 0.26 0.001 0.26 0.16 0.20
Altitude (226-354m) 6 0.03 0.10 0 0.62 -0.09 0.12 3.55 0.14 0.29 0.21 0 0.64 -0.03 0.17 4.04 0.09 -0.04 0.13 3.20 0.16
Altitude (above 355m) -0.40 0.15 -0.04 0.19 0.85 0.26 -0.06 0.26 0.12 0.20
+ Goshawk abundance -0.02 0.01 0.04 0.01 -0.04 0.01 0.03 0.01 0.03 0.01
Altitude (226-354m) 8 -0.29 0.34 1.54 0.29 -0.12 0.42 5.96 0.04 0.30 0.60 1.15 0.36 0.01 0.64 7.76 0.01 0.32 0.49 6.10 0.04
Altitude (above 355m) -1.29 0.60 0.77 0.71 -0.78 1.15 -0.83 1.39 1.03 0.89
Goshawk abundance -0.03 0.01 0.04 0.02 -0.04 0.02 0.03 0.03 0.05 0.02
Altitude (226-354m) x GA 0.01 0.01 0.002 0.02 -0.001 0.03 -0.002 0.03 -0.01 0.02
Altitude (above 355m) x GA 0.04 0.02 -0.03 0.03 0.06 0.04 0.03 0.05 -0.04 0.03
Prey group Common name Mass (g)
Mammal Rat (Rattus norvegicus) 360
Mammal Red squirrel (Sciurus vulgaris) 200
Mammal Stoat (Mustela erminea) 266.25
Mammal Weasel (Mustela nivalis) 90.25
Other Blackbird (Turdus merula) 100
Other Black-headed gull (Chroicocephalus ridibundus) 290
Other Blue tit (Cyanistes caeruleus) 10.5
Other Budgerigar (Melopsittacus undulatus) 35
Other Chaffinch (Fringilla coelebs) 24
Other Coal tit (Periparus ater) 9
Other Common frog (Rana temporaria) 22.7
Other Common gull (Larus canus) 400
Other Common lizard (Zootoca vivipara) 4
Other Common toad (Bufo bufo) 55
Other Crossbill (Loxia curvirostra) 43
Other Cuckoo (Cuculus canorus) 120
Other Curlew (Numenius arquata) 985
Other Domestic chicken (Gallus gallus domesticus) 1900
Other Eurasian bullfinch (Pyrrhula pyrrhula) 21
Other Fieldfare (Turdus pilaris) 100
Other Goldcrest (Regulus regulus) 6
Other Great spotted woodpecker (Dendrocopos major) 85
Other Great tit (Parus major) 18.5
Other Green woodpecker (Picus viridis) 190
Other Kittiwake (Rissa tridactyla) 410
Other Lapwing (Vanellus vanellus) 230
Other Lesser black-backed gull (Larus fuscus) 830
Other Lesser redpoll (Acanthis cabaret) 11
Other Mallard (Anas platyrhynchos) 1090
Other Meadow pipit/tree pipit (Anthus pratensis/A. trivialis) 19
Other Mistle thrush (Turdus viscivorus) 130
Other Moorhen (Gallinula chloropus) 320
Other Newt (Triturus vulgaris) 30
Other Oyster catcher (Haematopus ostralegus) 540
Other Pied wagtail (Motacilla alba) 21
Other Redshank (Tringa totanus) 120
Other Robin (Erithacus rubecula) 18
Other Siskin (Spinus spinus) 15
Other Skylark (Alauda arvensis) 38.5
Other Snipe (Gallinago gallinago) 110
Other Song thrush (Turdus philomelos) 83
Other Starling (Sturnus vulgaris) 78
Other Swallow (Hirundo rustica) 18.5
Other Teal (Anas crecca) 330
Other Tree creeper (Certhia familiaris) 10
Other Whinchat (Saxicola rubetra) 17
Other Willow warbler (Phylloscopus trochilus) 10
Other Woodcock (Scolopax rusticola) 280
Pigeon Collared dove (Streptopelia decaocto) 200
Pigeon Feral pigeon (Columba livia) 300
Pigeon Wood pigeon (Columba palumbus) 450
Raptor Barn owl (Tyto alba) 300
Raptor Common buzzard (Buteo buteo) 890
Raptor Common kestrel (Falco tinnunculus) 208
Raptor Long-eared owl (Asio otus) 290
Raptor Merlin (Falco columbarius) 205
Raptor Northern goshawk (Accipiter gentilis) † 1000
Raptor Short-eared owl (Asio flammeus) 330
Raptor Sparrowhawk (Accipiter nisus) 205
Raptor Tawny owl (Strix aluco) 470
† Goshawk chicks were only included in the diet if there was evidence to suggest that it was a case of cannibalism rather than fledglings dying in the nest.
ACKNOWLEDGEMENTS
We are grateful to R. Lourenço and A.K. Mueller for their helpful comments. We thank Forest Research for funding all fieldwork on goshawks during 1973-1996, Forest Enterprise for funding fieldwork after 1998 and T. Dearnley and N. Geddes for allowing and facilitating work in Kielder Forest. This work was also partly funded by a Natural Environment Research Council studentship NE/J500148/1 to SH and a grant NE/F021402/1 to XL and by Natural Research. We thank I. Yoxall and B. Little for the data they collected and their contributions to this study. Lastly, we thank English Nature and the British Trust for Ornithology for kindly issuing licences to monitor goshawk nest sites.
Ziesemer, F. 1983. Untersuchungen zum Einfluss des Habichts (Accipiter gentilis) auf Populationen seiner Beutetiere.
Appendix 1: The number of occupied northern goshawk home-ranges in Kielder Forest, UK

Appendix 2: The average number of kestrels, tawny owls and sparrowhawks killed by the goshawk population each year

To estimate the average number of each species killed by the goshawk population each year, we first calculated the average number of each species killed per pair of goshawks, each year when 1-14, 15-24 and 25+ goshawk home-ranges were occupied, using the following equation taken from Petty et al. (2003), which from the definitions below takes the form IK = (CF + CM + CY) × PT / M.
Here, IK is the estimated number of individuals killed by a pair of goshawks between March and August (184 days). CF = estimated total food consumption of a female goshawk during the breeding season (189g of food per day * 184 days). CM = total food consumption of a male goshawk during the breeding season (133g of food per day * 184 days). The daily food consumption values used for male and female goshawks are the same as those used by Petty et al. (2003), originally calculated by Kenward et al. (1981). CY = total food consumption of young goshawks (i.e. offspring) during the breeding season (161g of food per day, i.e. (189 + 133)/2, * 108 days * mean fledged brood size of breeding pairs). The mean fledged brood size of goshawks was 2.19 in years when fewer than 15 home-ranges were occupied, 1.93 when 15-24 home-ranges were occupied and 1.31 when 25 or more home-ranges were occupied. The CY estimate assumes that young goshawks: 1) hatch around mid-May; 2) do not leave their natal territory until August; and 3) have the same overall food intake as adults. Although young nestlings require less food than adults, older nestlings require more, such that when averaged over the entire period nestling food intake can be assumed to be equivalent to that of adults. M = average mass of the prey species. We used an average mass of 208g for kestrel (Ratcliffe 1993), 470g for tawny owl and 205g for sparrowhawk (Robinson 2005). PT = proportion biomass of the prey species in the diet. We used the dietary data to estimate the proportion biomass of each of the three mesopredator species in goshawk diet for each of the three goshawk abundance categories (i.e. using pooled annual diet data collected when the number of occupied goshawk home-ranges was 1-14, 15-24 and 25+). This average proportion was then used in the above equation to calculate the number of individuals of each species killed during the breeding season by a goshawk pair. To estimate the total number of each species killed each year by the entire goshawk population, and how that changed as the goshawk population increased in abundance, we multiplied our estimate of the number of individuals killed by a pair of goshawks (IK) by the average number of home-ranges occupied by goshawks for each of the goshawk abundance categories. The average number of home-ranges occupied in each goshawk abundance category was estimated to be 6.5, 21.75 and 27.38 when 1-14, 15-24 and 25+ goshawk home-ranges were occupied respectively.
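The calculation described above can be checked numerically. The short Python sketch below (variable names are ours) implements IK = (CF + CM + CY) × PT / M with the stated consumption values and the kestrel parameters for the 15-24 home-range period, and recovers the per-pair and population-level estimates reported in Table 3 up to rounding of PT.

```python
def individuals_killed_per_pair(pt, prey_mass_g, brood_size,
                                cf_daily=189.0, cm_daily=133.0,
                                adult_days=184, young_days=108):
    """IK = (CF + CM + CY) * PT / M, as defined in Appendix 2."""
    cf = cf_daily * adult_days                    # female consumption (g)
    cm = cm_daily * adult_days                    # male consumption (g)
    cy = ((cf_daily + cm_daily) / 2.0) * young_days * brood_size  # offspring
    return (cf + cm + cy) * pt / prey_mass_g

# Kestrel, 15-24 occupied home-ranges: PT = 2.29% of diet biomass,
# M = 208 g, mean fledged brood size = 1.93, 21.75 home-ranges on average.
ik = individuals_killed_per_pair(pt=0.0229, prey_mass_g=208.0, brood_size=1.93)
print(round(ik, 2))        # ~10.2 per pair (Table 3 reports 10.23)
print(round(ik * 21.75))   # ~222 per year (Table 3 reports 223)
```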
Appendix 3: List of the species killed by northern goshawks in Kielder Forest, and the taxonomic prey group they were assigned to, along with the body mass used for each species to estimate their percentage biomass contribution to goshawk diet. We were not always able to differentiate between male and female prey remains; consequently, we used the midpoint between the average mass for males and females in our biomass estimates. Body mass estimates for birds were obtained from the British Trust for Ornithology's website (www.bto.org/birdfacts) and mass estimates for mammals were obtained from the Mammal Society's website (http://www.mammal.org.uk).

Appendix 5: Occurrence of raptor species in the breeding season diet of a northern goshawk population in Kielder Forest, UK when the number of goshawk home-ranges occupied each year was estimated to be 1-14, 15-24 and 25 or more.
Species n % Biomass (Total 1-14 15-24 25+) % Frequency (Total 1-14 15-24 25+) % of raptors (Total 1-14 15-24 25+)
01768472 | en | ["chim.poly"] | 2024/03/05 22:32:16 | 2018 | https://theses.hal.science/tel-01768472/file/GRANGE_JEREMIE_2018.pdf
Asaf Avidan Different
Pulses Asgeir
Peacock Tail
Chromakey Dreamcoat
Dayvan Cowboy
A Perfect Circle Rose Blue Imagine
Where to begin?

So many, many people to thank.

First of all, I would like to thank the LCPO for having welcomed and trained me. Over time, the laboratory became a kind of second home where it is good to be, to work and to learn. Thanks to the two directors, Sébastien Lecommandoux and Henri Cramail, for having opened its doors to me.

Of course, I also thank both the members of the Rubbex project and the Michelin company for all the knowledge they brought me throughout this thesis, on natural rubber as well as on polymer chemistry in general and on the industrial world.

Next come my supervisors. What to say? Either a lot, or nothing! I will therefore try to be concise and simply say thank you for everything you brought me, both scientifically and personally. I think you are aware of it, but you are both great mentors to me, and I hope one day to work with you again, whatever the project, to keep learning from you. Thanks again for everything.

And then there are the colleagues. Those who saw (and put up with) me during these 3 years and more. I hesitated to make a list, but I think those concerned will recognize themselves and know very well how much they mean to me (#N1-04autenthique). Thank you all for the encounters, the discussions, the laughter, the debates, and so on. To paraphrase two artists I love very much, I would say that "I am a little of myself, but a lot of all of you when I think about it". Safe travels to all, and I hope to cross paths with you again soon.

Finally, I would like to thank my family and my life Team. Thank you for being there at every moment.

We have been through a lot together… You were there throughout this crazy adventure, these 3 years of permanent burn-out, and you never let me go! You knew how to boost me when needed, to help me keep my head above water at times, and to give me strength and courage when I needed them. Thank you for that, thank you for everything. #CROU
Résumé en français
This research project aims to generate new knowledge on natural rubber (NR) in order to better manage this renewable resource. In particular, it has been widely shown that natural rubber is composed of about 93% very high molar mass 1,4-cis polyisoprene (PI), but also of 7% of other compounds (lipids, proteins, minerals, …). The latter are most likely responsible for the very large difference in properties observed between NR and synthetic rubber. Moreover, the only model accounting for these observations to date is the one described by Tanaka, namely that the PI chains are functionalized at their chain-ends by proteins and lipids (Figure 1). This model, which thus proposes that the PI chains exist rather as copolymers, is widely accepted but nevertheless needs to be evaluated in order to demonstrate that the structure of such a copolymer can account for the superior properties of NR.

The main objective of this PhD project is therefore the synthesis of model copolymers and the evaluation of their properties, in order to better understand the structuring of NR during storage and the hardening generally observed. The synthesis of macromolecules mimicking the Tanaka model can be broken down into several steps:

- Obtaining a fully "1,4-cis" polyisoprene core
- Grafting a phospholipid at the ω position of the PI chain
- Grafting a protein at the α position of the PI chain
In addition, these methods allow perfect control of the chain-ends, with the possibility of synthesizing "homo" or "hetero" telechelic PI. For all these reasons, the controlled degradation of NR was preferred.

First, we started from NR sheets derived from 2 clones (PB235 and RRIM600). We characterized these different NR samples in detail and set up extraction methods so that the polyisoprene chains could subsequently be degraded. The best results were obtained with THF and toluene.

We then decided to carry out the oxidative degradation of the natural PI thus obtained, by epoxidation with meta-chloroperbenzoic acid (m-CPBA) followed by chain cleavage with periodic acid (Figure 2). Various molar masses could thus be obtained with relatively low dispersities. The use of two natural raw materials and one synthetic raw material also made it possible to identify differences in reactivity during the degradation. Indeed, from a certain molar mass value, the degradation becomes difficult or even impossible with natural PI. In the case of synthetic PI, on the other hand, degradation is possible even down to very low molar masses. A working molar mass of 10,000 g/mol was set for the rest of the study, in order to easily evaluate the efficiency of the reactions implemented and to allow good characterization of the (co)polymers.

First, a PI functionalized with two fatty chains of variable length and structure was obtained (Figure 3). DSC analysis of these PI/lipid hybrids showed, in some cases, crystallization and melting of the lipidic chain-end at temperatures that varied with the nature of the grafted fatty chains (Table 1). The highest crystallization temperature was obtained after functionalization of the PI chain with two lignoceric acids (C24:0). The influence of the polymer chain length on this property was also studied, showing a disappearance of the crystallization for PI of 25,000 g/mol. In contrast, the melting and crystallization temperatures obtained for hybrids with a molar mass of 5,000 g/mol are in every respect similar to those presented in the table above. It was therefore proposed that the grafted lipidic chains are able to create crystallization nodules, thus gathering several PI chains together (Figure 4). As these PI/lipid hybrids represent models of natural rubber, it was then proposed to study their crystallization behavior at -25°C, the temperature defined in the literature as the most favorable for PI crystallization. DSC analyses were therefore carried out on 10,000 g/mol PI building blocks functionalized (or not) with fatty chains, in the presence (or not) of free fatty esters. As shown in Figure 5, it was found that functionalization with lipids prevents the crystallization of PI. On the other hand, the addition of free lipids (Figure 6) restored the crystallization of PI. The best result was obtained after adding a mixture of 4% (by weight) stearic acid / 4% (by weight) methyl linoleate to the initial PI/lipid hybrid (here PI/stearic acid). It was therefore concluded that, while the functionalization of PI with fatty chains prevents its crystallization at -25°C, the addition of free lipids has, on the contrary, a beneficial effect on the crystallization of the polymer. These results are in every respect similar to those existing in the literature for NR, thus proving that the synthesized PI/lipid hybrid is a good model of natural rubber.

Second, the grafting of a protein at the α position of the main PI chain was studied. Modified, protein-accepting PI building blocks (Figure 7) were synthesized via the insertion, at the polymer chain-end, of a maleimide function that is highly reactive toward the thiol functions of proteins (cysteines). Two proteins were used for coupling with the PI chain: Lipase B from Candida antarctica (CALB) and bovine serum albumin (BSA). In both cases, coupling trials were carried out, forming very stable emulsions (Figure 8). Several attempts to characterize these emulsions were made (SEC, SDS-PAGE, NMR) but did not make it possible to determine clearly whether or not the PI/protein hybrid had formed.

On the other hand, optical microscopy analysis of these emulsions showed a difference in the sizes of the droplets formed, particularly in the case of the PIMal/BSA coupling. Although these analyses do not constitute sufficient proof, they seem to support the formation (most likely in low yield) of the copolymer. An alternative strategy was also investigated, relying on PI macroinitiators bearing amino-alcohol functions at the chain-end (Figure 9) to grow polypeptide blocks directly from the PI chain.

Various PI/polypeptide copolymer architectures were thus synthesized, paving the way for the synthesis of a polypeptide-PI-lipid triblock copolymer close to the Tanaka model.
Introduction
For approximately 20 years, with the growth of environmental issues such as global warming and the depletion of fossil feedstocks, the chemical industry has been slowly adapting and moving toward more sustainable processes and starting materials. A wide variety of new materials (monomers, polymers, composites, etc.) have emerged, either inspired by or directly taken from natural resources (lignin, vegetable oils, celluloses, polysaccharides, etc.). More generally, this new approach usually starts from the observation of Nature, followed by the understanding and/or characterization of the desired new process/molecule/material, and either its direct use or its modification to replace an existing material presenting an environmental or economic issue (such as petro-sourcing). Figure 1 presents a schematic diagram of various biorefinery processes, summarizing the raw materials of interest and the final valuable products accessible through this "greener" approach, bearing witness to the change of mindset that chemistry is currently undergoing.
Depending on the plant species, the microstructure of natural polyisoprene varies from 100% 1,4-trans for Gutta-Percha to 100% 1,4-cis for the Hevea tree.
NR from Hevea exhibits thermo-mechanical properties (fatigue resistance, high hysteresis, etc.) that make it essential for some applications like plane or truck tires, medical materials or seismic anti-vibration systems. These astonishing properties cannot be fully mimicked by synthetic rubbers, as their origins are still not completely understood. Furthermore, as NR comes from biomass, its structure is highly dependent on parameters such as the season, the age of the tree and the nature of the soil where it is farmed; the crop can also face microbiological attacks blocking the production, causing variations in industrial processes 4. Isoprene rubber (IR) is the synthetic homologue of NR. It is obtained from the polymerization of isoprene, a monomer extracted from petroleum cracking fractions. This material presents several advantages compared to NR, such as a low production cost and less structural variation once the synthetic process is optimized. IR was extensively studied in order to become independent from NR, but it was rapidly established that it could not compete with the mechanical properties of the natural material.
Among all the studies trying to determine the origins of the property differences between IR and NR, Tanaka and co-workers summarized about 30 years of their own research [5][6][7][8] in this field and produced the only versatile explanation to date, postulating that the polyisoprene (PI) chain present in NR is not a simple linear polymer chain but is substituted in α and ω positions by a proteinic and a lipidic moiety, respectively (Figure 3). Tanaka explained that such a molecule could self-assemble (Figure 4), creating micro-domains of lipids and/or proteins and thus forming a physical network. Considering the presence in the material of free lipids and proteins, these anchors can be formed either only by the chain-ends of various linear PIs or with the inclusion of free compounds. This model also accounts for the self-assembly of rubber particles in aqueous media (i.e. in latex), with the formation of a lipidic membrane stabilized by the amphiphilic behavior of proteins (Figure 5), thus forming particles in water. Surprisingly, the veracity of this model has never been checked directly in the literature.
The goal of this PhD work is therefore to synthesize the "tri-block" molecule depicted in Figure 3 and to study the properties of such a copolymer, to validate (or not) the model proposed by Tanaka. To this end, a general chemical pathway was established starting from a pure 1,4-cis hetero-telechelic PI, which will be functionalized at both chain-ends, by a lipidic moiety on one side and a protein on the other. Attention will also be paid to both di-blocks, "PI-Lipid" and "PI-Protein", as neither of them has been reported in the literature so far. Figure 6 summarizes the different pathways considered during this PhD work. The choice of these pathways will be justified throughout the different chapters of the document.
In a first chapter, a bibliographic study will present generalities about both IR and NR: NR biosynthesis and properties, the main property differences between NR and IR, and an overview of the synthesis of IR. A brief summary of the existing literature describing Polymer/Protein and Polymer/Lipid coupling will follow, showing that PI has never been particularly studied in this field, which underscores the challenge of this PhD thesis.
The following chapters will focus on the synthetic pathway. First, the synthesis of a pure 1,4-cis heterotelechelic PI with varying polymer chain lengths will be presented, and several reaction parameters will be studied. Then, the coupling of PI with lipids will be addressed, and the different properties of these hybrid materials will be investigated in detail. Finally, the coupling of PI with proteins, which is the most challenging part of the thesis, will be presented. This manuscript will end with a general conclusion.
I. Natural Rubber

a. Generalities

It has been reported [3][4][5][6] that nowadays the genomic diversity of the cultivated Hevea brasiliensis is very low, which could lead to crop failure or fungal issues and thus to a decrease in the global supply of the material. For this reason, the work of the team of Cornish focuses mainly on finding alternative sources of NR with properties comparable to those of the rubber extracted from Hevea brasiliensis. One good candidate for this substitution is the rubber extracted from Guayule, which is comparable to traditional NR but is considered non-allergenic, contrarily to NR from Hevea brasiliensis which contains many allergens, mostly due to its protein content.
However, Guayule presents one major drawback: the rubber is produced in all compartments of the plant (leaves, roots, branches, etc.). Consequently, the recovery of the rubber is difficult.
The shrub has to be cut, milled and the rubber is then extracted with organic solvents. On the contrary, with Hevea brasiliensis, the rubber can be obtained directly by tapping the tree.
Figure 1 shows the tapping of a Hevea tree and the recovery of the latex. By centrifugation, this latex can be fractionated into three main parts: the cream, which is the hydrophobic part of the latex (mainly composed of rubber particles); the C-serum, composed of the hydrophilic components (proteins, sugars, minerals, etc.); and the lutoids, also called the "bottom fraction" in the literature. By coagulation of the latex under acidic conditions, an NR "ball" can be directly recovered, affording a processable material. The rubber particles contained in the latex are stored in the laticifers, special plant channels (vessels) devoted solely to the transport and storage of latex 7. To date, no particular evidence has helped in understanding why plants produce rubber, as no specific role of this material in plant physiology has been reported 8. It was, nevertheless, proposed that latex could be used by the plant as a protection. Two size populations exist among rubber particles (RPs), one small (about 0.2 µm) and one big (about 1 µm), with only a few differences reported between them. In both cases, they are composed of a core of PI surrounded by a lipidic membrane where proteins and other rubber components can be adsorbed 1 (Figure 2). Berthelot et al. 9,10 proved that among all the proteins present in Hevea brasiliensis, the two predominant ones are REF (Rubber Elongation Factor) and SRPP (Small Rubber Particle Protein), two relatively small (15 kDa and 24 kDa, respectively), rather hydrophobic proteins which are located on different RPs (REF is adsorbed on the bigger RPs while SRPP is part of the membrane of the small RPs). The exact role of these proteins is still not perfectly understood, but first insights suggest that SRPP plays a role in latex coagulation and that both of them have a positive effect on rubber biosynthesis. More generally, NR is a complex material, as it is constituted not only of a hydrocarbon polymeric chain but also of a "non-isoprene" part. This part usually represents around 6 wt% of the dry material and its composition is highly dependent on the clonal origin of the rubber, the meteorological conditions, the nature of the soil where the tree was grown, etc.
This "non-rubber" part is composed of proteins, carbohydrates, lipids and inorganic constituents and represents the main compositional difference with IR. It is thus believed to be involved in the specific and superior properties of NR. Table 1 gives average values of the NR composition as reported by Vaysse et al. 11 The lipid content of various clones of NR [12][13][14] was extensively studied by Vaysse and Liengprayoon, highlighting the variation of composition as a function of the clonal origin previously mentioned. Regarding the polymer characteristics, NR is usually of high molar mass (~1 000 000 g/mol) with a quite broad dispersity (> 2). Moreover, it usually exhibits a bimodal molar mass distribution. It must be underlined that when NR is solubilized in organic solvents (toluene, THF, cyclohexane, DCM, etc.) a gel fraction is obtained, which can also vary with the clonal origin of the rubber and is assumed to be formed along the biosynthesis of the material, due to chain branching or even physical cross-linking 2 induced by the presence of "non-rubber" constituents conferring self-assembly properties to the polymeric chain.
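As an order of magnitude, this molar mass can be converted into a number-average degree of polymerization. The following back-of-the-envelope check is only illustrative, taking the isoprene repeat unit mass M0 = 68.1 g/mol and the Mn of ~10^6 g/mol quoted above:

\[
DP_n = \frac{M_n}{M_0} \approx \frac{10^{6}\ \text{g/mol}}{68.1\ \text{g/mol}} \approx 1.5 \times 10^{4}
\]

In other words, each NR chain corresponds to roughly fifteen thousand enzymatically condensed isoprene units.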
b. Biosynthesis of NR
Many studies have been conducted to understand the biosynthesis of NR 15 and are still ongoing nowadays 16. The overall process belongs to the larger family of isoprenoid (low molar mass compounds) biosynthesis 1. The monomer used for the synthesis of NR is isopentenyl pyrophosphate (IPP, Figure 4). It is produced from sucrose via the cytosolic mevalonate pathway (in the cytoplasm) or the methylerythritol pathway (in chloroplasts or bacteria) 11. Once IPP is obtained, it is isomerized to dimethylallyl pyrophosphate (DMAPP, Figure 4) by an enzyme named IPP isomerase. DMAPP was originally thought to be the initiator of the polymerization reaction 17, and thus PI from NR was supposed to be composed of a long chain of 1,4-cis units starting from one dimethylallyl moiety. However, ozonolysis of NR and NMR studies contradicted this assumption. Indeed, no trace of acetone formation (which would come from ozonolysis of the dimethylallyl moiety) was observed for Hevea brasiliensis NR, and a small amount of 1,4-trans units was detected by 13C NMR analysis 2,7,18. The propagation proceeds through successive condensations of IPP onto the allylic pyrophosphate chain-end, with loss of the pyrophosphate group to form a carbocation, and then a removal of a proton to recover a double bond in the cis configuration. This is known to take place in the active site of an enzyme called cis-prenyl transferase or rubber transferase, which is adsorbed/encapsulated in the lipidic membrane of the rubber particles. A metallic cofactor (Mg2+) is needed for the enzyme to be active (Figure 6). Cornish et al. 5,[19][20][21] reported on the structure of the cis-prenyl transferase, representing it as a "tunnel" where the initiator enters, binds to the active site and starts the polymerization. The presence of a hydrophobic part in the cavity, interacting with the synthesized backbone and pushing the polymer inside the RP while the chain is growing, was also suggested.
Figure 6: Mechanistic aspects of the rubber biosynthesis
Finally, the last grey area in the biosynthesis machinery concerns the termination step and the nature of the chain-end of the polymer. To date, no particular evidence exists to explain the termination step. It seems that at some point the growing chain disconnects from the active site of the enzyme and is packed into the core of the RPs. It was originally thought that the polymer would grow until it reached a limit molar mass, thus triggering the termination step. Nevertheless, this is in contradiction with the dispersity generally observed in NR, which is usually broad, while such a limit value of molar mass would induce a really narrow dispersity. Regarding the terminal chain-end of the polymer, infrared and NMR analyses carried out by Tanaka et al. 2,[22][23][24] showed the existence of fatty acid chains linked to the PI backbone. It is generally accepted that the terminal group comes from the degradation of the pyrophosphate moiety, but no evidence of a hydroxyl function coming from its hydrolysis was observed. The main possibility described is the esterification of the chain-end with lipids, either in the form of phospholipids or of simple ester groups formed after hydrolysis of the pyrophosphate group and reaction with a free fatty acid.
To conclude, this sub-chapter briefly presented NR and its biosynthesis. An overall vision of the knowledge in this field was given by Cornish and is reported here in Figure 7.
This figure exposes the complexity of the biomachinery involved in this interfacial polymerization. This vision is also a starting point for various works trying to mimic this biosynthesis, using cationic polymerization for example, as will be developed later in the manuscript.

c. Tanaka's model of NR

In Tanaka's work, the lipidic chain-end is referred to as α and the proteinic one as ω. This choice is questionable, as the term α in polymer chemistry usually refers to the chain-end coming from the initiator, which is, in the case of NR, the protein part. For the sake of homogeneity, in the whole manuscript, the α chain-end will refer to the proteinic one and the ω chain-end to the lipidic one.
i. The trans units at the α-chain-end
As explained before, the presence of trans units at the beginning of the polymeric chain is due to the condensation of IPP by a trans-prenyl transferase. As the molar mass of NR from Hevea is too high for precise NMR analysis, Tanaka studied NR from other sources (mushroom 25, Goldenrod leaves 26 and also short polyprenols 27) to characterize their structure and compare them with Hevea NR after fractionation, so as to analyze only the shortest chains (Figure 8). Starting from the study of Goldenrod, several dyads of trans and cis units were assigned. Moreover, for Goldenrod and polyprenols, it is known that the biosynthesis proceeds starting from DMAPP, and thus the signal of the terminal dimethylallyl group in these model molecules (named ω in the NMR spectrum) can be observed. Finally, it was demonstrated that this dimethylallyl group was linked to the trans units and that no cis-trans dyad could be observed in the model molecules, but only trans-cis and cis-cis dyads, which was expected given the "pure" microstructure of NR. Goldenrod rubber and polyprenols exhibit the same structure: a DMAPP moiety as chain-end, a couple of trans units and then a varying amount of pure cis structure. Comparing with NR from Hevea, the same trans signals were visible, which explains the trans units in Tanaka's model.
ii. Origin of the α proteinic part
It is the most controversial part of the model. Originally, the structure of the α chain-end was supposed to be a dimethylallyl group. However, ozonolysis of NR did not show the formation of acetone and 13C NMR analysis (see § 1.3.1.) showed the absence of the dimethylallyl group in NR. To clarify this point, deproteinized NR (DPNR) was produced by the use of proteolytic enzymes and acidic or basic treatments 28. Figure 9 presents FTIR spectra of different types of NR bearing a decreasing amount of proteins. It can be seen that a small band around 3300 cm-1 decreases with the amount of nitrogen present in the polymer. By comparison with model oligopeptides, it was concluded that if, even after harsh removal of proteins, the same absorption band as for oligopeptides is still present, it is because the remaining proteins are linked to the polymer backbone. Nevertheless, this conclusion was denied by Tanaka himself 29, as it was demonstrated that this band was in fact also present in IR, showing that it cannot be a proof of linkage with proteins. 23 Besides, the gel content decreased with the removal of proteins, attesting that some of the branching points of the material somehow involved proteins. The current hypothesis is that the real α chain-end is still unknown but may interact (at least physically) with proteins or oligopeptides and play a role in the mechanical properties of the material.
iii. Origin of the ω phospholipidic part
The last part of the model is the linkage with phospholipids. As explained before, many lipids are present in the latex, either linked or not to the polyisoprene chains. Free lipids can be removed by Soxhlet acetone extraction. The obtained polymer is referred to as acetone-extracted natural rubber (AE-NR) and, when the extraction is applied to DPNR, the polymer is called AE-DPNR. Moreover, linked fatty esters can be removed from the polymer backbone by transesterification using sodium methanolate in toluene as the solvent. The obtained polymer is referred to using the prefix "TE" for "transesterified" (for example, transesterified acetone-extracted deproteinized natural rubber will be referred to as TE-AE-DPNR). Moreover, AE-DPNR was shown to contain about one phosphorus atom and two fatty chains per chain of PI, leading to the conclusion that the linked fatty esters are phospholipid derivatives 22. Besides, 31P NMR analysis performed on AE-DPNR revealed the presence of characteristic signals of both mono- and diphosphate moieties 30,31. Tanaka then postulated that these chain-ends arise from the pyrophosphate group present during the propagation, which, after chemical modification, either by direct condensation of a phospholipid moiety or by hydrolysis (yielding a terminal hydroxyl group) followed by esterification, leads to a lipid-terminated PI.
Furthermore, it was shown that the enzymatic removal of phospholipid moieties induced a larger decrease of the gel fraction than the removal of proteins. Finally, after dissolution in toluene, it appears that the addition of a small amount of a polar, protic solvent (methanol) could partially break the gel fraction, indicating that the branching points could be attributed to hydrogen bonding between chains, and especially between phospholipid moieties.
iv. Conclusion
In conclusion, Tanaka's model of NR is, to date, the only model linking the biosynthesis of NR, the physico-chemical properties of the final polymer and its internal structure. Nevertheless, a lack of information subsists regarding the α chain-end, which has not been clearly identified.
d. Cold crystallization of NR and IR
Cold crystallization (CCr) of polyisoprene refers to the capacity of the material to crystallize after being maintained at low temperature for a given time. Many works focused on this property, due to the fact that NR exhibits a quicker CCr than its synthetic analogue. This ability of the natural material is one of its main particularities that has not been clearly elucidated yet.
From the pioneering work of Wood 32 in 1946 to the more recent work of Kawahara 33 in 2004, much research focused on a better understanding of CCr for both IR and NR. Wood 32 studied many parameters, such as the rate of crystallization of NR or the temperatures at which crystallization can be observed. NR exhibits its highest rate of crystallization at -25°C, where half of the final crystallinity is obtained within 2.5 hours. It was also shown that the rate of crystallization follows a Gaussian plot as a function of temperature (Figure 11). For instance, samples of NR kept at -50°C or -78°C for 3 weeks did not exhibit any crystallization; the same samples underwent crystallization when the temperature was raised to -35°C.
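Isothermal crystallization data of this kind are often summarized with the Avrami equation; the sketch below is only illustrative (the Avrami form and the exponent n = 3, typical of three-dimensional growth, are assumptions, not values fitted by Wood). Using the half-crystallization time t1/2 = 2.5 h at -25°C:

\[
\frac{X(t)}{X_{\infty}} = 1 - \exp\!\left(-k\,t^{\,n}\right), \qquad k = \frac{\ln 2}{t_{1/2}^{\,n}} \approx \frac{0.693}{(2.5\ \text{h})^{3}} \approx 0.044\ \text{h}^{-3}
\]

With these assumed values, X(3 h)/X∞ ≈ 1 - exp(-0.044 × 27) ≈ 0.70, of the same order as the ~75% after 3 hours of isotherm quoted in the conclusion of this sub-chapter.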
Below -50°C, the crystallization of NR thus becomes negligible, as it does for temperatures higher than 14°C (where the time for total crystallization was evaluated to be about a year). The CCr of IR was then compared with that of NR. Burfield and Tanaka 35,36 showed that IR, even with a high rate of 1,4-cis units, exhibits really poor crystallization compared to DPNR (i.e. 10% of its crystallinity). Moreover, the rate of crystallization decreased with the reduction of the 1,4-cis content. It might be noted that AE-DPNR still crystallizes quicker than the best IR studied. As for NR, stearic acid can also work as a nucleating agent for IR, enhancing its speed of crystallization. In conclusion, both the non-isoprene components of NR and its pure microstructure are responsible for its superior properties. Many other studies were performed to better understand the CCr of NR. For example, Tanaka showed a great decrease in the speed of crystallization of AE-NR compared to NR.
Nevertheless, after transesterification of the NR, the initial speed of crystallization was recovered (Figure 12a). When TE-DPNR and AE-DPNR were doped with 1% (w/w) of lipids (methyl linoleate (ML) and stearic acid (SA) were selected as they represent the main lipids present in NR) (Figure 12b and c), whatever the fatty compound added, the resulting materials exhibited a higher rate of crystallization, the highest one being obtained with AE-DPNR doped with free fatty chains. It was thus suggested that the higher rate of crystallization of NR comes from a synergistic effect between linked and free fatty chains. Nevertheless, this synergistic effect is visible only in the case of AE-DPNR doped with methyl linoleate, and not really in the case of doping with stearic acid. 37 Tanaka also investigated the doping of IR with several fatty acids or esters. A plasticizing effect of some unsaturated fatty acids and of their corresponding methyl esters was shown (Table 2). For instance, the addition of 30 wt% of linoleic acid (LiA), methyl oleate or methyl linoleate significantly decreases the Tg of IR (by ~30°C).
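The magnitude of this depression is consistent with a simple miscible-diluent picture. As an illustrative check only (the use of the Fox equation and both Tg values are assumptions, not data from the cited work), taking Tg(IR) ≈ 208 K and a depression of ~30 K at a lipid weight fraction of 0.30:

\[
\frac{1}{T_{g}} = \frac{w_{PI}}{T_{g,PI}} + \frac{w_{lipid}}{T_{g,lipid}} \quad \Rightarrow \quad \frac{1}{178\ \text{K}} = \frac{0.70}{208\ \text{K}} + \frac{0.30}{T_{g,lipid}} \quad \Rightarrow \quad T_{g,lipid} \approx 133\ \text{K}
\]

i.e. in this crude model the observed drop corresponds to a very low-Tg liquid ester acting as a classical plasticizer.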
According to the authors, IR samples mixed with even a huge amount of unsaturated fatty acids formed transparent materials (films), contrarily to samples mixed with saturated lipids. This transparency of the final admixture is considered as a proof of solubility of the fatty acid in the polymer matrix, while translucency is a proof of immiscibility for pure aliphatic fatty chains. This difference of behavior could also be related to the nucleating effect of stearic acid and the apparent plasticizing effect of linoleic acid and methyl linoleate. 38 Attempts were also made to mimic the CCr of NR by both chemical modification and doping of a high "1,4-cis" rate IR (~98%) 39,40. Using the existing pendant 3,4-units, Kawahara et al. introduced hydroxyl functions onto the polymeric backbone in order to graft various fatty acids and mimic the linked fatty acids of NR. The amount of fatty chains linked to IR was 0.53 wt%, as determined by infrared analysis, corresponding to an average of 4 fatty chains linked per polymer chain.
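These two figures can be cross-checked against each other. The sketch below is only indicative, assuming a fatty-chain molar mass of ≈280 g/mol (a C18 chain; both this value and the resulting Mn are assumptions, not data from the cited work):

\[
M_{n,\,PI} \approx \frac{n_{lipid} \times M_{lipid}}{w_{lipid}} = \frac{4 \times 280\ \text{g/mol}}{0.0053} \approx 2.1 \times 10^{5}\ \text{g/mol}
\]

an order of magnitude indeed typical of high-cis IR, so the two reported numbers are mutually consistent.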
After mixing this hybrid polymer with 1 wt% of ML, the rate of CCr increased to a level still lower than that of DPNR but much higher than that of the initial IR (Figure 13). It was also shown that this behavior depends on the nature of the linked fatty acid (Figure 14). 39
Finally, it was demonstrated that the crystallization of PI shows an unusual behavior 41,42. Indeed, crystallized PI exhibits two different endothermic peaks during melting (Figure 15). Kim 41 explained that both transitions correspond to different types of crystallites, which are similar in structure (same X-ray diffraction pattern) but may differ in their stability and rate of formation. In the literature, both endotherms are always referred to as the α and β transitions, for the high and low crystallization temperatures, respectively.
As shown in Figure 15, the β transition is the slowest, as it appears only after quite a long time of crystallization. The proportion between the two transitions also varies with the degree of crystallinity: the higher the overall crystallinity of the PI, the higher the proportion of the β transition. It was also demonstrated that the propagation axes of the crystals are different, as well as their thickness 42. For IR, after full crystallization of the sample, the proportion of the β transition is bigger than in NR.

In conclusion, the origin of the fast and high CCr of NR has been widely studied for the past fifty years, as it is one of the main properties that cannot be reproduced with high 1,4-cis rate IR. The highest speed of crystallization is obtained at -25°C, reaching 75% of the final crystallization after 3 hours of isotherm. It was demonstrated that this phenomenon is most probably due to the presence of fatty acids in NR, both free and linked fatty chains playing a role. The nature of the fatty acids present in the material is also important, as SA exhibits a nucleating capacity toward PI whereas ML and LiA are described as plasticizers of the polymer. Both effects were described as acting synergistically in the case of NR. Tanaka and Kawahara synthesized a model of NR by grafting different fatty acids onto an IR. Nevertheless, the amount of 1,4-cis units of this model is not 100% and the grafted lipids are connected along the polymeric backbone, not at the chain-ends. This changes the weight proportion between the lipidic and polymeric chains and gives different properties.
Indeed, this model does not exactly follow the behavior of NR, as the CCr of IR linked to fatty chains is higher than that of the pure IR (the reverse of NR, if comparing TE-NR with AE-NR). It may thus be possible to get closer to Tanaka's model of NR in order to improve the understanding of the phenomenon.
e. Strain-induced crystallization of NR and IR
Strain-induced crystallization (SIC) is the second major difference of behavior between IR and NR. Industrially, it is a property of high interest, as it corresponds to the response of the material under loading, which has an impact on the applications of both IR and NR.
Even if SIC was first established in 1925 43, the reason for the superiority of NR over its synthetic analogue is still under debate. A brief overview of both positions developed in the literature is proposed here, without claiming to be exhaustive.
Both IR and NR (vulcanized or not) exhibit the SIC phenomenon (even if, for IR, it requires a high 1,4-cis rate) and NR usually presents higher mechanical properties, the largest performance gap being visible in the non-vulcanized state. Stress-strain plots for non-vulcanized IR and NR (Figure 16a, black and red respectively) and for vulcanized ones (Figure 16b, IR in black and NR in red) showed that even in the non-vulcanized state NR exhibits a cross-linked behavior, with a decrease of stress upon the decrease of strain. This behavior could explain the fast SIC of unvulcanized NR, as the PI chains are easier to align (under stress) thanks to the cross-linking. In comparison, the stress observed in the case of non-vulcanized IR rapidly collapses with the increase of strain, the material behaving like a linear polymer. This particular behavior was correlated to the existence in the natural material of anchor points at both chain-ends (Tanaka's model) [44][45][46][47], creating a pseudo-network that could explain the crosslinked-like behavior of NR. It could also be a clue toward the understanding of the high tensile strength of vulcanized NR compared to vulcanized IR 48. Nevertheless, this structural organization exhibited by NR does not give any information about the quicker SIC of NR compared to IR in the vulcanized state (Figure 17). Indeed, the SIC of vulcanized NR is still quicker than that of the synthetic homologue, and the overall rate of crystallization of NR is also higher than that of IR. It is also noteworthy that the gap between crystallization and melting is the same for both materials 49, the values for NR being simply shifted to lower values.
In Figure 17b, it can be seen that after total crystallization, vulcanized NR remains crystalline at higher temperature than the synthetic sample. These properties are an example of the limitations of the "pseudo-network" theory, as it cannot be used to explain the high crystallinity nor the fast crystallization of vulcanized NR under low strain rate. 49 By analogy with CCr, it was suggested that the slow SIC of IR could come either from the absence of non-rubber constituents like fatty acids 50,51, which were demonstrated to be nucleating agents of PI in the case of CCr, or from defects induced by the lower rate of 1,4-cis units in the synthetic PI 44,52. Nevertheless, the use of stearic acid as a nucleating agent for SIC was demonstrated to be a weak hypothesis by Kohjiya et al. 50, as the addition of various amounts of SA did not accelerate the SIC of IR. The starting point of the SIC could be the alignment of the polymer chains, which crystallize and thus play themselves the role of nucleating agent. This would be more favorable to the stereoregularity theory, as the better the microstructure control, the higher the crystallization. This was also confirmed by Toki et al. 52 by studying the SIC of NR and of two IRs bearing different microstructures. They demonstrated that, indeed, the higher the 1,4-cis content, the higher the SIC, but they could not relate their results to the high tensile strength of NR, as samples with various 1,4-cis contents exhibited similar values.
To the best of our knowledge, to date, the origin of the superior mechanical properties of NR (vulcanized or not) compared to IR is still not perfectly clear. Both hypotheses ("non-rubber constituents" or "high rate of 1,4-cis") explain part of those properties, but an overall explanation is still awaited.
II. Synthesis of Polyisoprene

a. Introduction
Isoprene is a quite versatile monomer which can be polymerized by all the traditional techniques (i.e. anionic, cationic, radical, metathesis, coordination-insertion). A good control of the microstructure is crucial, as the proportion of each isomer (Figure 18) highly influences the properties of the final polymer. In the frame of the work presented here, the highest possible rate of 1,4-cis units was necessary to get as close as possible to Tanaka's model.
Moreover, in order to selectively functionalize the polymer backbone in α or ω position, a good control of both chain-ends, as well as different reactive functionalities at the two chain-ends, was required. In this sub-chapter, the different synthetic methods are compared in order to determine whether any polymerization method could be used in our case.
b. Cationic polymerization
Cationic polymerization of isoprene might be both the most promising way of synthesizing PI and the most challenging one 53. Indeed, as described previously, the mechanism involved in the biosynthesis of NR is close to a cationic process 54. In general, the cationic polymerization of isoprene follows the mechanism presented in Figure 19. The initiating step is the ionization of the initiator to form either a carbocation or a simple proton (in the case of water as initiating agent), which then reacts with a first isoprene molecule to form a tertiary carbocation. This carbocation can rearrange into various mesomeric forms before propagation.
This rearrangement is responsible for the formation of various configurations in the final polymer. The terminating group is variable, as many termination reactions can take place in cationic polymerization: β-elimination of a proton affording a terminal diene (the principal termination observed), transfer to water giving a hydroxyl group at the end of the chain, and termination by the counter-ion putting the "X" function back at the end of the chain. Table 3 gives an overview of the various conditions used to perform this polymerization, highlighting the great number of studies since the 1960s. As reported, even if the results are highly dependent on the conditions, some general trends can be established:
- the microstructure of cationic PIs is mainly 1,4-trans, contrarily to NR which bears only 1,4-cis units;
- a high amount of double bonds is lost, associated with a high Tg. This was attributed to the formation of mono-, di- or tri-cycles resulting from side reactions, creating either pendant or internal aliphatic rings, which increases the glass transition temperature of the final polymers as well as the chain rigidity; their exact structure remains unclear;
- cationic PIs are also characterized by a broad molar mass distribution, mainly due to extensive chain-transfer reactions; moreover, most of the time, these PIs are partially cross-linked.

Table 3: Different conditions used for cationic polymerization of isoprene - Reproduced from Ouardad et al. 53

More recent studies by Kostjuk and Peruch brought new interesting results 53,55-64. They worked on cationic polymerization in emulsion and also tried to mimic the biosynthesis of NR using analogs of the natural monomer and initiator. The first publication about water-phase cationic polymerization of isoprene was reported in 2011 by Kostjuk et al. 65. Their pioneering work was compared with a traditional approach using organic solvents. Table 4 reports the results obtained varying the solvent from dichloromethane to α,α,α-trifluorotoluene (BTF).
Whatever the solvent used, an important loss of double bonds is still visible, even if it can be limited by decreasing the concentration of the initiating system.
It is also noteworthy that the higher the conversion, the higher the loss of double bonds and the broader the distribution of molar masses. The microstructure is highly 1,4-trans, as usually reported for cationic PIs. Table 5 gives the results obtained for isoprene polymerization in aqueous media. Three conditions were tested: suspension, dispersion and emulsion. The polymerization in water allows pretty high rates of double bonds after the reaction (~98%) and a quite narrow molar mass distribution. This improvement means that no side reaction occurs: as most of the potential side reactions come from β-elimination of a proton, it seems that even if a proton is released, its affinity with water will "wash it" away from the polymer backbone and thus protect the polymer. Molar masses are nevertheless quite low with this process (< 1 200 g/mol) and did not change much even with further addition of monomer.

Figure 20 briefly describes the polymerization process in the case of polymerization in suspension, but it can be generalized to dispersion and emulsion simply by addition of an organic solvent or a surfactant (respectively) to the "PI/isoprene droplet" presented in the scheme. The polymerization proceeds at the interface with a Lewis acid (LA) stable in water and remaining active in aqueous media. As the monomer and initiator are mostly hydrophobic, it is assumed that initiation starts at the interface, where the chains continue to grow. The authors attributed the flattening of the molar masses to a "DP effect": above a certain degree of polymerization, the polymer chain becomes too hydrophobic, loses its charge and goes deeper into the organic droplet.

A couple of years later, Kostjuk et al. 58 went further in the aqueous polymerization of isoprene using a special family of LAs: Lewis acid surfactant combined catalysts (LASCs). They used a complex of ytterbium with sodium dodecyl benzene sulfonate that exhibited both surfactant properties and the capacity to carry the catalyst into the organic phase (i.e. the PI/isoprene droplet from Figure 20), allowing the polymerization to proceed without any "DP effect". In this case, protons coming from the interaction of pentachlorophenol with the LA initiate the polymerization. Table 6 summarizes the results obtained for various conditions. The molar masses obtained are high (more than 100 kg/mol for a polystyrene/isoprene copolymer) without increasing the molar mass distribution too much.
For polyisoprene samples, high molar masses could also be reached with reasonable dispersities. Interestingly, the glass transition temperatures are quite close to that of NR, attesting to a low level of cyclization and double bond loss (> 94% of double bonds retained). Finally, even 1,4-cis units were obtained by this method (around 20%), opening the way to "artificial NR".

The second pathway, mainly developed in our group for the cationic polymerization of isoprene, is based on the assumption that the biosynthesis of NR is "pseudo-cationic" 54. The key steps in this process are the abstraction of the pyrophosphate group to form a carbocation and the β-elimination of the proton, which is highly selective in the case of the enzyme and gives either 1,4-cis or 1,4-trans units. As the abstraction of the pyrophosphate group is assumed to be carried out by a cationic metal present in the enzyme (Mg2+, Mn2+), an analogy can easily be made with the LA systems developed in traditional cationic polymerization. Peruch et al. focused their approach on the use of DMAPP and IPP derivatives as the initiating system and the monomer units, respectively. As pyrophosphate monomers are difficult to obtain and rather unstable toward water, halogenated derivatives as well as hydroxylated ones were selected.

In a first attempt 62, DMAOH and IPOH were used with a boron LA as the catalyst. It was demonstrated that the LA could abstract the hydroxyl group to form a carbocation able to perform the polymerization of IPOH, but the final polymer is not a polyisoprene but a polyol, as presented in Figure 22. This mechanism differs from the biosynthesis as, before the β-elimination, a molecule of IPOH is added to the carbocation. The DMAOH/LA system was then used directly for isoprene polymerization 63. In this case, polyisoprene oligomers were obtained with properties similar to those of traditional ones obtained by cationic polymerization (i.e. 1,4-trans microstructure, low molar masses, high Tg and double bond loss). Other analogs of DMAPP (Figure 23) were used as initiators of the polymerization of isoprene. The nature of the function has a strong influence on the reaction: the kinetics increase with halogenated compounds, but so do the insoluble part and the molar mass distribution. Whatever the initiator used, the main characteristics of cationic PIs remained. As most of the transfer reactions come from the β-elimination of a proton, a base (di-tert-butylpyridine) was added to the medium in order to trap all the released protons.
As a consequence, the molar mass distribution was drastically decreased (< 2 in the presence of the base and > 4 without it), the double bond loss was divided by a factor of two and the Tg, which was around 30°C without the base, shifted to around -50°C. Nevertheless, the microstructure was still predominantly 1,4-trans and the obtained molar masses were quite low (< 3 000 g/mol).
In conclusion, cationic polymerization of isoprene is a tricky technique for obtaining 1,4-cis polyisoprene: in terms of microstructure, 1,4-trans polyisoprene is the main architecture that can be obtained.
Regarding the synthetic methods, polymerization in emulsion seems to be the most promising way to achieve well-defined architectures (i.e. no loss of double bonds, quite high molar masses and an average molar mass distribution), even if the maximum rate of 1,4-cis units is for the moment quite low. Working in "water conditions" can thus be seen as a real achievement, as it remains close to the biosynthesis process of NR and uses "greener" polymerization conditions than the traditional methods, but it is not suitable for our project.
c. Anionic polymerization
The anionic polymerization of isoprene is certainly one of the oldest methods to obtain well-defined and controlled materials. Nowadays, it is the principal pathway for the industrial production of polydienes (i.e. polyisoprene or polybutadiene) and, more generally, all kinds of rubber materials. Figure 24 gives the general scheme of the anionic polymerization of isoprene.
An alkali organo-metallic compound (Li, K, Na) is used as the initiator; the carbanion adds onto isoprene and propagation then forms the final polymer. One of the main advantages of anionic polymerization is its livingness, allowing the synthesis of block copolymers by sequential addition of a second monomer, as in the case of the SBS (styrene-butadiene-styrene) tri-block copolymer for example. Concerning the microstructure obtained by anionic polymerization, it was described early on 66 that among all alkali metals only lithium is able to produce a high rate of 1,4-cis isoprene units. For example, in bulk conditions, using lithium dispersed in isoprene, IR of about 94% 1,4-cis was obtained. Table 7 summarizes the microstructures reported in the literature for a wide range of alkali metals and illustrates the unique capacity of lithium. However, the microstructure is not only highly dependent on the counter-ion but also on the monomer concentration and the solvent 67.
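A practical consequence of this livingness is that the molar mass is set by simple stoichiometry, each initiator molecule starting one chain. A minimal worked example (the textbook relation for a living polymerization, not a result from the cited studies), with M0 = 68.1 g/mol for isoprene:

\[
M_{n} = \frac{[M]_{0}}{[I]_{0}} \times M_{0} \times \text{conversion} \quad \Rightarrow \quad \frac{[M]_{0}}{[I]_{0}} = \frac{10\,000}{68.1} \approx 147
\]

for a targeted Mn of 10 000 g/mol at full conversion.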
The best 1,4-cis rates (> 95% [68][69][70]) can be achieved in non-polar, non-protic solvents (i.e. alkanes or cycloalkanes), using lithium metal or organo-metallic derivatives (the best one being sec-butyllithium), at high monomer concentration and low concentration of active species. The temperature range is relatively flexible, as only little variation of the 1,4-cis content is observed when performing the reaction between -25 and 40°C 71. Recently, Carlotti et al. [73][74][75] developed the so-called "retarded anionic polymerization", using trialkylaluminium derivatives and alkali metal hydrides to slow down the reaction, allowing anionic polymerization to be performed at higher temperature without losing control of the polymerization and, moreover, decreasing the cost of the process. It is important to note that with this approach the microstructure of the polydiene could be varied, with a maximum of 1,4 units (cis and trans) of about 80%; the stereoregularity can be tuned by the addition of alkoxyalkyl salts.
Anionic polymerization of isoprene is thus a powerful route to high 1,4-cis polyisoprene with good control of the molar mass distribution. Nevertheless, this method can be tedious, as it is highly dependent on the nature of the solvent, the alkali metal and the concentrations. This approach constituted one of the most suitable pathways for our project, as a high 1,4-cis rate could be obtained as well as a good control of the chain-ends.
d. Radical polymerization
A lot of effort was also put into the development of the radical polymerization of isoprene, more particularly controlled radical polymerization (CRP) techniques (Figure 25), like nitroxide-mediated polymerization (NMP) [76][77][78][79][80][81], reversible addition-fragmentation chain-transfer polymerization (RAFT) [82][83][84][85][86] and atom transfer radical polymerization (ATRP) 87,88. Generally, isoprene seems to be reluctant to polymerize by a radical pathway: in many cases high temperatures, high monomer concentrations and long reaction times were required, and full conversion was never reached. Selected examples for each technique are presented below.
i. NMP polymerization
NMP is the first CRP technique to have been investigated for isoprene 77.
The literature clearly indicates the high variability of the results with the reaction conditions (nature of the nitroxide, temperature, concentration and solvent). A conversion of 75% can be obtained after 36 h at 120°C using (2,2,6,6-tetramethylpiperidin-1-yl)oxy (TEMPO). Well-defined PI (10 000 g/mol) with a narrow dispersity (1.07) was obtained by Benoit et al. 78 Regarding the microstructure, no precise details were given by the authors, but they report a predominantly 1,4 (cis and trans) microstructure, comparable to the one obtained in free radical polymerization. This is expected since, after the "un-capping" step in CRP (Figure 25), the reaction proceeds as a free radical polymerization, without any control of the stereoregularity of addition. Cross-linking and a high molar mass fraction appeared when conversion was higher than 80%, due to transfer to the polymer.
Finally, the use of pyridine enhanced the polymerization rate (50% conversion in 16 h) without any loss of control. This effect was attributed to the stabilizing effect of pyridine toward the nitroxide radical. To date, NMP might be the simplest method to obtain PI by CRP, the only issue being the difficulty of synthesis of the nitroxide.
ii. RAFT polymerization
Jitchum et al. 84 as well as Germack et al. 83 reported the RAFT polymerization of isoprene in bulk. RAFT polymerization of isoprene needs high temperatures (more than 110°C) and long reaction times (> 20 h) to reach reasonable conversions (> 30%) with low dispersities (< 1.4). If the temperature was raised above 130°C, a total loss of control was observed, leading to a broadening of the molar mass distribution, side reactions and even degradation of the CTA in some cases. Again, as for NMP, when conversion was higher than 80%, a high molar mass fraction was observed, certainly due to chain coupling or cross-linking. This was even more visible with the appearance of an insoluble fraction when conversion reached 95%.
The microstructure observed for the PI formed was mainly composed of 1,4 units (75%), without any distinction between cis and trans, in addition to 5% of 1,2 units and 20% of 3,4 units.
Bar-Nes et al. 85 reported the synthesis, in emulsion, of block copolymers involving isoprene.
They copolymerized isoprene with both styrene and acrylic acid to obtain self-assembly properties. They reported that the isoprene polymerization was once again very slow and tedious to control. Nevertheless, the emulsion polymerization of isoprene was faster than the one in solution (50% conversion after 10 h). This was attributed to the presence of residual styrene moieties in the medium, which help the transfer of radicals during the reaction, and also to the fact that polymerizations are usually faster in emulsion (compared to solution) due to the compartmentalization effect 89.
iii. ATRP polymerization
ATRP of isoprene has hardly been described. Wootthikanokkhan et al. 87 reported the impossibility of obtaining PI in bulk by this method, which they attributed to the very poor solubility of the copper derivative (CuBr) in isoprene.
Even with an increased amount of copper, they only obtained 5% conversion after 24 h.
Addition of solvent increased the homogeneity of the reaction medium, but a decrease of the polymerization rate was observed. Recently, Zhu et al. 88 reported a first example of isoprene polymerization carried out by ATRP using a usually poorly reactive system (copper(I) bromide/2,2'-bipyridine as the catalyst and a 2-bromopropionate as the initiator).
Maintaining the system at high temperature for a long time (150°C for 72 h) and using THF as a solvent for better homogeneity, a high conversion of 71% was observed. The PI formed had a microstructure of 89% 1,4 units (64% trans), with a molar mass up to 12 kg/mol and a dispersity of 1.6.
iv. Conclusion
In conclusion, radical polymerization of isoprene is suitable for the synthesis of block copolymers thanks to its living nature, but it does not provide a high amount of 1,4-cis units.
Whatever the polymerization method, it appears that a good range of molar masses can be achieved with relatively narrow molar mass distributions, but with no control over the microstructure.
e. Ring opening metathesis polymerization (ROMP)
To the best of our knowledge, there is only one article dealing with the synthesis of telechelic polyisoprene by ROMP of 1,5-dimethyl-1,5-cyclooctadiene (DMCOD) 90. Controlled homotelechelic polyisoprene (Figure 26) could be obtained over a wide range of molar masses (1 500 - 25 000 g/mol) with low molar mass distributions (1.22 - 1.63). A huge screening of the existing metathesis catalysts was made to find an efficient one, as DMCOD is known to be poorly reactive, being a di-substituted cycle with a low ring strain. The catalyst represented in Figure 26 is the one giving the best conversion (~99%). No description of the final microstructure of the polymer was given. It is also important to highlight that the final PI (acetylated or hydroxylated) is homotelechelic, which can be a limitation for specific substitution in α or ω position, as both chain-ends have the same reactivity.

In conclusion, ROMP of DMCOD is a suitable way to obtain homotelechelic PI bearing reactive functions (i.e. hydroxyl in this case), which can be highly valuable for the synthesis of ABA co-polymers. This technique nevertheless presents several drawbacks: highly dry conditions have to be used, as Grubbs catalysts are most of the time highly sensitive to moisture, and not all CTAs can be used, as the catalytic system can interact with various functions (in the case presented here, it was not possible to work directly with a CTA bearing hydroxyl functions).
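For orientation, when a ROMP is carried out in the presence of a chain-transfer agent, the molar mass is essentially set by the monomer-to-CTA ratio. The relation below is a generic sketch (neglecting end-group masses and assuming ideal, complete chain transfer; the numbers are illustrative, not taken from the cited work), with M(DMCOD) = 136.2 g/mol, each DMCOD unit delivering the equivalent of two isoprene units:

\[
M_{n} \approx \frac{[\text{DMCOD}]_{0}}{[\text{CTA}]_{0}} \times 136.2\ \text{g/mol} \quad \Rightarrow \quad \frac{[\text{DMCOD}]_{0}}{[\text{CTA}]_{0}} \approx 184 \ \text{for } M_{n} = 25\,000\ \text{g/mol}
\]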
f. Coordination polymerization
Coordination polymerization of isoprene is the most studied technique in terms of number of publications and patents. The discovery of Ziegler-Natta catalysis in the early 1950s for the polymerization of olefins was quickly applied to diene monomers such as isoprene or butadiene. In 1979, Schoenberg et al. 91 published a review on various methods for obtaining IR and, at that time, they already reported 1,4-cis contents up to 97%, as shown in Table 8.
But these results were surpassed with the emergence of neodymium-based catalysts, which replaced titanium in the Ziegler-Natta process; it thus became possible to obtain 99.7% of 1,4-cis units. Basically, neodymium (Nd) systems are considered as Ziegler-Natta catalysts and proceed the same way as the traditional "Aluminium-Titanium" mechanism (Figure 27). In the literature, various Nd complexes have been designed, playing on the "X" group and on the addition of ligands to improve several parameters (solubility, selectivity, etc.).
Figure 27: General reaction mechanism for Nd-catalysed polymerization of isoprene
Four groups of Nd catalysts emerged:
- NdX3: with X a halogen atom. This family was the first one reported in the literature.
- Nd(OR): OR being an alcoholate (Figure 28a).
- NdO or NdiO: where O stands for carboxylate groups (usually aliphatic chains) (Figure 28b) and "iO" stands for isooctanoate.
- NdP: where P corresponds to phosphate or phosphonate groups (Figure 28c).

It appears that many parameters can influence the microstructure of the polymer, like the temperature, the molar ratio of Nd to co-catalyst, the solubility of the catalyst, the nature of the co-catalyst (aluminium or magnesium for example), the monomer/co-catalyst ratio, the solvent, etc.
The following trends can be drawn from all the published studies 92:
- The solubility of the catalyst can be improved through the nature of the "X" groups attached to the Nd, as previously mentioned. As the first generation of catalysts (NdX3) acted heterogeneously in aliphatic solvents, the use of fatty chains, as in the case of Nd(OR), NdO and NdP, improved the homogeneity and thus the efficiency of the catalytic system. The use of ligands such as alcohols, phosphates, sulfoxides, boron derivatives or pyridine on NdX3 was also demonstrated to improve the solubility.
- The co-catalysts used are generally alkyl aluminium derivatives like TIBA (triisobutylaluminium), DIBAH (diisobutylaluminium hydride) or DEAH (diethylaluminium hydride), giving high rates of 1,4-cis units (> 98%). On the contrary, alkyl magnesium derivatives give a high 1,4-trans rate.
- The molar mass and the dispersity of the polymer, as well as the microstructure, depend on the molar ratio of co-catalyst to Nd. In general, the rate of 1,4-cis units and the molar mass drop when this ratio is increased, while the dispersity increases. For all "generations" of Nd catalysts, it has to be mentioned that an excess of the aluminium derivative is used (from 1-5 eq with the NdP system to 8-100 eq in the case of NdO).
- The solvents used for the polymerization are alkanes or cycloalkanes, in order to avoid poisoning of the catalyst or side reactions.
- An increase of the temperature (above ~60°C) can potentially damage the catalytic system or decrease the molar masses.
- This technique proceeds in a "quasi-living" manner. The formation of di-block copolymers is limited to diene/diene polymers, and even the polymerization of styrene can be tricky. The control of chain-ends is not well described.
In 2002, Laubry et al. [93][94][95] from the Michelin company patented a method to obtain a "quasi-pure" 1,4-cis PI (99.7% reported) on a large scale, using relatively mild conditions (room temperature, small amount of aluminium, bulk conditions). Moreover, they managed to perform the polymerization directly from the C5 fraction of petroleum cracking, without any further purification of isoprene. To date, this is one of the highest 1,4-cis rates reported in the literature, even if some other examples attained the same rate, like the work of Zhang 96, who used the same type of catalysis but varied the metal (lutetium and ytterbium) and the ligands (diphenylphosphine derivatives).
To conclude, coordination polymerization is the most interesting system to obtain a high 1,4-cis rate in IRs, even industrially. It presents only a few drawbacks, as it is a well-understood system with clearly identified key parameters. This polymerization can even be performed on the C5 cracking fraction of petroleum and provides a wide range of molar masses in a controlled manner. The main disadvantage is the control of the chain-ends, which is generally not particularly discussed in the literature. This is an important issue for the selective functionalization aimed at in this project.
g. General conclusion on polyisoprene synthesis
Isoprene polymerization is a vast domain of investigation, regarding the different techniques applicable and the variability of the IR formed in terms of microstructure, molar mass, dispersity, etc. As explained in the introduction of this sub-chapter, our goal was not to be exhaustive and to present all the different systems precisely, but to give an overview of what can be achieved in terms of high 1,4-cis rates and controlled chain-ends, to be employed for the synthesis of Tanaka's co-polymer. Table 9 summarizes the advantages and drawbacks of each synthetic method with respect to our specifications.
As we are targeting a high rate of 1,4-cis units, cationic polymerization as well as CRP and metathesis cannot be taken into account. The remaining systems (i.e. anionic and coordination) both present decisive disadvantages, such as poor control of the chain-ends in the case of coordination polymerization and really sensitive conditions for anionic polymerization (microstructure dependent on many parameters). For all these reasons, in the frame of this work, chemical degradation of NR was chosen in order to obtain a heterotelechelic 100% 1,4-cis PI. This method will be presented in the next chapter.
Table 9: Advantages and drawbacks of each polymerization technique (cationic, anionic, NMP, RAFT, ATRP, metathesis and coordination) with respect to our specifications. (+): the system exhibits this property / (-): the system does not exhibit this property.
III. Polymer coupling: Grafting lipids and proteins
For the synthesis of Tanaka's model molecule, it was important to know what had already been described for the coupling of polymers with both lipids and proteins, in order to define a pathway applicable to our project. Polymer-lipid coupling has not been highly studied in the literature; on the contrary, polymer-protein coupling has been extensively investigated. In this sub-chapter, we aim to present an overview of the methods described to perform these coupling reactions.
a. Polymer-Lipid coupling
In 1984, Reusch 97 introduced the term "lipopolymer" to the scientific community, referring to "molecules that contain a polymer chain that is bound covalently to a lipid moiety". Some works then focused on the study of such architectures, but working with hydrophilic polymers such as PEG 98-101 or polyethyleneimine 102,103. These amphiphilic systems were mostly studied as carriers for targeted drug delivery. Recently, other studies reported the functionalization of a phospholipid moiety in order to obtain a RAFT agent 104,105 suitable for growing a polymer chain. In this case, the studied monomers were morpholine derivatives, used as the hydrophilic blocks.
To the best of our knowledge, no particular interest was paid to the coupling of lipids with a hydrophobic polymer. The only example is the one already described in the CCr sub-chapter, where Kawahara et al. 39,106 linked fatty chains of various lengths to a PI backbone through a hydroboration reaction. Such a low number of reports on hydrophobic-hydrophobic coupling is surprising but also challenging, as the impact of fatty chains on hydrophobic polymer backbones remains largely unknown.
b. Polymer-Protein coupling
Polymer-Protein materials are commonly referred to as Bioconjugates or Macromolecular Chimeras. This last term was introduced in 2003 by Antonietti 107 to define this kind of complex architecture, even if the oldest papers dealing with protein-polymer coupling date from 1977 108,109. Since then, the main application of Bioconjugates has been the biomedical field, focusing on the self-assembly properties of these chimeras and their capacity to be used for drug delivery.
Even if it is a field of high interest, this is beyond the scope of our project; that is why we decided to focus more on the existing grafting methods than on the properties of these systems, especially considering that most bioconjugates were synthesized starting from hydrophilic polymers, contrarily to polyisoprene. The following sections will thus present the two pathways that can be used for coupling.
i. Grafting to
The first synthetic approach reported in the literature is the "grafting to" method, which consists in directly connecting both blocks (polymer and protein) using specific chemistry (Figure 29). This method is quite versatile regarding the number of accessible reactive functions in the peptide backbone 110, like primary amines, carboxylic acids, thiols, etc. Moreover, progress in polymer chemistry usually allows a great control of the chain-end, which can be chosen to be reactive toward one of the functions present on the protein backbone [111][112][113]. One of the main drawbacks of this strategy is the selectivity of the reaction, as a given amino acid can be present more than once on the backbone of the protein, thus leading to multiple grafting of polymer onto the polypeptide backbone. To prevent this kind of side reaction, modification of the protein was proposed, by introducing a controlled amount of a specific function absent in the native protein. For example, the high selectivity demonstrated by the Huisgen cycloaddition can be used in bioconjugate synthesis by introducing either the azide function 114 or the alkyne function 115 on the protein.
Another widely used strategy is thiol chemistry using cysteine, as the amount of free thiols in proteins can be controlled, for example by the reduction of disulfide bridges 115 . In this case, another type of "click chemistry" called "thiol-maleimide" chemistry 116-118 , a metal-free reaction from the family of Michael additions 115,119,120 , can be used. Nevertheless, the main drawback of this strategy, in addition to selectivity, is the difficulty of grafting two polymers together (especially in a one-to-one ratio), as the probability of encounter of the reactive functions is limited by other parameters such as the molar mass of both blocks, the solubility of both polymers, etc. For these reasons, many authors preferred to grow one of the two blocks starting from the other. In this last case, CRPs were selected among all the other existing methods, as they can generally tolerate aqueous conditions (the best, and often only, solvent of most proteins) and as fewer side reactions due to the peptide pendant functions were observed. In 2014, Cobo et al. 121 published a review retracing various polymerization pathways. The polyacrylamide family was highly studied thanks to its LCST behavior: the polymeric chain can grow in aqueous media and the self-assembly can be induced by temperature 112,122-124 . Moreover, N-isopropylacrylamide (NIPAM) is a versatile monomer, as it can be polymerized by most of the known CRP pathways.
ii. Grafting from
Even if the "grafting from" approach is a powerful tool, already widely studied and reported, it is not applicable in the frame of the project presented here, as it is impossible to grow a fully 1,4-cis PI chain via CRP, as seen before. An alternative is the ring-opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCAs), which allows a polypeptide block to be grown from an amine-terminated polymer. Through the activated monomer mechanism, high molar masses can be obtained, but without control of the polymerization. The balance between basicity and nucleophilicity of the initiator is a key factor. For this reason, the amine mechanism using primary amines is preferred, as they are more nucleophilic than basic. However, using a primary amine is usually not enough to completely prevent side reactions. A decrease of temperature or the nature of the solvent are also important parameters for the control of the reaction. The polymer thus obtained is a polypeptide (PPep) possessing a helical conformation, as encountered in proteins. Furthermore, the "R" group of the monomer can be varied, and copolymerization reactions could lead to model proteins. This implies that the polymerization must be controlled. One possibility is to initiate the polymerization with primary amines and to perform the ROP at 0°C in DMF 126 ; in this case, 99% of the chain-ends were still "living" at the end of the polymerization process. Deming et al. 127-130 developed the use of cobalt and nickel catalysts to initiate the polymerization through a different pathway, affording well-defined polymers that could be composed of a broad range of monomers bearing different functions. Other methods, such as the use of aminosilanes 131 , protonated amines 132,133 or even the alkylation of the NCA monomer, thus blocking the activated monomer mechanism 134-136 , were also reported, all leading to good control of the polymerization. It is noteworthy that this polymerization is never referred to as a "grafting from" method in the field of Bioconjugate synthesis, but NCA polymers can be seen as "synthetic proteins". Interestingly, a couple of works already studied the self-assembly properties of block copolymers formed by an NCA polymer (mostly polybenzylglutamate) and PI 137-141 .
In these works, primary amine-terminated PI was obtained by anionic polymerization and used as a macroinitiator for the growth of the peptide block(s). Different architectures (di-block or tri-block copolymers) were studied, exhibiting self-assembly properties and affording micelles or membranes.
In conclusion, the "Grafting from" pathway usually gives better control toward the linkage between Polymer and Protein (or "protein-like") as it starts by the functionalization of oen block to grow the other. For that purpose, CRP technics are the most employed. Only few examples deal with the direct coupling of hydrophobic polymers with proteins as will be discussed in the following paragraph.
iii. Giant amphiphiles
"Giant amphiphiles" can be seen as a sub-family of Bioconjugates, characterizing the architecture obtained by linking a hydrophobic polymer chain to a protein. This term was first introduced in 2001 by Hannink et al. 142 , who were the first to report the linkage of a polystyrene chain to a protein, horseradish peroxidase (HRP).
These particular bioconjugates are usually formed by the "grafting to" pathway and have been developed essentially by two teams (Velonia et al. 114,119,124,143-146 and Nolte et al. 115,142,147-149 ), through thiol-maleimide coupling, Huisgen click chemistry or cofactor reconstitution.
This last method relies on the specific recognition by various proteins of a special substrate called a "cofactor". Usually, this type of linkage is non-covalent, but the strength of the binding is such that it can be considered irreversible. The most famous example is the biotin/streptavidin interaction, which is characterized by a binding energy of 21 kcal.mol -1 and is widely used in biochemistry 150 . In most of the works of Velonia et al., the proteins used were lipase B from Candida antarctica (CALB) and BSA, and the hydrophobic polymer was a PS chain, generally of 5 kg/mol with controlled chain-ends. The coupling was usually performed in heterogeneous conditions, in a mixture of THF and water in various proportions to best solubilize either the protein or the polymer.
CALB presents two advantages: it is a relatively tough protein that can withstand various conditions and organic solvents without denaturation or degradation, and it is a relatively small protein (about 35 kDa), making the desired amino acid more accessible for the coupling with the PS chain. On the other hand, BSA presents the advantage of natively bearing a free cysteine available for the coupling, without the need to break a disulfide bridge prior to the coupling reaction, as is the case for the lipase.
Nevertheless, the coupling reactions usually take a long time (from one night to one week), are highly dependent on the conditions, and generally give poor yields, requiring extensive dialysis for purification. Figure 32 presents various morphologies of giant amphiphiles observed by Nolte and Velonia. As expected, these systems exhibit particular self-assembly behaviours once placed in water or THF. In conclusion, bioconjugation is a broad field of polymer science that is increasingly investigated, owing to the promising results already obtained for drug delivery and biomedical applications.
Various methods exist to attach a polymeric chain to a protein backbone, either by direct coupling or by growing one block from the other. In both cases, the obtained chimera usually presents self-assembly properties, forming vesicles, polymersomes, fibrils, etc. The most studied macromolecular chimeras involve "smart polymers" like PNIPAM or PEG that can be grafted in water and become hydrophobic upon increasing the temperature, thus yielding amphiphilic macromolecules. To date, only little attention has been paid to the grafting of hydrophobic polymers to proteins, certainly because of the restrictive conditions required and the difficulty of grafting the polymeric chain onto the protein. BSA-PS and Lipase-PS are, to date, the only members of the giant amphiphile family with interesting self-assembly properties.
Besides these approaches, the ROP of NCA seems a viable pathway to achieve "protein-like" block copolymers.
IV. Conclusion
This bibliographical study highlighted the origins and the challenge represented by this PhD project. Indeed, despite the fact that NR is widely used in industry, this material is not perfectly understood regarding its biosynthesis or the origin of some of its properties superior to those of IR. Tanaka offered a piece of explanation, building bridges between the known pathway of the biomachinery, the structure of the material and some of the properties of NR.
However, this model was never demonstrated to be true, as no attempt to synthesize the "tri-block" structure was proposed. For that, good control of the microstructure of the polymer (pure 1,4-cis) and of the chain-ends (need for different reactivities at the α and ω positions) was mandatory. It appears that, among all the synthetic pathways affording IR, no method meeting these specifications could be found. Another pathway was thus selected, which will be presented in the next chapter. Finally, no clue could be obtained from the literature for the synthesis of the PI-lipid hybrid, as this field was never investigated.
Contrary to that, the PI-protein coupling seems to be possible using the coupling pathway developed by Velonia et al.
I. Introduction
This chapter will focus on the chemical degradation of NR. As explained in the bibliographic part, the synthesis of a pure 1,4-cis PI chain with controlled chain-ends is tedious and restrictive, as no polymerization technique is suitable. Nevertheless, taking advantage of the pure 1,4-cis microstructure of NR, other methods were developed, affording PIs of rather low molar masses and pure 1,4-cis microstructure by degradation of the natural material. Furthermore, depending on the method employed, the nature of the chain-ends can also be tuned.
First, a brief bibliographical part will give an overview of the existing methods for the chemical degradation of NR. In a second step, the starting materials (two different clones of NR) used in the frame of this work will be presented and fully characterized. Finally, the chemical degradation of both NR clones will be investigated and compared to that of a synthetic PI.
II. Bibliography
Liquid natural rubbers (LNR) are natural rubber derivatives of low molar masses (< 20 000 g/mol) obtained from the chemical degradation of NR. The term "liquid" refers to the fact that these compounds are generally liquid due to their low molar mass 1 . This family of polymers was developed essentially to perform chemistry, as NR is generally difficult to process owing to its high molar mass (~1 000 000 g/mol) and difficult to solubilize due to the gel fraction. To date, no large-scale production of these derivatives has been reported, but they have been widely used for chemical reactions such as chain extension or block copolymer formation 2-10 . More specifically, the synthesis of telechelic liquid natural rubber (TLNR) was deeply investigated, as the latter can be post-modified, opening a wide range of potential applications. The main chemical pathways described in the literature for the degradation of NR, as well as the structures of the resulting TLNRs, are presented in Figure 1.
Each method will be developed below.
a. Ozonolysis
The ozonolysis of NR has been studied for more than 70 years, either to produce TLNR 11 or for a better understanding of the structure of NR through analysis of the degradation products 12,13 . Many articles also dealt with the ozonolysis of synthetic rubbers 14,15 . Nor reported that oligomers of NR (about 900 g/mol) could be obtained within 20 minutes at 0°C in chloroform, but with a lack of control over chain-end formation. Infrared analysis showed the appearance of new functional groups such as carboxylic acid, ketone, aldehyde and hydroxyl groups. The presence of residual molozonide groups, most probably located on the polymer backbone, was also reported. Figure 2 presents the possible reaction pathways occurring during the ozonolysis of a polydiene. The author concluded that the exact structure of the chain-ends could not be given due to the great number of possibilities.

b. Photochemical degradation

Gupta et al. 18 as well as Ravidran et al. 19 later reported the photodegradation of NR in toluene solution in the presence of hydrogen peroxide, yielding hydroxyl-telechelic liquid natural rubber (HTLNR) with a molar mass around 3 000 g/mol, a functionality of 1.4 due to side reactions, and the formation of carbonyl groups. Ravidran reported that molar masses from 5 000 g/mol to 200 000 g/mol could be obtained by varying the time of exposure. The source of UV was also varied from a UV lamp to sunlight, showing that both sources could be used. Even if the exposure time needed to reach 5 000 g/mol was long (about 50 h), the system remained economically interesting given the use of sunlight. One main drawback of the system is the presence of about 10% of side products, consisting mostly of a crosslinked phase. The degradation mechanism proposed by the author is depicted in Figure 3.

c. Oxido-reduction degradation

The oxido-reduction degradation of NR also uses free radicals, but supplied from another source than light. The most studied system was phenylhydrazine/O 2 , which generates phenyl free radicals able to perform the chain degradation (Figure 4) 20-23 . This degradation reaction leads to a hetero-telechelic PI terminated by a methyl ketone and a phenyl ketone 24 . Molar masses from 3 to 35 kg/mol were reported for this process, with quite narrow molar mass distributions compared to the starting NR (< 2) 1 . Few drawbacks were reported about this method, except that the degradation slows down with increasing quantity of phenylhydrazine 24 . Moreover, the formation of epoxide and hydroxyl functions was also reported as side products of the reaction 20 . The main advantage of this method is that it can be applied directly in the latex phase 21 .
d. Metathesis
Metathetic degradation of NR is a less studied route, as only a few papers deal with it. Originally, the use of different types of catalysts (Schrock's molybdenum-based catalyst 25 as well as tungsten chloride 26 ) was described, but side reactions such as internal cyclization were present. The development of this field is mostly due to the appearance of the first- and second-generation Grubbs catalysts, which present greater stability toward chemical functions (hydroxyl, carboxylic acid or ester), thus allowing variation of the chain-end termination of TLNR 27 . Moreover, the trisubstitution of the double bond of the polymer backbone usually reduces the activity of the catalyst, a limitation that was improved with the second-generation Grubbs catalyst 28 .
It has to be noted that the degradation usually proceeds via the use of a substituted vinylic chain-transfer agent, which gives access to functional chain-ends by tuning its substituents. Most of the recent works with the Grubbs catalysts were described by Pilard and coll. 27-29 In 2005, Solanky et al. 28 reported the degradation of a synthetic polyisoprene using a second-generation Grubbs catalyst and a diacetate as the chain-transfer agent (CTA, Figure 5). The use of a CTA bearing hydroxyl functions, to directly obtain a hydroxy-telechelic polymer, lowered the activity of the catalyst. A range of molar masses from 900 to 22 000 g/mol could be obtained with reasonable molar mass distributions (~ 2). The same reaction was also performed directly in the latex phase in the presence of a small amount of DCM. A degradation was still observed (TLNR of about 40 kg/mol obtained), but the catalyst was not really adapted to such a medium. The same procedure was later reported on waste tires in toluene 27 and on NR in ionic liquids 29 . The metathetic degradation of waste tires afforded small oligomers (~ 400 g/mol) in poor yield, due to the non-rubber constituents present in tires and the difficulty of purifying these oligomers. Nevertheless, the obtained telechelic polymers presented the right chain-end functionality, thus opening the way to the recyclability of waste tires. In the case of ionic liquids, TLNRs were obtained in a range of molar masses from 25 to 80 kg/mol (depending on the reaction time) with a rather low molar mass distribution around 2.
In 2011, Gutierrez et al. 30 reported a similar process using β-pinene as CTA. TLNR could be obtained in the range of 700 to 3 000 g/mol, with a molar mass distribution comparable to the one reported previously (1.6-2.5). The structures and yields of the obtained oligomers are reported in Figure 6. In conclusion, metathetic degradation of NR is an interesting pathway giving access to TLNR over a wide range of molar masses with reasonable molar mass distributions.
e. Oxidative degradation
The oxidative degradation of NR is the most described method to obtain TLNR in a controlled manner. It was first reported by Reyx and Campistron in 1997 31 and was then highly developed and used by the team from Université du Maine 7,32-42 to design various functional oligomers and to synthesize block copolymers. This method is based on the use of periodic acid for the cleavage of the carbon-carbon double bonds of the polymer backbone, previously epoxidized or not. The exact cleavage mechanism is not perfectly understood yet 41,43 , but it is believed to proceed through the formation of vicinal diols that rearrange, leading to chain cleavage and the formation of carbonyl groups (Figure 7). Gillier-Ritoit 41 reported that the degradation using periodic acid alone is usually slower than when applied to epoxidized rubbers (4 h vs 1 h), suggesting that the acid alone encounters a rate-limiting step, namely the formation of epoxides and/or vicinal diols. As previously mentioned, functional TLNRs were widely produced for the synthesis of different polyisoprene/polyurethane block copolymers 7,36-40,44 . The range of molar masses obtained is from 1 500 g/mol to 5 000 g/mol, with molar mass distributions from 1.5 to 3.
Reaction conditions are quite mild (epoxidation at 0°C and acidic degradation at room temperature). Moreover, the difference in reactivity between the aldehyde and ketone chain-ends was used to selectively functionalize the obtained polymer backbone 36 .
f. Conclusion
To conclude, various methods were already described for the chemical degradation of NR.
Most of them allow the production of TLNR with various chain-ends. For our project, as both chain-ends have to be selectively modified, the oxidative degradation using epoxidation followed by periodic acid cleavage was selected, as it gives access to two different chain-ends (ketone and aldehyde) possessing different chemical reactivities.
III. Characterization of the starting material
First, we had to obtain information about the NR used as starting material. Two different unsmoked NR sheets were supplied by our partner in Thailand (UMR IATE / LBTNR / Kasetsart University) (Figure 8). These sheets were obtained by recovery of fresh latex, acidic coagulation of the rubber, passage through a rolling mill and drying. The two materials come from two different clonal origins of Hevea brasiliensis, namely RRIM 600 and PB 235. There are differences between these two clones, for example the lipidic composition, as reported by Vaysse et al. 45-47 , or the molar mass distribution, which is bimodal in the case of RRIM 600 and monomodal in the case of PB 235. However, as most of the characterization analyses can be performed only in the liquid state, issues were encountered when trying to solubilize the NR in solvents, due to gel formation. It was then decided to study the solubilization behaviour of the material as a function of the solvent used, and to try to find a method to obtain a pure and soluble natural polyisoprene. The results are summarized in Table 1. It can be seen that whatever the solvent and the clone used, the overall recovery is good (always > 90%). On the contrary, the solubilized and gel fractions are highly dependent on the solvent and the clone:
* in cyclohexane, RRIM600 became more and more soluble with time, whereas PB235 was hardly soluble even after 5 days of stirring and remained in the state of a swollen gel in this solvent.
* in THF, half of the RRIM600 was soluble whatever the solubilization time, whereas PB235 was almost fully soluble, even after 1 day of stirring.
* in DCM, both clones presented almost similar solubility (between 60 and 80% of soluble fraction). Nevertheless, the extraction after centrifugation was quite difficult as the gel fraction was above the solution phase. This could have an impact on the figures reported here.
* in toluene, PB235 was visually fully soluble (no gel fraction recovered after centrifugation) and RRIM600 was almost fully soluble (less than 10% of gel fraction, Figure 10). In that case, the evaporation of the solvent was sometimes difficult, explaining overall recoveries higher than 100% in some cases.
In conclusion, regardless of the nature of the clone, toluene is the best solvent for the solubilization of NR. THF gives both a rather good extraction yield and an easy separation and drying using centrifugation and vacuum. As a consequence, when possible ( 1 H and 13 C NMR spectra), NR was solubilized in toluene, but for the other analyses THF was used, thus analysing only the soluble part of the sample. Degradation reactions were, however, performed in THF, as it is known to be a good solvent of periodic acid.
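For clarity, the fractions reported in Table 1 follow from simple gravimetric ratios; the short sketch below illustrates the calculation (the function and the masses are illustrative, not taken from the actual measurements).

def fractions(m_initial, m_soluble_dry, m_gel_dry):
    """Masses in g: initial sample, dried soluble extract, dried gel fraction."""
    soluble = m_soluble_dry / m_initial                   # solubilized fraction
    gel = m_gel_dry / m_initial                           # gel fraction
    recovery = (m_soluble_dry + m_gel_dry) / m_initial    # overall recovery
    return soluble, gel, recovery

s, g, r = fractions(1.00, 0.85, 0.08)  # made-up masses
print(f"soluble: {s:.0%}, gel: {g:.0%}, recovery: {r:.0%}")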
b. Characterization of the raw material
i. SEC analysis
To obtain well-defined chromatograms of both starting NRs, it was necessary to find the right concentration of sample to inject, because of the molar mass of the polymer and of the gel fraction, which could not be filtered. Finally, by adjusting the concentration of NR to 1 mg/mL, both clones could be analyzed by SEC in THF. Figure 11 presents the chromatograms given by the RI detector. As already described, RRIM 600 possesses a bimodal profile whereas PB 235 is nearly monomodal. The molar mass of PB 235 is evaluated to be 1 200 000 g/mol with a quite narrow molar mass distribution of 1.6. In the case of RRIM 600, the molar mass was calculated to be 500 000 g/mol with a molar mass distribution of 2.6. Integrating the two peaks separately, the molar masses (M n ) of the high and low molar mass fractions were estimated to be 1 200 000 g/mol and 200 000 g/mol respectively. This is in agreement with the results generally reported in the literature 48 .
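As an aside, the averages quoted here can be recomputed from SEC slice data in a few lines, using the standard definitions M n = Σw i /Σ(w i /M i ) and M w = Σ(w i M i )/Σw i . The sketch below uses hypothetical heights and calibration values, not the actual data treated by the SEC software.

def sec_moments(h, M):
    """h: RI heights (proportional to mass concentration) per slice,
    M: molar mass of each slice from the calibration curve (g/mol)."""
    sw = sum(h)                                        # total mass signal
    Mn = sw / sum(hi / Mi for hi, Mi in zip(h, M))     # number average
    Mw = sum(hi * Mi for hi, Mi in zip(h, M)) / sw     # weight average
    return Mn, Mw, Mw / Mn                             # and dispersity

# Illustrative bimodal trace:
h = [1.0, 3.0, 2.0, 4.0, 1.5]
M = [2.0e6, 1.2e6, 6.0e5, 2.0e5, 8.0e4]
Mn, Mw, D = sec_moments(h, M)
print(f"Mn = {Mn:.3g} g/mol, Mw = {Mw:.3g} g/mol, D = {D:.2f}")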
ii. NMR analysis
Figure 12 and Figure 13 present the 1 H and 13 C NMR analyses of PB 235 respectively. The NMR analysis was performed in deuterated toluene, as it is the only solvent able to solubilize the PB 235 sample entirely when the concentration is adjusted to 20 mg/mL. Only this clonal form is presented here, as the same results were achieved with RRIM 600.
The 1 H NMR spectrum exhibits only the three signals characteristic of 1,4-cis units (the signal corresponding to the vinylic proton, the signal at 2.17 ppm corresponding to the "-CH 2 " groups in α position of the CH=CCH 3 double bond and the signal at 1.75 ppm corresponding to the CH 3 group of the double bond), with the absence of any 1,2 or 3,4 units. In the 0-3 ppm zone, other signals, badly defined and of very small intensity, can be seen. They could correspond to aliphatic moieties coming either from lipids or from proteins.

iii. Elemental analysis
To go further, elemental analysis was also performed on both clones. The results are given in Table 2. As can be observed, their compositions are highly similar, with only small variations of the nitrogen and oxygen contents. The nitrogen content can be related to the proteins present in both samples; these figures seem to indicate that RRIM 600 contains more proteins than PB 235. Table 2 also presents the results obtained for the natural PIs obtained by extraction with THF for 24 h. Unfortunately, the oxygen content could not be determined for the THF extract of PB 235, but the nitrogen and oxygen contents of the gel phase are much higher than those of the extracts. This suggests that the extraction of natural polyisoprene using THF also has a purifying effect, through the removal of some "non-rubber" constituents. The enrichment of the gel phase in nitrogen must correspond to a concentration of proteins. The origin of the enrichment of the gel phase in oxygen is not so easy to attribute, as the oxygen could come from specific peptides in proteins as well as from lipidic moieties (free fatty acids, triglycerides, diglycerides, etc.).
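As an order of magnitude, the nitrogen content can be converted into an approximate protein content using the conventional Kjeldahl factor of 6.25 (an assumed generic factor, since the exact value depends on the amino-acid composition):

$$ w_{protein} \approx 6.25 \times w_N $$

so that, for instance, a nitrogen content of 0.4 wt% (an illustrative figure, not a measured one) would correspond to roughly 2.5 wt% of proteins.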
iv. Conclusion
Both clones present very similar structures (only 1,4-cis units), as confirmed by NMR analyses.
The main differences come from SEC analysis, as RRIM 600 exhibits a bimodal profile whereas the PB 235 SEC trace is monomodal, and from elemental analysis, with a slightly higher content of oxygen and nitrogen for RRIM 600. We also showed the possibility of extracting natural PI from both NR clones using various solvents. It was observed that NR behaves differently depending on the solvent used.
Cyclohexane seemed to be a relatively "bad" solvent for the solubilization, whereas toluene is able to dissolve the sample entirely. The natural PI thus extracted is "freed" from a part of the non-rubber constituents (most probably proteins), as the nitrogen content of the extracted sample decreases compared to the original material.
IV. Controlled degradation of NR
This sub-chapter will focus on the synthesis of TLNRs obtained from the successive epoxidation and acidic degradation of high molar mass rubber, varying the origin of the starting material. Natural PIs coming from the 24 h THF extraction of both natural clones (ExtraNR) will be used, as well as a 600 000 g/mol IR of high 1,4-cis content (97%).
a. Purification process and side reaction
The appearance of side reactions during the degradation was rapidly observed. Indeed, some samples presented impurities (Figure 14), and the molar mass values calculated by NMR were quite far from the ones obtained by SEC, as reported in Table 3. By replacing the parameter i al by (i al + i ? ) in the formula reported below, values really close to M n (SEC) could be obtained (4 700 g/mol for the degraded ExtraNR and 7 000 g/mol for the degraded IR).
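A plausible form of this end-group formula (an assumption on our part, deduced from the integrals defined in the footnotes below), with all integrals normalized to one proton, is:

$$ M_n(\mathrm{NMR}) \approx \frac{i_{iso}}{i_{al}} \times M_{iso} $$

with $M_{iso} = 68$ g/mol, so that replacing $i_{al}$ by $(i_{al} + i_{?})$ amounts to counting the chains whose aldehyde was converted into the impurity together with the intact aldehyde-terminated chains.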
Moreover, the ratio i ?' /i ? was around 6-7 both for the ExtraNR and for the degraded IR, consistent with a dimethyl acetal chain-end (six methoxy protons for one acetal CH proton).
To finish with, no modification of the ketone side of the degraded rubber was observed, which means that this side reaction consumes only aldehydes.
c: obtained by SEC in THF with a LS detector, using a dn/dc value of 0.130; d: obtained by integration of 1 H NMR signals. i al corresponds to the integration of the signal of the aldehydic proton, i iso to the integration of the signal of the diene proton of the polymer backbone (shift: 5.15 ppm in Figure 14), i ? to the integration of the signal at 4.33 ppm in Figure 14, and i ?' to the integration of the signal at 3.31 ppm in Figure 14.
This impurity formation can be explained by the presence of acid and the wide excess of methanol used for the precipitation of the polymer. For this reason, it was proposed to carry out the precipitation of the polymer in methanol containing a small amount of alkaline water, in order to degrade the last traces of acid and to be in the presence of water, thus rendering the formation of the acetal less favourable.

b. Epoxidation

The results are summarized in Table 4. As can be seen, the experimental epoxidation rates were very close to, even if a bit lower than, the theoretical ones. No particular effect of the nature of the rubber can be observed. The reaction is also quite fast, as the epoxidation rates are very similar after 2 and 4 h of reaction. The quantity of m-CPBA (m w ) to add to the reaction medium for a targeted epoxidation rate was calculated as follows:
$$ m_w = \frac{t_x \times m_R \times M_{m\text{-}CPBA}}{P \times M_{iso}} $$
where:
- $t_x$ is the targeted rate of epoxidation (%)
- $m_R$ is the mass of rubber (g)
- $M_{m\text{-}CPBA}$ is the molar mass of m-CPBA (g/mol)
- $P$ is the purity of m-CPBA (wt %)
- $M_{iso}$ is the molar mass of isoprene (g/mol)
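As an illustration, this calculation can be scripted as below (the molar masses are textbook values and the purity a typical commercial grade, to be replaced by the actual ones):

M_ISO = 68.12      # molar mass of the isoprene unit (g/mol)
M_MCPBA = 172.57   # molar mass of m-CPBA (g/mol)

def mcpba_mass(t_x, m_rubber, purity=0.77):
    """t_x: targeted epoxidation rate as a fraction (0.006 for 0.6 %),
    m_rubber: mass of rubber (g), purity: m-CPBA purity (wt fraction).
    Assumes the sample is made only of isoprene units (true for IR,
    an overestimation for ExtraNR, as discussed below)."""
    n_iso = m_rubber / M_ISO            # moles of isoprene units
    return t_x * n_iso * M_MCPBA / purity

print(f"{mcpba_mass(0.006, 10.0):.2f} g of m-CPBA for 10 g of rubber at 0.6 %")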
The only difference between ExtraNR and IR epoxidation is that, after 4 h of reaction, the epoxidation rate of ExtraNR is higher than that of IR. This can be explained by the equation reported above for the calculation of the quantity of m-CPBA used. Indeed, in this equation, the epoxidation rate is calculated from the molar ratio of epoxidizing agent to the molar amount of isoprene units. To obtain this molar amount, the rubber sample is considered as entirely constituted of isoprene units, which is true only in the case of IR. For ExtraNR, we showed that the samples were not only composed of PI but also of other constituents. Thus, by dividing m R by M iso , the corresponding molar quantity of isoprene is overestimated in the case of ExtraNR, which explains the difference in epoxidation rate between ExtraNR and IR.

The partial degradation was further confirmed by 1 H NMR analysis (Figure 19), with the appearance of a signal at 9.77 ppm corresponding to the proton of the aldehyde moiety, but also with the presence of a residual signal corresponding to the protons of the epoxide units at 2.68 ppm. In the case of IR, the signal of the residual epoxide units had almost totally disappeared (Figure 19) and the measured molar mass (M n ~ 25 kg/mol by SEC) is close to the targeted one; 1 H NMR further confirmed that, in this case, the molar mass was close to the targeted value. This could indicate that in ExtraNR, the acidic cleavage of the epoxides is "disturbed", most probably by some "impurities" (the non-rubber compounds). The addition of 2.2 equivalents of periodic acid (instead of the initial 1.1 equivalent) allowed all the epoxide units to be cleaved (no more signal of oxirane protons in Figure 20). In this case, the M n (SEC) was 15 kg/mol, still a bit higher than the theoretical value, which can be related to the slight difference existing between the experimental epoxidation rate and the targeted one (Table 4).

It can be concluded from these preliminary studies that the optimal conditions to obtain TLNR (from the two clones used in this study) are to perform first the epoxidation with m-CPBA for 2 h, followed by acidic cleavage for 2 h with 2.2 eq of periodic acid. It was then decided to determine the benefit of the "purification step" by comparing the TLNRs obtained from ExtraRRIM 600 and from the RRIM 600 raw sheet directly. The same experimental conditions (0.6 % epoxidation rate and 2.2 equivalents of periodic acid, targeting a final molar mass of 10 000 g/mol) were thus applied to both samples. Figure 21 and Figure 22 present, respectively, the 1 H NMR spectra and the SEC chromatograms obtained in both cases. No significant difference is observed between the TLNR obtained from the extracted PI and the one obtained from the NR sheet. This means that the gel fraction does not interfere in the degradation process and/or is also chemically cleaved during the reaction. In view of this result, for the rest of the work presented in this manuscript, the degradation will be performed directly on the raw NR.
To go further, we decided to perform experiments to construct abacus curves for the two NR clones and the IR. The results are depicted in Figure 23. In the case of IR degradation (blue curve), even if the values are higher than the expected ones, the final molar masses evolve linearly with the inverse of the epoxidation rate over a wide range. On the contrary, a different behaviour is observed for both NR clones (orange and grey curves). At high epoxidation rates, both NRs follow a linear variation with the inverse of the epoxidation rate, as expected.
But both clones quickly reach a rate at which the degradation becomes less efficient, thus forming telechelic polymers with molar masses far higher than the expected ones.
The limit values for both clones correspond to an epoxidation rate of 0.5%, as can be observed from the zoom in the top left corner of Figure 23. Figure 24 presents the 1 H NMR spectra of degraded RRIM 600 obtained for 3 different epoxidation rates. It can be observed that above the limit defined previously, no residual epoxides are visible and a clear aldehyde signal is present. But for an epoxidation rate of 0.25 % (i.e. below the limit), the signal of residual epoxides becomes stronger and that of the aldehydic proton is almost absent. In the case of IR (Figure 25), it can be observed that for the same epoxidation rates, there is no residual epoxide, and the decrease in intensity of the aldehyde signal can in this case be attributed only to the increase of the molar mass (fewer chain-ends). These results confirm that during the chemical degradation of NR, a part of the periodic acid used for the cleavage of the chains is deactivated.
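For reference, under the assumption that every epoxidized unit gives exactly one chain scission, the expected molar mass is simply

$$ M_n(\mathrm{theo}) \approx \frac{M_{iso}}{t_x} $$

e.g. $t_x = 0.6\,\%$ gives $68.12/0.006 \approx 11\,350$ g/mol, close to the 10 000 g/mol targeted earlier. The departure of both NR clones from this line below $t_x = 0.5\,\%$ thus directly reflects the epoxide units left uncleaved.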
The origin of this phenomenon was not further investigated, but it proves that, in order to afford high molar mass TLNR, a larger amount of acid is required to cleave all the epoxides.
V. Conclusion
In conclusion of this chapter, an investigation of the composition of two clones of NR (RRIM 600 and PB 235) allowed more information to be obtained about the non-rubber constituents of both samples and, more generally, about the differences existing between these two clones of Hevea.
In a second step, it was possible to obtain hetero-telechelic liquid rubbers of different origins (natural or synthetic PIs) bearing two different chain-ends. Among all the existing methods, the acidic degradation was chosen and studied. After the optimization of some parameters and a better comprehension of its chemical pathway, well-defined PI could be obtained and characterized. A molar mass of 10 000 g/mol was selected, as it permits a good characterization of the polymer by NMR. Increasing the molar mass was shown to be possible, but further optimization of the quantity of acid to be used would be necessary. The obtained hetero-telechelic PI could then be used for chain-end functionalization and modification, to obtain reactive synthons for the synthesis of the desired tri-block polymer, as will be developed in the next chapter.
I. Introduction :
The first step of the project was the synthesis of a TLNR bearing two different functions at the α and ω chain-ends, as shown in the previous chapter (PIDeg). The present chapter will focus on the functionalization of this telechelic polymer, in order to be able to later perform the synthesis of the di-blocks PI-lipid and PI-protein and of the desired tri-block protein-PI-lipid mimicking Tanaka's model.
First, a brief state of the art will be presented, explaining the chemical pathway given below (Figure 1) and how this multistep synthesis was built. This will be followed by the presentation of the results obtained for the synthesis of the functional PIs, detailing each step of the synthesis. The experimental procedures will be given at the end of this chapter, which will be concluded by a table summarizing all the synthesized molecules. Deliberately, only limited information will be given in this chapter about the use of the synthesized functional PIs, as this part of the manuscript is intended as a toolbox to which the other chapters will refer.
II. Bibliography :
Many functional PI derivatives have already been synthesized from degraded NR 1-7 , mainly for the synthesis of polyurethanes. The reductive amination of degraded PI 1,2,5 was particularly studied, using primary and secondary amines as well as an ammonium salt to selectively functionalize either the ketone or the aldehyde chain-end (Figure 2). It was demonstrated that primary amines were not selective, as reductive amination occurred at both chain-ends.
Indeed, with 2 equivalents of the amine, the double functionalization was obtained, while with 1.2 equivalents of the amine, a mixture of ketone- and aldehyde-functionalized PI was observed. When the same reaction was performed with secondary amines or an ammonium salt, only aldehyde modification was observed, regardless of the quantity of amine used for the reaction. This difference was explained by the decrease of the nucleophilicity of the electron pair going from alkyl primary amines to functional secondary amines and ammonium salt, and by the difference in steric hindrance between aldehyde and ketone on the one hand, and between primary and secondary amines on the other hand. This was further supported by Abdel-Magid et al. 8,9 , who investigated these reactions for many ketone- or aldehyde-functionalized molecules.
It was also shown that, using sodium triacetoxyborohydride (STABH) as reducing agent, it was possible to selectively reduce the aldehyde over the ketone even without any amine. Moreover, this reducing agent presents the advantage of being cheap and widely commercially available, as well as less toxic than its homologue sodium cyanoborohydride usually used for reductive aminations.
Finally, the selective reduction of ketone over ester using sodium borohydride (NaBH 4 ) as the reducing agent was also reported 2 (Figure 3); this will also be used in our strategy. To conclude, reductive amination (as well as selective reduction) of degraded PI is possible and has already been studied. By using secondary amines or an ammonium salt, we should be able to selectively functionalize the aldehyde chain-end of our TLNR, which could lead to the synthesis of the first di-blocks PI/lipid or PI/protein. Moreover, the use of STABH as reducing agent could also permit the selective reduction of the aldehyde chain-end without degradation of the ketone part, also allowing selective functionalization of our polymer. Finally, the use of NaBH 4 for the selective reduction of the ketone chain-end in the presence of esters is also of interest and leads the way to the tri-block formation. It appears that the most important aspect of the chain-end functionalization will be to go back and forth between activation of the chain-end and functionalization, in order to obtain the desired final tri-block.
III. Chain-end functionalization
a. Results and discussion
i. Synthesis of a heterotelechelic ketone/maleimide PI (PIMal)
The first functional PI to be synthesized was a PI terminated by a maleimide group. This synthon was originally designed for the synthesis of a di-block PI-protein via thiol-maleimide "click chemistry" between a cysteine of the protein and the PIMal.
The synthesis starts with the selective reduction of the aldehyde chain-end of the degraded PI using NaBH(OAc) 3 , affording a heterotelechelic ketone/hydroxyl PI (PIOH) (Figure 4). The last step of the synthesis was then the esterification reaction; a similar reaction had already been reported by Goodyear in 2013 10 . As can be seen in Figure 8, the appearance of the characteristic signals of the maleimide function (13', 3.51 ppm / 14' and 15', 6.66 ppm) as well as of the "CH 2 " group in α position of the ester carbonyl (9', 2.27 ppm) confirmed the formation of the targeted compound. Again, the maleimide moiety induced side reactions, as can be seen from the SEC chromatograms (Figure 9): a slight broadening of the peak after esterification is observed on the refractive index detector (full line), while on the light scattering detector (dashed line) a shoulder appeared at higher molar mass. Nevertheless, this population can be neglected, as it appeared to be highly minor. As fatty acids are cheap and easily available, it was preferred to synthesize most of the acyl chlorides prior to the esterification, starting from the acid and using oxalyl chloride as the chlorinating agent. A few drops of DMF were also used as catalyst (Figure 10).
As can be seen from the 1 H and 13 C NMR analyses (Figure 11 and Figure 12 respectively), the chlorination of the carboxylic acid reached total conversion, considering the shift of the signal corresponding to the "-CH 2 " group in α position of the carbonyl group from 2.34 ppm (carboxylic acid) to 2.87 ppm (acyl chloride) on the 1 H NMR spectrum, and the shift of the carbonyl signal from 180.1 ppm (carboxylic acid) to 174.4 ppm (acyl chloride) in 13 C NMR. Comparison with commercial acyl chlorides confirmed that the new carbonyl formed was the expected one. Besides, the long carboxylic acid (C 24:0 ) was rather insoluble in DCM and the reaction had to be conducted in heterogeneous conditions. This did not seem to have any negative impact, as the yield of this reaction was also 100%. The first PI/lipid hybrid synthesized was a PI functionalized at each chain-end by one fatty ester. Its multi-step synthesis is described in the following paragraph.
As various fatty acids will be used later in the manuscript, the following clarification of the nomenclature is given:
- PIMonoLip, PIMonoLipLip, PIDiLip or PIDiLipLip designates the molecule in general, without specifying the fatty chains linked.
- PIMonoC n:p , PIMonoC n:p C n:p , PIDiC n:p or PIDiC n:p C n:p designates a specific molecule following the nomenclature of fatty acids, where "n" is the number of carbons of the lipidic backbone and "p" the number of unsaturations of the lipid. As an example, PIDiC 16:0 designates a PI terminated by two palmitic moieties at the same chain-end.
The PIMonoLip, like the PIMal, were obtained by the esterification of PIOH with fatty acyl chlorides (Figure 13). The 1 H NMR spectrum obtained after the esterification is given in Figure 14 (here in the case of a PIMonoC 24:0 ). The shift of signal 8 (3.63 ppm) to 8' (4.03 ppm), corresponding to the "CH 2 " group in α position of the chain-end, as well as the appearance of signal 11' (0.88 ppm), corresponding to the terminal "CH 3 " group of the fatty chain, and of signal 9' (2.26 ppm), corresponding to the "CH 2 " group in α position of the ester carbonyl, attest that the reaction is quantitative. Finally, the last step of this synthesis was the grafting of a second fatty chain at the α chain-end of the PIMonoLipOH previously obtained, via an esterification reaction using fatty acyl chlorides. First, the same reaction conditions as for the synthesis of PIMonoLip were applied.
Even after 20 h of reaction at room temperature, only about 50% conversion was reached (Figure 16, presenting the 1 H NMR spectrum obtained in the case of a PIMonoC 24:0 C 24:0 ). The signal 5 (4.82 ppm) corresponding to the "CH" group in α position of the new ester can be observed, but the signal noted "*" (3.80 ppm), corresponding to the "CH" group in α position of the hydroxyl of PIMonoLipOH, is still visible. TEA was then replaced by DMAP, as it is more nucleophilic than TEA. Indeed, a higher nucleophilicity should increase the rate of the substitution of the chloride by DMAP and thus render the lipidic moiety more reactive (Figure 17). DMAP could also activate the alcohol by hydrogen bonding. Furthermore, DMAP still acts as a proton trap, thus protecting the double bonds from degradation. The 1 H NMR spectrum of the PIMonoLipLip obtained by this process is given in Figure 18, here in the case of a PIMonoC 24:0 C 24:0 as an example.
In this case, the esterification is quantitative, considering the total disappearance of the signal corresponding to the "CH" group in α position of the hydroxyl of PIMonoLipOH (3.80 ppm).

The next synthon is an analogue of PIMonoLipLip, but with the two fatty chains at the ω chain-end of the polymer. Again, the synthesis proceeds via a multistep pathway; each step will be presented successively. The first step is the reductive amination of the terminal aldehyde of PIDeg by diethanolamine (DEA) (Figure 19) 2 . The obtained polymer is a heterotelechelic ketone/di-hydroxyl PI (PIDiOH). This reaction gives access to a PI bearing two reactive hydroxyl functions at one chain-end and is probably one of the most important bricks among the functional PIs presented in this chapter, as it allows getting closer to Tanaka's model. The second step of the synthesis of PIDiLipLip is the esterification of PIDiOH with fatty acids, affording a heterotelechelic ketone/di-lipid PI (PIDiLip). The obtained polymer presents a chain-end structure close to Tanaka's model. The procedure is similar to the one described for PIMonoLip, using fatty acyl chlorides as reactants and TEA as both catalyst and proton trap.
Silica beads functionalized with primary amines had to be used for the purification of the sample as, during the first experiments, the formation of di-ketene was observed (Figure 22). Ketene formation can happen when acyl chlorides are mixed with TEA, due to the basicity of the amine and the acidity of the proton in α position of the carbonyl group 11 . The ketene formed can then dimerize to give a four-membered ring lactone referred to as "di-ketene" (Figure 23). During the purification of the polymer, the di-ketene precipitated as well, and it was thus not possible to purify the PIDiLip. As primary amines react quickly with di-ketenes 12 , amine-functionalized beads were added at the end of the reaction to trap the di-ketene and remove it by simple filtration. Figure 24 presents the comparison of the 1 H NMR spectra of a contaminated PIDiC 24:0 before (bottom spectrum) and after (upper spectrum) purification with the silica beads. Signals 3 and 4 are no longer visible after the treatment, attesting the success of this purification method. With this new purification method, pure compounds could be obtained. The esterification reaction itself was quantitative, as can be seen on the 1 H NMR spectrum (Figure 25), shown here in the case of a PIDiC synthon.

It may be highlighted that signals 9 and 10 (becoming 9' and 10') are shifted to higher ppm values, whereas signal 8 (becoming 8') is shifted to lower ppm values. This can be attributed to the hydrogen bond that can be formed between the proton of the hydroxyl group and the free doublet of the nitrogen in the case of a tertiary amine (Figure 26) 13 . It is thus reasonable to expect this hydrogen bonding to influence the shift of the "-CH 2 " group in α position of the nitrogen (PI side), an effect that disappears after esterification as no proton is available anymore.

The next synthon was synthesized to be used as a macro-initiator in the synthesis of a di-block copolymer PI/polypeptide, in order to mimic the PI/protein coupling. More information will be given later, in the chapter dedicated to bioconjugation (chapter V). The reaction conditions are similar to those used for the synthesis of PIDiOH, but changing the amine from DEA to N-methylethanolamine (MEAM) (Figure 29). The reaction was quantitative, as confirmed by 1 H NMR analysis with the total disappearance of the signal of the aldehydic proton at 9.74 ppm. To further confirm the synthesis of the expected product, HSQC analysis was performed, as the signals corresponding to the "CH 2 " groups in α position of the ketone (2', 2.41 ppm) and of the nitrogen atom (PI side) (8', 2.39 ppm) are overlaid. For the sake of clarity, only a zoom of the zone corresponding to the chain-ends is presented in Figure 31 A. It can be seen that signal 8' (2.39 ppm) obtained from the proton analysis of PImOH couples with two different carbons in HSQC (57.9 ppm and 44.0 ppm). By comparison with the HSQC analysis of PIDeg (Figure 31 B.), it can be confirmed that the signal at 44.0 ppm corresponds to the carbon of the "CH 2 " group in α position of the ketone chain-end. The right compound was thus well synthesized and characterized.
Figure 30: 1 H NMR spectra of PImOH and PIDeg
To the best of our knowledge, the synthesis of PINH 2 has been reported in the literature only once 5 , via the selective reductive amination of the terminal aldehyde of PIDeg using ammonium acetate. It was reported that PINH 2 is not really stable, as it can undergo chain extension when heated up to 40°C. We then attempted to carry out the synthesis of PINH 2 , either following the described procedure or adapting it (change of solvent or of the quantity of ammonium acetate). Whatever the solvent or the amount of ammonium acetate, the salt could not be solubilized and the reaction had to be carried out in heterogeneous conditions.
The 1 H NMR spectra obtained for several conditions are given in Figure 32. A triplet characteristic of the "-CH 2 " group in α position of the primary amine is reported in the literature around 2.6 ppm. Such a signal was never obtained in our case. Nevertheless, in all conditions, a total disappearance of the aldehydic proton (9.7 ppm) was observed, whereas the signal due to the "-CH 2 " protons in α position of the ketone chain-end (2.4 ppm) remained or only slightly decreased in intensity. On spectrum B, two new signals were observed (3.6 and 3.4 ppm respectively), which did not fit the shift presented in the literature. The 1 H NMR analysis thus showed no evidence that a PINH 2 was obtained, but showed that some reaction occurred at the aldehyde chain-end, rather selectively, considering the integration of the "-CH 2 " group in α position of the ketone chain-end. SEC analysis of these polymers showed, for attempts B, C and D, an increase of the molar mass (Table 1). This suggests that, during the synthesis, the polymer undergoes side reactions increasing its molar mass. The main hypothesis to explain this increase and the disappearance of the aldehydic proton in NMR is that, due to the low solubility of the ammonium salt, whenever a first primary amine is formed, it preferentially reacts with an aldehyde or ketone chain-end of another polymer chain, thus explaining the increase of the molar mass (Figure 33). Finally, as this synthon could never be synthesized properly, this pathway was abandoned.
Nevertheless, as an aminated PI could be a key molecule, we investigated another possibility to obtain this polymer.
Diels-Alder reaction using furfurylamine (PIDA)
It was proposed to use Diels-Alder chemistry between furfurylamine and PIMal to afford the desired amine-terminated polymer. As a first step, a model reaction was studied between MalHex and furfurylamine (Figure 34). Multiple NMR analyses ( 1 H, 13 C, 13 C-DEPT 135, HSQC, HMBC) were necessary for the characterization of the product of the reaction, and revealed that the obtained molecule was different from the expected one (Figure 36 to Figure 39). The main difference comes from the signals 6a and 6b (in Figure 36, 2.83 and 2.44 ppm respectively), which were determined by 13 C-DEPT 135 to be protons of a "CH 2 " group not existing in the expected molecule.
Furthermore, the same side reaction had already been described in the literature 15 . This behavior is specific to the use of furfurylamine in DA conditions. However, it was shown that when the primary amine was turned into an amide, the targeted DA adduct could be formed. As the primary amine is the function targeted in our case, this reaction was also abandoned and not applied to PI. The structure of the undesired compound formed is given in Figure 35.
vii. Synthesis of a heterotelechelic di-lipid/maleimide PI (PIDiLipMal)
All the chemistries developed in this sub-chapter were finally combined to obtain a PI functionalized in α and ω positions by a maleimide and two fatty esters, respectively. This molecule is the keystone to obtain the desired "tri-block" of Tanaka's model, as one chain-end is already functionalized with two lipids while the maleimide can be used for the grafting of a protein via thiol-maleimide "click chemistry". This molecule was thus crucial for the rest of the project.
The easiest way to obtain PIDiLipMal would be the esterification of a PIDiLipOH using a maleimide acyl chloride and DMAP as catalyst, given the results obtained for the synthesis of PIDiLipLip (or PIMonoLipLip). Unfortunately, in this case, an unexpected pink coloration of the polymer as well as an increase of the molar mass (even reaching 300 000 g/mol while starting from a 10 000 g/mol PIDiLipOH) were observed. Moreover, 1 H NMR analysis revealed that most of the maleimide double bonds were consumed, with the disappearance of the corresponding signal at 6.66 ppm. Two different approaches were thus considered: either the protection of the maleimide double bond prior to esterification (Figure 40 A.), or a more powerful esterification pathway, as the one using MalChlo and DMAP might be too slow compared to the degradation of the maleimide group (Figure 40 B.). The use of furan for the protection of the maleimide function was already described in the literature 16 . The reaction between both functions is reported to be generally fast and quantitative, and the deprotection is facilitated by the fact that the boiling point of furan is low (~ 30°C) compared to the deprotection temperature (usually ~ 130°C). The furan can thus be easily removed from the reaction medium by evaporation during the deprotection process. However, this route increases the number of steps to afford the desired compound and presents a risk, during deprotection, of cyclization involving the PI double bonds, as the polymer cannot withstand high temperature for too long. Two deprotection pathways were then investigated, one in bulk using vacuum to remove the furan generated, and one in toluene solution, distilling off the furan. The first step was thus the synthesis of a protected maleimide (MalProt). As the 1 H and 13 C NMR spectra were difficult to assign directly, various NMR characterizations (COSY, HSQC, HMBC, not presented here) were used to determine the exact structure of the compound. It appeared that, as could be expected, both "endo" (noted "+" in Figure 41 and Figure 42) and "exo" isomers were formed. The reaction was found to be quantitative, with the total disappearance of the signal corresponding to the double bond of the maleimide (6.66 ppm)
and the appearance of two new signals (6.37 and 6.49 ppm), both corresponding to the new double bond formed by the DA reaction, for the endo and exo compounds respectively. About 60% of the "endo" compound was obtained. The esterification then proceeded via a Steglich esterification using dicyclohexylcarbodiimide (DCC) as coupling agent and DMAP as catalyst, to afford a heterotelechelic MalProt/di-lipid PI (PIDiLipMalProt). This reaction was very effective: on the 1 H NMR spectrum (Figure 43), the total shift of signal 13 (3.74 ppm), corresponding to the "CH" group in α position of the terminal hydroxyl group of PIDiLipOH, to signal 13' (4.89 ppm) after esterification, corresponding to the same "CH" group but now in α position of an ester function, was observed. Finally, the deprotection of the maleimide function was investigated to obtain a heterotelechelic maleimide/di-lipid PI (PIDiLipMal). The deprotection in "bulk" was the first one to be tested. Various reaction times and temperatures were tried, but it appeared that above 140°C the polymer presented signs of cross-linking (total or partial insolubility in solvent, coloration, increase of viscosity). A partial deprotection (up to 70%, calculated by 1 H NMR) was observed for a heating at 120°C for 20 minutes, without any visible sign of cross-linking.
Increasing the heating time to 40 minutes (still at 120°C) allowed a nearly total deprotection (about 95%, calculated by 1 H NMR analysis, Figure 44), even if a small signal corresponding to the protected form was still visible (signal noted "*" on the NMR analysis).
But these conditions also led to a broadening of the SEC trace (Figure 45). This might be due to cross-linking occurring via side reaction(s) with the double bonds of PI. It was thus thought that the addition of solvent was mandatory to avoid this problem. The deprotection in solution appeared to be more suitable to reach the targeted functional polymer (Figure 46 and Figure 47). Indeed, the deprotection rate is about 90% (calculated by 1 H NMR analysis) and no sign of cross-linking can be observed on the SEC trace. Even if this deprotection pathway can still be optimized, it represents a powerful way to obtain a PIDiLipMal without any side reaction. In order to avoid the problems encountered during the deprotection step, the direct esterification pathway was also tested. Indeed, the Steglich esterification was demonstrated to be a powerful reaction (PIDiLipMalProt), using a low amount of catalyst (DMAP) without generation of acid (unlike acyl chlorides), thus preventing any damage to the PI backbone.
However, it still implies keeping DMAP in contact with the maleimide double bond, which is suspected to be the cause of the pink coloration and cross-linking encountered at the beginning of this sub-chapter. This reaction was not optimized, and large quantities of reactants were used in order to complete the reaction quickly and thus avoid side reactions between the maleimide and DMAP.
In 40 seconds, an esterification rate of about 85% (calculated by 1 H NMR analysis through the appearance of the signal at 4.89 ppm corresponding to the "CH" group in α position of the newly formed ester bond) can be reached, with the presence of residual PIDiLipOH (indicated by a "*" in Figure 48) but without any visible cross-linking according to the SEC traces shown in Figure 47. Contrary to the previous method, the 15% of residual PIDiLipOH still contains a hydroxyl function that could possibly interfere in the coupling with proteins.
IV. Conclusion
In conclusion, this chapter described the synthesis of diverse functional and functionalized PIs without particularly focusing on their applications, which will be the scope of the coming chapters. All the structures obtained are given in Table 2, as well as the experimental values of the integrals for the various PI synthons in Table 3. Taking advantage of the selectivity of some reactions or reactants, it was possible to obtain hetero-telechelic PIs functionalized both with two lipids at one chain-end and with a maleimide moiety at the other one. This particular synthon is postulated to be the keystone leading to the final tri-block copolymer of Tanaka's model, by using the "thiol-maleimide" click chemistry between a cysteine and the maleimide chain-end.
This coupling chemistry will be developed in the last chapter of the manuscript. In addition, PIDiOH and PImOH were synthesized to be used as macro-initiators of a polypeptide block, which will also be discussed in the last chapter of the manuscript. Finally, diverse architectures of PI/Lipid hybrids were synthesized, varying the number of fatty esters linked. The thermo-mechanical properties obtained with these structures will be developed in the next chapter.

MalHex (0.5 g, 0.05 mmol) was solubilized in 3 mL of dry DCM. Oxalyl chloride (0.4 mL, 2 eq, 0.1 mmol) was then added under argon flow, as well as a few drops of DMF. A vigorous bubbling was observed. The reaction conversion was followed by connecting the reaction flask to a bubbler containing a KOH solution. When no more bubbling was visible, the reaction mixture was evaporated under vacuum to remove the solvent and the excess of oxalyl chloride. The final product was then dried overnight under dynamic vacuum, affording an orange powder. Yield: > 90%

PIOH (1 g, 0.1 mmol) was dissolved in 4 mL of dry THF. Dry TEA (60 µL, 4 eq, 0.4 mmol) was added, followed by 2 eq (0.2 mmol) of the desired acyl chloride. After 1 h of reaction, 3-aminopropyl-functionalized silica particles were added (~1 NH2 eq toward acyl chloride) and the obtained mixture was stirred for 2 h.

iii. Synthesis of PIMonoLipLip

PIMonoLipOH (0.1 g, 0.01 mmol) was dissolved in 0.7 mL of dry THF, as well as 11 mg of DMAP (10 eq, 0.1 mmol). The desired fatty acyl chloride (4 eq, 0.04 mmol) was then added to the solution and the reaction was allowed to proceed at 40°C for 3 h.

e. Synthesis of PImOH

PIDeg (1 g, 0.1 mmol) was dissolved in 3 mL of dry THF. N-methylethanolamine (33 mg, 4.5 eq, 0.45 mmol) was then added and the reaction was stirred at 40°C for 2 h. Finally, 0.12 g (4.2 eq, 0.42 mmol) of sodium triacetoxyborohydride were added to the reaction mixture, followed by 10 µL (1.3 eq, 0.13 mmol) of acetic acid, and the non-homogeneous solution was stirred at 40°C overnight. The final polymer was obtained after two successive precipitations into a large excess of cold methanol, solubilization in Et2O, filtration through Celite® and drying overnight at 40°C under dynamic vacuum. Yield: ~ 85%

PIDeg (1 g, 0.1 mmol) was solubilized in 5 mL of DCM. Ammonium acetate (0.25 g, 30 eq, 3 mmol) was then added to the polymer solution, as well as 65 mg (2.8 eq, 0.28 mmol) of NaBH(OAc)3 and 6 µL (1 eq, 0.1 mmol) of glacial acetic acid. The reaction was then stirred overnight at room temperature and the final polymer was recovered after two successive precipitations into a large excess of cold methanol, solubilization in Et2O, filtration through Celite® and drying overnight at room temperature under dynamic vacuum.
g. DA reaction between MalHex and Furfurylamine (MalDAAm)
Figure 62: structure of MalDAAm
MalHex (0.1 g, 0.47 mmol) was solubilized in 1 mL of DCM and furfurylamine (0.46 g, 10 eq, 4.7 mmol) was then added to the solution.
After stirring at room temperature overnight, the final compound was obtained by evaporation of the solvent and the excess of furfurylamine under dynamic vacuum.
Characterization of the undesired compound formed:
iii. Synthesis of PIDiLipMal
Direct esterification
PIDiLipOH (0.1 g, 0.01 mmol) was solubilized in 1 mL of dry THF. MalHex (30 mg, 10 eq, 0.1 mmol) was then added to the solution, as well as 30 mg of DCC (11 eq, 0.11 mmol) and 2 mg of DMAP (1 eq, 0.01 mmol). The reaction mixture was then stirred for 40 seconds at room temperature and then precipitated twice in a large excess of cold methanol. The polymer was then solubilized in Et2O, filtered through Celite® and dried overnight at room temperature under dynamic vacuum.
I. Introduction
The previous chapter described the synthesis of various PI-Lipid hybrid polymers constituted of a polymeric chain functionalized by one, two or three fatty chains. This chapter will focus first on the thermo-mechanical properties of these new hybrid polymers and on the influence of the nature of the fatty chain linked to the polymer. Then, by studying the CCr of those hybrid materials, the influence of linked and free fatty chains on the CCr of PI will be investigated.
As those hybrid polymers are close models of NR, this study should allow a better understanding of the superior CCr property of the natural polymer compared to IR.
II. Chain-end crystallization a. Study of PIDiLip
DSC analysis was first performed on the PIDiLip to investigate the influence of the grafted fatty chains on the PI crystallization properties. Figure 1 presents the DSC profiles of NR, PIDeg and PIDiOH. Those profiles were recorded as references for the rest of the study. No significant difference between NR and its shorter derivatives was observed. In all cases, a T g around -63°C was measured. Depending on the nature of the fatty chain linked, both a crystallization and a melting peak can appear (Figure 2 and Annexes 1 to 6). For the unsaturated fatty esters used (C 11:1 / C 18:2 ) as well as for the shorter saturated one (C 14:0 ), only a T g around -65°C was measured, which is close to the value observed for the initial NR, PIDeg and PIDiOH.
On the contrary, for longer saturated fatty chains, both a crystallization and a melting peak are observed. The results are also summarized in Table 1. Comparison of the crystallization temperatures of the free saturated fatty methyl esters alone with the ones obtained in the case of PIDiLip (Figure 3) showed that the crystallization temperatures of the PIDiLips and of the free methyl esters alone vary similarly with the lipidic chain length, but with a shift to lower values (i.e. free methyl C 18:0 alone crystallizes at 40°C while it crystallizes at -18°C when attached to the PI chain-end). Here, the values used for the crystallization temperatures of the fatty methyl esters alone are data from the supplier. A second-order polynomial trendline was fitted to these data, where:
- y represents the crystallization temperature of a PIDiLip
- x represents the size of the saturated fatty ester (in carbon number)
It was postulated that below the glass transition temperature of PI, no crystallization of chain-ends could be observed, as the system would be in a glassy state. The trendline equation was thus used to determine the "low limit" of the size of fatty ester that can be grafted to the polymer backbone and still afford crystallization. The value of x for y = -63°C was thus calculated.
As the equation is a second-order one, it gives two solutions, x 1 = 48 and x 2 = 13. Considering the solution x 2 , it would mean that for all the saturated fatty esters bearing fewer than 13 carbons and linked to a 10 000 g/mol PI, the crystallization temperature of the chain-ends would be below the T g of the polymer, and thus not visible by DSC. It can be extrapolated that if the melting temperature of a fatty ester alone is below 5°C (melting temperature of methyl n-tridecanoate), then it will no longer crystallize when attached to a 10 000 g/mol PI chain. These two limits (13 carbons, and 5°C for the crystallization temperature) explain well the absence of crystallinity of PIDiC 11:1 , PIDiC 18:2 and PIDiC 14:0 .
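As an illustration only, the trendline reasoning can be reproduced numerically from the three crystallization temperatures quoted in this chapter (-35°C for C 16:0 , -18°C for C 18:0 and 17°C for C 24:0 ). The exact coefficients of the trendline are not reproduced here, so the sketch below, which refits a second-order polynomial to these three points, only approximately recovers the quoted roots (it gives x ≈ 46 and x ≈ 13 instead of 48 and 13).

```python
import numpy as np

# Chain-end crystallization temperatures of the 10 000 g/mol PIDiLips
# quoted in the text: C16:0 -> -35 degC, C18:0 -> -18 degC, C24:0 -> 17 degC.
carbons = np.array([16.0, 18.0, 24.0])
t_cryst = np.array([-35.0, -18.0, 17.0])

# Second-order polynomial trendline y = a*x**2 + b*x + c
a, b, c = np.polyfit(carbons, t_cryst, 2)

# Solve y = -63 degC (Tg of PI): below this line the chain-ends would only
# crystallize in the glassy state and hence never be observed by DSC.
roots = np.roots([a, b, c + 63.0])
print(f"trendline: y = {a:.3f}*x**2 + {b:.2f}*x + {c:.1f}")
print(f"solutions of y = -63 degC: x = {roots.max():.0f} and x = {roots.min():.0f}")
```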
Indeed, in the case of the unsaturated fatty chains, the crystallization temperatures of methyl undecenoate and methyl linoleate alone are -24°C and -35°C respectively, far below the limit of 5°C proposed previously. This means that, while attached to the PI chain, those esters would crystallize below the T g of the polymer. The case of PIDiC 14:0 can be related to the limit of 13 carbons obtained from the trendline equation. Two hypotheses can be formulated: either the low limit of 13 is not very accurate, due to the fact that it is calculated from a trendline, or, as PIDiC 14:0 might possess a crystallization temperature close to the T g , the cooling rate used for the analysis (10°C/min) was too fast compared to the speed of crystallization of the chain-ends, and the glass transition was reached prior to the beginning of the crystallization of the fatty chains. This could also be related to the case of PIDiC 16:0 , which crystallizes only partially during the cooling cycle (Annexe 3).
Interestingly, in the case of PIDiC 24:0 (Annexe 6), it was possible to observe the crystallization of the material at room temperature (T cryst = 17°C). A total change of viscosity was thus observed, with a material behaving more like a paste than a viscous liquid (the general behavior of the other 10 000 g/mol PIs). This hybrid was studied by optical microscopy using a heating and cooling plate under polarized light. Figure 4 presents 3 pictures of the same sample submitted to a cooling cycle (from 60°C to 5°C) followed by a heating cycle (from 5°C to 60°C), performed at a rate of 10°C/min in order to be comparable with the DSC program. Pictures "a" and "c" correspond to the hybrid polymer observed at 60°C and show an amorphous matrix of PI. On the contrary, picture "b" represents the same sample cooled down to 5°C. Crystallites appeared as "shiny dots" under the polarized light. The appearance of crystallites explains the change of behavior of PIDiC 24:0 compared to the other hybrids. Figure 5 presents our proposed self-assembly of a PIDiLip below the crystallization temperature of the lipidic chain-ends.
The nodules of crystallization created by the fatty chains form micro-domains anchoring the PI while keeping the ketone chain-end free. This vision of the phenomenon raised the hypothesis of a potential thermal cross-linking by functionalizing both the α and ω chain-ends with fatty esters. This will be discussed later in this chapter.
i. 5 000 g/mol
As can be seen in Figure 6 (as well as in Table 2, which summarizes the results, and in Annexes 7 to 9), the decrease of the PI chain length did not affect the crystallization and melting phenomena at all. Indeed, the temperatures obtained for both the crystallization and the melting of the lipidic chain-ends are identical to those obtained in the case of the 10 000 g/mol PIDiLips. The main difference between the two experiments is the value of the enthalpy obtained during crystallization or melting, which increased from ~ 8 J/g (10 kg/mol PIDiLip) to ~ 14 J/g (5 kg/mol PIDiLip). This result was expected, as the mass proportion of fatty esters for PIDiLip 5 kg/mol is twice that of PIDiLip 10 kg/mol.
ii. 27 000 g/mol
The second approach was to increase the molar mass of the PI chain up to 27 000 g/mol and to functionalize it with C 16:0 fatty esters, affording a PIDiC 16:0 . The DSC thermogram obtained after its analysis is reported in Figure 7. No crystallization was obtained for this polymer. This could come from the viscosity of this polymer, higher than that of the 10 000 g/mol synthons. At low temperature, the viscosity becomes even higher, hindering the mobility of the chains and, as a matter of fact, preventing the crystallization of the chain-ends.
Additionally, the mass percentage of fatty esters is nearly 3 times lower than with the 10 000 g/mol PIDiLip, which could also make the crystallization unfavorable. Another hypothesis could be that 27 000 g/mol is higher than the entanglement molar mass of PI (~ 14 000 g/mol), thus hindering the mobility of the polymer chains and, consequently, the encounter of the chain-ends needed to crystallize. The effect of the number of linked fatty chains on the crystallization behavior was then studied. To this end, the crystallization properties of both PIMonoLip and PIDiLipLip (Figure 8) were determined.
i. PIMonoLip
As described in the previous chapter, various hybrid polymers (PIMonoC 16:0 , PIMonoC 18:0 and PIMonoC 24:0 ) were synthesized starting from PI of 5 and 10 kg/mol. DSC analyses were performed and, surprisingly, with the 10 000 g/mol series, only the PIMonoC 24:0 exhibited a weak crystallization. Moreover, the temperatures of crystallization and melting (-32°C and -18°C respectively, Figure 9) were shifted to lower values when compared to PIDiC 24:0 . The explanation was originally thought to be the mass fraction represented by the fatty esters in PIMonoLip, which is half that of PIDiC 24:0 . But when the PIMonoLips of 5 000 g/mol were analyzed, again, only the PIMonoC 24:0 exhibited a crystallization, and at the same temperature as the PIMonoC 24:0 of 10 000 g/mol. It was nevertheless more intense than for the latter. Those results rule out the preliminary hypothesis of the influence of the mass fraction, as for PIMonoC 24:0 5 000 g/mol the proportion of fatty chains is the same as for PIDiC 24:0 10 000 g/mol. They suggest, however, that one important parameter governing the crystallization behavior is the shape of the hybrid. Indeed, for the same weight fraction of fatty chains attached to the PI (PIDiC 24:0 10 kg/mol and PIMonoC 24:0 5 kg/mol), when two lipids are attached at the same chain-end (PIDiLip) instead of one (PIMonoLip), a crystallization at higher temperature is observed.
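A quick mass balance supports this reading. The sketch below is a minimal illustration in which the ~352 g/mol acyl residue mass of a C 24:0 chain and the helper name are assumptions introduced here; it shows that PIDiC 24:0 10 kg/mol and PIMonoC 24:0 5 kg/mol indeed carry the same weight fraction of grafted lipid.

```python
M_ACYL_C24 = 352.0  # g/mol, assumed mass of one grafted lignoceroyl (C24:0) residue

def lipid_wt_fraction(n_chains: int, mn_pi: float) -> float:
    """Weight fraction of grafted fatty chains in a PI/lipid hybrid."""
    m_lipid = n_chains * M_ACYL_C24
    return m_lipid / (mn_pi + m_lipid)

print(f"PIDiC24:0   10 kg/mol: {lipid_wt_fraction(2, 10_000):.1%}")  # ~6.6 %
print(f"PIMonoC24:0  5 kg/mol: {lipid_wt_fraction(1, 5_000):.1%}")   # ~6.6 %
print(f"PIMonoC24:0 10 kg/mol: {lipid_wt_fraction(1, 10_000):.1%}")  # ~3.4 %
```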
It could be proposed that the lipidic domain, in the case of a "Y"-shaped hybrid, is bigger than in the case of a linear one, thus shifting the crystallization to higher temperatures. The peaks noted "*" on the figures correspond to ketene impurities as, at this point of the project, the removal method had not yet been found.
ii. PIDiLipLip
As described before, fatty chains can form nodules of crystallization. This raised the hypothesis that if the ketone chain-end of the PI backbone could also be functionalized with a fatty chain (PIDiLipLip or PIMonoLipLip), it would be possible to induce the formation of a physical network formed by crystals. In the same vein, a recent work described the synthesis of triblock copolymers composed of an amorphous core of branched poly(n-butyl acrylate) functionalized at both chain-ends by a "crystallizable" statistical copolymer of poly(octadecyl acrylate) and poly(docosyl acrylate) 1 . It was demonstrated that different self-assembled structures could be obtained, including physical cross-linking (Figure 11). Two synthons were thus synthesized starting from the same PIDiC 24:0 (10 000 g/mol): a PIDiC 24:0 C 24:0 and a PIDiC 24:0 C 16:0 . The DSC thermograms are given in Figure 12 and Figure 13 respectively. For PIDiC 24:0 C 24:0 , it can be observed that the addition of one extra fatty chain does not have any influence on the crystallization (or melting) temperature compared to PIDiC 24:0 .
The only difference was again the enthalpy of crystallization, which increased slightly from 8 to 12 J/g. Moreover, only one crystallization (or melting) peak is visible on the thermogram, which would confirm the co-crystallization of both chain-ends and the potential physical cross-linking. In the case of PIDiC 24:0 C 16:0 , a slight decrease of the crystallization temperature is observed compared to the value of PIDiC 24:0 alone, which could be explained by the co-crystallization of the C 24:0 and the C 16:0 . Again, only one exotherm is visible for the crystallization, thus also attesting to the co-crystallization of both chain-ends and the possible physical cross-linking. No further investigation on the reversible cross-linking of the PIDiLipLip was performed, mostly due to a lack of time.
d. Addition of free lipids
To complete this study, a method to increase the crystallization temperature of the hybrids was sought, in order to try to obtain a material physically cross-linked at room temperature. It was thus proposed to add free fatty acids or esters in order to reinforce the crystals.
Experiments were first carried out on PIDiLip. Various amounts of methyl palmitate (MetPalmitate) were first added to PIDiC 16:0 . To this end, the fatty ester and the polymer were dissolved in diethyl ether to get a homogeneous solution. After drying, the final material was analysed by DSC. The amount of methyl ester introduced was varied (0.1, 0.3, 1, 2 and 10 wt%). Figure 14 presents the DSC thermograms obtained for each composition. As can be observed, the crystallization temperature increased with the amount of MetPalmitate, shifting from -32.2°C to 9°C. As a reference, the crystallization temperatures of PIDiC 16:0 alone and MetPalmitate alone are -35°C and 34°C respectively. Interestingly, except in the case of 10% MetPalmitate, only one exotherm was observed, thus confirming the co-crystallization of the chain-ends with the free lipids. This experiment demonstrates the real potential of the "doping", as a gap of approximately 20°C could be obtained between the initial hybrid and the 2 wt% doped one. In a second step, free fatty acids were added instead of fatty methyl esters, as the crystallization temperature of fatty acids is higher than that of the corresponding esters. Again, various amounts (0.1, 0.3, 1, 2 and 10 wt%) of free fatty acids were used. Contrary to the previous experiment, here, stearic acid (noted "SteAcid") was added to PIDiC 18:0 .
Figure 15 presents an overlay of the DSC thermograms obtained for each composition.
A different behaviour can be observed, as two distinct exotherms quickly appeared. For small quantities of SteAcid (0.1 and 0.3 wt%), the exotherm corresponding to the crystallization of the chain-ends (peak A / -18°C) was less intense and a second broad peak appeared (A' / -3.2°C). This new exotherm certainly corresponds to the co-crystallization of both free and linked fatty chains. But for higher amounts of SteAcid (1, 2 and 10 wt%), the peak A totally disappeared as the peak A' increased, and a new exotherm appeared, which shifted to higher temperatures with the increase of the doping amount (peaks C, C' and C'').
Taking into account that the stearic derivatives (acid or esters) are not soluble in PI 2 , when a certain amount of SteAcid is reached in the PI matrix, the chain-ends still co-crystallize with a part of the free lipids (peak A') but the rest of the free lipids crystallizes at higher temperature without any mixing with the chain-ends (peaks C, C' and C''). The increase of the temperature between C, C' and C'' comes from the fact that the amount of free fatty acids not co-crystallized with the chain-ends is increasing and tends toward the crystallization temperature of pure stearic acid (68°C). This makes acidic doping totally incompatible with a material cross-linked at room temperature. From the previous experiments, it can be observed that the best co-crystallization effects were obtained with fatty ester derivatives. In order to generate physically cross-linked networks, PIDiC 24:0 C 24:0 was mixed with 0.1, 1, 2 and 10 wt% of methyl lignocerate (noted "MeLignocerate") and analyzed by DSC (Figure 16). After addition of 2 wt% of MeLignocerate, the crystallization temperature increased from 17°C to 36°C with only one visible exotherm, thus attesting to the good co-crystallization of the free and linked fatty chains. For an even higher amount of doping agent (10 wt%), an unusual profile was obtained.
e. Conclusion
To conclude on this first part of the chapter, it was observed that, when grafted to PI, fatty esters were able to crystallize even within the large amorphous matrix formed by the polymer. The fatty esters were able to form microdomains, thus creating nodules of crystallization comparable to thermal anchors for the polymeric chains. Moreover, the crystallization temperature can be tuned by the nature of the linked fatty ester and/or the addition of free fatty esters.
It was, however, shown that the increase of the polymeric chain length prevents this phenomenon and that fatty acids cannot be used as doping agents, as they are not miscible with the polymer. Finally, the functionalization of the polymer at both chain-ends could, according to the literature, lead to the formation of a reversible network formed by the crystallites. More experiments have to be done in this field in order to obtain an elastomeric material.
III. Cold Crystallization a. Introduction
The PIDiLip synthons were originally designed as good models of NR that could help to a better understanding of its properties. As explained in the bibliographic part, among the different properties of NR, CCr was proved to be related to the presence of fatty chains in NR. We thus had the opportunity to study the influence of both linked and free lipids on the CCr capacity of PI by using a model possessing a structure close to the natural polymer according to Tanaka. The results will be presented in the form of a scientific article.
b. New insight into the Cold Crystallization of Natural Rubber: the role of linked and free fatty chains
INTRODUCTION
Natural Rubber (NR) is one of the most important natural polymers, as it is widely used in industry for various applications (tires, gloves, etc…). 3 It exhibits specific thermo-mechanical properties that cannot be obtained with synthetic polyisoprene (IR), such as strain-induced crystallization (SIC), high green strength, excellent crack resistance and fast cold crystallization (CCr). [4][5][6][7] This latter property corresponds to the crystallization of PI at low temperature (usually -25°C) without any external perturbation, contrary to SIC which implies the deformation of the material. 8 The origins of the superior capacities of NR are not yet fully understood despite the large number of studies carried out on this topic.
Nevertheless, Tanaka et al. [9][10][11][12] showed that it could arise from the structure of the polymer, which was described to be composed of a polyisoprene (PI) chain functionalized at one chain-end by a protein and at the other by a phospholipidic moiety (Figure 17). These chain-ends could self-assemble to form a dynamic network in the material, thus explaining the superior mechanical resistance of NR toward IR. 9 In 1946, Wood 13 studied the crystallization of NR and showed that the fastest crystallization rate was obtained by keeping the material at -25°C for several hours. Burfield established later that 75% of the final crystallinity could be reached after 3 h of isotherm and that total crystallization was observed after 16 h. 14 CCr was shown to be correlated to the suggested structure of NR (Figure 17) and more specifically to the lipidic chain-end. The influence of the free fatty acids/esters present in NR was also investigated, as well as their interaction with the PI chain. It was shown that saturated fatty chains (stearic acid being the main one studied) enhanced the crystallization of NR and that unsaturated ones (methyl linoleate for example) had a plasticizing effect on the PI chain. 2 Regardless of the nature of the fatty acid, a synergetic effect was shown between linked and free fatty chains, as well as between stearic acid and methyl linoleate, to promote the cold crystallization of the material. 20,22 Similar studies were also performed on synthetic polyisoprenes. To this end, lipids were grafted onto IR by hydroboration using the pendant 3,4-units, leading to a model of NR. 21 A fast crystallization was observed when the backbone was functionalized with 0.5 wt % of stearic acid and mixed with 1 wt % of methyl linoleate, but even if it was faster than the starting IR, it was still slower than NR. This difference could be related to the microstructure of NR, which is commonly accepted to be pure 1,4-cis, contrary to the IR used by Kawahara which contained about 2 % of 1,2- and 3,4-units, as the microstructure was demonstrated to play a role in the crystallization of PI. 23 In this study, we investigated the synthesis of new hybrid polymers formed by the functionalization of a pure 1,4-cis PI (molar mass of ~10 000 g/mol) with one or two fatty chains at the ω chain-end, as a closer model of NR, and we studied their cold crystallization properties. Even if isoprene polymerization has already been studied by many techniques (cationic, 24,25 anionic, 26 radical, 27-29 metathesis, 30 and coordination polymerization 31 ), none of them allows reaching a pure 1,4-cis microstructure with controlled chain-ends.
Oxidative degradation of NR by successive epoxidation and acidic degradation was preferred, as it could lead to pure 1,4-cis PI with reactive ketone and aldehyde chain-ends. 32 Moreover, the molar mass could be controlled by the quantity of epoxidized units. The reactive chain-ends could then be easily modified to graft lipidic moieties in the ω position. The general synthetic strategy developed in this study is depicted in Figure 18. In this paper, the influence of the nature of the fatty acids on the crystallization properties will be discussed, as well as the influence of the addition of free lipids. Finally, the modification of a synthetic polyisoprene will also be studied to try to mimic the crystallization properties of NR.
Materials.
Natural rubber (NR) RRIM600 was kindly provided by Kasetsart University in Thailand.
Cis-1,4-polyisoprene (IR) (97% cis-1,4, M n = 600 kg/mol , Ð = 2.8) was purchased from Scientific Polymer Products, Inc. 3-Chloroperoxybenzoic acid (mCPBA) (70-75%, Acros), periodic acid (H 5 IO 6 ) (≥ 99%, Aldrich), acetic acid (99%, Aldrich), potassium hydroxide (KOH) (85%, Aldrich), sodium triacetoxyborohydride (NaBH(OAc) 3 ) (97%, Aldrich), diethanolamine (DEA) (99%, Alfa Aesar), stearic acid (SA) (95%, Aldrich), myristic acid (99%, Aldrich), lignoceric acid (> 99%, Aldrich), methyl linoleate (ML) (99%, Aldrich), linoleoyl chloride (>99%, Aldrich), 10-undecenoyl chloride (97%, Aldrich), palmitoyl chloride (98%, Aldrich), 3-aminopropyl-functionalized silica gel (~1 mmol/g NH 2 loading, 40-63µm, Aldrich), oxalyl chloride (>99%, Aldrich) were used without further purification. Tetrahydrofuran (THF) and dichloromethane (DCM) were dried on alumina column. Triethylamine (TEA) was dried on KOH pellets and distilled prior to use.
Methanol, diethyl ether and dimethylformamide (DMF) (reagent grade, Aldrich) were used as received as well as Celite ® (R566, Aldrich).
Characterization.
Liquid-state 1H NMR and 13C NMR spectra were recorded at 298 K on a Bruker Avance 400 spectrometer operating at 400 MHz and 100 MHz respectively, in appropriate deuterated solvents. Polymer molar masses were determined by size exclusion chromatography (SEC) using tetrahydrofuran as the eluent (THF with 250 ppm of butylated hydroxytoluene as inhibitor, Aldrich). Measurements were performed on a Waters pump equipped with a Waters RI detector and a Wyatt light scattering detector. The separation is achieved on three Tosoh TSKgel columns (300 × 7.8 mm), G5000 HXL, G6000 HXL and a Multipore HXL, with exclusion limits from 500 to 40 000 000 g/mol, at a flow rate of 1 mL/min. The injected volume was 100 µL. The column temperature was 40 °C. M n and Đ values were calculated using dn/dc(polyisoprene) = 0.130. Data were processed with the Astra software from Wyatt.
Differential scanning calorimetry (DSC) measurements were performed using a DSC Q100 LN 2 or a DSC Q100 RSC apparatus from TA Instruments, depending on the experiment. With the DSC Q100 LN 2 , the samples were first heated to 80°C for 20 minutes to remove any traces of solvent, then cooled to -100°C and heated back to 120°C at a rate of 10°C min -1 .
Consecutive cooling and heating runs were also performed at 10°C min -1 . The analyses were carried out in a helium atmosphere with aluminum pans. The DSC Q100 RSC device was used for isothermal analyses. The samples were heated at 80°C for 20 minutes prior to use to remove any traces of solvent, then cooled to -25°C for a predetermined time and then heated to 120°C at a heating rate of 10°C min -1 . Fourier Transform Infrared-Attenuated Total Reflection (FTIR-ATR) spectra were recorded between 4000 and 400 cm -1 on a Bruker VERTEX 70 instrument (4 cm -1 resolution, 32 scans, DLaTGS MIR) equipped with a Pike GladiATR plate (diamond crystal) for attenuated total reflectance (ATR) at room temperature.
Synthesis.
Synthesis of heterotelechelic keto-aldehyde PI (PIDeg) (1). 5 g of NR were dissolved overnight in 250 mL of THF under vigorous stirring. The viscous solution obtained was then cooled to 0°C and 50 mL of mCPBA (0.14 g, 0.6 mmol) solution in THF were added dropwise to the NR solution. The reaction was then allowed to warm up to room temperature for 2 h. 0.3 g of periodic acid (2.2 eq to mCPBA, 1.33 mmol) were dissolved in 50 mL of THF and added dropwise to the epoxidized NR solution. After 2 h of stirring at room temperature, the reaction mixture was filtered affording a yellow solution which was then concentrated in vacuum and precipitated into a large excess of cold methanol containing 2 mL of alkaline water. The polymer was then dissolved in diethyl ether (~ 100 mL) and the obtained cloudy solution was filtered on Celite ® . The final product was obtained by evaporation of the Et 2 O and drying overnight at 40°C under dynamic vacuum affording a yellowish and transparent viscous liquid. Yield: ~ 80 %, M n = 9 620 g/mol, Đ = 1.6 (Figure S1).
(Figure S2). Synthesis of heterotelechelic keto-dilipid PI (PIDiLip) (4). 2 g of PIDiOH (0.2 mmol) were dissolved in 7 mL of dry THF. 180 µL (6 eq, 1.2 mmol) of TEA were then added, followed by 3 eq (0.6 mmol) of the desired acyl chloride.
After 1 h of reaction, 3-aminopropyl-functionalized silica particles were added (~1 NH 2 eq toward acyl chloride) and the mixture was stirred for 2 h. The reaction medium was then precipitated in a large excess of cold methanol, dissolved in Et 2 O, filtered through Celite ® and dried first on rotary evaporator and then overnight at 40°C under vacuum. The final compound was a colorless viscous liquid except for PIDiC 24:0 which was a colorless paste.
Yield: ~ 80 % (Figure S8).
RESULTS AND DISCUSSION
Polymer modification.
Synthesis of PIDeg (1)
In this study, 10 000 g/mol PIDeg were targeted. The epoxidation and the acidic degradation (Figure 18) were performed successively without intermediate purification. During the precipitation step in methanol, the use of alkaline water was compulsory to prevent acetal formation at the aldehyde chain-end.
The targeted molar mass can, nevertheless, be adjusted by using the following formula:
M n (targeted) = M n (isoprene unit) / x(epoxide) = 68 / x(epoxide)
With:
- x(epoxide): the molar fraction of epoxidized (and thus cleaved) isoprene units
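A minimal numerical sketch of this relation is given below, assuming quantitative epoxidation and one chain scission per epoxidized unit (the function name and the printed quantities are illustrative). For a 10 000 g/mol target, it predicts an epoxidation ratio of ~0.7 mol%, i.e. ~0.5 mmol of mCPBA for the 5 g of NR engaged in the experimental section, of the same order as the 0.6 mmol of 70-75% pure mCPBA actually used.

```python
M_ISOPRENE = 68.12  # g/mol, molar mass of one isoprene unit

def epoxidation_ratio(mn_target: float) -> float:
    """Molar fraction of epoxidized units (= eq of mCPBA per isoprene unit)
    needed to reach a target Mn, assuming one scission per epoxide."""
    return M_ISOPRENE / mn_target

x = epoxidation_ratio(10_000)   # targeted Mn of PIDeg
mol_units = 5.0 / M_ISOPRENE    # 5 g of NR, in mol of isoprene units
print(f"epoxidation ratio: {x:.2%}")                      # ~0.68 mol%
print(f"mCPBA required: {mol_units * x * 1e3:.2f} mmol")   # ~0.50 mmol
```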
Synthesis of PIDiLip (4)
For the synthesis of PIDiLip (4) the reductive amination of the aldehyde chain-end of (1) with diethanolamine and NaBH(OAc) 3 was first performed to yield PIDiOH (2). Compared to the procedure described in the literature 33 , the solvent was changed (THF instead of DMSO/dichloroethane mixture) to reach a higher reaction rate. 1 H NMR analysis (Figure S4)
showed a quantitative conversion with the total disappearance of the signals at 9.7, 2.49 and 2.35 ppm corresponding to the aldehydic proton and both "CH 2 " groups in α and β position of the aldehyde chain-end respectively. Furthermore, the appearance of signals at 3.57, 2.64 and 2.52 ppm corresponding to both "CH 2 " groups in α and β position of the hydroxyl function and to the "CH 2 " group in α position of the nitrogen atom respectively also confirmed the good control of the reaction. Finally, the selectivity was confirmed by the presence of the signal at 2.43 ppm corresponding to the "CH 2 " group in α position of the ketone chain-end.
The following step was to graft two fatty chains through acylation reactions, affording (4). The corresponding fatty acyl chlorides were first synthesized through the reaction of fatty acids with oxalyl chloride in dichloromethane with excellent yields (Figures S6 and S7), as confirmed by the total shift of the signal at 2.35 ppm in 1H NMR, corresponding to the "CH 2 " group in α position of the carboxylic acid carbonyl, to a signal at 2.87 ppm after the chlorination.
Moreover, in 13 C NMR analysis the shift of the signal at 182 ppm corresponding to the carbon of the carboxylic acid carbonyl to a signal at 174.21 ppm after chlorination also confirms the full conversion. These acyl chlorides were then reacted with (2) in dry THF in the presence of triethylamine. Again, 1 H NMR analysis (Figure S8) confirmed full conversion with the total shift of the signal at 3.57 ppm corresponding to the "CH 2 " group in α position of the hydroxyl chain-end to a new signal at 4.09 ppm corresponding to the "CH 2 " group in α position of the ester formed. The use of the silica beads functionalized by amino-propyl groups is important as it prevents contamination of the product by di-ketene (side product) formed during the reaction 34 and removes the excess of acyl chloride used. No change on the SEC chromatogram was observed thus attesting that no cross-linking occurred during the overall pathway.
Synthesis of PIMonoLip (5)
The synthesis of PIMonoLip (5) starts with the selective reduction of the aldehyde chain-end of PIDeg (1) to yield PIOH (3). NaBH(OAc) 3 was used as a reducing agent known for its selectivity toward aldehydes over ketones 35 . Again, 1H NMR analysis (Figure S9) confirmed the quantitative conversion, with the disappearance of the signals at 9.77, 2.49 and 2.35 ppm from (1), corresponding to the aldehydic proton and both "CH 2 " groups in α and β position of the aldehyde chain-end respectively. A new signal appeared at 3.62 ppm, corresponding to the "CH 2 " group in α position of the newly formed hydroxyl function.
The last step was the grafting of one fatty chain via an esterification reaction similarly to the synthesis of (4). Again, the reaction was carried out in dry THF using fatty acyl chlorides and triethylamine as a proton trap. The 1 H NMR analysis (Figure S9) showed a quantitative conversion with the total shift of the signal at 3.62 ppm corresponding to the "CH 2 " group in α position of the hydroxyl chain-end to a signal at 4.03 ppm corresponding to the same group in α position of the new ester function. Again, no change in the SEC chromatogram was observed attesting that no side reaction occurred.
All the synthesized molecules (varying the number of fatty chains and their nature) are summarized in Table 3.
Thermal analysis of the hybrid polymers synthesized
First, DSC analyses of the PIDiLip were performed. Figure 19 shows the DSC thermogram obtained for PIDiC 18:0 . A T g at -63°C as well as a crystallization (and a melting) at low temperature can be observed for the hybrid polymer, whereas for NR, PIDeg and PIDiOH (Figures S10 to S12), only a T g at -63°C was observed. This could not be due to PI crystallization, which was reported to be slow, occurring only while maintaining the rubber at low temperature (usually -25°C) for a long time. It can then be assumed that the lipids linked at the chain-end crystallize. Moreover, it was possible to tune the crystallization and melting temperatures of the chain-ends by varying the nature of the fatty acid, as indicated in Table 4. Indeed, T m and T c increased with the fatty acid chain length (Figures S13 to S18).
For instance, T m increased from -25 to 22°C for C 16:0 and C 24:0 respectively.
Nevertheless, for unsaturated fatty acids (C 11:1 and C 18:2 ) and smaller saturated fatty acid (C 14:0 ), no crystallization occurred. For PIDiC 24:0 , as the melting temperature is close to room temperature, the material became more viscous due to partial crystallization. In the case of PIMonoLip, only PIMonoC 24:0 presented a crystallization but at a temperature much lower than PIDiC 24:0 (-35°C instead of 18°C). This suggests that the number of chains grafted to the polymer backbone is also an important parameter for the chain-end crystallization.
No crystallization or melting observed by DSC
In a second step, as the PIDiLips were designed to be simple models of NR, it was decided to study their cold crystallization behavior. DSC was used to follow the crystallization of the starting NR after 2 and 8 h of isotherm at -25 °C (Figure 20).
It can be seen, as reported in the literature, that the crystallization process is quite long, as only a small melting endotherm can be observed after 2 h at -25 °C. Interestingly, an exothermal crystallization peak can be observed during the isotherm as a function of time (Figure 21). Different characteristic times were defined in the literature: t i (induction time, corresponding to the starting time of the isothermal analysis), t m (time of maximum crystallization, i.e. the time at which the maximum of the plot is observed) and t e (extrapolated time, i.e. the time at which the crystallization effectively starts). We added a fourth value, t f , corresponding to the time at which the crystallization is finished (see Figure 21, line b). Similar analyses were then achieved for PIDeg and PIDiOH (Figure 21). All the characteristic times are summarized in Table 5, as well as the melting temperatures obtained for each sample by performing a DSC heating cycle following the isothermal crystallization. For PIDiOH and PIDeg, the crystallization started with a delay (t e : 93 and 82 min for PIDeg and PIDiOH respectively), but the maximum of heat flow was reached after a similar crystallization time for all three polymers (t m - t e ≈ 150 min), and the crystallization finished in ~320 min (t f - t e ) in all three cases. In the literature, such a delay was only observed for acetone-extracted NR (AE-NR) (removal of the free fatty chains by acetone extraction) or trans-esterified NR (TE-NR) (removal of both linked and free fatty chains). In our case, it could thus mean that the controlled degradation removed free and/or linked fatty chains from the initial NR, but that the reductive amination had no effect. Finally, an increase of the overall crystallinity of the PIDeg and PIDiOH was observed compared to the initial NR (higher enthalpy), which is in agreement with the literature 22 , as Kawahara reported a delayed crystallization of TE-NR compared to NR but a higher final crystallinity. The cold crystallization of PIDiC 18:0 was then investigated and compared to the one of PIDiOH (Figure 22). Endotherm peaks were visible in both cases. Nevertheless, in the case of PIDiOH, the observed endotherm corresponded to the melting of the crystallized PI chains, whereas for PIDiC 18:0 , the endotherm was due to the melting of the chain-ends, as the melting temperature is lower than the one of the PI chains and corresponds to the T m of the chain-ends during a regular DSC cycle (Figure 3, Table 2). The fatty chains grafted at one chain-end thus seemed to prevent (or at least significantly decrease) the CCr of the polymeric chains.
This observation is in good agreement with the literature, as AE-NR (only containing linked lipids) presented a strong decrease in the crystallization rate. The effect of the number of linked fatty chains was also investigated.
In the case of PIMonoLip, no PI crystallization was observed (Figure S19). Tanaka suggests that grafting fatty chains onto PI decreases the "purity" of the sample, thus rendering the crystallization less favorable. 16 In our case, the absence of crystallization of PI even after 8 h of isotherm can be related to the mass fraction of lipids, which acted as a large quantity of "impurities" in the case of PIDiC 18:0 (5.7 wt %). In a last step, free lipids were added to try to recover a cold crystallization for the hybrid material. Indeed, methyl linoleate (ML) and stearic acid (SA), which represent the major part of the lipids in Hevea NR 36 , were reported to have, respectively, a plasticizing effect (ML) and a nucleating effect (SA) on the CCr of NR. PIDiC 18:0 was investigated first and was mixed with 4 % SA, 4 % ML and 4 % ML + 4 % SA (wt%). The DSC thermograms are given in Figure 23. The addition of 4 % of ML did not induce any crystallization of the polymer, and only the melting of the lipidic chain-ends was present. On the contrary, the addition of 4 % SA or 4 % ML + 4 % SA favored the CCr of the PI chains, with a higher melting enthalpy when both free lipids were present, showing the synergetic effect of SA and ML. Nevertheless, even in the presence of 4 % of both SA and ML, the overall crystallinity was much lower than that of NR. This is a different behavior from AE-NR, for which the addition of SA with ML allowed recovery of the original crystallinity 20,21 . This can be due to a much higher weight content of linked fatty chains in our case, as well as to the low molar mass used for this study (~10 000 g/mol). By mixing PIDiOH with 4 % of SA + 4 % of ML, it can be seen in Figure 24 that the addition of free fatty chains increased the crystallization rate, which became even higher than the one of NR (t m and t f of 91 and 182 min vs 146 and 338 min respectively), with an overall crystallinity comparable to the one of PIDiOH reported in Table 5. It can be assumed that the nucleating effect of SA created the first nodules of crystallization and that the plasticizing capacity of ML conferred mobility to the chains even at low temperature. A similar study was then carried out with a high molar mass IR (M n ~600 000 g/mol) having a high content of 1,4-cis units (~ 97%). The same chemical procedures were followed to synthesize the corresponding hybrid polymer (IRDiC 18:0 ) with a polyisoprene molar mass of 10 000 g/mol.
The chain-end crystallization occurred in exactly the same way as for the PIDiC 18:0 synthesized from NR (Figure S20). Nevertheless, contrary to NR, no crystallization was observed for the IRDiOH (Figure S21), nor for the starting IR maintained at -25°C for 8 h. The isotherm time was thus extended to 60 h, and in this case crystallization appeared only for the initial IR (Figure S22) but not for IRDeg (Figure S23). These results suggest that the CCr of IR is much slower than the one of NR, certainly due to the presence of 1,2- and 3,4-units in the microstructure, but also that the higher the molar mass, the quicker the CCr. As expected, IRDiC 18:0 did not exhibit any crystallization, in agreement with the absence of CCr for IRDeg.
Both samples were then mixed with free lipids (4 % ML + 4 % SA) and maintained at -25°C for 60 h. The DSC thermograms obtained are reported in Figure 25. Both the IRDiC 18:0 and the IRDeg recovered a CCr. In the case of the hybrid, only a weak endotherm can be observed, which is partially covered by the melting of the fatty chain-end. But, as in the case of PIDeg mixed with free lipids, the IRDeg exhibits a large endotherm corresponding to the crystallization of the PI. Moreover, the endotherm obtained in the case of IRDeg mixed with free lipids is even bigger than the one of the starting IR. Again, this confirms the promoting role of the free lipids in the crystallization of PI.
I. Introduction
This final chapter will focus on the PI/Protein coupling. As a reminder, the targeted coupling chemistry is a thiol-maleimide "click reaction" involving the thiol group of a cysteine on the protein side and a maleimide chain-end on the polymer side. This coupling chemistry was demonstrated in the literature to be effective 1 , affording one of the first examples of a Giant Amphiphile by coupling a lipase to a polystyrene chain. At this stage of the project, two synthons were available for the coupling: PIMal and PIDiLipMal. In this chapter, various coupling attempts will be presented, emphasizing the difficulties in characterizing the final product. Finally, an alternative solution was proposed to afford a protein-like block via the use of N-carboxyanhydride (NCA) polymerization (Figure 1). The other major advantage of the chosen model protein, CALB (lipase B from Candida antarctica), is that its molar mass is quite low, ~ 35 kDa, which is only 3 times more than that of the functional PI used, and not too far from REF and SRPP, the final targeted proteins. However, in its native form, the 6 thiol groups of the CALB cysteines are engaged in disulfide bridges, which necessitates their reduction prior to any attempt of coupling with PI. Figure 2 presents the structure of the three most common reducing agents used for the reduction of protein disulfide bridges. Among them, β-mercaptoethanol is the most powerful, usually able to reduce all disulfide bridges regardless of the structure of the protein. Nevertheless, it stays connected to the newly formed thiol, as represented in Figure 3, rendering it impossible to use for the project, as the objective is to obtain free thiol functions. DTT appeared to be a good candidate for the disulfide reduction, as it is another powerful reducing agent and leads to free thiol groups after the reduction, as illustrated in Figure 4.
Moreover, this reducing agent was used by Velonia et al. 1 for the reduction of a lipase and gave good results. Nevertheless, it has to be used in excess (~ 2 eq to cleave 1 disulfide bridge) and, as DTT contains thiol groups, it could interfere in the PI-Protein coupling by reacting with the maleimide chain-end if the excess is not removed. To prevent this side reaction, extensive dialysis has to be performed, which slows down the overall process. TCEP was thus selected as a "sulfur-free" reducing agent. It was reported in the literature to be a powerful candidate for disulfide bridge reduction 2 , presenting the advantage of not interfering in the thiol-maleimide reaction, contrary to other reducing agents able to couple with the maleimide function 3 . This makes the use of TCEP even more attractive, as even if traces remain after the purification of the reduced protein, they would not interfere with the thiol-maleimide coupling reaction.
Surprisingly, after the reduction process and the purification step, only 30 wt% of the protein was recovered. This is probably due to the purity of the starting protein (see below) and/or the adsorption of proteins onto the dialysis membrane. The recovered protein was then analysed by SDS-PAGE and compared to both the native protein and a sample reduced with a great excess of β-mercaptoethanol (Figure 5). It can be observed that the commercial CALB (lane 2) exhibits several bands, whereas only one should be present for a pure sample. This could partially explain the mass loss during the reduction process, as those unknown impurities could potentially pass through the dialysis membrane. Moreover, it can be noticed that the difference of migration between the fully reduced CALB (lane 3) and the native one (lane 2) is very small (see the zoom). Nevertheless, the band in lane 4, corresponding to the CALB reduced with TCEP, lies in between the native CALB and the one reduced with β-mercaptoethanol. The reduction with TCEP was thus less efficient and gave only a partially reduced CALB. A higher amount of TCEP (up to 100 equivalents) did not increase the reduction rate, as the same SDS-PAGE profile was obtained. The Ellman's titration 4 performed to determine the quantity of free thiols present on the protein failed for an unknown reason.
- A "blank protein" experiment without PI.
-A "blank PI" using PIOH instead of PIMal, as it should not couple with the protein due to the absence of maleimide group.
Figure 6 presents pictures of the coupling attempts after addition of the PI phase onto the protein phase (Figure 6.1) and after 24 h of stirring followed by overnight phase separation (Figure 6.2). Before stirring, no PI precipitation was observed, contrary to the coupling attempt carried out using THF. The cloudiness observed for sample 1.C came from the presence of the TEA hydrochloride salt remaining from the grafting of the maleimide at the PI chain-end (see chapter IV). After stirring, 3 emulsions were obtained (Figure 6.2) and were demonstrated to be stable, as no phase separation occurred overnight. A sample of each emulsion was observed by optical microscopy (Figure 7). The 3 emulsions present three distinct particle profiles. In the "blank protein" experiment (Figure 7.A), a nearly perfect spherical emulsion, stabilized by the protein itself thanks to its amphiphilic behaviour, can be observed. The "blank PI" experiment presents relatively ill-defined, bigger particles. In both cases, the emulsion might be stabilized by the proteins, but it is nevertheless difficult to explain the difference of behaviour (size and shape of the particles). Finally, for the coupling experiment, a different behaviour compared to the two other samples is noticeable. The size of the particles formed is in between the two other experiments, and the aspect of the droplets changed. Nevertheless, it is difficult to conclude that coupling occurred.
b. PI/BSA coupling
BSA is a bigger protein than CALB (66 kDa instead of 35 kDa) but presents the advantage of bearing a free thiol function in the native state 5 , on Cys-34, which avoids the reduction step and saves time in the overall pathway. The same coupling conditions as for CALB were applied to BSA. The reaction time was 24 h (Figure 8.1) or 3 days (Figure 8.2). Again, in all cases, emulsions were obtained. In the case of the coupling attempt carried out for 3 days (Figure 8.2.C), a phase inversion can be observed (emulsion in the aqueous phase). A sample of each emulsion was taken, dispersed into pure water and analyzed by optical microscopy (Figure 9). Whatever the reaction time, well-defined emulsions were obtained in the case of the "blank protein" experiment, with the formation of spherical droplets. In the case of the "blank PI" experiment, the aspect of the spheres obtained is really similar to the one obtained with the "blank protein" experiment and seemed to indicate that, as expected, no coupling occurred. The coupling attempt after 24 h (Figure 9.1.C) presents the same aspect of particles as the ones obtained with CALB. But, after 3 days of coupling (Figure 9.2.C), a phase inversion is observed, and the microscopy analysis presents a double population of particles: one colourless, similar to a foam, and a second, darker, forming spherical particles of different sizes. We could then think that the coupling occurred, with the formation of big spherical droplets totally distinct from the "blank protein" experiment regarding the diversity of sizes and their global aspect but, still, this does not constitute a direct proof. Only a few further analyses (SEC in water and DMF) were performed at the moment of the writing of the manuscript, mostly due to a lack of time and to solubilization troubles. The only SEC analyses that were carried out did not give any result, most probably due to solubility issues. To date, optical microscopy analysis is the only method allowing to analyze the entire sample at the same time. The resulting pictures are difficult to interpret but seem to support the occurrence of the coupling, even if they do not constitute an indisputable proof of it.
Another strategy was then proposed, using N-carboxyanhydride polymerization with a PI macro-initiator to afford a PI-Polypeptide di-block with the polypeptide block mimicking proteins.
III. PI-polypeptide co-polymer synthesis
Two distinct methods to afford a di-block PI-b-Polypeptide (Figure 10) will be presented in this sub-chapter. The first one is based on Diels-Alder click chemistry and proposes to synthesize a furan-terminated polypeptide block that will be grafted to the already discussed PIMal (Figure 10.a). The furan-terminated polypeptide could be obtained by ROP of NCA using furfurylamine as the initiator. Indeed, it was already demonstrated that the DA reaction between maleimide and furfurylamine did not lead to the expected product (see chapter IV), but this behavior no longer exists if the furfurylamine is turned into a furfurylamide. 11 The second strategy (Figure 10.b) is based on the capacity of PIDiOH and PImOH to be used as macroinitiators for the ROP of N-carboxyanhydrides (NCA). A bibliographic part will briefly present the chemistry involved and the existing possibilities for the alcohol initiation of NCA, also demonstrating the novelty of the proposed system, and will be followed by the results obtained using this method. Finally, it will be shown that this strategy might be the most promising one to afford a structure close to Tanaka's model, regarding the difficulties encountered with the protein coupling. Regardless of the strategy targeted, the starting point of the study was the synthesis of the NCA monomer. BenzylGluNCA was chosen, as it appeared to be the easiest monomer to obtain and one of the most stable among all existing NCAs. To this end, benzylglutamate was reacted with triphosgene in dry THF. The progress of the reaction can be easily followed, as benzylglutamate is insoluble in THF but becomes soluble as soon as the NCA ring is formed. The product was fully characterized by NMR analysis (1H, 13C and HSQC / Figure 11 and Figure 12). All spectra are comparable to the ones described in the literature. 12 Full conversion was obtained with a good purity of the monomer. The first step was then the synthesis of a furan-terminated polypeptide. It proceeded via a classic polymerization of NCA using a primary amine, furfurylamine, as the initiator. A good control of the reaction was obtained in dry DMF at 0°C. When performed in CDCl 3 , the 1H NMR analysis gave poorly defined spectra. This was reported to be due to the structure of the polymer, forming an α-helix architecture, which prevents a good solubilization in some solvents.
To improve the solubility, trifluoroacetic acid (TFA), deuterated or not, must be used as a cosolvent. The 1H NMR spectrum, the SEC chromatogram as well as the MALDI-TOF spectrum of the polymer are given in Figure 13 to Figure 15 respectively. The assignment could be made thanks to the literature 10 . It can be noted that the furan chain-end signals (signals 7 and 8 at 6.27 and 6.22 ppm respectively in Figure 13) allowed estimation of the molar mass by the use of the following formula:
M n (NMR) = (2 × i(5) / i(7-8)) × M n (unit) + M n (initiator)
With:
- M n (NMR): the molar mass of the polymer calculated by 1H NMR
- i(5): the value of the integral of signal "5" in Figure 13, corresponding to the "-CH" group of the repeating unit
- i(7-8): the value of the integrals of both signals "7" and "8" in Figure 13, corresponding to the "CH" protons of the furan chain-end
- M n (unit): the value of the molar mass of a repeating unit (219 g/mol)
-M n (initiator): The value of the molar mass of the initiator (97 g/mol)
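As a worked example, the sketch below applies this formula; the integral values are hypothetical (normalized so that i(7-8) = 2.00 and chosen to reproduce the 4 450 g/mol value reported below), not the experimental integrals.

```python
M_UNIT = 219.24      # g/mol, benzylglutamate repeating unit
M_INITIATOR = 97.12  # g/mol, furfurylamine initiator

def mn_from_nmr(i_5: float, i_78: float) -> float:
    """Mn(NMR) = (2 * i(5) / i(7-8)) * Mn(unit) + Mn(initiator)."""
    dp = 2.0 * i_5 / i_78
    return dp * M_UNIT + M_INITIATOR

# Hypothetical integrals, normalized to the two furan chain-end protons:
print(f"Mn(NMR) = {mn_from_nmr(i_5=19.86, i_78=2.00):.0f} g/mol")  # ~4 450 g/mol
```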
The calculated molar mass was then estimated to be M n (NMR) = 4 450 g/mol, which is in good agreement with the targeted molar mass (5 000 g/mol). The dispersity was evaluated by SEC in DMF (Figure 14), revealing a quite narrow distribution (1.05) attesting to the good control of the polymerization. The molar mass obtained (3 700 g/mol) is in the same range as the targeted one but is not reliable, as it is based on polystyrene standards. DMF was also used to solubilize the matrix and the polymer to perform the MALDI-TOF analysis (Figure 15). Two distinct populations could be observed: one of small molar masses (1435-2530 g/mol) and of weak intensity, and a second, more intense, representing higher molar masses (3180-6250 g/mol). The simulation revealed an exact correspondence between the obtained spectrum and the expected compound.
Indeed, it is possible to calculate the exact degree of polymerization of each peak by using the formula:
DP(peak) = (M n (peak) - M n (Na) - M n (furfurylamine)) / M n (unit)
With:
- DP(peak): the degree of polymerization of a given population of polymer
- M n (Na): the molar mass of the sodium used for ionisation (23.00 g/mol)
- M n (furfurylamine): the molar mass of furfurylamine used as the initiator (97.12 g/mol)
- M n (unit): the molar mass of a repeating unit (219.24 g/mol)
As an example, applying this formula to the main peak of the major population of Figure 15 gives a DP of 20 for an M n (peak) of 4504.3 g/mol. Surprisingly, applying the same formula to the main peak of the minor population (M n (peak) = 1873.8 g/mol) gives a DP of 8. This would mean that the minor population presents the same structure as the major one but is made of shorter chains. To date, no explanation was found for this double population, as no shoulder or bimodal distribution was observed by SEC. However, all those characterizations attest to the formation of the desired polymer bearing the expected furanic chain-end. The next step of the synthesis was the coupling, via a Diels-Alder reaction, between both blocks.
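The sketch below applies this formula to the two main peaks quoted above and recovers the DPs of 20 and 8 (the helper name is introduced here only for illustration).

```python
M_NA = 23.00         # g/mol, sodium used for ionisation
M_INITIATOR = 97.12  # g/mol, furfurylamine initiator
M_UNIT = 219.24      # g/mol, benzylglutamate repeating unit

def dp_from_peak(mn_peak: float) -> float:
    """DP(peak) = (Mn(peak) - Mn(Na) - Mn(furfurylamine)) / Mn(unit)."""
    return (mn_peak - M_NA - M_INITIATOR) / M_UNIT

for peak in (4504.3, 1873.8):  # main peaks of the major and minor populations
    print(f"peak at {peak} g/mol -> DP = {dp_from_peak(peak):.1f}")
# -> DP = 20.0 and DP = 8.0, matching the values quoted in the text
```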
As a model of PIDiLipMal, a PIMal was used and reacted with the previously synthesized PPepFur. The choice of the conditions appeared to be tricky, as PIMal and PPepFur can hardly be solubilized in the same solvent. Chloroform happened to be the only candidate, even if, as already said, NMR spectra are not perfectly defined in this solvent, probably due to solubility issues. A homogeneous solution (at least visually) was obtained. After 48 h of heating at 60°C, no coupling occurred. 5 vol% of DMF were then added in order to help the solubilization of the peptidic chain and allow the reaction to proceed, again at 60°C for 48 h.
The 1 H NMR analysis of the final polymer is given in Figure 16 and compared to the starting PIMal. No TFA could be used in this case to characterize the compound, as it could degrade the PI backbone. On the NMR spectrum, no visible sign of coupling appeared and, furthermore, the signals of the chain-ends of both polymers remained perfectly visible (singlet at 6.68 ppm corresponding to the double bond of the maleimide, and two singlets at 6.27 and 6.22 ppm corresponding to the "-CH" groups of the furanic function). The poor solvation of the PPepFur in chloroform, despite the small amount of DMF used, might be one explanation for the absence of coupling. Indeed, the polypeptide chain might adopt a conformation blocking the accessibility of the furan chain-end. Another reason might be the temperature, which could be too low for the DA reaction to occur. For these reasons, this di-block coupling was abandoned and never applied to a PIDiLipMal.

As presented in the bibliography part (see Chapter 1), the NCA polymerization generally proceeds via primary amine initiation and at low temperature, to keep a good control of the reaction (Figure 17). As it was not possible to obtain a primary amine terminated PI, we turned towards alcohol initiation of NCA, as recently described in the literature. Indeed, in 2015, Zhao et al. 6,7 introduced a new polymerization system based on the use of amino-alcohols as initiators and a thiourea derivative (TU) as the catalyst (Figure 18). This alcohol-initiated polymerization is rendered possible thanks to a multiple hydrogen bond system. The first addition of monomer proceeds with the activation of the monomer by the thiourea, concomitantly with the internal activation of the initiator by hydrogen bonding between the tertiary amine and the hydroxyl group. This first addition leads to a decarboxylation of the chain-end to recover a terminal primary amine, which allows the propagation step to occur through the "classic" polymerization process. The role of the thiourea is also important during chain growth. Indeed, the terminal tertiary amine could act as a base to deprotonate a monomeric unit and initiate a new chain. This phenomenon is prevented here by the interaction between the chain-end and the thiourea, thus reducing the basicity of the terminal tertiary amine. Moreover, the polymerization could be performed at room temperature without loss of control, again thanks to the thiourea interaction with the propagating primary amine, which slows down the reactivity.
With 3 equivalents of thiourea (compared to the initiator), the reaction reached full conversion within 90 minutes with a dispersity of 1.02, whereas only 57 % conversion was obtained after 240 minutes when 10 molar equivalents of thiourea were used, still with a narrow molar mass distribution (1.05). Multifunctional amino-alcohols can be used as plurifunctional initiators (Figure 19) to obtain different polypeptide architectures (linear, 3-arm and 4-arm stars). More recently, Gradišar et al. 8 presented another method for the ROP of NCA using alcohols as initiating species and an acid as catalyst. In this case, the reaction pathway proceeds in two steps (Figure 20):
- First, the initiation step uses an organic acid (3 equivalents of methanesulfonic acid (MSA) compared to the initiator) to activate the monomer and to block the propagation by protonation of the formed primary amine.
- Then, the propagation step starts with the addition of a base (N-ethyldiisopropylamine) in slight deficit compared to the acid (2.5 equivalents of base compared to the initiator). This NCA polymerization allows the formation of well-defined polypeptides with molar masses around 6 000 g/mol and molar mass distributions varying from 1.1 to 1.4. To date, these two methods are the only ones reported for the controlled ROP of NCA using alcohols as initiators.
ii. Results and Discussion
The synthesis of a "PI-Protein like" di-block was studied by applying the chemistry developed by Zhao et al. 6,7 to a PI bearing an amino-alcohol chain-end. Among the functional PIs described in Chapter 4, PIDiOH and PImOH were good candidates. With PIDiOH, a "Y"-shaped polymer could be obtained if both hydroxyl groups could initiate the polymerization. This synthon was therefore preferred.
In a first step, the ROP of NCA using dimethylethanolamine (DMEA) as the initiator was performed following the procedure reported in the literature. The 1 H NMR analysis of the obtained polymer is given in Figure 21, as well as the SEC analysis performed in DMF, reported in Figure 22. The 1 H NMR analysis is similar to the one described in the literature 7 , thus confirming that the polymerization occurred. The molar mass was estimated by NMR to be 10 000 g/mol, for a targeted molar mass of 13 000 g/mol, by using a formula analogous to the one given above, based on the integrals of signals 5 and 8 in Figure 21, with M n (unit) = 219 g/mol and M n (initiator) = 89 g/mol.

The SEC analysis showed two different populations: one with a M n of 11 000 g/mol, which is in agreement with the NMR analysis and the targeted molar mass, and a second population around 30 000 g/mol. This could indicate the presence of a small amount of side reactions. As described in the literature, the amount of TU used during the polymerization is the key point to prevent side reactions. Nevertheless, an excessive amount of TU prevents the polymerization, due to hydrogen bonding between the TU and the propagating primary amine. We decided to establish a balance between both phenomena by increasing the amount of TU to 3 equivalents (compared to the initiator) and extending the reaction time to 2 h.

In conclusion, the ROP of NCA using PI as the initiator was possible thanks to the amino-alcohol chain-end. It is a convenient method to afford a PI-Polypeptide di-block copolymer with relatively simple chemistry. However, as the aim of the work presented here is to obtain a Protein-PI-Lipid tri-block co-polymer, it was compulsory to develop the same chemistry from the ketone chain-end of PIDeg, as the aldehyde side must be used for the synthesis of PIDiLip. The main problem came from the impossibility of functionalizing the ketone side with a secondary amine to afford an amino-alcohol chain-end. A new approach was thus developed, using two successive reductive aminations.
iii. Synthesis of a heterotelechelic ethylethanolamine/hydroxyl PI (PINOHOH)

To synthesize a PI amino-alcohol terminated at the α chain-end (the ketone chain-end in PIDeg), a reductive amination of the ketone chain-end with ethanolamine must first be performed to afford a hetero-telechelic (ethanolamine/hydroxyl) PI (PINHOH). Before using a PIMono(or Di)Lip, it was proposed to use a model compound, PIOH (described in Chapter 4), as it is easier and quicker to synthesize than the PIMono(or Di)Lip.
The 1 H NMR spectrum of the compound synthesized is reported in Figure 26. The presence of the signals of the "-CH 2 " groups in α position of the nitrogen atom (namely "9'", "10'" and "12'" in Figure 27) is also a proof that the reaction occurred. Moreover, no particular change in the molar mass of the polymer was observed by SEC, thus attesting to the absence of side reactions. It was thus possible to synthesize a PINOHOH functionalized at the α chain-end (the ketone side in PIDeg) by an amino-alcohol function that could be used for NCA polymerization.

iv. Synthesis of Polypeptide initiated by PINOHOH (PIPPepNOHOH)

SEC analysis was also performed in THF containing bis(trifluoromethane)sulfonimide lithium salt in order to avoid the "stacking" of the polypeptide block on the column (Figure 24). Figure 29 presents the SEC chromatograms obtained for the di-block copolymer and the starting macro-initiator. The molar mass after polymerization goes from 11 000 g/mol to 16 000 g/mol, in good agreement with the targeted molar mass of the polypeptide block (around 5 000 g/mol). A narrowing of the signal was observed, but only on the low molar mass side of the Gaussian plot. This can be explained by the high control of the NCA polymerization. Indeed, the molar mass distributions being small (< 1.1) 9,10 , blocks of the same length were added to all the PI macro-initiator chains, the latter having a broader molar mass distribution. Assuming that the hydrodynamic volume of the polypeptide is different from that of the polyisoprene, the impact of the co-polymerization on low molar mass PI will be greater than on the high molar mass chains, which explains the behavior observed here. Finally, no trace of homopolypeptide was observed in the SEC chromatogram, also confirming that the signals observed in the proton NMR of the polypeptide do not come from an admixture of two homopolymers.
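A rough mass balance (our cross-check, combining the SEC values above with the 25 monomer equivalents engaged in the experimental part) supports this assignment:

$$\Delta M_n \approx 16\,000 - 11\,000 = 5\,000\ \text{g/mol} \qquad \text{vs.} \qquad 25 \times 219.24 \approx 5\,480\ \text{g/mol}$$

i.e. the SEC shift matches the targeted polypeptide block length within the accuracy of the calibration.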
v. Conclusion
To conclude, the ROP of NCA was studied as a simple way to model a protein block. The coupling attempts via the DA reaction did not lead to the expected structure, most likely due to the difference in solubility of the two blocks and to the difficulty for the two terminal functions to meet. As a new approach, the NCA polymerization using a PI macro-initiator was developed and led to the formation of two distinct polymer architectures: a "Y" shape when PIDiOH was used as initiator, and a linear di-block when PINOHOH was used as starting material. The latter also presents the advantage of bearing the polypeptide block at its α chain-end, opening the possibility of applying this chemistry to PIDiLip for the synthesis of the tri-block co-polymer close to Tanaka's model.
IV. Conclusion
The protein/PI coupling was the most challenging part of this PhD work. The strategy proposed for the coupling, using thiol-maleimide click chemistry, was demonstrated to work in the literature but appeared highly difficult to perform given the difference in solubility of the two blocks. Some attempts were nevertheless performed, showing the formation of particles of variable shapes. Unfortunately, no direct proof of coupling could be obtained despite the different analysis methods used.
As a substitute, the ROP of NCA was applied to obtain various PI/Polypeptide co-polymers. Moreover, the chemistry developed for the polymerization is an alternative to the classic primary amine initiation, as it uses amino-alcohols as initiators and TU as the catalyst.
Finally, the possibility of applying this chemistry at the α chain-end of the PI chain (the ketone chain-end in PIDeg) was demonstrated, making it possible to grow a polypeptide block from PIDiLip and, thus, to afford a tri-block co-polymer close in structure to Tanaka's model.
V. Experimental part
a. Reduction of disulfide bridge (TCEP / CALB)

CALB (0.5 g) was dissolved in 5 mL of a 1 M phosphate buffer (PB, pH = 7.4).
TCEP·HCl (41 mg, 10 eq) was then added to the solution, as well as NaHCO 3 to obtain a final solution at pH ~ 8. Two cycles of vacuum/argon were applied to afford a final solution under inert atmosphere. The solution was then stirred at room temperature overnight. The excess of salt was removed by dialysis in pure water, and the final protein was recovered by freeze-drying. Yield: 30 %
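For reference, the 10 eq figure is consistent with the engaged masses, taking the usual literature values of about 33 kDa for CALB and 286.65 g/mol for TCEP·HCl (neither value is quoted in this work):

$$n_{\text{TCEP}} = \frac{41\ \text{mg}}{286.65\ \text{g/mol}} \approx 143\ \mu\text{mol} \qquad n_{\text{CALB}} = \frac{0.5\ \text{g}}{33\,000\ \text{g/mol}} \approx 15\ \mu\text{mol} \qquad \frac{143}{15} \approx 10\ \text{eq}$$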
b. Coupling CALB / PIMal
The reduced CALB (50 mg) was dissolved in 0.6 mL of 1 M PB and mixed with 4 drops of TEA. PIMal (71 mg, 5 eq) was dissolved in 0.6 mL of toluene. The PI solution was then added to the protein solution and the heterogeneous mixture was vigorously stirred for 24 h.
The admixture was kept at room temperature overnight to allow phase separation.
c. Coupling BSA / PIMal
Native BSA (50 mg) was dissolved in 0.3 mL of 1 M PB and mixed with 4 drops of TEA. PIMal (38 mg, 5 eq) was dissolved in 0.3 mL of toluene. The polymer solution was then added to the protein solution and the heterogeneous mixture was vigorously stirred for 24 hours.
The admixture was kept at room temperature overnight to allow phase separation.

d. Synthesis of BenzylGluNCA

Yield: ~ 90 %

j. Synthesis of PINOHOH

Finally, 57 mg (9 eq, 234 µmol) of NaBH(OAc) 3 were added to the reaction flask, as well as 2 µL (1.5 eq, 34 µmol) of acetic acid, and the reaction was maintained at 40 °C overnight.

k. Synthesis of PIPPepNOHOH

PINOHOH (0.11 g, 11 µmol) was dissolved in 1.5 mL of dry DCM. Then, 71 mg of BenzylGluNCA (273 µmol, 25 eq) were dissolved in 1 mL of dry DCM, as well as 12.6 mg of TU (25 µmol, 2.2 eq). The solution of PINOHOH was then added to the solution of monomer, the reaction flask was linked to a bubbler and the reaction was allowed to stir at room temperature for 40 minutes. The final polymer was obtained by precipitation in cold methanol containing a small amount of water (2 mL in 150 mL of methanol), filtration through a glass filter and overnight evaporation at 40 °C under dynamic vacuum.
The main objective of this PhD work was the synthesis of a tri-block copolymer composed of a core of pure 1,4-cis PI functionalized at the α and ω chain-ends by a protein and one or two fatty chains, respectively. This molecule had been reported to be a good model of NR, and the study of the tri-block properties could lead to a better understanding of the natural polymer. The strategy developed to afford such an architecture was thus to graft each block selectively, one after the other, onto the PI backbone. It was also proposed, in parallel, to study the properties of each di-block (PI-Lipids / PI-Protein) separately prior to studying the properties of the tri-block. It is believed that applying this chemical pathway to synthetic PI could allow the development of a new kind of hybrid material possessing properties close to those of NR, or at least better than those of synthetic PI alone.
In order to graft each block selectively at each chain-end, the starting PI should be heterotelechelic. In the literature, many IRs bearing various terminal chain-ends have been developed, but the control of the microstructure could be an issue. In order to mimic NR, the microstructure of the hetero-telechelic PI should be 100 % 1,4-cis. This led us to the use of the chemical degradation of NR, already described in the literature, leading to a pure 1,4-cis microstructure (as the starting material is NR) and bearing a ketone function at the α chain-end and an aldehyde function at the ω one. It was demonstrated that those functions could undergo selective functionalization using, for example, reductive amination chemistry, playing with the nature of the amine or of the reducing agent.
The first chapter focused on the characterization of two NRs coming from two different Hevea brasiliensis clones (RRIM 600 and PB 235). The two clones present different molar masses and dispersities (M n ~ 500 000 g/mol and Đ ~ 2.6 for RRIM 600, against M n ~ 1 000 000 g/mol and Đ ~ 1.5 for PB 235), with a bimodality observed for RRIM 600 attesting to the existence of two distinct populations of PI in the natural material. Elemental analysis showed that the nitrogen content of RRIM 600 (0.6 wt%) was slightly higher than that of PB 235 (0.4 wt%), which could be related to a higher amount of proteins in RRIM 600. Finally, solubilization trials showed that both clones were rather well soluble in toluene, poorly soluble in cyclohexane and DCM, and quite soluble in THF. The latter was then used for the degradation pathway, as it allows all the reactants to be solubilized.
The chemical degradation of both clones was then studied through the partial epoxidation of the double bonds with m-CPBA, followed by the acidic cleavage of the oxirane groups with periodic acid, to yield a ketone/aldehyde hetero-telechelic PI (PIDeg).
First attempts showed that some periodic acid was consumed by the non-rubber compounds present in NR, preventing the cleavage of all the oxirane functions. As a consequence, the experimental molar masses could be quite far from the targeted ones. This could nevertheless be improved by increasing the amount of acid to 2 equivalents compared to the oxiranes. However, for small rates of epoxidation (targeting molar masses higher than 20 000 g/mol), the acidic cleavage appeared again less efficient, leading to PIDeg of higher molar masses than expected and bearing remaining oxirane units. The same reactions were applied to IR, confirming that this deviation from the targeted molar mass was due to the non-rubber compounds present in NR, as no deviation was observed in the case of IR degradation. It was possible to obtain a well-defined 10 000 g/mol hetero-telechelic PI with a 100 % 1,4-cis microstructure.
The selective functionalization of the PIDeg was then studied, after having defined the synthetic strategies to follow for grafting proteins and/or lipids. The thiol-maleimide chemistry was chosen for the protein coupling, as it seemed to be selective and had already been reported for the coupling of a protein with a PS chain. The lipidic coupling had not been particularly studied in the literature. It was then decided to synthesize a PI chain bearing two hydroxyl groups at the ω chain-end (the α chain-end remaining intact) in order to graft fatty chains through esterification reactions to yield PIDiLip. The next step was to synthesize a PI chain bearing at the α and ω chain-ends a maleimide function and two fatty esters, respectively (PIDiLipMal). The ketone α chain-end of PIDiLip was then selectively reduced to yield PIDiLipOH. The maleimide grafting was a bit tricky, but PIDiLipMal could be obtained in fairly good yield.
The properties of PIDiLip were first investigated, as it should be a good model of NR (half of Tanaka's model). It was first shown by DSC that the fatty esters grafted at the polymer chain-end could crystallize despite the amorphous PI matrix. Moreover, the crystallization temperature varied with the size of the linked fatty esters. The highest crystallization temperature was obtained with lignocerates (C 24:0 ), reaching 20 °C. Linked fatty chains were thus able to create crystallization nodules in the material, suggesting that, if both chain-ends (α and ω) could be functionalized with lipids, a physically cross-linked material could be prepared. The addition of free fatty chains to "dope" the material and increase its crystallization temperature was investigated. Fatty acids were not suitable, contrary to fatty esters, which could increase the crystallization temperature up to 40 °C in the case of methyl lignocerate. Then, the role of linked and free fatty chains in the cold crystallization of NR was studied.
It was observed that linked fatty chains totally prevent the cold crystallization of PI, regardless of the number of fatty chains linked or their position (α and/or ω chain-end). The addition of free lipids allowed a partial recovery of the crystallinity, as already described in the literature for the cold crystallization of NR. PIDiLip can thus be considered as a good model of the natural material. Similar results were obtained for IR models (quick cold crystallization when an admixture of methyl linoleate and stearic acid was added).
In the last chapter, the coupling of protein and PI was investigated. Lipase B from Candida antarctica (CALB) was first selected to be coupled with PI because of its tolerance to organic solvents. As native CALB does not present any free cysteine, the reduction of one or more disulfide bridges was investigated. TCEP led to the partial reduction of the protein, as characterized by SDS-PAGE analysis. The coupling of the reduced protein with an α-maleimido-terminated PI (PIMal) was then attempted. A stable emulsion was obtained, with the formation of particles of various sizes. Unfortunately, despite the use of various analysis techniques, we did not succeed in obtaining any direct proof that the coupling reaction occurred. A similar study was performed with Bovine Serum Albumin (BSA), which natively bears a free thiol. Again, it was not possible to be 100 % sure that the coupling reaction occurred. As the PI/Protein coupling appeared to be difficult, the N-carboxyanhydride (NCA) ring-opening polymerization (ROP) was investigated to afford a polypeptide block, which could mimic the behaviour of proteins.

The coupling of the PI maleimide and the furan-terminated PPep remained unsuccessful. On the contrary, it was possible to initiate the ROP of NCA from a PI bearing an amino-alcohol function at the chain-end (PIDiOH, PINOHOH). Diverse PI-Polypeptide di-block architectures could thus be obtained, opening the route to the synthesis of a Polypeptide-PI-Lipid tri-block.
The principal outlook of this PhD work would be to demonstrate the feasibility of the PI-Protein coupling and to characterize the amphiphilic properties of the hybrid polymer. Only a few insights were developed here, and a deeper investigation of this topic would be of high interest, as PI-Protein hybrids have never been reported in the literature. Of course, applying the coupling method to PIDiLip would allow the formation of Tanaka's tri-block and would certainly lead to interesting self-assembly properties.
Another interesting axis to develop concerns the ROP initiated by PI. Indeed, here we focused on the ROP of NCA to obtain a polypeptide block, but the chemistry developed using TU as the catalyst could be applied to other monomers such as carbonates, lactones, lactide, etc. It could be interesting to develop a panel of di-block or tri-block copolymers using the same ROP chemistry but varying the rings to be opened.
Finally, the most promising outlook of this PhD work would be to cross-link the PI-Lipid hybrids and to study the influence of linked and unlinked fatty chains on the strain-induced crystallization of the material, to potentially confer new properties and greater resistance to any IR.
SEC analysis
Most of the polymer molar masses were determined by size exclusion chromatography (SEC) using tetrahydrofuran as the eluent (THF with 250 ppm of Butylated hydroxytoluene as inhibitor, Aldrich) and trichlorobenzene as a flow marker. Measurements were performed on a Waters pump equipped with Waters RI detector and Wyatt Light Scattering detector.
The separation is achieved on three Tosoh TSK gel columns (300 × 7.8 mm), G5000 HXL, G6000 HXL and a Multipore HXL, with exclusion limits from 500 to 40 000 000 g/mol, at a flow rate of 1 mL/min. The injected volume was 100 µL. The column temperature was 40 °C. M n and Đ values were calculated using dn/dc (polyisoprene) = 0.130. Data were processed with the Astra software from Wyatt.
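For readers unfamiliar with how such averages are derived from the chromatogram, the short sketch below (an illustration only, not the Astra workflow; the slice values are invented) computes M n, M w and Đ from a sliced molar-mass distribution:

```python
# Illustrative computation of Mn, Mw and dispersity (Đ = Mw/Mn) from SEC
# slices. `masses` holds the slice molar masses M_i (g/mol) and `weights`
# the corresponding mass fractions w_i (proportional to the RI signal).
# All numerical values below are invented for the example.

def sec_averages(masses, weights):
    # The number of chains in each slice is proportional to w_i / M_i
    inv = sum(w / m for w, m in zip(weights, masses))
    mn = sum(weights) / inv                                          # Mn = sum(w) / sum(w/M)
    mw = sum(w * m for w, m in zip(weights, masses)) / sum(weights)  # Mw = sum(w*M) / sum(w)
    return mn, mw, mw / mn

if __name__ == "__main__":
    masses = [8_000.0, 10_000.0, 12_000.0, 14_000.0]
    weights = [0.2, 0.4, 0.3, 0.1]
    mn, mw, d = sec_averages(masses, weights)
    print(f"Mn = {mn:.0f} g/mol, Mw = {mw:.0f} g/mol, D = {d:.2f}")
```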
For polypeptide homopolymers, polymer molar masses were determined using dimethylformamide (DMF + lithium bromide, LiBr, 1 g/L) as the eluent. Measurements in DMF were performed on a PL GPC50 integrated system from Agilent equipped with RI and UV (260 nm) detectors and two KD-804 Shodex gel columns (300 × 8 mm) (exclusion limits from 4 000 g/mol to 200 000 g/mol) at a flow rate of 0.8 mL/min. The column temperature was held at 50 °C. Polystyrene was used as the standard.
For PI-Polypeptide copolymers, polymer molar masses were determined using tetrahydrofuran (THF + lithium bis(trifluoromethanesulfonyl)imide (LiNTf 2 ), 10 mM) as the eluent. Measurements were performed on an Ultimate 3000 system from Thermoscientific equipped with a diode array detector (DAD). The system also includes a multi-angle light scattering detector (MALS) and a differential refractive index detector (dRI) from Wyatt Technology. Polymers were separated on one PSS SDV Linear S column (300 × 8 mm) (exclusion limits from 100 g/mol to 150 000 g/mol) at a flow rate of 1 mL/min. The column temperature was held at 36 °C.
DSC analysis
Differential scanning calorimetry (DSC) measurements were performed using a DSC Q100 LN 2 or a DSC Q100 RSC apparatus from TA Instruments, depending on the experiment. With the DSC Q100 LN 2 , the samples were first heated to 80 °C for 20 minutes to suppress any traces of solvent, then cooled to -100 °C and heated back to 120 °C at a rate of 10 °C min -1 .
Consecutive cooling and heating runs were also performed at 10 °C min -1 . The analyses were carried out in a helium atmosphere with aluminum pans. The DSC Q100 RSC device was used for isothermal analyses.
The samples were heated at 80 °C for 20 minutes prior to use to suppress any traces of solvent, then cooled to -25 °C for a predetermined time and then heated to 120 °C at a heating rate of 10 °C min -1 .
MALDI-TOF analysis
MALDI-TOF spectra were performed by the CESAMO (Bordeaux, France) on a Voyager mass spectrometer (Applied Biosystems). Spectra were recorded in the positive-ion mode using the reflectron and with an accelerating voltage of 20 kV. Samples were dissolved in DMF at 10 mg/mL. The matrix solution (2,5-dihydroxybenzoic acid, DHB) was prepared by dissolving 10 mg in 1 mL of DMF. A MeOH solution of the cationization agent (NaI, 10 mg/mL) was also prepared. Solutions were combined in a 10:1:1 volume ratio of matrix to sample to cationizing agent.
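As an illustration of how the sodiated peaks of Chapter 5 were converted to degrees of polymerization (a minimal sketch of the formula given there; function and variable names are ours), the assignment can be scripted as follows:

```python
# DP assignment for sodiated MALDI-TOF peaks of furfurylamine-initiated
# poly(benzyl glutamate): DP = (m_peak - M_Na - M_initiator) / M_unit.
# The constants are the values quoted in Chapter 5.

M_NA = 23.00          # sodium used for ionization (g/mol)
M_INITIATOR = 97.12   # furfurylamine (g/mol)
M_UNIT = 219.24       # benzyl glutamate repeating unit (g/mol)

def degree_of_polymerization(m_peak: float) -> float:
    """Return the DP of the chain giving a sodiated peak at m_peak (g/mol)."""
    return (m_peak - M_NA - M_INITIATOR) / M_UNIT

for peak in (4504.3, 1873.8):  # main peaks of the major and minor populations
    print(f"{peak} g/mol -> DP = {degree_of_polymerization(peak):.1f}")
```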
FTIR-ATR analysis
Fourier Transform Infrared-Attenuated Total Reflection (FTIR-ATR) spectra were recorded between 4000 and 400 cm -1 on a Bruker VERTEX 70 instrument (4 cm -1 resolution, 32 scans, DLaTGS MIR) equipped with a Pike GladiATR plate (diamond crystal) for attenuated total reflectance (ATR) at room temperature.
SDS-PAGE analysis
Protein samples were analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), using 4-20% precast mini gels (Mini-PROTEAN® TGX™ Gels, BIO-RAD).
One volume of protein sample was mixed with one volume of sample buffer (65.8 mM Tris-HCl, pH 6.8, 2.1% SDS, 26.3% (w/v) glycerol, 0.01% bromophenol blue) containing 2-mercaptoethanol (1.36 M). For non-reducing conditions, 2-mercaptoethanol was not added to the sample loading buffer. Samples were then denatured for 3 min at 95 °C and loaded onto the gel. Electrophoresis was performed in TGS buffer (25 mM Tris pH 8.3, 192 mM glycine, 0.1% SDS) at constant amperage (25 mA/gel). Gels were stained for 30 min with Coomassie colloidal blue (InstantBlue, Expedeon), and destained with water baths. To estimate the size of the protein of interest, a protein ladder was run simultaneously in every gel (Precision Plus Protein™ Unstained Standards, BIO-RAD).
Annexes
Fonctionnalisation de Polyisoprène : Vers un modèle du caoutchouc naturel
Ce travail de thèse porte, de manière globale, sur une meilleure compréhension du caoutchouc naturel (NR). En effet, bien que ce matériau soit fortement utilisé dans l'industrie et ce depuis des dizaines d'années, plusieurs de ses propriétés restent à ce jour mal comprises. Antérieurement à nos travaux, il a été fait un lien entre la biosynthèse du polymère et ces propriétés et il a été proposé que le caoutchouc naturel était constitué d'une chaîne polyisoprène (PI) 100% 1,4-cis de forte masse molaire (> 500 000 g/mol) fonctionnalisée en α et ω par une protéine et un motif phospholipidique respectivement. Ces bouts de chaîne seraient capables de s'auto-assembler pour créer un réseau physique qui confère au NR ses propriétés si intéressantes. L'objet de cette thèse a donc été de synthétiser un copolymère tribloc Protéine/PI/Lipides afin de confirmer cette hypothèse en produisant en laboratoire un homologue de NR. Pour ce faire, un PI hétéro-téléchélique cétone/aldéhyde a été obtenu par dégradation chimique de NR. Cette méthode a permis d'obtenir un PI 100 % 1,4-cis possédant deux fonctions chimiques différentes en bout de chaine permettant ainsi le greffage sélectif d'une protéine où d'un motif lipidique. Ces deux couplages ont ensuite été étudiés séparément (PI/Protéine puis PI/Lipides) révélant des propriétés intéressantes dans le cas du copolymère di-bloc PI/Lipide. Le couplage PI/Protéine s'est avéré plus compliqué et seul des copolymères di-blocs PI/Polypeptide ont pu être obtenus avec certitude, en utilisant des synthons PI comme macro-amorceurs. Une voie de synthèse a également été dégagée pour un tri-block Polypeptide/PI/Lipide présentant une structure très proche du modèle de Tanaka.
Mots clés: Caoutchouc naturel ; Polyisoprène ; Fonctionnalisation ; Modèle de Tanaka
Functionalization of Polyisoprene: Toward the mimic of Natural Rubber
This PhD work focuses on a better comprehension of natural rubber (NR). Indeed, despite the fact that this material has been used for a long time in industry, some properties remain unclear. Previous works of Tanaka allowed a link to be made between the biosynthesis of the material and its properties. It was thus suggested that NR was composed of a high molar mass chain of polyisoprene (PI, > 500 000 g/mol) functionalized at the α and ω chain-ends by a protein and a phospholipidic moiety, respectively. These chain-ends would be able to self-assemble into a pseudo-physical network which would explain some of the superior properties of NR. The goal of this PhD work was to synthesize a Protein/PI/Lipid tri-block copolymer in order to check this hypothesis and to synthesize a hybrid material close to NR. First, a 1,4-cis hetero-telechelic (ketone/aldehyde) PI of 10 000 g/mol was obtained by chemical degradation of NR, yielding a polymeric chain bearing two different functions at the chain-ends and allowing a selective functionalization with both a lipidic moiety and a protein. Both di-block copolymers (PI/Lipid and PI/Protein) were synthesized and studied separately. The PI/Lipid di-block copolymer revealed interesting properties. The synthesis of the PI/Protein di-block copolymer proved more difficult, and only PI/Polypeptide di-block copolymers could be obtained. To this end, a PI macro-initiator allowed the ring-opening polymerization of N-carboxyanhydrides. Finally, a chemical pathway was established, allowing the synthesis of a Polypeptide/PI/Lipid tri-block close to Tanaka's model. Key words: Natural rubber ; Polyisoprene ; Functionalization ; Tanaka's model
Table of Contents

Introduction

Bibliography
I. Overview of NR: Generalities, Biosynthesis and Establishment of Tanaka's model
  a. General considerations on NR
  b. Biosynthesis of NR
  c. Tanaka's model of NR
    i. The trans units at the chain-end
    ii. Origin of the proteic part
    iii. Origin of the phospholipidic part
    iv. Conclusion
  d. Cold crystallization of NR and IR
  e. Strain-induced crystallization of NR and IR
II. Synthesis of Polyisoprene
  a. Introduction
  b. Cationic polymerization
  c. Anionic polymerization
  d. Radical polymerization
    i. NMP polymerization
    ii. RAFT polymerization
    iii. ATRP polymerization
    iv. Conclusion
  e. Ring opening metathesis polymerization (ROMP)
  f. Coordination polymerization
  g. General conclusion on polyisoprene synthesis
III. Polymer coupling: Grafting lipids or proteins
  a. Polymer-Lipid coupling
  b. Polymer-Protein coupling
    i. Grafting to
    ii. Grafting from
    iii. Giant amphiphiles
IV. Conclusion

Natural Rubber and Chemical Degradation
I. Introduction
II. Bibliography
  a. Ozonolysis
  b. Photodegradation
  c. Oxido-reduction degradation
  d. Metathesis
  e. Oxidative degradation
  f. Conclusion
III. Characterization of the starting material
  a. Extraction of polyisoprene from Natural Rubber
  b. Characterization of the raw material
    i. SEC analysis
    ii. NMR analysis
    iii. Elementary analysis
    iv. Conclusion
IV. Controlled degradation of NR
  a. Purification process and side reaction
  b. Epoxidation
  c. Acidic cleavage
  d. Comparison between extracted PI and NR sheet
  e. Conclusion
V. Conclusion
VI. Experimental part
  a. Natural PI extraction
  b. Degradation of NR

Selective Chain-end Functionalization
I. Introduction
II. Bibliography
III. Chain-end functionalization
  a. Results and discussion
    i. Synthesis of a heterotelechelic ketone/maleimide PI (PIMal)
    ii. Synthesis of fatty acyl chlorides
    iii. Synthesis of a homotelechelic lipid/lipid PI (PIMonoLipLip)
    iv. Synthesis of a heterotelechelic di-lipid/lipid PI (PIDiLipLip)
    v. Synthesis of a heterotelechelic methylamino/ketone PI (PImOH)
    vi. Synthesis of a primary amine terminated PI
      1. Reductive amination using ammonium salt (PINH2)
      2. Diels-Alder reaction using furfurylamine (PIDA)
    vii. Synthesis of heterotelechelic di-lipid/maleimide PI (PIDiLipMal)
IV. Conclusion
V. Experimental part
  a. Synthesis of fatty acyl chlorides
  b. Synthesis of PIMal
    i. Synthesis of PIOH
    ii. Synthesis of MalChlo
    iii. Synthesis of PIMal
  c. Synthesis of PIMonoLipLip
    i. Synthesis of PIMonoLip
    ii. Synthesis of PIMonoLipOH
    iii. Synthesis of PIMonoLipLip
  d. Synthesis of PIDiLipLip
    i. Synthesis of PIDiOH
    ii. Synthesis of PIDiLip
    iii. Synthesis of PIDiLipOH
    iv. Synthesis of PIDiLipLip
  e. Synthesis of PImOH
  f. Synthesis of a heterotelechelic ketone/amine PI (PINH2)
  g. DA reaction between MalHex and Furfurylamine (MalDAAm)
  h. Synthesis of PIDiLipMal
    i. Synthesis of MalProt
    ii. Synthesis of PIDiLipMalProt
    iii. Synthesis of PIDiLipMal
      1. Deprotection of PIDiLipMalProt
      2. Direct Esterification

Polyisoprene / Lipid Coupling
I. Introduction
II. Chain-end crystallization
  a. Study of PIDiLip
  b. Variation of the PI chain-length
    i. 5 000 g/mol
    ii. 27 000 g/mol
  c. Variation of the number of linked fatty chains
    i. PIMonoLip
    ii. PIDiLipLip
  d. Addition of free lipids
  e. Conclusion
III. Cold Crystallization
  a. Introduction
  b. New insight into the Cold Crystallization of Natural Rubber: the role of linked and free fatty chains
IV. General conclusion
V. Supporting information

Polyisoprene / Protein Coupling
I. Introduction
II. Polymer - Protein coupling
  a. Reduction of disulfide bridges and PI/Lipase coupling
  b. PI/BSA coupling
III. PI-polypeptide co-polymer synthesis
  a. Synthesis of benzylglutamate N-carboxyanhydride (BenzylGluNCA)
  b. Synthesis of a di-block PI-Polypeptide via DA reaction
  c. ROP of NCA initiated by PIDiOH
    i. Bibliography
    ii. Results and discussion
    iii. Synthesis of a heterotelechelic ethylethanolamine/hydroxyl PI (PINOHOH)
    iv. Synthesis of Polypeptide initiated by PINOHOH (PIPPepNOHOH)
    v. Conclusion
IV. Conclusion
V. Experimental part
  a. Reduction of disulfide bridge (TCEP / CALB)
  b. Coupling CALB / PIMal
  c. Coupling BSA / PIMal
  d. Synthesis of BenzylGluNCA
  e. Synthesis of PPepFur
  f. Synthesis of PIPPepFur
  g. Synthesis of PPepDMEA
  h. Synthesis of PIDiPPep
  i. Synthesis of PINHOH
  j. Synthesis of PINOHOH
  k. Synthesis of PIPPepNOHOH

Conclusion

Materials and Methods

Annexes
I. Overview of NR: Generalities, Biosynthesis and Establishment of Tanaka's model

a. General considerations on NR

As briefly explained in the general introduction, NR is a material of strategic importance for industry. It is present in more than 40 000 commercial goods like tires, but also in more than 400 medical products (1). Many plants are able to produce NR, but the resulting polymer is usually of low molar mass and is not suitable for mechanical applications (2). Among this diversity, only a couple of plants, namely Hevea brasiliensis, Parthenium argentatum (Guayule) and Taraxacum kok-saghyz (Russian Dandelion), were particularly studied for the production of rubber suitable for industrial applications. It is important to note that, generally, "NR" designates Natural Rubber from Hevea brasiliensis, as it is to date the main one used by companies. The previous sentence raises a problem widely demonstrated by Cornish et al.

Bibliography
(1) Kamm, B.; Gruber, P. R.; Kamm, M. Biorefineries - Industrial Processes and Products; Wiley Online Library, 2006.
(2) Mooibroek, H.; Cornish, K. Appl. Microbiol. Biotechnol. 2000, 53 (4), 355.
(3) van Beilen, J. B.; Poirier, Y. Plant J. 2008, 54 (4), 684.
(4) Cornish, K. Technol. Innov. 2017, 18 (4), 244.
(5) Tarachiwin, L.; Sakdapipanich, J.; Ute, K.; Kitayama, T.; Tanaka, Y. Biomacromolecules 2005, 6 (4), 1858.
(6) Tarachiwin, L.; Sakdapipanich, J.; Ute, K.; Kitayama, T.; Bamba, T.; Fukusaki, E.; Kobayashi, A.; Tanaka, Y. Biomacromolecules 2005, 6 (4), 1851.
(7) Tanaka, Y. Rubber Chem. Technol. 2001, 74 (3), 355.
(8) Tanaka, Y.; Tarachiwin, L. Rubber Chem. Technol. 2009, 82 (3), 283.
Table 1: Composition of natural rubber latex and raw dry rubber - Reproduced from Vaysse et al. 11

                               Latex                                          Dry Rubber
                               % w/w fresh latex   % w/w dry matter of latex  % w/w dry matter
Rubber hydrocarbon             35.0                87.0                       94.0
Proteins                       1.5                 3.7                        2.2
Carbohydrates                  1.5                 3.7                        0.4
Lipids                         1.3                 3.2                        3.4
Organic solutes                0.5                 1.1                        0.1
Inorganic substances           0.5                 1.2                        0.2

Approximate values only (highly dependent on clone, season and physiological status of the tree).
Table 2: Carbon Number and Tm of Fatty Acids and Their Esters, and Tg of cis-1,4-Polyisoprene (IR) Containing 30 wt % of Acid or Ester - Reproduced from Tanaka et al.
Table 4: Results on the cationic polymerization of isoprene using the 1-(4-methoxyphenyl)ethanol/B(C6F5)3 initiating system in two different solvents a - Reproduced from Kostjuk et al. 65

Run    Solvent  Time (min)  Conv (%)  Mn (g/mol)  Đ    Unsaturation b (%)  trans-1,4 c (%)  Tg d (°C)
1      CH2Cl2   360         30        4910        2.9  72                  94.0             -33.6
2      BTF      2           82        5580        3.9  63                  93.4             -
3 e    BTF      120         21        3460        1.9  83                  92.9             -25.8
4 e,f  BTF      360         26        2660        1.4  88                  92.9             -32.4

a Polymerization conditions: [B(C6F5)3] = 0.023 M; [IP] = 1.67 M; initiator: [1-(4-methoxyphenyl)ethanol] = 0.011 M; solvent (BTF or CH2Cl2) 5 mL; temperature = -30°C. b Determined by 1H NMR: 100% corresponds to linear polyisoprene with one unsaturation per isoprene unit. c Determined by 1H NMR and 13C NMR spectroscopy. d Measured by DSC.
Table 5: Results from the cationic polymerization of isoprene using the 1-(4-methoxyphenyl)ethanol/B(C6F5)3 initiating system in aqueous media a - Reproduced from Kostjuk et al. 65

Run  Process     Time (h)  Conv (%)  Mn (g/mol)  Đ    Unsaturation b (%)  trans-1,4 c (%)
1    Suspension  138       51        1040        1.7  97                  96.4
2    Dispersion  142       39        900         1.4  99                  96.2
3    Emulsion    141       30        680         1.5  97                  96.7

a Polymerization conditions: [IP] = 1.72 M; [B(C6F5)3] = 4.7x10-2 M; [1-(4-methoxyphenyl)ethanol] = 1.86x10-1 M; temperature = 20°C. b Determined by 1H NMR: 100% corresponds to linear polyisoprene with one unsaturation per isoprene unit. c Determined by 1H NMR and 13C NMR spectroscopy.
Table 6: Emulsion cationic polymerization of styrene and isoprene catalyzed by water-dispersible LASCs a - Reproduced from Kostjuk et al. 58

Run   Monomer            T (°C)  t (h)  Conv. (%)  Mn (kg/mol)  Mw/Mn  Styrene/isoprene (%) c  Tg (°C)
10    Isoprene           40      13     89         97.0         3.8    n.a.                    -58
11 b  Isoprene           40      24     92         60.8         2.7    n.a.                    -57
12    Styrene/Isoprene   40      15     100        124.9        4.9    47:53                   6
13    Styrene/Isoprene   40      15     64         81.7         3.9    26:74                   -16
14    Styrene/Isoprene   40      15     89         152.2        3.2    73:27                   0, 64

a: Polymerization conditions: H2O (3.5 g); monomer (1.5 mL); YbCl3 x 6 H2O (0.21 g); DBSNa (0.78 g); b: C6Cl5OH (0.14 g) as initiator; c: Determined by 1H NMR spectroscopy; n.a.: not applicable.
Table 7: Microstructure of alkali metal-catalyzed polyisoprenes - Reproduced from Foster et al. 72

Catalyst   % 1,4-cis  % 1,4-trans  % 3,4  % 1,2
Lithium    94         0            6      0
Sodium     0          43           51     6
Potassium  0          52           40     8
Rubidium   5          47           39     8
Cesium     4          51           37     8
1,4-cis PI using specific catalysts that will be presented later in this sub-chapter. Most of the results presented here come from a book published in 2006 by Friebe et al. 92 retracing the development of this technology.
Table 8: Polymerization of Isoprene with (AlHNR)n and TiCl4 - Reproduced from Schoenberg et al. 91

Cocatalyst        Al/Ti mole ratio  Cis-1,4 by IR (%)
(AlHN-C2H5)n      1.4               95.0
                  1.5               94.0
                  1.55              95.0
(AlHN-iC3H7)n     1.1               95.5
                  1.2               97.0
                  1.3               97.0
(AlHN-nC4H9)n     1.5               97.0
                  1.6               96.5
                  1.7               97.0
Table 9: Summary of advantages and drawbacks for each synthetic system

Synthetic method  1,4-cis rate  Control of the chain-ends
Cationic          …             …
Table 1: Summary of the data obtained for the extraction of natural PI from RRIM 600 and PB235 using different solvents

Clone     Solvent      Solubilization time  Overall recovery (%) a)  Solubilized Natural PI (%) b)  Gel fraction (%) c)
RRIM 600  Cyclohexane  1 day                90                       38                             61
                       3 days               90                       60                             40
                       5 days               93                       87                             13
          THF          1 day                90                       56                             44
                       3 days               94                       56                             44
                       5 days               95                       60                             40
          DCM          1 day                94                       77                             23
                       3 days               88                       78                             22
                       5 days               98                       85                             14
          Toluene      1 day                91                       85                             14
                       3 days               115                      85                             14
                       5 days               101                      92                             7
PB235     Cyclohexane  1 day                88                       18                             81
                       3 days               89                       16                             84
                       5 days               86                       14                             86
          THF          1 day                98                       96                             4
                       3 days               93                       91                             9
                       5 days               97                       95                             5
          DCM          1 day                91                       61                             39
                       3 days               94                       84                             16
                       5 days               88                       62                             38
          Toluene      1 day                97                       100                            0
                       3 days               91                       100                            0
                       5 days               94                       100                            0

[NR] = 20 g/L. a) Overall recovery (solubilized + gel fractions) starting from 1 g of NR; b) Percentage of the solubilized PI fraction; c) Percentage of the gel fraction.
Table 2: Elementary analysis results obtained for both NR clones and their extracts with THF

NR Clone                      Carbon (wt %)  Hydrogen (wt %)  Oxygen (wt %)  Nitrogen (wt %)
RRIM 600                      85.7           11.3             1.8            0.6
RRIM extract - THF - 24h      86.2           11.3             1.0            0.1
RRIM gel phase - THF - 24h    86.3           10.9             3.6            1.3
PB 235                        86.1           11.3             1.2            0.4
PB 235 extract - THF - 24h    86.9           12.5             -              0.1
PB 235 gel phase - THF - 24h  80.2           11.7             2.9            1.9
Table 3: Summary of the data obtained for the characterization of degraded rubbers bearing impurities

Sample            Mn th a (g/mol)  Mn NMR b (g/mol)  Mn SEC c (g/mol)  i_al  i_iso  i_?  i_?'
Degraded ExtraNR  4 000            7 100             4 600             1     104    0.6  4.0
Degraded IR       6 000            12 700            7 700             1     186    0.8  5.4

a: Targeted molar mass. b: Calculated from the 1H NMR integrals by using the formula: $M_{n,\mathrm{NMR}} = \left(\frac{i_{iso}}{i_{al}} \times 68\right) + \ldots$
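A short sketch of the end-group arithmetic behind footnote b may help; the chain-end mass term is truncated in the source, so it is left as an explicit parameter here (the function and constant names are ours):

```python
M_ISOPRENE = 68.0  # g/mol of one isoprene repeat unit

def mn_from_nmr(i_iso, i_al, m_chain_ends=0.0):
    """Number-average molar mass from 1H NMR end-group analysis:
    ratio of the backbone integral (i_iso) to the aldehyde chain-end
    integral (i_al), times the repeat-unit mass, plus the (here
    unknown) chain-end contribution."""
    return (i_iso / i_al) * M_ISOPRENE + m_chain_ends

# Consistency check against Table 3 (chain-end term neglected):
print(mn_from_nmr(104, 1))  # ~7 072 g/mol vs 7 100 g/mol for degraded ExtraNR
print(mn_from_nmr(186, 1))  # ~12 648 g/mol vs 12 700 g/mol for degraded IR
```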
Table 4: Epoxidation of different starting materials

Targeted epoxidation rate (%)  Reaction time (h)  Experimental epoxidation rate (%) a)
                                                  ExtraRRIM 600  ExtraPB 235  IR
20                             2                  17.1           17.6         15.5
20                             4                  17.6           18.3         16.1
4                              2                  4.2            3.7          3.6
4                              4                  4.6            4.0          3.8
2                              2                  2.5            2.2          - b)
2                              4                  2.3            1.8          - b)

a) Experimental epoxidation rate determined by 1H NMR according to the following formula: …
Table 1: Molar mass and dispersity values obtained for the four attempts of PINH2 synthesis

Attempt  Mn (g/mol) a  Đ a
PIDeg    8 000         1.4
A        8 000         1.4
B        24 000        1.1
C        47 000        1.8
D        32 000        1.3

a: Obtained by SEC in THF.
Table 2: Summary of all the PIs synthesized

Polymer type    Name    Chain-end structure
Functional PIs  PIOH    α-ketone / ω-hydroxyl
                PIDiOH  α-ketone / ω-hydroxyl
                PImOH   …
                PIMal   α-ketone / ω-maleimide
Table 3: Experimental values of integrals for various PI synthons

Name        PI chain (-CH=CCH3-)  Chain-ends
PIDeg       138                   -CHO 1.0; -CH2COCH3 1.7
PIOH        140                   -CH2OH 2.2; -CH2COCH3 1.8
PIDiOH      143                   -CH2OH 3.7; -CH2CH2OH 3.6; -CH2N- 1.6; -CH2COCH3 1.9
PIDiC24:0   136                   -CH2COO- 3.7; -CH2- (aliphatic chain) 90.4; -CH2CH3 7.2
a. Synthesis of fatty acyl chlorides

Figure 49: General structure of fatty acyl chlorides

A solution of the desired fatty acid was prepared in dry DCM ([fatty acid] = 0.5 M). Oxalyl chloride (3 eq) was then added to the mixture under vigorous stirring, followed by 3 drops of DMF. A vigorous bubbling was observed. The reaction conversion was followed by connecting the reaction flask to a bubbler containing a KOH solution. When no more bubbling was visible, the reaction mixture was evaporated under vacuum to remove the solvent and the excess of oxalyl chloride. The final product was then dried overnight at 40°C under dynamic vacuum, affording a yellowish liquid for C18:0Chlo and C14:0Chlo and a white powder for C24:0Chlo. Yield: > 90%.

C14:0Chlo: 1H NMR (CDCl3) δ (ppm): 2.87 (t, 2H, CH2CCl), 1.70 (m, 2H, CH2CH2Cl), 1.25 (s, 20H, CH2 fatty chain), 0.88 (t, 3H, CH3CH2); 13C NMR (CDCl3) δ (ppm): 174.21 (1C, COCl), 47.28 (1C, COCH2), 31.93 (1C, COCH2CH2), [29.65, 29.64, 29.61, 29.52, 29.36, 29.33, 29.07, 28.43, 25.07, 22.71] (10C, CH2 fatty chain), 14.11 (1C, CH2CH3).

C18:0Chlo: 1H NMR (CDCl3) δ (ppm): 2.87 (t, 2H, CH2CCl), 1.70 (m, 2H, CH2CH2Cl), 1.25 (s, 28H, CH2 fatty chain), 0.88 (t, 3H, CH3CH2); 13C NMR (CDCl3) δ (ppm): 174.21 (1C, COCl), 47.28 (1C, COCH2), 31.93 (1C, COCH2CH2), [29.81, 29.75, 29.66, 29.52, 29.46, 29.21, 28.57, 25.20, 22.83] (14C, CH2 fatty chain), 14.25 (1C, CH2CH3).

C24:0Chlo: 1H NMR (CDCl3) δ (ppm): 2.87 (t, 2H, CH2CCl), 1.70 (m, 2H, CH2CH2Cl), 1.25 (s, 40H, CH2 fatty chain), 0.88 (t, 3H, CH3CH2); 13C NMR (CDCl3) δ (ppm): 174.21 (1C, COCl), 47.28 (1C, COCH2), 31.93 (1C, COCH2CH2), [29.86, 29.76, 29.68, 29.52, 29.49, 29.22, 28.59, 25.21, 22.86] (20C, CH2 fatty chain), 14.28 (1C, CH2CH3).

b. Synthesis of PIMal

i. Synthesis of PIOH

Figure 50: Structure of PIOH

PIDeg (2 g, 0.2 mmol of aldehyde function) was dissolved in 5 mL of dry THF. Triacetoxyborohydride (0.19 g, 4 eq, 0.8 mmol) was then added to the obtained solution as well as 12 µL (1 eq, 0.2 mmol) of acetic acid. The reaction mixture was stirred at 40°C overnight and the final product was obtained after two successive precipitations into a large excess of cold methanol, solubilization in Et2O, filtration through Celite® and drying overnight at 40°C under dynamic vacuum. Yield: ~ 90%.

1H NMR (CDCl3) δ (ppm): 5.12 (t, 1H, CHCCH3), 3.62 (t, 2H, CH2OH), 2.42 (t, 2H, CH2COCH3), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH).

ii. Synthesis of MalChlo

1H NMR (CDCl3) δ (ppm): 6.68 (s, 2H, CH double bond), 3.51 (t, 2H, CH2N), 2.87 (t, 2H, CH2COCl), 1.72 (q, 2H, CH2CH2COCl), 1.60 (q, 2H, CH2CH2N), 1.33 (m, 2H, CH2CH2CH2); 13C NMR (CDCl3) δ (ppm): 173.12 (1C, COCl), 170.9 (2C, CO cycle), 134.22 (2C, CH double bond), 46.95 (1C, CH2COCl), 37.48 (1C, CH2N), 28.14 (1C, CH2CH2COCl), 25.63 (1C, CH2CH2N), 24.60 (1C, CH2CH2CH2).

iii. Synthesis of PIMal

Figure 52: Structure of PIMal

PIOH (1 g, 0.1 mmol) was solubilized in 4 mL of dry THF. Dry TEA (50 µL, 4 eq, 0.4 mmol) was then added to the polymer solution, followed by 51 mg (2 eq, 0.2 mmol) of MalChlo. A precipitate was quickly observed and the heterogeneous medium was stirred at room temperature for 1 h. The final polymer was recovered after two successive precipitations into a large excess of cold methanol, solubilization in Et2O, filtration through Celite® and overnight drying under dynamic vacuum at room temperature. Yield: ~ 80%.

1H NMR (CDCl3) δ (ppm): 6.68 (s, 2H, CH double bond), 5.12 (t, 1H, CHCCH3), 4.02 (t, 2H, CH2OCO), 3.51 (t, 2H, CH2N), 2.42 (t, 2H, CH2COCH3), 2.27 (t, 2H, CH2COO), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH).

c. Synthesis of PIMonoLipLip

i. Synthesis of PIMonoLip
The final polymer was obtained after two successive precipitations into a large excess of cold methanol, solubilization in Et2O, filtration through Celite® and overnight drying at 40°C under dynamic vacuum. Yield: ~ 80%.

1H NMR (CDCl3) δ (ppm): 5.12 (s, 1H, CHCCH3), 4.03 (t, 2H, CH2OCO), 2.43 (t, 2H, CH2CCH3), 2.26 (t, 2H, OCOCH2), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH), 1.26 (s, CH2 fatty chain), 0.88 (t, 3H, CH2CH3).
ii. Synthesis of PIMonoLipOH

Figure 54: General structure of PIMonoLipOH

PIMonoLip (1 g, 0.1 mmol) was solubilized in 3.7 mL of a MeOH/THF admixture (10/90 v/v%). NaBH4 (21 mg, ~ 5 eq, 0.5 mmol) was then added to the solution. A strong degassing was observed and the reaction was allowed to proceed at room temperature for 1 h. The polymer was then recovered by two successive precipitations in cold methanol followed by a solubilization in Et2O, filtration through Celite® and overnight drying at 40°C under dynamic vacuum. Yield: ~ 85%.

1H NMR (CDCl3) δ (ppm): 5.12 (t, 1H, CHCCH3), 4.03 (t, 2H, CH2OCO), 3.80 (m, 1H, CHCH3OH), 2.28 (t, 2H, CH2COO), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH), 1.25 (s, CH2 fatty chain), 1.18 (d, 3H, CH3CHOH), 0.88 (t, 2H, CH3CH2).
iii. Synthesis of PIMonoLipLip

… 3-aminopropyl-functionalized silica particles were added (~ 1 NH2 eq toward acyl chloride) and the obtained mixture was stirred for 2 h at room temperature. The final polymer was then recovered by two successive precipitations into a large excess of methanol, solubilization in Et2O, filtration through Celite® and drying overnight at 40°C under dynamic vacuum. Yield: ~ 80%.

1H NMR (CDCl3) δ (ppm): 5.12 (t, 1H, CHCCH3), 4.82 (m, 1H, CHOCH3), 4.02 (t, 2H, CH2OCO), 2.21 (t, 2H, CH2COOCH2), 2.18 (t, 2H, CH2COOCHCH3), 1.96 (s, 4H, CH2CH / CH2CCH3), 1.60 (s, 3H, CH3C), 1.18 (s, CH2 fatty chain), 0.80 (t, 6H, (CH2CH3)2).

d. Synthesis of PIDiLipLip

i. Synthesis of PIDiOH

Figure 56: Structure of PIDiOH

PIDeg (3.7 g, 0.37 mmol) was dissolved in 10 mL of dry THF. Diethanolamine (0.19 g, 4.5 eq, 1.6 mmol) was then added and the reaction was stirred at 40°C for 2 h. Finally, 0.36 g (4.2 eq, 1.55 mmol) of sodium triacetoxyborohydride was added to the reaction mixture, followed by 30 µL (1.3 eq, 0.48 mmol) of acetic acid, and the non-homogeneous solution was stirred at 40°C overnight. The reaction mixture was then directly precipitated into a large excess of cold methanol under vigorous stirring. The methanol was then removed and the polymer dissolved in DCM and precipitated again in cold methanol. The polymer was then dissolved in Et2O, filtrated on Celite® and dried under dynamic vacuum at 40°C overnight, affording a colorless viscous liquid. Yield: ~ 85%.

1H NMR (CD2Cl2) δ (ppm): 5.14 (s, 1H, CHCCH3), 3.57 (t, 4H, (CH2OH)2), 2.64 (t, 4H, N(CH2)2), 2.52 (t, 2H, CH2N), 2.42 (t, 2H, CH2COCH3), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH).

ii. Synthesis of PIDiLip

Figure 57: General structure of PIDiLip

PIDiOH (2 g, 0.2 mmol) was dissolved in 7 mL of dry THF. Dry TEA (180 µL, 6 eq, 1.2 mmol) was then added, followed by 3 eq (0.6 mmol) of the desired acyl chloride. After 1 h of reaction, 3-aminopropyl-functionalized silica particles were added (~ 1 NH2 eq toward acyl chloride) and the obtained mixture was stirred for 2 h. The reaction medium was then precipitated in a large excess of cold methanol, dissolved in Et2O, filtered through Celite® and dried overnight at 40°C under vacuum. The final compound was a colorless viscous liquid, except for PIDiC24:0 which was a colorless paste. Yield: ~ 80%.

1H NMR (CD2Cl2) δ (ppm): 5.12 (s, 1H, CHCCH3), 4.09 (t, 4H, (CH2OCO)2), 2.74 (t, 4H, N(CH2)2), 2.51 (t, 2H, CH2N), 2.43 (t, 2H, CH2CCH3), 2.26 (t, 4H, OCOCH2), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH), 1.26 (s, CH2 fatty chain), 0.88 (t, 6H, CH2CH3).

iii. Synthesis of PIDiLipOH

Figure 58: General structure of PIDiLipOH

PIDiLip (1 g, 0.1 mmol) was solubilized in 3.7 mL of a MeOH/THF admixture (10/90 v/v%). NaBH4 (21 mg, ~ 5 eq, 0.5 mmol) was then added to the polymer solution. A strong degassing was observed and the reaction was allowed to proceed at room temperature for 1 h. The polymer was then recovered by two successive precipitations into a large excess of cold methanol followed by a solubilization in Et2O, filtration through Celite® and overnight drying at 40°C under dynamic vacuum. Yield: ~ 85%.

1H NMR (CD2Cl2) δ (ppm): 5.12 (s, 1H, CHCCH3), 4.09 (t, 4H, (CH2OCO)2), 3.74 (m, 1H, CHOHCH3), 2.74 (t, 4H, N(CH2)2), 2.51 (t, 2H, CH2N), 2.26 (t, 4H, (OCOCH2)2), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH), 1.26 (s, CH2 fatty chain), 1.16 (d, 3H, CH3CHOH), 0.88 (t, 6H, CH2CH3).

iv. Synthesis of PIDiLipLip

Figure 59: General structure of PIDiLipLip

PIDiLipOH (0.1 g, 0.01 mmol) was dissolved in 0.7 mL of dry THF as well as 11 mg of DMAP (10 eq, 0.1 mmol). The desired acyl chloride (4 eq, 0.04 mmol) was then added to the polymer solution and the reaction was allowed to proceed at 40°C for 3 h. 3-Aminopropyl-functionalized silica particles were then added (~ 1 NH2 eq toward acyl chloride) and the obtained mixture was stirred for 2 h at room temperature. The final polymer was then recovered by two successive precipitations into a large excess of methanol, solubilization in Et2O, filtration through Celite® and drying overnight at 40°C under dynamic vacuum. Yield: ~ 80%.

1H NMR (CD2Cl2) δ (ppm): 5.12 (s, 1H, CHCCH3), 4.89 (m, 1H, CHOCH3), 4.09 (t, 4H, (CH2OCO)2), 2.74 (t, 4H, N(CH2)2), 2.51 (t, 2H, CH2N), 2.43 (t, 2H, CH2CCH3), 2.28 (t, 4H, (OCOCH2)2), 2.25 (t, 2H, CH2COOCHCH3), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH), 1.26 (s, CH2 fatty chain), 1.20 (d, CH3CHOCO), 0.88 (t, 9H, CH2CH3).
e. Synthesis of PImOH

1H NMR (CD2Cl2) δ (ppm): 5.12 (t, 1H, CHCCH3), 3.25 (t, 2H, CH2OH), 2.50 (t, 2H, CH2CH2OH), 2.41 (t, 2H, CH2COCH3), 2.39 (t, 2H, CH2N), 2.04 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH).

f. Synthesis of a heterotelechelic ketone/amine PI (PINH2)

Figure 61: Structure of PINH2
g. DA reaction between MalHex and Furfurylamine (MalDAAm)

1H NMR (CDCl3) δ (ppm): 7.36 (d, 1H, CHONH), 6.30 (m, 1H, CH double bond), 6.22 (m, 1H, CH double bond), 3.84 (m, 2H, CH2NHC), 3.70 (m, 1H, CHCOCH2), 3.45 (t, 2H, CH2N), 2.83/2.44 (dd, 2H, CH2COCH), 2.18 (t, 2H, CH2COOH), 1.53 (m, 4H, CH2CH2N / CH2CH2COOH), 1.27 (m, 2H, CH2CH2CH2).

13C NMR (CDCl3) δ (ppm): 178.49 (1C, COOH), 177.83 (1C, COCH), 175.43 (1C, COCH2), 152.26 (1C, CCHCH2), 142.59 (1C, CHNHCH), 110.56 (1C, CH double bond), 108.24 (1C, CH double bond), 55.06 (1C, CHCO), 44.15 (1C, CH2NH), 37.81 (1C, CH2N), 36.13 (1C, CH2CO), 35.11 (1C, CH2COOH), 27.38 (1C, CH2CH2N), 26.43 (1C, CH2CH2COOH), 24.82 (1C, CH2CH2CH2).

h. Synthesis of PIDiLipMal

i. Synthesis of MalProt

Figure 63: Structure of MalProt

MalHex (0.5 g, 2.36 mmol) was solubilized into 2.3 mL of DCM. Furan (1.6 mL, 10 eq, 23.6 mmol) was then added to the solution and the admixture was stirred at room temperature overnight. The final compound was obtained by evaporation of the solvent and the excess of furan under dynamic vacuum at 40°C. Yield: 100%.

1H NMR (CDCl3) δ (ppm): endo compound: 6.37 (s, 2H, CH double bond), 5.30 (m, 2H, (CHOCHCH)2), 3.50 (m, 2H, (CHCOCH)2), 3.30 (t, 2H, CH2N), 2.31 (t, 2H, CH2COOH), 1.55 (m, 2H, CH2CH2COOH), 1.43 (m, 2H, CH2CH2N), 1.27 (m, 2H, CH2CH2CH2); exo compound: 6.49 (s, 2H, CH double bond), 5.24 (t, 2H, (CHOCHCH)2), 3.46 (t, 2H, CH2N), 2.82 (s, 2H, (CHCOCH)2), 2.31 (t, 2H, CH2COOH), 1.60 (m, 4H, CH2CH2N / CH2CH2COOH), 1.27 (m, 2H, CH2CH2CH2).

13C NMR (CDCl3) δ (ppm): endo compound: 179.98 (1C, COOH), 175.03 (2C, (COCHN)2), 134.3 (2C, CH double bond), 79.37 (2C, (COCHCH)2), 45.8 (2C, (CHCOCH)2), 38.15 (1C, CH2N), 33.57 (1C, CH2COOH), 27.02 (1C, CH2CH2N), 25.96 (1C, CH2CH2CH2), 23.98 (1C, CH2CH2COOH); exo compound: 179.98 (1C, COOH), 176.15 (2C, (COCHN)2), 136.5 (2C, CH double bond), 80.85 (2C, (COCHCH)2), 47.28 (2C, (CHCOCH)2), 38.55 (1C, CH2N), 33.57 (1C, CH2COOH), 27.02 (1C, CH2CH2N), 25.96 (1C, CH2CH2CH2), 23.98 (1C, CH2CH2COOH).

ii. Synthesis of PIDiLipMalProt

Figure 64: General structure of PIDiLipMalProt

PIDiLipOH (0.4 g, 0.04 mmol) was dissolved into 1.5 mL of dry THF. MalProt (50 mg, 4 eq, 0.16 mmol) was then added to the polymer solution as well as 40 mg of DCC (4.4 eq, 0.17 mmol) and 4 mg of DMAP (0.8 eq, 0.03 mmol). The reaction solution was then stirred overnight at room temperature. The final polymer was obtained after two successive precipitations into a large excess of cold methanol, solubilization in Et2O, filtration through Celite® and overnight drying at 40°C under dynamic vacuum. Yield: ~ 85%.

1H NMR (CD2Cl2) δ (ppm): endo compound: 6.37 (s, 2H, CH double bond), 5.30 (m, 2H, (CHOCHCH)2), 5.12 (s, 1H, CHCCH3), 4.88 (m, 1H, CHOCOCH3), 4.09 (t, 4H, (CH2OCO)2), 3.50 (m, 2H, (CHCOCH)2), 3.30 (t, 2H, CH2N), 2.74 (t, 4H, N(CH2)2), 2.51 (t, 2H, CH2N), 2.43 (t, 2H, CH2CCH3), 2.31 (t, 2H, CH2COOH), 2.26 (t, 4H, OCOCH2), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH), 1.26 (s, CH2 fatty chain), 0.88 (t, 6H, CH2CH3); exo compound: 6.49 (s, 2H, CH double bond), 5.24 (m, 2H, (CHOCHCH)2), 5.12 (s, 1H, CHCCH3), 4.88 (m, 1H, CHOCO), 4.09 (t, 4H, CH2OCO), 3.46 (t, 2H, CH2N), 2.82 (m, 2H, (CHCOCH)2), 2.74 (t, 4H, NCH2), 2.51 (t, 2H, CH2N), 2.43 (t, 2H, CH2CCH3), 2.31 (t, 2H, CH2COOH), 2.26 (t, 4H, OCOCH2), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH), 1.26 (s, CH2 fatty chain), 0.88 (t, 6H, CH2CH3).
Table 2: Crystallization and melting temperatures of various PIDiLip (5 kg/mol) observed by DSC

Kawahara et al.2,15-21 demonstrated that saturated fatty acids/esters (stearic acid
1H NMR (CDCl3) δ (ppm): 9.77 (t, 1H chain-end, CH2CHO), 5.13 (t, 1H, CH=CCH3), 2.49 (t, 2H, CH2CHO), 2.44 (t, 2H, CH2COCH3), 2.35 (t, 2H, CH2CH2CHO), 2.13 (s, 3H, CH3COCH2), 2.05 (s, 4H, CH2CH/CH2CCH3), 1.68 (s, CH3CCH). (Figure S3); FTIR: H-C=C: 3035 cm-1; CH2, CH3: 2900-2730 cm-1; C=O: 1722 cm-1; C=C: 1664 cm-1; ν CH2, CH3 cis-1,4-isoprene: 1446, 1375 cm-1; C=C-H: 833 cm-1.

Synthesis of heterotelechelic keto-diol PI (PIDiOH) (2). 3.7 g of PIDeg (0.37 mmol of aldehyde groups) were dissolved in 10 mL of dry THF. 0.19 g (4.5 eq, 1.66 mmol) of diethanolamine were added and the reaction was stirred at 40°C for 2 h. Finally, 0.36 g (4.2 eq, 1.55 mmol) of sodium triacetoxyborohydride were added to the reaction mixture, followed by 30 µL (1.3 eq, 0.48 mmol) of acetic acid, and the heterogeneous medium was stirred at 40°C overnight. The reaction mixture was then directly precipitated into a large excess of cold methanol under vigorous stirring. The polymer was further dissolved in DCM and precipitated again in cold methanol, dissolved in Et2O, filtered on Celite® and dried under vacuum at 40°C overnight, affording a colorless viscous liquid. Yield: ~ 85 %.
(Figure S4); 1H NMR (CD2Cl2) δ (ppm): 5.14 (s, 1H, CH=CCH3), 3.57 (t, 4H, (CH2)2OH), 2.64 (t, 4H, N(CH2)2), 2.52 (t, 2H, CH2N), 2.42 (t, 2H, CH2COCH3), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH).

Synthesis of heterotelechelic keto-hydroxyl PI (PIOH) (3). 2 g of PIDeg (0.2 mmol) were dissolved in 5 mL of dry THF. 0.19 g (4 eq, 0.8 mmol) of triacetoxyborohydride were then added to the obtained solution as well as 12 µL (1 eq, 0.2 mmol) of acetic acid. The reaction mixture was stirred at 40 °C overnight and the final product was obtained after two successive precipitations into a large excess of cold methanol, dissolution in Et2O, filtration through Celite® and drying overnight at 40°C under dynamic vacuum. The PIOH is a viscous and colorless liquid. Yield: ~ 90 %.
(Figure S5); 1H NMR (CDCl3) δ (ppm): 5.12 (t, 1H, CH=CCH3), 3.62 (t, 2H, CH2OH), 2.42 (t, 2H, CH2COCH3), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH).

Synthesis of fatty acyl chlorides (C14:0Chlo / C18:0Chlo / C24:0Chlo). A solution of the desired fatty acid was prepared in dry DCM ([fatty acid] = 0.5 M). 3 eq of oxalyl chloride were then added to the mixture under vigorous stirring, followed by 3 drops of DMF. A vigorous bubbling was observed. The reaction conversion was followed by connecting the reaction flask to a bubbler containing a KOH solution. When no bubbling was visible, the reaction mixture was evaporated under vacuum to remove the solvent and the excess of oxalyl chloride. The final product was then dried overnight at 40°C under dynamic vacuum, affording a yellowish liquid for C18:0Chlo and C14:0Chlo and a white powder for C24:0Chlo. Yield: > 90 %.
C14:0Chlo: (Figure S6) 1H NMR (CDCl3) δ (ppm): 2.87 (t, 2H, CH2CCl), 1.70 (m, 2H, CH2CH2Cl), 1.25 (s, 20H, (CH2)10), 0.88 (t, 3H, CH3CH2); (Figure S7) 13C NMR (CDCl3) δ (ppm): 174.21 (1C, COCl), 47.28 (1C, COCH2), 31.93 (1C, COCH2CH2), [29.65, 29.64, 29.61, 29.52, 29.36, 29.33, 29.07, 28.43, 25.07, 22.71] (10C, CH2 fatty chain), 14.11 (1C, CH2CH3).
C18:0Chlo: (Figure S6) 1H NMR (CDCl3) δ (ppm): 2.87 (t, 2H, CH2CCl), 1.70 (m, 2H, CH2CH2Cl), 1.25 (s, 28H, CH2 fatty chain), 0.88 (t, 3H, CH3CH2); (Figure S7) 13C NMR (CDCl3) δ (ppm): 174.21 (1C, COCl), 47.28 (1C, COCH2), 31.93 (1C, COCH2CH2), [29.81, 29.75, 29.66, 29.52, 29.46, 29.21, 28.57, 25.20, 22.83] (14C, CH2 fatty chain), 14.25 (1C, CH2CH3).
C24:0Chlo: (Figure S6) 1H NMR (CDCl3) δ (ppm): 2.87 (t, 2H, CH2CCl), 1.70 (m, 2H, CH2CH2Cl), 1.25 (s, 40H, CH2 fatty chain), 0.88 (t, 3H, CH3CH2); (Figure S7) 13C NMR (CDCl3) δ (ppm): 174.21 (1C, COCl), 47.28 (1C, COCH2), 31.93 (1C, COCH2CH2), [29.86, 29.76, 29.68, 29.52, 29.49, 29.22, 28.59, 25.21, 22.86] (20C, CH2 fatty chain), 14.28 (1C, CH2CH3).
The targeted molar mass is set by the epoxidation rate according to:

$$\overline{M}_n = \frac{68}{T_x} \times 100, \qquad T_x = \frac{[\text{epoxidized units}]}{[\text{monomer units}]} \times 100,$$

where $\overline{M}_n$ is the targeted molar mass (g/mol).

The occurrence of the degradation reaction was monitored by SEC, 1H NMR and FTIR-ATR analyses (Figures S1, S2 and S3, respectively). By SEC, a decrease of the molar mass from 500 000 g/mol (NR) to 10 000 g/mol (PIDeg) was observed, as well as a decrease of the molar mass distribution from 2.6 (NR) to 1.6 (PIDeg). The formation of carbonyl chain-ends was confirmed by 1H NMR with the appearance of a triplet at 9.77 ppm, characteristic of the aldehyde proton, as well as the appearance of 3 triplets at 2.49, 2.44 and 2.35 ppm corresponding to "CH2" groups in α and β position of the aldehyde and in α position of the ketone chain-end, respectively. Moreover, the appearance of a band at 1722 cm-1 in FTIR-ATR also confirmed the generation of carbonyl functions.
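As a numerical illustration of the relation reconstructed above (assuming $T_x$ is expressed in percent), reaching the 10 000 g/mol of PIDeg reported here requires:

$$T_x = \frac{68 \times 100}{\overline{M}_n} = \frac{6800}{10\,000} \approx 0.68\ \%,$$

i.e. fewer than one isoprene unit in a hundred needs to be epoxidized and cleaved.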
Table 3: Summary of all PIDiLip and PIMonoLip synthesized

General structure  Corresponding names
PIDiLip            PIDiC11:1, PIDiC18:2, PIDiC14:0, PIDiC16:0, PIDiC18:0, PIDiC19:0, PIDiC24:0
PIMonoLip          PIMonoC16:0, PIMonoC18:0, PIMonoC24:0
Table 5: Characteristic values of the CCr of NR, PIDeg and PIDiOH obtained for 8 h isothermal crystallization at -25°C
and compared to the initial PIOH. The signal of the "-CH2" group in α position of the ketone chain-end at 2.43 ppm totally disappeared, confirming a quantitative reaction. New signals appeared at 3.6 ppm (8'-11') and at 2.79, 2.74 and 2.68 ppm (9'-10') and were assigned to the newly formed chain-end thanks to HSQC and HMBC experiments. No significant change could be observed in SEC, thus attesting to the absence of side reactions. The next step was another reductive amination using acetaldehyde and PINHOH in order to obtain a tertiary amine that could initiate NCA polymerization. It was proposed that, ideally, both reductive aminations (from PIOH to PINOHOH) could be done successively as they involve the same reaction conditions. The 1H NMR spectrum of PINOHOH is given in Figure 27 and compared to PINHOH. The assignment was tricky given the number of signals, and HSQC and HMBC experiments were again needed to assign each signal. The total shift of the signal from the "-CH2" group in α position of the hydroxyl group (from 3.63 to 3.43 ppm) confirmed full conversion. The increase of the number of signals between 2.3 and 2.8 ppm corresponding to the "-CH" and "-CH2"
d. Synthesis of BenzylGluNCA

1H NMR (CDCl3) δ (ppm): 7.34 (m, 5H, CHAr), 6.84 (s, 1H, NH), 5.12 (s, 2H, CH2O), 4.37 (t, 1H, CHNH), 2.56 (t, 2H, CHCOO), 2.24/2.12 (m, 1H/1H, CH2CHNH); 13C NMR (CDCl3) δ (ppm): 172.4 (1C, COOCO), 169.5 (1C, OCONH), 152.2 (1C, CH2OCO), 135.3 (1C, CH2CAr), 128.8/128.6/128.4 (5C, CHAr), 67.3 (1C, CH2OCO), 56.9 (1C, COCHNH), 29.7 (1C, OCOCH2), 26.7 (1C, CH2CH2CH).

e. Synthesis of PPepFur

Figure 31: Structure of PPepFur

BenzylGluNCA (1 g, 3.8 mmol) was dissolved in 5 mL of dry DMF and cooled to 0°C using an ice bath. Furfurylamine (15 µL, 0.04 eq, 0.16 mmol) was added to the solution and the reaction was maintained at 0°C during 6 h under stirring, with the reaction flask connected to a bubbler. The final polymer was recovered by precipitation into cold Et2O and overnight drying under vacuum, affording a white solid. Yield: ~ 70%.

1H NMR (CDCl3/TFA: 2/1 v/v) δ (ppm): 7.30 (m, 5H, CHAr), 6.27 (s, 1H chain-end, OCH=CHCH), 6.22 (s, 1H chain-end, CHCH=C), 5.11 (q, 2H, CH2CAr), 4.69 (m, 1H, COCHNH), 2.47 (m, 2H, COCH2CH2), 2.14/1.96 (m, 2H, CH2CH2CH).

f. Synthesis of PIPPepFur
i. Synthesis of PINHOH

The final polymer was then recovered by two successive precipitations into cold methanol, solubilisation in Et2O, filtration through Celite® and overnight drying at 40°C under dynamic vacuum. Yield: ~ 85 %.

1H NMR (CD2Cl2) δ (ppm): 5.14 (m, 1H, CH=CCH3), 3.60 (m, 4H, CH2OHα / CH2OHω), 2.79/2.74/2.68 (m, 3H, NHCH2CH2 / CHCH3), 2.06 (m, 4H, CH2CH=CCH3CH2), 1.69 (m, 3H, CH=CCH3).

j. Synthesis of PINOHOH

Figure 36: Structure of PINOHOH

PINHOH (0.2 g, 20 µmol) was dissolved in 1 mL of dry THF. Then, 10 µL (10 eq, 200 µmol) of acetaldehyde were added to the solution and stirred for 2 h with the polymer. Finally, 93 mg (20 eq, 400 µmol) of NaBH(OAc)3 and 3 µL (2 eq, 52 µmol) of acetic acid were added to the solution and the reaction was allowed to proceed at 40°C overnight. The final polymer was recovered by two successive precipitations into cold methanol, solubilization into Et2O, filtration through Celite® and overnight drying at 40°C under dynamic vacuum. Yield: ~ 85%.

1H NMR (CD2Cl2) δ (ppm): 5.14 (m, 1H, CH=CCH3), 3.60 (m, 2H, CH2OHα), 3.43 (m, 2H, NCH2CH2OH), 2.73/2.57/2.48/2.36 (m, 5H, CHN(CH2)2), 2.06 (m, 4H, CH2CH=CCH3CH2), 1.69 (m, 3H, CH=CCH3).

k. Synthesis of PIPPepNOHOH

Figure 37: Structure of PIPPepNOHOH

The 1H NMR spectrum shows the characteristic signals of a 1,4-PI (signal at 5.26 ppm …

1H NMR (CD2Cl2) δ (ppm): 9.77 (t, 1H, COH aldehyde), 5.14 (s, 1H, CHCCH3), 2.49 (t, 2H, CH2COH), 2.43 (t, 2H, CH2COCH3), 2.34 (t, 2H, CH2CH2COH), 2.05 (s, 4H, CH2CH / CH2CCH3), 1.67 (s, 3H, CH3CCH).

1H NMR (CD2Cl2) δ (ppm): 5.12 (s, 1H, CHCCH3), 4.09 (t, 4H, (CH2OCO)2), 3.74 (m, 1H, …

Figure S23: DSC thermogram of IRDeg after isothermal crystallization at -25 °C for 60 h.
Annexe 1: DSC thermogram of PIDiC11:1
Annexe 2: DSC thermogram of PIDiC14:0
Annexe 3: DSC thermogram of PIDiC16:0
Annexe 4: DSC thermogram of PIDiC19:0
Annexe 5: DSC thermogram of PIDiC18:2
Annexe 6: DSC thermogram of PIDiC24:0
Annexe 7: DSC thermogram of PIDiC14:0 5 kg/mol
Annexe 8: DSC thermogram of PIDiC16:0 5 kg/mol
Annexe 9: DSC thermogram of PIDiC24:0 5 kg/mol
Acknowledgements
Similarly to PImOH, PINH2 could be of high interest as a macroinitiator for the synthesis of di-block copolymers PI/polypeptide, as the ROP of NCA is mainly described using a primary amine as initiator. Moreover, it was found in the literature that by reacting a primary amine with maleic anhydride it is possible to obtain a maleimide function 14. Applying this chemistry to PINH2 could therefore generate a maleimide-terminated PI, thus rendering the polymer backbone functionalizable with proteins via "thiol-maleimide" click chemistry, as reported in the general bibliographic part.
CONCLUSION.
In this study, several hybrid polyisoprenes (Mn ≈ 10 000 g/mol) bearing one or two fatty esters at one chain-end were synthesized either from NR (100 % 1,4-cis units) or from IR (97 % 1,4-cis units) as models of NR. Before the grafting of the fatty chain, polymers coming from NR exhibited a high crystallinity after isothermal treatment at -25°C, whereas polymers coming from IR showed no crystallinity, probably due to the presence of units other than 1,4-cis.
Detailed DSC analyses of the hybrid polymers showed that the grafting of lipidic chains prevented the cold crystallization of polyisoprene, a general feature of NR. On the contrary, chain-ends could crystallize relatively quickly despite the huge amorphous matrix. The addition of free fatty acids to the hybrid polymer allowed a partial cold crystallization to be recovered, although to a lower extent than in the initial NR. Nevertheless, the addition of both SA and ML had a synergistic effect that enhanced the CCr of all the polymers, either natural or synthetic. All these observations help clarify the role of both free and linked fatty chains in the CCr of NR, and allowed a 10 000 g/mol IR to reach a crystallization rate comparable to that of a 600 000 g/mol IR.
IV. General conclusion:
This chapter focused on the study of the properties of various PI-lipid hybrids synthesized as models of NR. It was shown that the chain-ends were able to crystallize despite the amorphous PI matrix. Moreover, the crystallization temperature can be tuned by varying the length of the linked fatty chain and/or by the addition of free fatty esters. This opens the possibility of accessing cross-linked materials in which the anchoring points would be fatty chain crystallites. The CCr of NR and of PI bearing linked and free fatty chains was further studied. Linked fatty chains prevented the crystallization of PI. On the other hand, free fatty chains were demonstrated to strongly increase the crystallization rate of PI under isothermal conditions at -25°C, thus allowing a 10 000 g/mol IR to crystallize faster than a 600 000 g/mol one.
V. Supporting information

Those conditions were used for the polymerization of NCA using PIDiOH as the initiator. A 1H NMR analysis comparing the obtained polymer with a PPep initiated by DMEA is given in Figure 23, as well as a SEC analysis performed in THF comparing the obtained polymer with the starting PIDiOH in Figure 24. In the 1H NMR analysis, it is difficult to identify the linkage between both blocks due to the poor definition of the polypeptide signals in CDCl3. When TFA was added, the sample degraded rapidly, preventing the analysis. Nevertheless, the comparison of the spectrum of the obtained polymer with the PPep initiated by DMEA showed that all the signals corresponding to the polypeptide block are present in the copolymer. Moreover, the signal corresponding to the "-CH2" group in α position of the ketone chain-end of the PI block remains visible at 2.43 ppm. Finally, the DOSY NMR analysis confirmed the coupling, as two different diffusion coefficients were found for the macroinitiator and the final copolymer. The formation of the diblock was also confirmed by the SEC analysis in THF (Figure 24). After the polymerization, an increase of the molar mass is observed using MALLS detection, whereas the elution peak of the copolymer shifted to higher retention times. This is proposed to be due to interactions between the polypeptide block and the stationary phase of the column, and thus confirms that the expected compound was obtained. Indeed, by looking at the variation of the molar mass as a function of the elution time (two lines in the zoom of Figure 24), it can be observed that for the copolymer the molar mass is practically constant over time, contrary to the macroinitiator, which presents an important decrease of the molar mass with increasing elution time. Finally, the increase in molar mass is about 10 000 g/mol, which is in agreement with the targeted molar mass.
MATERIAL AND METHODS
Material.
Natural rubber … Triethylamine (TEA) and dimethylethanolamine (99%, Aldrich) were dried over KOH pellets and distilled prior to use. Methanol, toluene and diethyl ether (reagent grade, Aldrich) were used as received, as well as Celite® (R566, Aldrich).
NMR analysis
Liquid-state 1H NMR, 13C NMR, HSQC and HMBC spectra were recorded at 298 K on a Bruker Avance 400 spectrometer operating at 400 MHz and 100 MHz, respectively, in appropriate deuterated solvents.
01768481 | en | ["phys.meca.mefl"] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01768481/file/LX15216F_R1_final.pdf

Enhancing Shear Thickening

Yasaman Madraki, Sarah Hormozi, Guillaume Ovarlez, Élisabeth Guazzelli, Olivier Pouliquen

PACS numbers: 47.57.Gc, 83.80.Hj
A cornstarch suspension is the quintessential particulate system that exhibits shear thickening. By adding large non-Brownian spheres to a cornstarch suspension, we show that shear thickening can be significantly enhanced. More precisely, the shear thickening transition is found to be increasingly shifted to lower critical shear rates. This influence of the large particles on the discontinuous shear thickening transition is shown to be more dramatic than that on the viscosity or the yield stress of the suspension.
I. INTRODUCTION
A shear thickening fluid is one in which the viscosity increases with the rate of shear [see e.g. Barnes; Mewis; Brown]. A classical example is a suspension of cornstarch in water, which is often used in demonstrations to exhibit the counterintuitive behavior of shear thickening materials. It indeed behaves as a normal liquid if stirred slowly, whereas it acts as a solid when agitated or struck forcefully. A person may even walk on a large pool of cornstarch without sinking, provided the walking steps are quick and strong enough to cause the shear thickening phenomenon. Cornstarch suspensions exhibit both continuous (CST) and discontinuous shear thickening (DST). At low cornstarch concentration $\phi_{cs}$, rheological measurements show a smooth and continuous increase in viscosity with increasing shear rate. As $\phi_{cs}$ becomes larger, the increase becomes steeper and eventually leads, at a critical shear rate $\dot{\gamma}_c^0$, to an order-of-magnitude discontinuous jump in viscosity or even directly to jamming. DST is observed in the dense suspension regime, for $\phi_{cs} \gtrsim 0.36$.
Despite sustained research attention since the mid-twentieth century, the origin of shear thickening is still not deciphered and remains a matter of active debate [see e.g. Hoffman; Fall et al.; Wagner; Brown]. A new promising idea points to a transition from a frictionless to a frictional state of the suspension [Fernandez; Seto; Wyart; Mari]. In this scenario, the shear thickening behavior stems from the existence of short-distance repulsive forces between particles. At low shear rate, the repulsion prevents frictional contacts between particles and the suspension behaves as a suspension of frictionless particles. When the shear rate is increased, the stresses on the particles increase and may overcome the repulsive forces. Friction is mobilized at contact and a transition to a frictional rheology then takes place. Within this picture, the discontinuous shear thickening transition occurs when the shear stress imposed on the suspension reaches a critical stress which depends upon the repulsive forces and the particle size. This new theory of friction-induced shear thickening is supported by recent experimental measurements [Guy; Lin; Hermes] which validate the model and simulations. Even with these new advances in the understanding of shear thickening, there is still a compelling need to control the DST transition more precisely, as this is an important challenge for industries that handle such fluids, which can become solid-like, in particular in applications involving dampening and shock absorption, such as armor composite materials [Lee], and in curved-surface shear-thickening polishing [Li]. This paper presents a novel method to enhance shear thickening by adding large non-Brownian spheres to the cornstarch suspension.
More generally, adding particles to a fluid has been shown to enhance its shear viscosity. This has been known for a Newtonian fluid since the seminal work of Einstein demonstrating that the viscosity of the mixture is increased above that of the suspending fluid [Einstein], increasingly so with particle concentration [Stickel]. As the jamming transition is approached, steric hindrance becomes dominant. Particle fluctuating motions then become more intense and collective, leading to a divergence of the viscosity [Lerner; Andreotti]. The enhancement effect is also observed when adding non-Brownian particles to a non-Newtonian matrix. In the case of a viscoplastic fluid, adding spherical particles induces an increasing enhancement of the viscosity (or more precisely the consistency) and of the yield stress [Chateau; Dagois-Bohy; Ovarlez]. The intensification of these rheological quantities has also been seen for concentrated colloidal dispersions exhibiting a weak shear thinning followed by continuous shear thickening [Cwalina], as well as for shear-thinning polymer solutions and shear-thickening cornstarch suspensions (but only investigated in the regime of CST) [Liard]. Theoretically, the rheological enhancement caused by adding large particles can be addressed by homogenization approaches [see e.g. 22]. The addition of large particles to a fluid increases locally the shear rates in the fluid, an effect sometimes referred to as a lever effect and described by introducing a lever function relating the magnitude of the local shear rate to the macroscopic shear rate imposed on the whole suspension mixture. The objective of the present work is to study how adding large spherical particles to a dense cornstarch suspension leads to a progressive shift of the DST transition to lower critical shear rates, and to investigate how this effect is related to the rheological enhancement also observed.
II. MATERIALS AND METHODS
A. Particles and fluid
The cornstarch (73% Amylopectin and 27% Amylose, from Sigma Aldrich, USA) used in the experiments consisted of irregularly-shaped particles ranging from 5 to 20 µm, with an average diameter $d_{cs} \approx 13\ \mu$m and a density of 1.68 g·cm-3, see figure 1(b). The suspending fluid, a 54 wt% solution of cesium chloride (Cabot high purity grade, from Sigma Aldrich, USA) in distilled water, was chosen to match the density of these particles. The cornstarch suspensions were prepared by carefully mixing the cornstarch particles with the suspending fluid at different volume fractions $\phi_{cs}$ ranging from 0.10 to 0.44. Large non-Brownian particles with volume fraction $\phi_p$ ranging from 0.05 to 0.35 were then added to this cornstarch suspension. We used $\phi_{cs} = V_{cs}/[V_t(1-\phi_p)]$ and $\phi_p = V_p/V_t$, with $V_{cs}$, $V_p$ and $V_t$ the volumes of cornstarch, of large particles, and of the whole suspension, respectively. These definitions for the concentrations differ from those usually adopted for bidisperse suspensions. The large particles were Polymethyl methacrylate (PMMA) spheres (Cospheric, USA) of diameter $d_s \approx 106$-$125\ \mu$m, see figure 1(b). These PMMA particles were silver coated to ensure a density of 1.34 g·cm-3, as close as possible to that of the cornstarch suspension in order to avoid significant sedimentation effects. These large particles were insensitive to Brownian motion and colloidal interactions. Additional experiments were performed using large particles of different sizes (of diameters $d_p \approx 45$-$355\ \mu$m), but with non-coated PMMA particles (density 1.20 g·cm-3).
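For concreteness, these definitions translate into component volumes as in the following minimal sketch (volume additivity is assumed; the function name and example values are ours, not from the protocol):

```python
def batch_volumes(v_total_ml, phi_cs, phi_p):
    """Component volumes (mL) for a total mixture volume v_total_ml,
    using phi_p = V_p / V_t and phi_cs = V_cs / [V_t (1 - phi_p)]."""
    v_p = phi_p * v_total_ml                    # large PMMA spheres
    v_cs = phi_cs * (1.0 - phi_p) * v_total_ml  # cornstarch grains
    v_fluid = v_total_ml - v_p - v_cs           # CsCl/water solution
    return v_p, v_cs, v_fluid

# Example: 50 mL of mixture at phi_cs = 0.40 and phi_p = 0.20
print(batch_volumes(50.0, 0.40, 0.20))  # -> (10.0, 16.0, 24.0)
```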
B. Rheological methods
The rheological measurements were conducted with a DHR-3 rotational rheometer (TA Instruments) using a 40 mm (diameter) serrated parallel plate geometry with 90 degree V-shaped grooves of height 0.5 mm and width 1 mm. This geometry eliminated wall slip effects, see figure 1(a). In a typical experiment, the desired amount of material was sandwiched between the two crosshatched plates and the rheological device delivered the torque $C$ and the rotation rate $\Omega$, yielding the maximum shear stress $\tau = 2C/\pi R^3$ and the maximum shear rate $\dot{\gamma} = \Omega R/h$, and consequently the effective viscosity $\eta = \tau/\dot{\gamma}$.
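The raw-signal conversion quoted above can be sketched as follows (the variable names and the example torque are ours; R and h match the geometry described in the text):

```python
import math

R = 0.020   # plate radius (m), 40 mm diameter
h = 0.0015  # gap (m)

def rheology_from_raw(torque_nm, omega_rad_s):
    """Return (tau, gamma_dot, eta) at the plate rim from the measured
    torque (N m) and rotation rate (rad/s) of a parallel-plate geometry."""
    tau = 2.0 * torque_nm / (math.pi * R**3)  # maximum shear stress (Pa)
    gamma_dot = omega_rad_s * R / h           # maximum shear rate (1/s)
    return tau, gamma_dot, tau / gamma_dot    # effective viscosity (Pa s)

# Example: C = 1 mN m at Omega = 1 rad/s
print(rheology_from_raw(1e-3, 1.0))
```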
Before the experiments were carried out, different tests were performed. A first important test was to check that the rheological measurements were independent of the size of the gap between the plates. This gap dependency was found to be very sensitive to the roughness of the plates as well as to whether the surplus of material around the plates was removed or not. We found that using crosshatched plates and trimming carefully the surplus of material around the plates produced gap-independent measurements in agreement with the experiments of [START_REF] Fall | Shear Thickening of Cornstarch Suspensions as a Reentrant Jamming Transition[END_REF], as exhibited in figure 1(c) for a cornstarch suspension at φ cs = 0.40 (see the next paragraph for the description of the rheological curve). A gap size of h = 1.5 mm was then chosen for most of the experiments.
The second test concerned the reproducibility of the rheological properties of the suspensions. We obtained identical rheological measurements within an accuracy of 15% using different batches of suspensions prepared in the same way, or using the same batch for different tests on the same day. Last, we checked the time dependency of the experiments by varying the time of the shear rate ramp. No significant differences were observed for a total time of the experiments ranging between 60 and 360 s. This test was also performed when large particles were added to the cornstarch suspensions.
A total time of 180 s was chosen for all the experiments for which the influence of sedimentation was found to be inconsequential.
III. EXPERIMENTAL RESULTS
A. Rheological observations
The rheological results for pure cornstarch suspensions are presented in figure 1(c) for $\phi_{cs} = 0.40$ and in figures 2(a) and 2(b) for $0.33 \leq \phi_{cs} \leq 0.44$. At low $\dot{\gamma}$, the rheology presents a plateau in shear stress, which can be interpreted as a yield stress $\tau_y$, as evidenced in the inset of figure 1(c).
In terms of viscosity, the response is shear thinning, as the viscosity $\eta$ is seen to decrease with increasing $\dot{\gamma}$ until it reaches a minimum plateau $\eta_p$, see figure 2(b). Upon addition of large particles, see figure 3(a), the effective viscosity of the mixture $\eta$ is overall increased, see figure 3(b), as seen previously [Cwalina; Liard]. More strikingly, the DST transition is moved to lower shear rate with increasing particle volume fraction $\phi_p$, see figure 3(b). This behavior is systematically observed in the range $0.36 \leq \phi_{cs} \leq 0.41$. The critical shear rates $\dot{\gamma}_c^p$ which characterize the onset of DST for the mixture of large particles and cornstarch suspensions at different $\phi_p$ are plotted versus $\phi_{cs}$ in figure 3(c). This graph clearly evidences the shift of the DST transition to lower shear rate with increasing $\phi_p$, with an upper bound given by the curve for the pure cornstarch suspension ($\phi_p = 0$, in red) taken from figure 2(c), i.e. the curve $\dot{\gamma}_c^0(\phi_{cs})$. In the following, we first provide a systematic investigation of the shift in critical shear rate and then discuss the effect of adding particles on the critical shear stress.
B. The shift in the critical shear rate
We examine first the critical shear rates $\dot{\gamma}_c^p$ for the mixture of silver-coated PMMA spheres (of diameter $d_s \approx 106$-$125\ \mu$m) and a cornstarch suspension at different $\phi_{cs}$ and $\phi_p$. Normalizing the critical shear rate $\dot{\gamma}_c^p$ for the mixture of large particles and cornstarch suspensions by the critical shear rate $\dot{\gamma}_c^0$ for the pure cornstarch suspension leads to a good collapse of the data onto a single master curve, as evidenced in figure 4(a) where $\dot{\gamma}_c^p/\dot{\gamma}_c^0$ is plotted against $\phi_p$. The dynamics of the dense suspension mixture thus mainly depends on the large particle volume fraction $\phi_p$. To complete this rheological analysis and study the role of the size ratio between the large spheres and the cornstarch particles, experiments were also performed using large particles of different sizes.
It was however not possible to obtain coated PMMA particles of different sizes. Therefore, we used non-coated PMMA particles (of diameters ranging from ≈ 49 to 327 µm), with the drawback of having a larger mismatch in density. In figure 4(b), the critical shear rates $\dot{\gamma}_c^p$ for the mixture, normalized by the critical shear rate $\dot{\gamma}_c^0$ for the pure cornstarch suspension, are plotted as a function of the volume fraction for four different sizes, showing a clear influence of the size $d_p$ of the large particles. The shift to lower shear rate when adding particles is more pronounced when the size of the large particles is decreased, i.e. when the size ratio of the large particles to the cornstarch particles is diminished. This influence of the particle size ratio may be related to loosening effects in suspensions consisting of polydisperse particles, i.e. the change in the maximum packing volume fraction ($\phi_m$) of large particles (PMMA or silver-coated PMMA) due to the presence of small particles (cornstarch grains) [Chateau]. If we naively consider that two large particles are always separated by a cornstarch grain, even in the closest packing, the new maximum packing volume fraction becomes

$$\phi_m^* = \phi_m \left(\frac{d_p}{d_p + d_{cs}}\right)^3.$$

When the data of figure 4(b) are replotted by rescaling the volume fraction $\phi_p$ using $\phi_m^*$, the curves obtained for different particle sizes collapse onto a master curve, as evidenced in figure 4(c). This collapse seems to substantiate the idea that the dynamics of the dense suspension mixture is mainly controlled by steric constraints.
C. Influence on the critical shear stress
Whereas adding large particles to a cornstarch suspension dramatically shifts the critical shear rate to lower values, the effect is of lesser importance on the critical shear stress, as evidenced in figure 3(a). The DST is observed to occur at a critical stress of ≈ 20 Pa. This critical value is in agreement with the experiments of [Fall et al.] for pure cornstarch suspensions similar to those used in the present study. Note that a lower value of ≈ 5 Pa is found by [Hermes et al.] using similar Sigma Aldrich cornstarch, but suspended in a (not so closely density matched) fluid obtained by adding glycerol into the water phase. We systematically measured the critical stress at which the DST transition occurs for different cornstarch concentrations $\phi_{cs}$, different large particle concentrations $\phi_p$, and different particle size ratios. The resulting data are recapitulated in figure 5. The critical stress $\tau_c^p$ normalized by the critical value for pure cornstarch $\tau_c^0$ is plotted for all data as a function of the rescaled volume fraction $\phi_p/\phi_m^*$. The experimental data are seen to collapse onto a single curve.
The critical stress is approximately constant and equal to the critical stress for pure cornstarch at low volume fractions, but begins to increase when $\phi_p/\phi_m^* \gtrsim 0.5$. This observation can be rationalized by the following understanding of the rheology of non-colloidal rigid suspensions. At low volume fraction, while it is enhanced by the particles, the bulk stress of a suspension remains purely of hydrodynamic nature and is entirely carried by the suspending-fluid phase. The average stress experienced by the interstitial cornstarch is thus equal to the suspension stress. Assuming that the DST transition occurs at a constant stress in the cornstarch then means that DST occurs at a constant stress for the whole suspension, at least at low particle volume fraction $\phi_p$. However, at larger volume fraction, contacts can occur between the large particles, and the bulk stress is then carried in part by the large-particle phase and not solely by the suspending-fluid phase. This may explain the increase of the critical stress for DST at larger volume fraction, as in this regime the stress experienced by the cornstarch suspension is now smaller than the total stress applied to the mixture.
The observation of a quasi constant critical shear stress provides a simple interpretation for the lowering of the critical shear rate for the DST transition. Adding large particles enhances the viscosity, implying that the critical shear rate decreases in order to keep the critical stress constant.
However, as discussed in the following section, the interpretation of the DST in terms of a constant critical stress does not capture all of the observations.
IV. INTERPRETATION IN TERMS OF A LOCAL SHEAR RATE
To analyze and interpret the rheology of suspensions in non-Newtonian fluids, a useful concept is that of the local shear rate $\dot\gamma_{\rm local}$. The interstitial fluid between the particles experiences a strongly fluctuating velocity field, and the typical magnitude of its shear rate, called the local shear rate $\dot\gamma_{\rm local}$ in the following, is larger than the macroscopic shear rate $\dot\gamma$ applied to the suspension because of the indeformability of the large particles. This amplification can be described by introducing a lever function $F(\phi_p)$ relating the local to the macroscopic shear rate, $\dot\gamma_{\rm local} = \dot\gamma\,F(\phi_p)$, with the underlying assumption that F only depends on $\phi_p$ [START_REF] Lerner | A unified framework for non-Brownian suspension flows and soft amorphous solids[END_REF][START_REF] Chateau | Homogenization approach to the behavior of suspensions of noncolloidal particles in yield stress fluids[END_REF][START_REF] Dagois-Bohy | Rheology of dense suspensions of noncolloidal spheres in yield-stress fluids[END_REF].
The present study of the shift of the critical shear rate at which DST occurs provides a direct estimation of the lever function. The DST should be obtained when $\dot\gamma_{\rm local}$ reaches the critical value for pure cornstarch, $\dot\gamma_c^0$, and therefore when the macroscopic shear rate $\dot\gamma$ reaches $\dot\gamma_c^0/F(\phi_p)$.
The lever function estimated from DST is then simply given by the ratio $F_{DST} = \dot\gamma_c^0/\dot\gamma_c^p$. These data are plotted in figure 6 as a function of the normalized volume fraction of large particles $\phi_p/\phi_m^*$ (solid lines and symbols). The good collapse of all data for different large particles and different cornstarch concentrations seems to substantiate the idea that the dynamics of the dense suspension mixture is mainly controlled by steric constraints and thus only depends on the large-particle volume fraction $\phi_p$.
In addition, the present rheological measurements provide two other ways to estimate the lever function from the shift in rheological properties prior to DST, in particular the shift in plateau viscosity $\eta_p(\phi_p)$ and the shift in yield stress $\tau_y(\phi_p)$. The first estimate is derived from an energetic argument stipulating that the dissipation in the suspension is equal to the dissipation in the suspending fluid (which means that contacts between the large particles are supposed to be negligible), i.e. $\eta_p(\phi_p)\,\dot\gamma^2 = (1-\phi_p)\,\eta_p(0)\,\dot\gamma_{\rm local}^2$. This leads to the estimate of the lever function based on the viscosity:
$$F_{\eta_p}(\phi_p) = \sqrt{\eta_p(\phi_p)/[\eta_p(0)(1-\phi_p)]}.$$
The second estimate is obtained again by writing the dissipation at low shear rate, when the stress is given by the yield stress: $\tau_y(\phi_p)\,\dot\gamma = (1-\phi_p)\,\tau_y(0)\,\dot\gamma_{\rm local}$, leading to the lever function based on the yield stress: $F_{\tau_y}(\phi_p) = \tau_y(\phi_p)/[\tau_y(0)(1-\phi_p)]$. We have computed these two estimates of the lever function from the rheological curves, $\eta_p(\phi_p)$ being inferred as the minimum plateau viscosity and $\tau_y(\phi_p)$ as the shear stress at $\dot\gamma = 0.04\ \mathrm{s^{-1}}$. These two alternative estimates of $F(\phi_p)$ are also plotted in figure 6 (dashed and dotted lines). They are both in very good agreement and again confirm the sole dependence on the large-particle volume fraction $\phi_p$.
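As a practical illustration, the three estimates of the lever function can be computed from tabulated rheological data as in the sketch below; all array values are placeholders, not measured data.

```python
import numpy as np

# Placeholder data; in practice gamma_c, eta_p and tau_y are read off
# the measured flow curves at each large-particle volume fraction.
phi_p   = np.array([0.0, 0.1, 0.2, 0.3])
gamma_c = np.array([4.0, 2.6, 1.5, 0.8])   # DST critical shear rate (1/s)
eta_p   = np.array([1.0, 1.7, 3.1, 6.5])   # plateau viscosity (Pa s)
tau_y   = np.array([0.5, 0.8, 1.5, 3.0])   # yield stress (Pa)

F_DST  = gamma_c[0] / gamma_c                          # shift of the DST transition
F_eta  = np.sqrt(eta_p / (eta_p[0] * (1 - phi_p)))     # energetic (viscosity) estimate
F_tauy = tau_y / (tau_y[0] * (1 - phi_p))              # yield-stress estimate
print(F_DST, F_eta, F_tauy)
```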
While these three evaluations of $F(\phi_p)$ all increase with increasing large-particle volume fraction $\phi_p$, they do not match completely. The two estimates of the lever function from the viscosity shift (dashed lines) and the yield-stress shift (dotted lines), $F_{\eta_p}(\phi_p)$ and $F_{\tau_y}(\phi_p)$ respectively, are similar, whereas the predictions from the DST shift (solid lines and symbols), $F_{DST}(\phi_p)$, are systematically larger. The influence of the large particles on the DST is clearly more dramatic than on the viscosity or the yield stress. This discrepancy shows that a simple mean-field argument based on a single scalar estimate of the local shear rate is not sufficient to capture the observed rheology, and that the DST may be controlled by the extreme values of the local shear rate distribution. It is indeed sufficient to have a percolating jamming network within the interstitial fluid to macroscopically block the mixture.
V. CONCLUSION
In conclusion, we have shown that adding large non-Brownian particles to a cornstarch suspension moves the DST transition to lower critical shear rates, providing a method for controlling shear-thickening properties by simply varying the concentration of the large particles. The effective plateau viscosity of the mixture prior to DST is observed to increase, as seen previously [START_REF] Cwalina | Rheology of non-Brownian particles suspended in concentrated colloidal dispersions at low particle Reynolds number[END_REF][START_REF] Liard | Scaling laws for the flow of generalized Newtonian suspensions[END_REF].
The critical stress of the DST transition is much less affected: it is approximately constant and equal to that for pure cornstarch over a relatively wide range of concentrations of large particles, before increasing at larger concentrations.
Interpreting our results in terms of a local shear rate, i.e. stipulating that the presence of large particles enhances the shear rate in the suspending fluid, reveals a difference between the influence of the large particles on the DST and on the bulk rheology. The local shear rate estimated from the shift in the DST transition is larger than the local shear rate estimated from the shift in viscosity. A simple interpretation in terms of a single estimate of the local shear rate is thus not appropriate, which may suggest that the DST transition mobilizes the extreme values of the local shear rate distribution, while the plateau viscosity (or more generally the bulk rheology prior to DST) involves the averaged value. These observations raise questions that require theoretical studies going beyond mean-field approaches in order to tackle these nonlinear rheological systems thoroughly.
FIG. 1: (a) Sketch of the plate-plate rheometer exhibiting the manufactured grooves. (b) Micrograph of PMMA and cornstarch particles. (c) Viscosity η (Pa·s) versus shear rate $\dot\gamma$ (s⁻¹) for different gap sizes h with a cornstarch suspension at $\phi_{cs} = 0.40$. The inset shows the corresponding shear stress τ (Pa) versus shear rate $\dot\gamma$ (s⁻¹) for the same gap sizes.
FIG. 2: (a) Shear stress τ (Pa) and (b) viscosity η (Pa·s) versus shear rate $\dot\gamma$ (s⁻¹) for pure cornstarch suspensions at different $\phi_{cs}$. (c) DST critical shear rate $\dot\gamma_c^0$ (s⁻¹) versus $\phi_{cs}$. The inset shows the same plot in semi-log scale.
FIG. 5: Critical stress $\tau_c^p(\phi_p)$ at the DST transition normalized by the averaged critical stress for pure cornstarch ($\tau_c^0 = 20.8$ Pa) as a function of the rescaled volume fraction $\phi_p/\phi_m^*$ for data corresponding to various $\phi_{cs}$ and $d_p$.
FIG. 6: Estimate of the lever function $F(\phi_p)$ based on the DST shift $F_{DST}(\phi_p) = \dot\gamma_c^0/\dot\gamma_c^p$ (solid lines and symbols), the viscosity shift $F_{\eta_p}(\phi_p) = \sqrt{\eta_p(\phi_p)/[\eta_p(0)(1-\phi_p)]}$ (dashed lines), and the yield-stress shift $F_{\tau_y}(\phi_p) = \tau_y(\phi_p)/[\tau_y(0)(1-\phi_p)]$ (dotted lines), versus $\phi_p/\phi_m^*$.
ACKNOWLEDGMENTS This work is undertaken under the auspices of the ANR project 'Dense Particulate Systems' (ANR-13-IS09-0005-01), the 'Laboratoire d'Excellence Mécanique et Complexité' (ANR-11-LABX-0092), and the 'Initiative d'Excellence' A*MIDEX (ANR-11-IDEX-0001-02), as well as the NSF Grant No. CBET-1554044-CAREER. We thank Dr. David F. J. Tees (Department of Physics and Astronomy, Ohio University) for his assistance with the suspension micrographs.
Franco Tapia, Saif Shaikh, Jason E. Butler, Olivier Pouliquen, Élisabeth Guazzelli
Pressure and volume-imposed rheology is used to study suspensions of non-colloidal, rigid fibres in the concentrated regime for aspect ratios ranging from 3 to 15. The suspensions exhibit yield-stresses. Subtracting these apparent yield-stresses reveals a viscous scaling for both the shear and normal stresses. The variation in aspect ratio does not affect the friction coefficient (ratio of shear and normal stresses), but increasing the aspect ratio lowers the maximum volume fraction at which the suspension flows. Constitutive laws are proposed for the viscosities and the friction coefficient close to the jamming transition.
Introduction
The rheological properties of viscous Newtonian fluids containing rigid fibres remains relatively unexplored as compared to suspensions of spherical particles, and a consensus on even the qualitative description of the rheology is still lacking for concentrations beyond the dilute limit. As one example, the steady values of the shear stresses should, for suspensions of fibres that are large relative to colloidal scales and free of external body forces, follow a Newtonian law [START_REF] Dinh | A rheological equation of state for semiconcentrated fiber suspensions[END_REF]. However, many experimental studies find yield stresses and a nonlinear scaling of the shear stresses with the rate of shear, where these non-Newtonian effects become more prominent with increasing concentration [START_REF] Ganani | Suspensions of rodlike particles: Literature review and data correlations[END_REF][START_REF] Powell | Experimental analysis of the porosity of randomly packed rigid fibers[END_REF]. Different explanations have been proposed to explain the departure from a Newtonian response. This includes arguments that the fibres were not rigid under the imposed conditions [START_REF] Powell | Experimental analysis of the porosity of randomly packed rigid fibers[END_REF][START_REF] Sepehr | Rheological properties of short fiber model suspensions[END_REF] or that the fibres are not force-free. An example of the latter is the assertion that adhesive forces [START_REF] Mongruel | Shear viscosity of suspensions of aligned non-Brownian fibres[END_REF][START_REF] Chaouche | Rheology of non-Brownian rigid fiber suspensions with adhesive contacts[END_REF]Bounoua et al. 2016b) can exist between the fibres, even though their size is large compared to typical colloidal scales.
Previous rheological studies have focused on suspensions at relatively small volume fractions. Identifying measurements of rheology for volume fractions, φ, above 0.1 is difficult for fibres of large aspect ratios, A = L/d, where L and d are the fibre length and diameter, respectively. The lack of data is attributable, at least in part, to the difficulty of preparing and measuring the rheology of suspensions at high concentrations for large aspect ratios. Even for aspect ratios as high as 17 or 18, measurements are available for volume fractions of only up to φ = 0.15 or 0.17 (Bounoua et al. 2016a;[START_REF] Bibbó | Rheology of semiconcentrated fiber suspensions[END_REF]; measurements as high as φ = 0.23 were made by [START_REF] Bibbó | Rheology of semiconcentrated fiber suspensions[END_REF] for smaller aspect ratios of A = 9. As a result, the rheological properties of suspensions of rigid fibres remains to be characterised in the limit of large concentrations where mechanical contacts are expected to matter [START_REF] Sundararajakumar | Structure and properties of sheared fiber suspensions with mechanical contacts[END_REF][START_REF] Petrich | Interactions between contacting fibers[END_REF][START_REF] Snook | Normal stress differences in suspensions of rigid fibres[END_REF]. Likewise, the volume fraction at which the shear stresses diverge, and the flow of the suspension ceases (i.e. becomes jammed), has not been determined previously for non-colloidal fibres, though such measurements have been made for shear-thickening suspensions of colloidal fibres [START_REF] Egres | The rheology and microstructure of acicular precipitated calcium carbonate colloidal suspensions through the shear thickening transition[END_REF][START_REF] Brown | Shear thickening and jamming in densely packed suspensions of different particle shapes[END_REF].
Here, a custom-built rheometer has been used to explore the shear stresses and normal forces in suspensions of non-colloidal, rigid fibres for concentrations exceeding φ = 0.23. The rheometer [START_REF] Boyer | Unifying suspension and granular rheology[END_REF][START_REF] Dagois-Bohy | Rheology of dense suspensions of non-colloidal spheres in yield-stress fluids[END_REF] measures the stresses in both a pressure and volume-imposed configuration. The measurements indicate the presence of yield stresses in the tested suspensions, but also a viscous scaling wherein the stress grows linearly with the rate of shear. The unique rheometer design facilitates the study of these highly concentrated suspensions, and the volume fractions at which the stresses diverge are measured. The scaling of the stresses near this jamming transition are found to differ substantially from that of a suspension of spheres. These measurements are reported in § 3, after presenting the experimental materials and techniques in § 2; conclusions are drawn in § 4.
Experiments
Fibres and fluids
Four batches of rod-like particles were used in the experiments. They were obtained by using a specially-designed device to cut long cylindrical filaments of plastic (PLASTINYL 6.6) that were supplied by PLASTICFIBRE S.P.A. (http://www.plasticfibre.com). Images of typical fibres from each batch are shown in figure 1 (b). The length and diameter of over 100 fibres were measured with a digital imaging system. The distributions of lengths and diameters were found to be approximately Gaussian for all aspect ratios. The mean value and standard deviation of the fibre aspect ratio A = L/d, length L, and diameter d are shown in table 1. Note that batches (II) and (III) have very different lengths and diameters, but roughly the same aspect ratio of A ≈ 6 -7.
The rigid fibres were suspended in a Newtonian fluid with a matching density of $\rho_f = 1056\ \mathrm{kg/m^3}$. The suspending fluid was a mixture of water (10.72 wt%), Triton X-100 (75.78 wt%), and zinc chloride (13.50 wt%). The fluid viscosity of $\eta_f = 3\ \mathrm{Pa\,s}$ and the density were measured at the same temperature (25 °C) at which the experiments were performed. The suspensions were prepared by adding the fibres to the fluid, where both quantities were weighed, and gently stirring. Little to no settling or creaming was observed.
The rheological measurements were performed at a maximum shear rate of $\dot\gamma \approx 3\ \mathrm{s^{-1}}$, ensuring that a maximum Reynolds number ($\rho_f\dot\gamma L^2/\eta_f$) of 0.04 was achieved. The fibres can be considered non-colloidal, owing to their large size, and rigid under the conditions of the experiment. Regarding the latter, the buckling criterion can be characterised by the dimensionless number $S_p$ of table 1, which compares the relative strengths of the viscous and elastic forces; the small values reported there indicate that the fibres remain rigid.
Experimental techniques
The experiments were conducted using a custom rheometer that was originally constructed by [START_REF] Boyer | Unifying suspension and granular rheology[END_REF] and then modified by [START_REF] Dagois-Bohy | Rheology of dense suspensions of non-colloidal spheres in yield-stress fluids[END_REF]. This rheometer, sketched in figure 1(a), provides measurements of both shear and normal stresses. The shearing cell consists of (i) an annular cylinder (of radii $R_1 = 43.95$ mm and $R_2 = 90.28$ mm) attached to a bottom plate that can be rotated, and (ii) a top cover plate that can be moved vertically. This top plate is porous, enabling fluid to flow through it, but not particles. The plate was manufactured with holes of sizes 2-5 mm and then covered by a 0.2 mm nylon mesh (see figure 1(c)). The parallel bottom and top plates were also roughened by positioning regularly-spaced strips of height and width 0.5 mm onto their surfaces. A transparent solvent trap covers the cell, hindering evaporation of the suspending fluid.
In a typical experiment, the annular cell was filled with suspension and the porous plate was lowered into the fluid to a position h. This height, measured independently by a position sensor, ranges from 10.8 to 18 mm, corresponding to 13 to 25 fibre diameters depending on the fibre batch. The height measurement enables calculation of the fibre volume fraction, φ. The bottom annulus was rotated at a rate Ω by an asynchronous motor (Parvalux SD18) regulated by a frequency controller (OMRON MX2 0.4 kW), while the torque exerted on the top plate was measured by a torque transducer (TEI-CFF401). The shear stress τ was deduced from these torque measurements after calibration with a pure fluid to subtract undesired contributions resulting from the friction at the central axis and the shear in the thin gap between the top plate and the cell walls; the calibration method is described by [START_REF] Dagois-Bohy | Rheology of dense suspensions of non-colloidal spheres in yield-stress fluids[END_REF]. A precision scale (Mettler-Toledo XS6002S) was placed on a vertical translation stage driven by a LabVIEW code in order to measure the apparent weight of the top plate. This measurement, after correcting for buoyancy, provides the normal force that the particles exert on the porous plate in the gradient direction. Dividing by the area of the plate gives the gradient component of the normal stress, which is referred to simply as the particle pressure, P. A normal viscosity in the gradient direction can be defined as $P/\eta_f\dot\gamma$, as was done by [START_REF] Morris | Curvilinear flows of noncolloidal suspensions: The role of normal stresses[END_REF].
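The sketch below summarizes the conversion from raw signals (rotation rate, gap, scale reading) to the reported quantities, using the geometry given above; the torque-to-stress conversion is omitted since it relies on the calibration described in the text, and the example arguments are placeholders.

```python
import numpy as np

R1, R2 = 43.95e-3, 90.28e-3            # cell radii (m), from the text
A_plate = np.pi * (R2**2 - R1**2)      # area of the annular top plate (m^2)

def shear_rate(Omega, h):
    """Mean shear rate from the rotation rate Omega (rad/s) and gap h (m)."""
    return Omega * (R2 + R1) / (2.0 * h)

def particle_pressure(apparent_mass, buoyancy_mass, g=9.81):
    """Particle pressure P (Pa) from the scale reading (kg), after the
    buoyancy correction mentioned in the text."""
    return (apparent_mass - buoyancy_mass) * g / A_plate

print(shear_rate(Omega=0.5, h=15e-3), particle_pressure(0.120, 0.050))
```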
The rheometer can be run in a pressure-imposed mode or in a volume-imposed mode, and measurements were recorded as a function of the mean shear rate, $\dot\gamma = \Omega(R_2+R_1)/2h$, once steady state was achieved. In pressure-imposed rheometry, the particle pressure P is maintained at a set value that is measured by the precision scale; the volume fraction φ and the shear stress τ are measured as a function of the shear rate $\dot\gamma$ and pressure P. In volume-imposed rheometry, the height h, and consequently the volume fraction φ, are maintained at a fixed value, while the shear stress, τ, and particle pressure, P, are measured as a function of the shear rate, $\dot\gamma$. Errors in the measurements of τ, P, and φ for the suspensions depend upon the calibration experiments, the preparation of the suspension samples, and the precision of the height, torque, and scale measurements. Estimates, based upon tests with independently created samples of suspension, suggest errors of ±6 Pa, ±5 Pa, and ±0.005 for τ, P, and φ, respectively.
Rheological measurements
Rheological observations
Typical rheological data for the apparent relative shear and normal viscosities, $\tau/\eta_f\dot\gamma$ and $P/\eta_f\dot\gamma$, are plotted against the volume fraction, φ, in figures 2(a) and (b). The data were collected for fibres of batch (II) using pressure-imposed and volume-imposed measurements. As expected, both quantities increase with increasing φ. However, multiple values of the apparent viscosities are measured for any given φ. Plotting the shear stress, τ, and the particle pressure, P, against the shear rate for different values of φ demonstrates that τ and P are linear in $\dot\gamma$ but have non-zero values at $\dot\gamma = 0$, see figures 3(a) and (b). This suggests that a yield-stress exists for both the shear stress and the particle pressure, $\tau_0$ and $P_0$, respectively. Their values can be determined using a linear fit of the stress and pressure data as a function of $\dot\gamma$, as indicated by the lines in figures 3(a) and (b). Both yield-stresses, $\tau_0$ and $P_0$, increase with increasing φ, as shown in figures 3(c) and (d) for all four batches of fibres. The growth of $\tau_0$ and $P_0$ with φ is more pronounced for larger aspect ratios A.
The data of figures 3(a) and (b) demonstrate that the stresses scale linearly with the rate of shear, as expected. Furthermore, the slopes of τ and P with $\dot\gamma$ increase with φ, which is evidence of the increase of the shear and normal viscosities with φ. These shear and normal viscosities can each be collapsed onto a single function of φ by removing the yield stresses. Figures 2(c) and (d) show the results for $(\tau-\tau_0)/\eta_f\dot\gamma$ and $(P-P_0)/\eta_f\dot\gamma$ as a function of φ. In all of the following analysis, the yield stresses are systematically subtracted from the raw data.
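The yield-stress subtraction amounts to a linear (Bingham-like) fit of each flow curve; a minimal version on placeholder data could read:

```python
import numpy as np

gamma_dot = np.array([0.3, 0.8, 1.5, 2.2, 3.0])   # shear rates (1/s), placeholder
tau       = np.array([26., 39., 58., 77., 99.])   # shear stress (Pa), placeholder

slope, tau_0 = np.polyfit(gamma_dot, tau, 1)      # tau = tau_0 + slope * gamma_dot
eta_f = 3.0                                       # suspending-fluid viscosity (Pa s)
eta_s = slope / eta_f                             # relative shear viscosity
print(f"tau_0 = {tau_0:.1f} Pa, eta_s = {eta_s:.2f}")
```

The same fit applied to P versus the shear rate yields $P_0$ and the relative normal viscosity.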
Constitutive laws
Figures 4(a) and (b) show $\eta_s = (\tau-\tau_0)/\eta_f\dot\gamma$ and $\eta_n = (P-P_0)/\eta_f\dot\gamma$, the relative shear viscosity and relative normal viscosity, for all of the fibre batches. Both quantities increase with φ and seem to diverge at a maximum volume fraction that depends on the aspect ratio A. The influence of the aspect ratio is also seen on the rheological functions, as $\eta_s(\phi)$ and $\eta_n(\phi)$ shift toward lower values of φ with increasing A. An interesting observation is that the data for batches (II) and (III), corresponding to similar values of A but different sizes, collapse onto the same curve. This indicates that finite-size effects are not significant. Also, the decrease of $\eta_n$ is much stronger than that of $\eta_s$ for $\phi \lesssim 0.35$.
An alternative representation of the rheological data plots the friction coefficient $\mu = \eta_s/\eta_n$ and the volume fraction φ as functions of the dimensionless shear rate $J = \eta_f\dot\gamma/(P-P_0)$ [START_REF] Boyer | Unifying suspension and granular rheology[END_REF]; note that $J = 1/\eta_n$ and is a function of φ, as shown in figure 4(b). The rheology is then described by the two functions $\mu(J)$ and $\phi(J)$, shown in figures 4(c) and (d) for the same data as in figures 4(a) and (b). A striking result is that a complete collapse of all the data is observed for $\mu(J)$, indicating that the friction coefficient is independent of the aspect ratio A. The volume fraction φ is a decreasing function of the dimensionless number J. There is a clear shift of $\phi(J)$ toward lower values of φ when A is increased. The data for batches (II) and (III), having similar aspect ratios, again collapse onto the same curve.
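The two representations carry the same information, as the following identities, direct consequences of the definitions above, make explicit:

```latex
J = \frac{\eta_f\dot\gamma}{P - P_0} = \frac{1}{\eta_n(\phi)},
\qquad
\mu = \frac{\tau - \tau_0}{P - P_0}
    = \frac{\eta_s(\phi)}{\eta_n(\phi)} = J\,\eta_s(\phi).
```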
This frictional approach is particularly well suited to study the jamming transition, as it circumvents the divergence of the viscosities. From the semi-logarithmic plot of φ(J), shown in the inset of figure 4 (d), the critical (or maximum flowable) volume fraction φ m can be determined from the limiting value of φ as J goes to zero. Similarly, the semilogarithmic plot of µ(J) in the inset of figure 4 (c) shows that the friction coefficient tends to a finite value µ s at the jamming point.
The critical values φ m and µ s are plotted against the fibre aspect ratio A in figures 5 (a) and (b), respectively. Again, the similar results for batches (II) and (III) indicate that confinement is not influencing the measurements, and the values obtained by [START_REF] Boyer | Unifying suspension and granular rheology[END_REF] for suspensions of poly(methyl methacrylate) spheres are also plotted on these graphs (for A = 1, although strictly speaking a sphere is not a cylinder of aspect ratio one). Clearly, φ m decreases with increasing A. This follows the general trends of a decrease in volume fraction with the aspect ratio for processes such as dry packing, as shown in figure 5 (a). A comparison is also made in figure 5 (a) between the values of φ m and estimates from simulations [START_REF] Williams | Random packings of spheres and spherocylinders simulated by mechanical contraction[END_REF] of the maximum concentration at which the orientation distribution remains random. The critical friction µ s does not vary significantly with A in the explored range and its value (≈ 0.47) is larger than that obtained for spheres (≈ 0.32) by [START_REF] Boyer | Unifying suspension and granular rheology[END_REF].
Figure 6 displays the same data as figure 4, but with φ scaled by $\phi_m$. This simple rescaling leads to a good collapse of the data for all of the fibre batches, indicating that the aspect ratio principally impacts the maximum volume fraction, $\phi_m$. Another remarkable result is that the relative shear and normal viscosities, $\eta_s$ and $\eta_n$, diverge near the jamming transition with a scaling close to $(\phi_m-\phi)^{-1}$, as clearly evidenced by the insets of figures 6(a) and (b). This starkly contrasts with the divergence in $(\phi_m-\phi)^{-2}$ observed for suspensions of spheres [START_REF] Boyer | Unifying suspension and granular rheology[END_REF]. A constitutive law for µ can be generated by fitting the data to a linear combination of powers of $(\phi_m-\phi)/\phi$,
$$\mu(\phi) = \mu_s + \alpha\,\frac{\phi_m-\phi}{\phi} + \beta\left(\frac{\phi_m-\phi}{\phi}\right)^{2}, \qquad (3.1)$$
as was done by [START_REF] Dagois-Bohy | Rheology of dense suspensions of non-colloidal spheres in yield-stress fluids[END_REF]. The red curve in figure 6(c) shows the result, with $\mu_s = 0.47$, $\alpha = 2.44$, and $\beta = 10.20$. As noted previously, the value of $\mu_s$ is larger than that obtained for suspensions of spheres ($\mu_s = 0.3$). The values of α and β also differ from those obtained for suspensions of spheres (α = 4.6 and β = 6). The best fit for $\eta_s$ was found to be
$$\eta_s(\phi) = 14.51\left(\frac{\phi_m-\phi}{\phi_m}\right)^{-0.90}, \qquad (3.2)$$
as seen in figure 6 (a). Note that the best-fit exponent is -0.9, rather than -1. The rheological law for η n is then just given by
$$\eta_n(\phi) = \eta_s(\phi)/\mu(\phi), \qquad (3.3)$$
which is represented by the red curve in figure 6 (b). The variation of φ with J can be deduced from this last law since J = 1/η n (φ); this result is shown in figure 6 (d).
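For completeness, the fitted laws (3.1)-(3.3) translate directly into code; $\phi_m$ depends on the aspect ratio, and the value used in the example call is illustrative only.

```python
MU_S, ALPHA, BETA = 0.47, 2.44, 10.20   # fitted constants from eq. (3.1)

def mu(phi, phi_m):
    x = (phi_m - phi) / phi
    return MU_S + ALPHA * x + BETA * x**2            # eq. (3.1)

def eta_s(phi, phi_m):
    return 14.51 * ((phi_m - phi) / phi_m)**(-0.90)  # eq. (3.2)

def eta_n(phi, phi_m):
    return eta_s(phi, phi_m) / mu(phi, phi_m)        # eq. (3.3)

# Illustrative evaluation (phi_m = 0.45 is an assumed value, not a fit):
print(eta_s(0.40, 0.45), eta_n(0.40, 0.45), mu(0.40, 0.45))
```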
Discussion and Conclusions
Using a custom rheometer [START_REF] Boyer | Unifying suspension and granular rheology[END_REF][START_REF] Dagois-Bohy | Rheology of dense suspensions of non-colloidal spheres in yield-stress fluids[END_REF], we have performed pressure and volume-imposed measurements of the rheology of non-colloidal rigid fibres suspended in a Newtonian fluid. Measurements for the shear stress and particle pressure have been obtained in the dense regime and for aspect ratios between 3 and 15, and the volume fractions at which the rheology diverges has been characterised as a function of the aspect ratio.
The suspensions exhibit yield-stresses which increase with increasing volume fraction, φ, and are more pronounced for larger aspect ratios. Yield-stresses have been reported previously for rigid fibres suspended in Newtonian fluids and have been attributed to adhesive contacts (see e.g. [START_REF] Mongruel | Shear viscosity of suspensions of aligned non-Brownian fibres[END_REF][START_REF] Chaouche | Rheology of non-Brownian rigid fiber suspensions with adhesive contacts[END_REF]), despite the relatively large size of the fibres. A recent model (Bounoua et al. 2016b), which considered attractive interactions between fibres in the dilute regime, predicted simple Bingham laws for both the shear stress and the first normal stress difference, with the apparent shear and normal yield stresses proportional to $\phi^2$ and $\phi^3$, respectively. The present data also follow Bingham laws, but the yield stress, $\tau_0$, and pressure, $P_0$, increase with higher power laws in φ than predicted. This can be seen in the insets of figures 3(c) and (d), where it is also demonstrated that the data for all aspect ratios collapse onto single curves by rescaling φ with $\phi_m$.
It is unclear whether, for the large fibres used here, colloidal forces are responsible for the yield-stresses. Finite-size effects close to the jamming point can also be invoked, particularly since lubrication forces are inefficient at preventing mechanical contacts between elongated particles [START_REF] Sundararajakumar | Structure and properties of sheared fiber suspensions with mechanical contacts[END_REF]. Close to jamming, since the system has a finite size, a percolating jamming network of particles can exist. While this is a transient phenomenon, it may impact the averaged rheological measurements, which may consequently exhibit apparent yield stresses. Clearly, more work is necessary to elucidate the origin of the yield stresses.
Subtracting the apparent yield-stresses reveals a viscous scaling for both the shear stresses and particle pressures, wherein both grow linearly with the rate of shear. The aspect ratio of the fibres does not affect the friction coefficient, µ, but does impact the maximum flowable volume fraction, φ m . Rescaling the volume fraction, φ, by this maximum volume fraction, φ m , leads to an excellent collapse of all the data on master curves for the shear and normal viscosities. Hence, we argue that the aspect ratio principally affects the maximum volume fraction at which the suspensions can be sheared. Similar collapse of the rheological data across multiple aspect ratios has been observed previously for shear-thickening suspensions of colloidal fibres [START_REF] Brown | Shear thickening and jamming in densely packed suspensions of different particle shapes[END_REF].
Using the data presented here, constitutive laws in the form of expansions in $(\phi_m-\phi)$ have been generated for the rheology of dense suspensions of rigid fibres. An important product of the present study is the examination of the rheology close to the jamming transition. At jamming, the friction coefficient is found to be constant and larger than that found for suspensions of spheres. Both the shear and normal viscosities present a similar algebraic divergence in $\approx(\phi_m-\phi)^{-1}$, in stark contrast to the divergence in $(\phi_m-\phi)^{-2}$ observed for suspensions of spheres near the jamming point. The maximum volume fraction $\phi_m$ is seen to decrease with increasing aspect ratio, similar to the dry packing of rigid fibres found in experiments (Rahli et al. 1999), see figure 5(a). However, no inference about the general structure of the suspension at jamming is possible for A < 15, as comparisons with estimates of maximum random packing [START_REF] Williams | Random packings of spheres and spherocylinders simulated by mechanical contraction[END_REF] do not clearly indicate that the orientation distribution has organised. The comparison does indicate that the structure is organised for A = 15, though direct observations, or simulations, of the structures need to be developed in future work to resolve this question conclusively. The experimental data are available as supplementary material for future comparison.
Figure 1: (a) Sketch of the experimental apparatus. (b) Images of the plastic fibres. (c) Image of the top plate (the inset is a blowup of the image showing the nylon mesh).
Figure 2: Apparent relative (a) shear ($\tau/\eta_f\dot\gamma$) and (b) normal ($P/\eta_f\dot\gamma$) viscosities as well as relative (c) shear [$(\tau-\tau_0)/\eta_f\dot\gamma$] and (d) normal [$(P-P_0)/\eta_f\dot\gamma$] viscosities (after subtraction of the yield-stresses) versus volume fraction, φ, for the fibres of batch (II) in pressure-imposed and volume-imposed configurations.
Figure 3: (a) Shear stress (τ) and (b) particle pressure (P) versus shear rate, $\dot\gamma$, for the fibre suspension of batch (II) at different φ values of 0.26 (lightest grey shade), 0.30, 0.35, 0.38, and 0.41 (black). The lines represent the linear fit for each φ value. Yield-stress (c) for the shear stress ($\tau_0$) and (d) particle pressure ($P_0$) versus φ for fibres of batches (I)-(IV) (see table 1). The insets of graphs (c) and (d) are log-log plots versus $\phi/\phi_m$, where $\phi_m$ is the maximum flowable volume fraction given in figure 5(a).
Figure 4: Rheological data: (a) $\eta_s = (\tau-\tau_0)/\eta_f\dot\gamma$ and (b) $\eta_n = (P-P_0)/\eta_f\dot\gamma$ versus φ as well as (c) $\mu = \eta_s/\eta_n$ and (d) φ versus $J = \eta_f\dot\gamma/(P-P_0)$, for fibre batches (I)-(IV) (see table 1). The insets of graphs (c) and (d) are log-log and semi-logarithmic plots.
Figure 5: (a) Maximum flowable volume fraction $\phi_m$ and (b) critical friction coefficient $\mu_s$ versus the fibre aspect ratio A.
Figure 6: Rescaled rheological data: (a) $\eta_s = (\tau-\tau_0)/\eta_f\dot\gamma$, (b) $\eta_n = (P-P_0)/\eta_f\dot\gamma$ and (c) $\mu = \eta_s/\eta_n$ versus $\phi/\phi_m$ as well as (d) $\phi/\phi_m$ versus $J = \eta_f\dot\gamma/(P-P_0)$, for all the data of the different batches (I)-(IV) (see table 1). The insets of graphs (a), (b), and (d) are log-log plots. The red solid curves correspond to the rheological laws given by equations (3.1), (3.2), and (3.3).
Table 1: Properties of each batch of fibres. Data shown include the mean value and standard deviation of the aspect ratio A, fibre length L, and fibre diameter d. Values of the dimensionless number $S_p$, characterising the relative strengths of the viscous and elastic forces, are also reported.

Fibre label    A            L (mm)       d (mm)        S_p
(I)            14.5 ± 0.8   5.8 ± 0.1    0.40 ± 0.01   5 × 10^-3
(II)           6.3 ± 0.4    2.5 ± 0.1    0.40 ± 0.01   2.4 × 10^-4
(III)          7.2 ± 0.4    5.8 ± 0.2    0.81 ± 0.02   3.9 × 10^-4
(IV)           3.4 ± 0.3    2.8 ± 0.1    0.81 ± 0.03   2.7 × 10^-5
Acknowledgments
We thank Simone Dagois-Bohy for his generous assistance with the experiments. This work is undertaken under the auspices of the ANR project 'Dense Particulate Systems' (ANR-13-IS09-0005-01), the 'Laboratoire d'Excellence Mécanique et Complexité' (ANR-11-LABX-0092), and the Excellence Initiative of Aix-Marseille University -A*MIDEX, a French "Investissements d'Avenir" programme, and COST Action MP1305 'Flowing Matter'. FT benefited from a fellowship of CONICYT and SS from a fellowship of the A*MIDEX Excellence Academy -PhD Collegium. The plastic filaments were donated by PLASTICFIBRE S.P.A. This work was also supported by the National Science Foundation (grants #1511787 and #1362060).
D. Chevallier, M. Albert, P. Devillard
Probing Majorana and Andreev Bound States with Waiting Times
PACS 02.50.Ey - Stochastic processes; 72.70.+m - Noise processes and phenomena; 74.50.+r - Superconductivity, tunneling phenomena
We consider a biased Normal-Superconducting junction with various types of superconductivity. Depending on the class of superconductivity, a Majorana bound state may appear at the interface. We show that this has important consequences on the statistical distribution of time delays between detection of consecutive electrons flowing out of such an interface, namely the waiting time distribution. Therefore, this quantity is shown to be a clear fingerprint of Majorana bound state physics and may be considered as an experimental signature of its presence.
Introduction. -During the last two decades, Majorana fermionic states in condensed matter physics have received a lot of interest because of their exotic properties, such as non-Abelian statistics, which open the perspective of using them for quantum computation. These exotic states have been studied extensively in various systems [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18], among them a conceptually simple one made of a semiconducting nanowire of InAs or InSb with strong spin-orbit coupling, subjected to an external Zeeman field and in the proximity of an s-wave superconductor (SC) [3,4,19,20]. In this situation, a Majorana Bound State (MBS) may appear at the interface of a normal/superconducting junction under proper conditions and strongly affects the electronic conduction properties (see Fig. 1). Several experiments have reported the observation of a zero-bias conductance peak in such physical setups, in good qualitative agreement with all theoretical predictions based on Majorana physics so far, but still not fully consistent with the predicted conductance and the magnetic field value needed for the existence of a MBS [21][22][23][24]. Therefore, several works have sought to understand these inconsistencies through alternative interpretations including other physical processes [25][26][27][28][29]. However, a clear consensus is still lacking, mostly because of the absence of an experimental smoking gun for Majoranas. Generally, these MBS appear in hybrid junctions when one of the parameters of the system (i.e. the phase difference or the Zeeman field) is tuned into a topological phase. Along this transition, these states mutate from Andreev Bound States (ABS) in the non-topological phase to MBS in the topological one, and understanding their differences is thus of fundamental importance in order to distinguish them. So far, many efforts have focused on the relation and the evolution of ABS into MBS upon tuning the system parameters [8,30,31], but less on their intrinsic properties [20,[32][33][34] and the consequences for physical observables, which is the purpose of this contribution.
Recently, an intriguing feature due to MBS was identified in Ref. [35] and named selective equal-spin Andreev reflection (SESAR). The presence of a MBS drastically modifies Andreev reflection and leads to a spin polarization of the current as well as to interesting correlations between different spin components which are visible in the zero frequency noise [36] for instance. However, such fingerprints are based on the possibility to observe fine quantitative differences between spin resolved current-current cross correlations which seems to be complicated experimentally in the present situation. In this letter, we show that a very clear qualitative difference is visible in the Waiting Time Distribution (WTD) of electrons flowing out of the interface making it an interesting and alternative signature of MBS. The WTD is the statistical distribution of time delay between the detection of two consecutive electrons and has been shown to be a very informative and powerful quantity for understanding correlations in mesoscopic quantum conductors [37][38][39][40][41][42][43][44][45][46][47][48][49][50].
Model. -We consider two types of hybrid junctions as depicted on Fig. 1 is converted into an electron with opposite spin ↓ n (↑ n) and b) Normal-(topological)Superconducting junction: a hole with spin ↑ n is converted into an electron with same spin ↑ n and a hole with spin ↓ n is reflected as a hole also with same spin ↓ n. In the first case n denotes any possible direction whereas in the second case a Majorana bound state appears at the interface and sets a special spin direction n for scattering (see text). In both cases a bias voltage eV is imposed and brings the superconducting chemical potential µS above the Fermi energy EF of the normal metal with the restriction that eV ∆ the superconducting gap.
one is a normal metal (N)/s-wave superconductor (S) junction carrying an ABS at the interface, and the second is an N/topological superconductor (TS) junction, where the topological superconductor is made of a Rashba nanowire in proximity to an s-wave superconductor and in the presence of a Zeeman field $V_z$, carrying a MBS at each boundary. However, we assume the nanowire to be long enough that the two MBS are decoupled. In both situations, the Fermi energy of the normal metal is $E_F$, the superconducting gap is Δ, and the superconducting chemical potential $\mu_S$ is biased in such a way that $E_F = \mu_S - eV$ with $eV \ll \Delta$, as shown in Fig. 1. As a consequence, a stream of non-interacting holes approaches the interface from the normal part, where it is scattered as a coherent superposition of electrons and holes. This incoming scattering state reads
$$|\psi_{in}\rangle = \prod_{k=0}^{k_V} c_{k,n}\, c_{k,-n}\, |0\rangle, \qquad (1)$$
where $c_{k,n}$ is the annihilation operator of an electron with momentum k, energy $E = \hbar v_F|k|$ and spin orientation n (or the creation operator of a hole with opposite properties), $|0\rangle$ stands for the Fermi sea filled with states of energies up to $\mu_S$, and $k_V = eV/\hbar v_F$ with $v_F$ the Fermi velocity. So far, n denotes any unit vector, not necessarily $\hat z$ or $\hat x$ for instance. Below we will connect it to the polarization axis of the MBS, but up to now this is just a choice of basis. It is important to note that the spin components $\uparrow_z/\downarrow_z$, or more generally $\uparrow_n/\downarrow_n$, are equally distributed, namely the incoming quantum state is isotropic in spin space. The key difference between these two junctions is how Andreev reflection occurs. In the case of an NS junction, the usual Andreev reflection takes place, meaning that a hole with a given spin is reflected as an electron with the opposite spin, leading to the presence of an ABS in such a junction (see Fig. 1(a)). Replacing the s-wave superconductor by a topological one strongly changes the scattering properties, and especially the Andreev reflection. If the Zeeman field is strong enough to enter the topological phase ($V_z \geq \Delta$), the p-wave pairing dominates and the Andreev reflection is spin selective [35], meaning that a hole with spin up is reflected as an electron with the same spin, and a hole with spin down is normally reflected as a hole with spin down (see Fig. 1(b)). More precisely, the presence of a MBS in the latter case leads to a breaking of the spin scattering symmetry. There is a special spin orientation n, called the Majorana's polarization, along which electrons or holes are totally Andreev reflected as holes or electrons, respectively, with spin conservation, whereas particles with the opposite spin are normally reflected. This is the essence of the SESAR effect [35], which leads to a spin-polarized current in this kind of hybrid junction. However, this precise direction cannot be determined from first principles and, in general, incoming particles are not spin oriented along this direction, which leads to formally more complicated scattering, although everything can be understood by decomposing the state onto this spin basis.
In order to evaluate the WTD and use it as a tool to probe the scattering properties of ABS and MBS in hybrid junctions, we now need the expressions of the different outgoing scattering states. In such a junction, the interface plays an important role in the transmission, which gives a finite width to the states [51][52][53]. In Appendix A, we discuss this effect. However, for the sake of simplicity, we focus on the zero-temperature and perfect-Andreev-reflection limit and, following Refs. [35,48], write down the different outgoing quantum states.
Outgoing states for N/S junction. -In this case, the interface acts as a perfect Andreev mirror where all the holes with a given spin are Andreev reflected as electrons with the opposite spin [48]:
$$|\psi_{out}^{ABS}\rangle = \prod_{k=0}^{k_V} c^\dagger_{-k,-n}\, c^\dagger_{-k,n}\, |0\rangle. \qquad (2)$$
Again, this quantum state is isotropic in spin space and simply corresponds to a stream of one-dimensional free electrons with energies between µ s and µ s + eV and two possible spin states (two channels of free fermions).
Outgoing states for N/TS junction. -The presence of the MBS deeply affects the outgoing state. As mentioned before, the key point is that a hole with a spin n is totally reflected as an electron with the same spin whereas a spin -n hole is subjected to perfect specular reflection with the same spin as well. It is therefore simpler to write the outgoing state in the Majorana polarization frame which reads
$$|\psi_{out}^{MBS}\rangle = \prod_{k=0}^{k_V} c^\dagger_{-k,n}\, c_{-k,-n}\, |0\rangle, \qquad (3)$$
where it is now obvious that the outgoing stream of electrons is totally spin-polarized in the n direction. Therefore, if one were able to measure the electronic current along this spin direction, one would get the result of a perfect single quantum channel. On the contrary, if one measures it along a random direction, for instance $\hat z$, one would measure partition noise, simply because the spin operator $\hat S_z$ does not commute with $\hat S_n$. In that sense, such an experiment is very similar to the historic Stern-Gerlach experiment, as we will discuss later. Apart from this remark, the many-body state can be simply obtained from (3) by replacing the $c^\dagger_n$ with the proper linear combination of $c^\dagger_\uparrow$ and $c^\dagger_\downarrow$ in the right basis. This can be done using the scattering matrix obtained in [35], depending on the value of the Zeeman field $V_z$. However, in the simple case where the Zeeman field is large compared to the superconducting gap, the Majorana is fully polarized in the z-direction, which means that n and $\hat z$ coincide. Another simple case is right at the topological transition, when the Zeeman field is just above the gap, where
$$c^\dagger_{-k,\uparrow_n} = \frac{1}{\sqrt{2}}\left(c^\dagger_{-k,\uparrow_{\hat z}} + c^\dagger_{-k,\downarrow_{\hat z}}\right).$$
Finally, it is important to note that in the non-topological case, namely when $V_z < \Delta$, the outgoing state behaves as in the s-wave junction (usual Andreev reflection on an ABS) [35], and across the transition the scattering matrix is not continuous.
Waiting time distribution. -We now turn to the calculation of the WTD. To do so, we need to specify the detection process. A time-resolved single-electron detector is placed far away from the interface and is assumed to be sensitive only to electrons with energy above the superconducting chemical potential $\mu_S$. In addition, the detector can be spin selective or not. Following Refs. [46,48], the WTD is obtained from the Idle Time Probability (ITP), namely the probability of not detecting any electron during a time slot τ. The precise definition depends on the detector capabilities. Without spin filtering it reads
$$\Pi(\tau) = \langle\psi_{out}|\, {:}\,e^{-Q_{\uparrow,E>\mu_S}}\,{:}\;{:}\,e^{-Q_{\downarrow,E>\mu_S}}\,{:}\,|\psi_{out}\rangle, \qquad (4)$$
where $:\cdots:$ stands for normal ordering and
$Q_{\sigma,E>\mu_S} = \int_{x_0}^{x_0+v_F\tau} c^\dagger_\sigma(x)\, c_\sigma(x)\,\Theta(E-\mu_S)\,dx$
is nothing else than the probability of presence of a charge during a time slot τ with $E > \mu_S$ and spin projection σ = ↑/↓ along a given direction (e.g. $\hat x$, $\hat z$, ...). In Appendix B, we discuss in more detail the derivation of Q and Π depending on the applied filtering, in energy and/or spin. In the case of spin-filtered detection (for instance spin up with respect to a given direction), this quantity is
$$\Pi(\tau) = \langle\psi_{out}|\, {:}\,e^{-Q_{\uparrow,E>\mu_S}}\,{:}\,|\psi_{out}\rangle. \qquad (5)$$
In both cases, the WTD is obtained from the second derivative of the ITP with respect to τ, $\mathcal{W}(\tau) = \langle\tau\rangle\, d^2\Pi(\tau)/d\tau^2$, where $\langle\tau\rangle$ is the mean waiting time, given by $1/\langle\tau\rangle = -(d\Pi/d\tau)(\tau=0)$. Eqs. (4) and (5) are evaluated numerically for both many-body scattering states (2) and (3) with the same method as Refs. [46,48]. However, before discussing our results, it is useful to recall several established results on WTDs in quantum coherent conductors. In Refs. [39,46], it was shown that for a single quantum channel under a voltage bias eV (spinless electrons), the scattering quantum state is a train of non-interacting fermions whose WTD is approximately the Wigner surmise
$$\mathcal{W}_{WS}(\tau) = \frac{32}{\pi^2}\,\frac{\tau^2}{\langle\tau\rangle^3}\,\exp\!\left(-\frac{4}{\pi}\frac{\tau^2}{\langle\tau\rangle^2}\right), \qquad (6)$$
with $\langle\tau\rangle = h/eV$ the average waiting time, which means that, due to Pauli's exclusion principle, electrons are separated in time by $\langle\tau\rangle$ on average. An important feature of this WTD is that it vanishes for $\tau \ll \langle\tau\rangle$, which is the hallmark of fermionic statistics. If this stream of electrons is partitioned by a scatterer with an energy-independent transmission coefficient T, the WTD is continuously modified until it reaches an exponential form $T\exp(-T\tau/\langle\tau\rangle)/\langle\tau\rangle$ when $T \ll 1$. This exponential shape is the signature of uncorrelated events, since detected electrons are well separated in time and therefore uncorrelated. In this case, the mean waiting time is $\langle\tau\rangle/T = h/(eVT)$ and therefore the average current is $\frac{e^2}{h}VT$, in agreement with the Landauer formula [54]. Finally, when spin-1/2 electrons are considered, two conducting channels are available and the WTD no longer vanishes for small waiting times. At perfect transmission, it is described by the generalized Wigner-Dyson statistics [46].
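Numerically, the route from ITP to WTD sketched above reduces to two derivatives; a minimal implementation on a sampled Π(τ), here a Poissonian placeholder, is:

```python
import numpy as np

tau = np.linspace(0.0, 6.0, 601)
Pi = np.exp(-tau)                 # placeholder ITP; Poissonian case for checking

dPi  = np.gradient(Pi, tau)       # first derivative
d2Pi = np.gradient(dPi, tau)      # second derivative
mean_tau = -1.0 / dPi[0]          # 1/<tau> = -dPi/dtau at tau = 0
W = mean_tau * d2Pi               # W(tau) = <tau> d^2 Pi / d tau^2

print(mean_tau)                   # -> approximately 1.0, and W recovers exp(-tau)
```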
WTD without spin filtering. -We start with the simplest situation, where the single-electron detector is spin insensitive. In the absence of a MBS, it was shown [48] that the situation reduces to a stream of one-dimensional free electrons with two spin components. Indeed, at perfect Andreev reflection, all the incoming holes are converted into electrons (Andreev mirror) with spin flip and energy between $\mu_S$ and $\mu_S + eV$. The WTD is therefore that of two perfect and independent quantum channels and is described by the generalized Wigner-Dyson distribution [46] depicted in Fig. 2(a). The average waiting time is $h/2eV$ or, in other words, the average current is $2\frac{e^2}{h}V$. On the other hand, in the topological case, the SESAR effect selects only one spin species, reducing the possibilities to a single perfect quantum channel (with spin orientation +n). As a consequence, the WTD boils down to the so-called Wigner surmise [39], also depicted in Fig. 2(a). The average waiting time is h/eV and the average current $\frac{e^2}{h}V$, therefore twice smaller than in the non-topological case. This is in agreement with the common interpretation that a Majorana behaves as "half an electron" [55].
We therefore conclude that the WTD not only reproduces well-known differences in the average current but also exhibits a qualitative mismatch between the two situations. With a MBS, the WTD is exactly zero at τ = 0 because of Pauli's exclusion principle, whereas it is not in the ABS case, since two channels are available [46]. However, this discrepancy must be visible in any statistical measure of the electronic current, like the noise, the third cumulant and the Full Counting Statistics (FCS), in the same way that it is between one- and two-channel standard mesoscopic conductors.
WTD with spin filtering. -We now turn to a richer situation, where we assume the single-electron detector to be spin sensitive along a direction d and therefore to collect only electrons with spin projection $\uparrow_d$. The following discussion is basically equivalent to the interpretation of the famous Stern-Gerlach experiment. The key point is that the quantum state (2) is spin isotropic whereas (3) is spin polarized along the n direction. In particular, all possible observables are totally independent of the detector spin orientation in the ABS case. This is in strong contrast with the MBS case where, for instance, filtering spin along ±n obviously leads to orthogonal results. We illustrate this statement with the WTD, but it is very important to note that it applies to any other observable, such as the spin-resolved average current [35] or noise [4], and the FCS in general.
For an ABS, the single-particle detector, whatever its spin orientation d, filters one spin species, namely $\uparrow_d$. Since the two species are equally populated and independent, the outgoing state (2) reduces to a single perfect quantum channel, and its WTD is therefore the Wigner surmise (see Fig. 2(b)). This WTD is characterized by a single peak centered around $\tau = \langle\tau\rangle$ with broad fluctuations and an exactly zero value at zero, which is the hallmark of the Pauli exclusion principle, as already mentioned. When a MBS is present, the precise shape of the WTD crucially depends on the detector spin orientation. Although quite academic, we can start by setting it to n. In that case, the detector collects every electron coming from the interface and the WTD is also that of a single quantum channel. In this situation both ABS and MBS yield the same spin-resolved WTD. If we now choose d = -n, the detector collects nothing, and if d deviates slightly from -n, only a few electrons are kept and the WTD is expected to be exponential with rate $\frac{eV}{h}P_d$, where
$P_d = |\langle\uparrow_d|\uparrow_n\rangle|^2$ is the overlap.
For arbitrary d, the detector partitions the single quantum channel according to spin. The situation is almost formally equivalent to that of a spinless single quantum channel flowing across a Quantum Point Contact (QPC) with an energy-independent transmission probability [39]. Here this transmission probability is simply given by the overlap $P_d$ between $|\uparrow_d\rangle$ and $|\uparrow_n\rangle$. In the special case where d ⊥ n, the quantum state (3) is a balanced mixture of $|\uparrow_d\rangle$ and $|\downarrow_d\rangle$ and is then filtered exactly like a single quantum channel across a QPC with transmission probability 1/2. The situation can be implemented experimentally either by tuning $V_z$ just above Δ and setting $d = \hat z$, or in the limit $V_z \gg \Delta$, where $n = \hat z$, by filtering spin along $\hat x$ or $\hat y$. This is shown in Fig. 2(b), where we have evaluated Eq. (5) along $\hat z$ right above the topological transition by brute-force numerics (limited to a quite small number of basis states, thirteen here, which explains the small discrepancy) and compared it to the expected result with very good agreement.
At this point, it is important to comment on the experimental feasibility. The substrate, namely the heterojunction, has already been fabricated during the quest for Majorana quasi-particles [21][22][23] and consists of a Rashba nanowire partially in contact with an s-wave superconductor and in the presence of a Zeeman field. The crucial point is to detect the reflected electrons one by one in the normal part. Although still quite challenging, single-electron detection technology is progressing very fast and might become routine in the near future, as reviewed in [45,49]. Otherwise, partial information on waiting times can be extracted from the average current, the shot noise or the second-order coherence function obtained from a Hong-Ou-Mandel experiment [45,56,57].
Conclusion. -We have studied the consequences of the presence or absence of a MBS at the interface between a normal and a superconducting conductor on the electronic WTD. When a single-electron detector is placed far away from the interface and detects electrons above the superconducting chemical potential $\mu_S$ without spin filtering, we observe a clear qualitative distinction between the topological and non-topological situations. In addition, we have shown that the non-topological situation (ABS) is immune to spin filtering, in sharp contrast with the topological one, due to the SESAR effect. This conclusion is valid for the WTD, which makes it a clear fingerprint of a MBS, but is also true for other quantities, like the average current or higher moments of the FCS, which can be easier to measure in actual experiments. Extensions of this work could be the study of the influence of Coulomb repulsion when the superconducting part is not grounded but floating, of poisoning by another Majorana [58,59], and of temperature or disorder effects. * * *
We are grateful to G. Candela, G. Haack and J. Klinovaja for useful discussions and remarks. The research of D. C. was supported by the Swiss NSF and NCCR QSIT.
Appendix A: Finite width of the outgoing states.
-We study here the effect of a finite energy width of the Majorana bound state and show that it does not qualitatively change the waiting time distribution. Due to the finite hopping strength at the interface, the Majorana bound state located at zero energy has a finite width Γ. The main consequence of this is the energy dependence of the Andreev reflection coefficient at the interface which becomes [60]
$$R_A(E) = \left|\frac{\Gamma}{E+i\Gamma}\right|^{2}. \qquad (7)$$
However, we can recover the same states (2) and (3) as in the main text by taking the limit $eV \ll \Gamma$. Under this assumption, we can calculate the ITP, and thus the WTD, when the states have a finite width (see Fig. 3), using the energy-dependent coefficients [46].
In Fig. 3, we can see that the broadening of the states has no dramatic effect on the WTD. For this reason, we do not focus on this effect in the main text.
Appendix B: Derivation of the Idle Time Probability. -In this appendix we explain how to calculate the idle time probability, following the notations of Ref. [46]. It is important to note that the energy range of the detected particles is assumed to be small enough that the dispersion relation of electrons or holes is linear, $E = \hbar v_F k$. In that case, charge measurements over a time window Δt are equivalent to charge measurements over a space window $\Delta x = v_F\Delta t$, thanks to Galilean invariance. The key point in calculating $Q_{\sigma,E}$ is to specify the detection procedure. This includes the possibility of detecting only positive/negative energies, or spin projections up/down with respect to a given quantization axis. In an actual experiment, the detection can be done by connecting the superconductor or the topological superconductor to two quantum dots instead of a normal metal. By doing so, one can filter the energy by applying an external gate to the two dots, or select the spin by using interacting quantum dots with strong repulsion in order to get rid of the spin degeneracy [61,62]. Analytically, these properties can easily be implemented in the definition of Q. This operator can be represented in the basis of the scattering states as
$$Q_{E>\mu_S} = \int\!\!\int t(k)\,t^*(k')\;\frac{e^{i(k-k')v_F\tau}-1}{i(k-k')}\;\frac{dk}{2\pi}\,\frac{dk'}{2\pi} \qquad (8)$$
where the t(k) are the energy-dependent transmission amplitudes of a scattering state and may be chosen in that case as t(k) = 1 if $E(k) > \mu_S$ and t(k) = 0 if $E(k) < \mu_S$.
In order to compute the ITP, the transport window has to be discretized into N energy compartments of size eV/N, with corresponding momentum intervals of size $\kappa = \frac{eV}{N\hbar v_F}$, where $v_F = \hbar k_F/m$ is the Fermi velocity, with m the electron mass. Using this discretization leads to the following matrix elements for $Q_{E>\mu_S}$ in the large N limit:
$$[Q]_{m,n} = \frac{\kappa\, t^*_{\kappa m}\, t_{\kappa n}}{\pi}\; e^{-\frac{i}{2}\kappa(n-m)v_F\tau}\;\frac{\sin[\kappa(n-m)v_F\tau/2]}{\kappa(n-m)}, \qquad (9)$$
with m, n = 1, ..., N. From this definition and when the average is taken over a Slater determinant of free fermions, the ITP can be cast as a determinant of the form [41,46]

Π(τ) = det(1 − Q_τ), (10)

which can be evaluated with a computer. It is then straightforward to extend this detection procedure to a spin-selective one by setting spin-dependent transmission coefficients.
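As noted above, Eq. (10) "can be evaluated with a computer". The following minimal Python sketch (our illustration, not code from the paper) builds the Q matrix of Eq. (9) for a single perfectly transmitting channel and extracts the WTD from the second derivative of the ITP; the number of compartments N, the window eV and the grid of τ values are illustrative choices, and units ℏ = v_F = 1 are assumed.

```python
import numpy as np

# Minimal sketch: evaluate the idle time probability Pi(tau) = det(1 - Q_tau)
# of Eq. (10) for a single perfectly transmitting channel, using the matrix
# elements of Eq. (9). Units are chosen so that hbar = v_F = 1; N, eV and
# the tau grid are illustrative assumptions.

N = 200                      # number of energy compartments
eV = 1.0                     # transport window (sets the mean waiting time)
kappa = eV / N               # momentum compartment size (hbar = v_F = 1)
t = np.ones(N)               # transmission amplitudes, t(k) = 1 above mu_S

def itp(tau):
    """Idle time probability Pi(tau) = det(1 - Q_tau), Eqs. (9)-(10)."""
    n = np.arange(1, N + 1)
    dn = n[None, :] - n[:, None]              # (n - m)
    phase = np.exp(-0.5j * kappa * dn * tau)
    # np.sinc handles the diagonal limit sin(x)/x -> 1 automatically
    Q = (kappa / np.pi) * np.outer(np.conj(t), t) * phase \
        * (tau / 2.0) * np.sinc(kappa * dn * tau / (2.0 * np.pi))
    return np.linalg.det(np.eye(N) - Q).real

taus = np.linspace(0.0, 40.0, 400)
Pi = np.array([itp(tau) for tau in taus])

# WTD from the second derivative, W(tau) proportional to d^2 Pi / d tau^2,
# normalized here numerically to unit area.
dtau = taus[1] - taus[0]
W = np.gradient(np.gradient(Pi, dtau), dtau)
W /= max(np.sum(W) * dtau, 1e-12)
```

A quick consistency check: the trace of Q equals eVτ/2π, i.e. the mean number of detected particles in the window τ, as it should.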
Fig. 1: (color online) Schematic picture of the Andreev reflection processes in the two different junctions. a) Normal-(trivial) superconducting junction: a hole with spin ↑n (↓n) is converted into an electron with opposite spin ↓n (↑n), and b) Normal-(topological) superconducting junction: a hole with spin ↑n is converted into an electron with the same spin ↑n, and a hole with spin ↓n is reflected as a hole, also with the same spin ↓n. In the first case n denotes any possible direction, whereas in the second case a Majorana bound state appears at the interface and sets a special spin direction n for scattering (see text). In both cases a bias voltage eV is imposed and brings the superconducting chemical potential µ_S above the Fermi energy E_F of the normal metal, with the restriction that eV ≪ ∆, the superconducting gap.

Fig. 2: (color online) WTDs versus τ/⟨τ⟩ for electrons flowing out of the interface without spin filtering (a) and with spin filtering orthogonal to the Majorana polarization (b). The solid gray line (resp. red line) corresponds to an NS junction without (resp. with) a MBS (see text) at perfect Andreev reflection. The dashed black line represents the WTD of a single-channel normal conductor with transmission probability one (a) and one half (b) for comparison [39].

Fig. 3: (color online) WTDs versus τ/⟨τ⟩ for electrons flowing out of the interface of a N/S (gray line) and a N/TS (red line) junction. The dashed and solid lines correspond to two different widths of the states, as mentioned in the legend.
01768663 | en | ["spi.meca.mefl"] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01768663/file/JNNFM_two_particle_PREPRINT.pdf | Bloen Metzger
Guillaume Ovarlez
Sarah Hormozi
email: [email protected]
Mohammadhossein Firouznia
Keywords: Yield stress materials, PIV visualization, Low-Reynolds-number flows, Simple-shear flow, Elastoviscoplastic materials, Noncolloidal yield stress suspensions
The interaction of two spherical particles in simple-shear flows of yield stress fluids
Introduction
The flows of non-Newtonian slurries, often suspensions of noncolloidal particles in yield stress fluids, are ubiquitous in many natural phenomena (e.g. flows of slurries, debris and lava) and industrial processes (e.g. waste disposal, concrete, drilling muds and cuttings transport, food processing). Studying the rheological and flow behaviors of non-Newtonian slurries is therefore of high interest. The bulk rheology and macroscopic properties of noncolloidal suspensions are related to the underlying microstructure, i.e., the arrangement of the particles. Therefore, investigating the interactions of particles immersed in viscous fluids is key to understanding the microstructure, and consequently, to refining the governing constitutive laws of noncolloidal suspensions. Here, we study experimentally the interaction of two particles in shear flows of yield stress fluids.
There exists an extensive body of research on hydrodynamic interactions of two particles in shear flows of Newtonian fluids. One of the most influential studies on this subject is performed by Batchelor and Green [START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF] who then used the knowledge of two particle trajectories and stresslets to scale up the results and provide a closure for the bulk shear stress in a dilute noncolloidal suspension to the second order of solid volume fraction, φ [START_REF] Batchelor | The determination of the bulk stress in a suspension of spherical particles to order c 2[END_REF]. Moreover, they showed that due to the fore-aft symmetry of the particle trajectories, Stokesian noncolloidal suspensions do not exhibit any normal stress difference.
The work of Batchelor and Green was followed by subsequent attempts [START_REF] Jeffrey | Calculation of the resistance and mobility functions for two unequal rigid spheres in low-reynolds-number flow[END_REF][START_REF] Kim | The resistance and mobility functions of two equal spheres in low-reynolds-number flow[END_REF][START_REF] Jeffrey | The calculation of the low reynolds number resistance functions for two unequal spheres[END_REF][START_REF] Kim | Microhydrodynamics: principles and selected applications[END_REF] to develop accurate functions describing the hydrodynamic interactions between two particles, which built a foundation for further analytical studies [START_REF] Batchelor | The effect of brownian motion on the bulk stress in a suspension of spherical particles[END_REF][START_REF] Brady | Microstructure of strongly sheared suspensions and its impact on rheology and diffusion[END_REF][START_REF] Zarraga | Normal stress and diffusion in a dilute suspension of hard spheres undergoing simple shear[END_REF] and powerful simulation methods such as Stokesian Dynamics [START_REF] Brady | Stokesian dynamics[END_REF]. A large body of theoretical and numerical studies has been done to solve the relative motion of two spherical particles in order to obtain the quantities required for the calculation of the bulk parameters, such as mean stress and viscosity in suspensions with a wide range of solid fractions (dilute to semi-dilute) [START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF][START_REF] Brenner | On the stokes resistance of multiparticle systems in a linear shear field[END_REF][START_REF] Wakiya | Particle motions in sheared suspensions xxi: Interactions of rigid spheres (theoretical)[END_REF][START_REF] Lin | Slow motion of two spheres in a shear field[END_REF][START_REF] Guazzelli | A physical introduction to suspension dynamics[END_REF].
The Stokes regime without any irreversible forces leads to symmetric particle trajectories, and consequently, a symmetric Pair Distribution Function (PDF), i.e., the probability of finding a particle at a certain position in space with respect to a reference particle. These result in a Newtonian bulk behavior without any development of normal stress differences in shear flows. However, even in Stokesian suspensions the PDF is not symmetric [START_REF] Blanc | Experimental signature of the pair trajectories of rough spheres in the shear-induced microstructure in noncolloidal suspensions[END_REF][START_REF] Blanc | Microstructure in sheared non-brownian concentrated suspensions[END_REF][START_REF] Brady | Microstructure of strongly sheared suspensions and its impact on rheology and diffusion[END_REF][START_REF] Parsi | Fore-and-aft asymmetry in a concentrated suspension of solid spheres[END_REF][START_REF] Gao | Direct investigation of anisotropic suspension structure in pressure-driven flow[END_REF], and the loss of symmetry can be related to contact, due to roughness [START_REF] Blanc | Kinetics of flowing dispersions. 9. doublets of rigid spheres (experimental)[END_REF][START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF][START_REF] Pham | Particle dispersion in sheared suspensions: Crucial role of solid-solid contacts[END_REF][START_REF] Rampall | The influence of surface roughness on the particle-pair distribution function of dilute suspensions of noncolloidal spheres in simple shear flow[END_REF], or to other irreversible surface forces (e.g., a repulsive force leads to an asymmetric PDF in a similar fashion to a finite amount of Brownian motion [START_REF] Brady | Microstructure of strongly sheared suspensions and its impact on rheology and diffusion[END_REF]).
The microstructure affects the macroscopic properties of noncolloidal suspensions leading to non-Newtonian effects (i.e., normal stress differences) and phenomena such as shear induced migration of particles [START_REF] Morris | A review of microstructure in concentrated suspensions and its implications for rheology and bulk flow[END_REF][START_REF] Singh | Normal stresses and microstructure in bounded sheared suspensions via stokesian dynamics simulations[END_REF][START_REF] Sierou | Rheology and microstructure in concentrated noncolloidal suspensions[END_REF][START_REF] Stickel | Fluid mechanics and rheology of dense suspensions[END_REF]. Thus, the development of accurate constitutive equations requires considering the connection between the microstructure and macroscopic properties either explicitly [START_REF] Phan-Thien | A new constitutive model for monodispersed suspensions of spheres at high concentrations[END_REF][START_REF] Stickel | A constitutive model for microstructure and total stress in particulate suspensions[END_REF][START_REF] Stickel | Application of a constitutive model for particulate suspensions: Time-dependent viscometric flows[END_REF] or implicitly through the particle phase stress [START_REF] Miller | Suspension flow modeling for general geometries[END_REF][START_REF] Morris | Curvilinear flows of noncolloidal suspensions: The role of normal stresses[END_REF][START_REF] Morris | Pressure-driven flow of a suspension: Buoyancy effects[END_REF][START_REF] Morris | A review of microstructure in concentrated suspensions and its implications for rheology and bulk flow[END_REF][START_REF] Nott | Pressure-driven flow of suspensions: simulation and theory[END_REF].
A yield stress fluid deforms and flows when it is subjected to a shear stress larger than its yield stress. In ideal yield stress models, such as the Bingham or Herschel-Bulkley models [START_REF] Huilgol | Fluid mechanics of viscoplasticity[END_REF], the state of stress is undetermined when the shear stress is below the yield stress and the shear rate vanishes. In the absence of inertia, the solutions to flows of ideal yield stress fluids have the following features: (i) uniqueness (ii) nonlinearity of the equations (iii) symmetries of the domain geometry, coupled methodologically with reversibility and reflection of solutions [START_REF] Putz | Creeping flow around particles in a bingham fluid[END_REF]. Therefore, flows around obstacles, such as spheres, should lead to symmetric unyielded regions and to symmetric flow lines in the yielded regions, as observed in simulations [START_REF] Beris | Creeping motion of a sphere through a bingham plastic[END_REF][START_REF] Liu | Convergence of a regularization method for creeping flow of a bingham material about a rigid sphere[END_REF][START_REF] Blackery | Creeping motion of a sphere in tubes filled with a bingham plastic material[END_REF][START_REF] Beaulne | Creeping motion of a sphere in tubes filled with herschel-bulkley fluids[END_REF][START_REF] Deglo De Besses | Sphere drag in a viscoplastic fluid[END_REF].
However, recent studies report on phenomena such as loss of fore-aft symmetry under creeping condition and formation of negative wake behind particles, which cannot be explained with the assumption of ideal yield stress fluid [START_REF] Putz | Settling of an isolated spherical particle in a yield stress shear thinning fluid[END_REF][START_REF] Holenberg | Ptv and piv study of the motion of viscous drops in yield stress material[END_REF]. While these behaviors have been attributed to the thixotropy of the material previously [START_REF] Gueslin | Flow induced by a sphere settling in an aging yield-stress fluid[END_REF], recent simulations show similar behaviors for nonthixtropic materials when elastic effects are considered [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF][START_REF] Fraggedakis | Yielding the yield stress analysis: A thorough comparison of recently proposed elasto-visco-plastic (evp) fluid models[END_REF]. Therefore, elastoviscoplastic (EVP) models are proposed which consider the contribution of elastic, plastic and viscous effects simultaneously in order to analyze the material behavior more accurately [START_REF] Saramito | A new constitutive equation for elastoviscoplastic fluid flows[END_REF][START_REF] Saramito | A new elastoviscoplastic model based on the herschel-bulkley viscoplastic model[END_REF][START_REF] Dimitriou | Describing and prescribing the constitutive response of yield stress fluids using large amplitude oscillatory shear stress (laostress)[END_REF]. The field of inclusions (i.e. solid particles, fluid droplets and air bubbles) in yield stress fluids is not as advanced as that of Newtonian fluids. The main challenges are due to the nonlinearity of the constitutive laws of yield stress fluids and resolving the structure of unyielded regions, where the stress is below the yield stress (for more details see [START_REF] Hormozi | Visco-plastic sculpting[END_REF]). To locate the yield surfaces that separate unyielded from yielded regions, two basic computational methods are used: regularization and the Augmented Lagrangian (AL) approach [START_REF] Liu | Convergence of a regularization method for creeping flow of a bingham material about a rigid sphere[END_REF]. 
On the experimental front, techniques such as PIV [START_REF] Gueslin | Flow induced by a sphere settling in an aging yield-stress fluid[END_REF][START_REF] Putz | Settling of an isolated spherical particle in a yield stress shear thinning fluid[END_REF][START_REF] Gueslin | Sphere settling in an aging yield stress fluid: link between the induced flows and the rheological behavior[END_REF][START_REF] Holenberg | Particle tracking velocimetry and particle image velocimetry study of the slow motion of rough and smooth solid spheres in a yield-stress fluid[END_REF][START_REF] Holenberg | Ptv and piv study of the motion of viscous drops in yield stress material[END_REF][START_REF] Ahonguio | Influence of surface properties on the flow of a yield stress fluid around spheres[END_REF], PTV [START_REF] Holenberg | Particle tracking velocimetry and particle image velocimetry study of the slow motion of rough and smooth solid spheres in a yield-stress fluid[END_REF][START_REF] Holenberg | Ptv and piv study of the motion of viscous drops in yield stress material[END_REF], Nuclear Magnetic Resonance (NMR) [START_REF] Van Dinther | Suspension flow in microfluidic devicesa review of experimental techniques focussing on concentration and velocity gradients[END_REF][START_REF] Ovarlez | Flows of suspensions of particles in yield stress fluids[END_REF], X-ray [START_REF] Heindel | A review of x-ray flow visualization with applications to multiphase flows[END_REF][START_REF] Gholami | Timeresolved 2d concentration maps in flowing suspensions using x-ray[END_REF], Magnetic Resonance Imaging (MRI) [START_REF] Powell | Experimental techniques for multiphase flows[END_REF] are used to study the flow field inside the yielded region as well as determining the yield surface.
Generally speaking, studies of single and multiple inclusions (i.e., rigid particles and deformable bubbles and droplets) in yield stress fluids are abundant.
These studies mainly focus on resolving important physical features when dealing with yield stress suspending fluids, e.g. buoyant inclusions can be held rigidly in suspensions [START_REF] Bhavaraju | Bubble motion and mass transfer in non-newtonian fluids: Part i. single bubble in power law and bingham fluids[END_REF][START_REF] Potapov | Motion and deformation of drops in bingham fluid[END_REF][START_REF] Tsamopoulos | Steady bubble rise and deformation in newtonian and viscoplastic fluids and conditions for bubble entrapment[END_REF][START_REF] Singh | Interacting two-dimensional bubbles and droplets in a yield-stress fluid[END_REF][START_REF] Dimakopoulos | Steady bubble rise in herschel-bulkley fluids and comparison of predictions via the augmented lagrangian method with those via the papanastasiou model[END_REF][START_REF] Lavrenteva | Motion of viscous drops in tubes filled with yield stress fluid[END_REF][START_REF] Holenberg | Interaction of viscous drops in a yield stress material[END_REF][START_REF] Maleki | Macro-size drop encapsulation[END_REF][START_REF] Chaparian | Yield limit analysis of particle motion in a yield-stress fluid[END_REF]; multiple inclusions appear not to influence each other beyond a certain proximity range [START_REF] Singh | Interacting two-dimensional bubbles and droplets in a yield-stress fluid[END_REF]; flows may stop in finite time [START_REF] Chaparian | Yield limit analysis of particle motion in a yield-stress fluid[END_REF]; etc. Other studies exist which address the drag closures, the shape of yielded region, the role of slip at the particle surface and its effect on the hydrodynamic interactions [START_REF] Deglo De Besses | Sphere drag in a viscoplastic fluid[END_REF][START_REF] Jossic | Drag and stability of objects in a yield stress fluid[END_REF][START_REF] Holenberg | Ptv and piv study of the motion of viscous drops in yield stress material[END_REF][START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF].
Progressing beyond a single sphere and tackling the dynamics of multiple particles in a Lagrangian fashion is a much more difficult task. Therefore, another alternative is to address yield stress suspensions from a continuum-level closure perspective. The fundamental objective is then to characterize the rheological properties as a function of the solid volume fraction (φ) and properties of the suspending yield stress fluid. Recent studies show that adding particles to a yield-stress fluid usually induces an enhancement of both yield stress and effective viscosity while leaving the power-law index intact [START_REF] Chateau | Homogenization approach to the behavior of suspensions of noncolloidal particles in yield stress fluids[END_REF][START_REF] Mahaut | Yield stress and elastic modulus of suspensions of noncolloidal particles in yield stress fluids[END_REF][START_REF] Ovarlez | Shear-induced sedimentation in yield stress fluids[END_REF][START_REF] Ovarlez | A physical model for the prediction of lateral stress exerted by self-compacting concrete on formwork[END_REF][START_REF] Vu | Macroscopic behavior of bidisperse suspensions of noncolloidal particles in yield stress fluids[END_REF][START_REF] Dagois-Bohy | Rheology of dense suspensions of non-colloidal spheres in yield-stress fluids[END_REF][START_REF] Ovarlez | Flows of suspensions of particles in yield stress fluids[END_REF].
Unlike the case of settling of particles in yield stress fluids, no attention has been paid to the study of pair interactions of particles in simple flows of yield stress fluids. Our knowledge of this fundamental problem is essential to form a basis for further studies of suspensions of non-Brownian particles in yield stress fluids. To this end, we present an experimental study on the interaction of two small freely-moving spheres in a Couette flow of a yield stress fluid. Our main objective is to understand how the nonlinearity of the suspending fluid affects the particle trajectories, and consequently, the bulk rheology. This paper is organized as follows. Section 2 describes the experimental methods, materials and particles used in this study, along with the rheology of our test fluids. In Section 3, we present our results on establishing a linear shear flow in the absence of particles, the flow around one particle, and the interaction of particle pairs in different fluids including Newtonian, yield stress and shear thinning. Finally, we discuss our conclusions and suggestions for future work in Section 4.
Experimental methods and materials
In this section we describe the methodology and materials used in this study.
Experimental set-up
The schematic of the experimental set-up is shown in Fig. 1. It is designed to produce a uniform shear flow within the fluid enclosed by a transparent belt. The belt is tightened between two shafts, one of which is coupled to a precision rotation stage (M-061.PD from PI Piezo-Nano Positioning) with high angular resolution (3 × 10⁻⁵ rad), while the other shaft rotates freely. The rotation generated by the precision rotation stage drives the belt around the shafts and hence applies shear to the fluid maintained in between. In order to obtain maximum optical clarity along with the mechanical strength to withstand the tension, Mylar sheets (polyethylene terephthalate films from Goodfellow Corporation) of 0.25 mm thickness are used to make the belt. The set-up is designed to reach large enough strains (γ ≥ 45) to ensure steady-state conditions. The design is inspired by Rampall et al. [START_REF] Rampall | The influence of surface roughness on the particle-pair distribution function of dilute suspensions of noncolloidal spheres in simple shear flow[END_REF] and the Couette apparatus is the same as that used by Metzger and Butler in [START_REF] Metzger | Clouds of particles in a periodic shear flow[END_REF].
The flow field is visualized in the plane of shear (xy plane), located in the mid-plane between the free surface and the bottom of the cell. A fraction of the whole flow domain is illuminated by a laser sheet, which is formed by a line generator mounted on a diode laser (2 W, 532 nm). The fluid is seeded homogeneously with fluorescently labeled tracer particles, which reflect the incident light (see Sec. 2.2). Tracer particles should be small enough to follow the flow field without any disturbance and large enough to reflect enough light for image recording. The thickness of the laser sheet is tuned to be around its minimum in the observation window with a plano-convex cylindrical lens. Images are recorded from the top view via a high-quality magnification lens (Sigma APO-Macro-180 mm-F3.5-DG) mounted on a high-resolution digital camera (Basler Ace acA2000-165um, CMOS sensor, 2048 × 1080 pixels, 8 bit). The reflected light is filtered with a high-pass filter (590 nm), through which the direct reflection (from the particle surface) is eliminated. A transparent window made of acrylic is carefully placed on the free surface of the fluid in order to eliminate the deformation of the fluid surface; by this, the quality of the images is improved significantly. The imaging system is illustrated schematically in Fig. 1.
Particles
Particles used in this study are transparent and made of PMMA (polymethyl methacrylate, Engineering Laboratories Inc.) with a radius of a = 1 mm, a density of 1.188 g/cm³ and a refractive index of 1.492 at 20 °C. They are dyed with Rhodamine 6G (Sigma-Aldrich), which enables us to perform PTV and PIV at the same time. In order to dye the particles, the procedure proposed by Metzger and Butler in [START_REF] Metzger | Clouds of particles in a periodic shear flow[END_REF] is followed; the PMMA particles are soaked for 30 minutes in a mixture of 50 % wt. water and 50 % wt. ethanol with a small amount of Rhodamine 6G maintained at 40 °C. They are rinsed with an excess amount of water afterwards to ensure there is no extra fluorescent dye on their surface and that the coating is stable.
The surface of particles from the same batch has been previously observed by Phong [START_REF] Pham | Origin of shear-induced diffusion in particulate suspensions: Crucial role of solid contacts between particles[END_REF][START_REF] Pham | Particle dispersion in sheared suspensions: Crucial role of solid-solid contacts[END_REF] and Souzy [START_REF] Souzy | Mélange dans les suspensions de particules cisaillées à bas nombre de reynolds[END_REF] using Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM). The root mean square and peak values of the roughness are measured to be 0.064 ± 0.03 µm and 0.6 ± 0.3 µm, respectively, after investigating an area of 400 µm² [START_REF] Pham | Particle dispersion in sheared suspensions: Crucial role of solid-solid contacts[END_REF]. Moreover, in order to perform PIV, the fluid is seeded with melamine resin particles dyed with Rhodamine B, with a diameter of 3.87 µm, provided by Microparticle GmbH.
Fluids
In this study, three different fluids have been used including Newtonian, yield stress and shear thinning fluid; each of the fluids is described in the following sections:
Newtonian fluid
The Newtonian fluid is designed to have its density and refractive index (RI) matched with those of the PMMA particles. Any RI mismatch could lead to refraction of the laser light when it passes the particle-fluid interface, which decreases the quality of the images and makes the post-processing very difficult or even impossible. However, we only have one or two particles in our experiments and therefore a slight refractive index mismatch does not result in a poor-quality image. The fluid consists of 76.20% wt. Triton X-100, 14.35% wt. zinc chloride, 9.31% wt. water and 0.14% wt. hydrochloric acid [START_REF] Souzy | Stretching and mixing in sheared particulate suspensions[END_REF], with a viscosity of 4.64 Pa·sec and a refractive index of 1.491 ± 10⁻³ at room temperature. A small amount of hydrochloric acid prevents the formation of zinc hypochlorite and thus enhances the transparency of the solution. Water is first added to the zinc chloride gradually and the solution is stirred until all solid particles dissolve in the water. Since the process is exothermic, we let the solution cool down to room temperature. After adding the hydrochloric acid to the cooled solution, Triton X-100 is added and mixed until the final solution is homogeneous.
Yield stress fluid
Here we limit our study to non-thixotropic yield-stress materials with identical static and dynamic yield-stress independent of the flow history [START_REF] Balmforth | Yielding to stress: recent developments in viscoplastic fluid mechanics[END_REF][START_REF] Ovarlez | On the existence of a simple yield stress fluid behavior[END_REF].
To this end, we chose Carbopol 980, which is a cross-linked polyacrylic acid with high molecular weight, widely used in industry as a thickening agent. Most of the experimental works studying the flow characteristics of simple yield-stress fluids utilize Carbopol, since it is highly transparent and its thixotropy can be neglected. Carbopol 980 is available in the form of an anhydrous solid powder with micrometer-sized grains. When mixed with water, the polymer chains hydrate, uncoil and swell, forming an acidic solution with pH ≈ 3-4.
When neutralized with a suitable basic agent such as sodium hydroxide, the microgels swell up to 1000 times their initial volume (a 10 times larger radius) and jam (depending on the concentration), forming a structure which exhibits yield stress and elastic behavior [START_REF] Gutowski | Scaling and mesostructure of carbopol dispersions[END_REF][START_REF] Lee | Investigating the microstructure of a yield-stress fluid by light scattering[END_REF]. The rheological properties of Carbopol gels depend on both concentration and pH. At intermediate concentrations, both the yield stress and the elastic modulus increase with pH until they reach their peak values around the neutral point, where they are least sensitive to pH. A comprehensive study of the microstructure and properties of Carbopol gel is provided by Piau in [START_REF] Piau | Carbopol gels: Elastoviscoplastic and slippery glasses made of individual swollen sponges: Meso-and macroscopic properties, constitutive equations and scaling laws[END_REF].
In order to make a Carbopol gel with a density matched with that of the PMMA particles mentioned in Sec. 2.2, first, a solution of 27.83% wt. deionized water and 72.17% wt. glycerol (provided by ChemWorld) is prepared, which has the same density as the PMMA particles. Then, depending on the concentration needed for the experiment (which varies in the range of 0.07-0.2 % wt. in this study), the corresponding amount of Carbopol 980 (provided by Lubrizol Corporation) is added to the solution while it is being mixed by a mixer. The dispersion is left to mix for hours until all Carbopol particles hydrate and the dispersion is homogeneous. A small amount of sodium hydroxide (provided by Sigma-Aldrich) is then added in order to neutralize the dispersion. It is suggested to add all of the neutralizer at once, or at least in a short amount of time, since as the pH increases the viscosity increases drastically, which would increase the mixing time. The solution becomes more transparent as it reaches neutral pH. The refractive index of the Carbopol gels used in this study varies in the range of 1.3705 ± 10⁻³. By investigating the rheological properties of the gel at different pHs, we found pH 7.4 to be a stable point with the highest yield stress and elastic modulus. The solution is then covered and mixed for more than eight hours.
The final solution is transparent, homogeneous with no visible aggregates. Also, the rheometry results of all samples taken from different parts of the solution batch collapse. The compositions of all Carbopol gels used in this study are described in Table 1.
Shear thinning fluid
In order to investigate the effects of yield stress and shear thinning individually, it is necessary to study the problem with a shear thinning fluid with no yield stress. Therefore, we chose Hydroxypropyl Guar, which is a derivative of guar gum, a polysaccharide made from the seeds of guar beans. Jaguar HP-105 (provided by Solvay Inc.) is used in this study; it is widely used in cosmetics and personal care products [START_REF] Inc | Jaguar, product guide for personal care solutions[END_REF]. It is transparent when mixed with water and exhibits negligible yield stress at low to moderate concentrations. The refractive index of the guar gum solutions used in this study varies in the range of 1.368 ± 5 × 10⁻³.
In order to make a solution of Jaguar HP-105 with the same density as the particles, we follow the same scheme mentioned earlier for the Carbopol gel in Sec. 2.3.2. First, a solution of 27.83% wt. deionized water and 72.17% wt. glycerol (provided by ChemWorld) is prepared. While it is being mixed by a mixer, depending on the desired concentration (which varies from 0.3-0.6% wt. in this study), the corresponding amount of Jaguar HP-105 is added gradually to the solution. The dispersion is covered and mixed for 24 hours until a homogeneous solution is achieved. Homogeneity is tested by comparing rheometry results performed on samples taken from different spots in the container. The compositions of the guar gum solutions used in this study are described in Table 1.
Rheometry
Unlike Newtonian fluids, the effective viscosity of non-Newtonian fluids depends on the shear rate and the flow history. Here, we explain the rheological tests performed to characterize the non-Newtonian behaviors. For each test, the procedure is described, followed by the results and their interpretation. All measurements shown in this section are carried out using serrated parallel plates with a stress-controlled DHR-3 rheometer (provided by TA Instruments) on samples of the Carbopol gels and guar gum solutions referred to as "YS1-2" and "ST", respectively. The rheological properties of all test fluids used in this study are described in Table 1.
A logarithmic shear rate ramp with γ̇ ∈ [0.001, 10] sec⁻¹ is applied on samples of the test fluids for a duration of 105 sec in order to find the relation between shear rate and shear stress, τ = f(γ̇) (see Fig. 2). During the increasing shear ramp, the material is sheared from rest. The behavior of the yield stress material is hence similar to a Hookean solid until the stress reaches the yield stress. Beyond the yield stress, the material starts to flow like a shear thinning liquid. On the contrary, during the decreasing shear ramp, the yield stress material is already in flow condition and the stress asymptotes to the yield stress at low shear rates (see Fig. 2a). The values of the yield stress during the increasing and decreasing ramps are identical. This is the typical behavior of non-thixotropic yield-stress materials (more information can be found in [START_REF] Uhlherr | The shear-induced solid-liquid transition in yield stress materials with chemically different structures[END_REF][START_REF] Coussot | Rheometry of pastes, suspensions, and granular materials: applications in industry and environment[END_REF]). The measurements of the increasing and decreasing ramps overlap beyond the yield stress and show no sign of hysteresis. The rheological behavior of the Carbopol gel is well described by the Herschel-Bulkley model (Eq. 1), as shown in Fig. 2, which presents the increasing ramps, the decreasing ramps and the corresponding Herschel-Bulkley fits described in Table 1 (the inset of Fig. 2b presents the variation of viscosity versus shear rate for ST):
τ = τ_y + K γ̇^n (1)
where τ_y is the yield stress, K is the consistency and n is the power-law index.
These values are calculated for YS1-2 in the range γ̇ ∈ [0.01, 10] sec⁻¹ and reported in Table 1.
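For reference, the Herschel-Bulkley parameters in Table 1 can be obtained from a measured flow curve with a standard nonlinear least-squares fit of Eq. (1). The short Python sketch below illustrates the procedure on hypothetical synthetic data; the placeholder values of τ_y, K and n are not those of YS1-2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch (not the authors' code): fit the Herschel-Bulkley model
# of Eq. (1), tau = tau_y + K * gamma_dot**n, to a measured flow curve.
# The arrays below are hypothetical placeholders standing in for the
# decreasing-ramp rheometry of Fig. 2a.

gamma_dot = np.logspace(-2, 1, 30)                        # shear rate [1/s]
tau_meas = 5.0 + 3.0 * gamma_dot**0.45 \
           + np.random.normal(0.0, 0.05, gamma_dot.size)  # stress [Pa]

def herschel_bulkley(gd, tau_y, K, n):
    """Herschel-Bulkley flow curve, Eq. (1)."""
    return tau_y + K * gd**n

# Bounds keep tau_y and K positive and n in the shear-thinning range.
popt, pcov = curve_fit(herschel_bulkley, gamma_dot, tau_meas,
                       p0=(1.0, 1.0, 0.5),
                       bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, 1.0]))
tau_y, K, n = popt
print(f"tau_y = {tau_y:.2f} Pa, K = {K:.2f} Pa.s^n, n = {n:.2f}")
```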
Fig. 2b shows the rheology of the guar gum solution, ST, in the plane of shear stress versus shear rate. The Carreau-Yasuda model has generally been adopted to explain the rheological behavior of guar gum solutions [START_REF] Risica | Rheological properties of guar and its methyl, hydroxypropyl and hydroxypropylmethyl derivatives in semidilute and concentrated aqueous solutions[END_REF][START_REF] Szopinski | Structure-property relationships of carboxymethyl hydroxypropyl guar gum in water and a hyperentanglement parameter[END_REF]. The inset of Fig. 2b shows the viscosity of the guar gum solution versus shear rate following the Carreau-Yasuda model. We see that the viscosity presents a plateau, η₀ ≈ 12.2 Pa·sec, in the limit of small shear rates, γ̇ < 0.1 sec⁻¹. At γ̇ > 0.1 sec⁻¹ the viscosity decreases with shear rate until it reaches another plateau at higher shear rates. Here, we adopt a power-law model, which properly describes the rheological behavior of the material in the range of shear rates of our experiments. The values of the consistency and the power-law index are reported in Table 1.
Practical yield-stress fluids exhibit viscoelastic behavior as well. Therefore, it is expected that the shear history has an impact on the behavior of the material. We have adopted two experimental procedures to evaluate the effect of shear history. In the first procedure, we shear the material, ensuring that the strain is sufficient to break the micro-structure of the gel and reach a steady state.
Then, we rest the material for one minute (zero stress) and apply the shear in the same direction as the pre-shear (hereafter called positive pre-shear). In the second procedure, we reverse the direction of the applied shear after imposing a pre-shear on the material (hereafter called negative pre-shear) and a rest period. Fig. 3a shows that under a constant applied shear stress the yield stress material reaches its steady state after a larger strain when negative preshear is applied. However, the shear history does not affect the behavior of the guar gum solution as shown in Fig. 3b. These procedures helped us design the experimental protocol for our Couette flow experiments (see Sec. 3.3.2).
One can conclude that a pre-shear in the same direction as the shear imposed subsequently in the experiments is appropriate for obtaining a behavior close to ideal visco-plastic behavior.
In order to further characterize the viscoelasticity of the test fluids, the shear storage modulus, G′, and the shear loss modulus, G″ (representing the elastic and viscous behavior of the material, respectively), are measured during oscillatory tests. The dynamic moduli of YS1 and ST are shown in Fig. 4 as a function of the strain amplitude, γ₀ ∈ [10⁻¹, 10³] %, at constant frequency, ω = 1 rad·sec⁻¹. We observe that the behavior is linear up to γ₀ ≈ 1% in YS1, while it remains linear at larger strain amplitudes, γ₀ ≈ 10%, in ST. Elastic effects are dominant (i.e. G′ > G″) at strain amplitudes lower than γ₀ ≈ 100% in the yield stress material, YS1 (see Fig. 4a). At γ₀ > 100%, the shear loss modulus becomes larger than the shear storage modulus in YS1, indicating that the viscous effects take over. On the other hand, elastic and viscous effects are equally important in ST in the linear viscoelastic regime, as the shear loss and shear storage moduli have identical values below γ₀ ≈ 100% (see Fig. 4b). At larger strain amplitudes, however, the shear loss modulus becomes larger, implying larger viscous effects. The values of G′ and G″ reported in Table 1 are measured at ω = 1 rad·sec⁻¹, γ₀ = 0.25 %. In Fig. 5 the variation of the dynamic moduli is given as a function of frequency for the Carbopol gel, YS1, and the guar gum solution, ST. Different curves correspond to different strain amplitudes (γ₀ = 1, 5, 20, 50, 100 %).
Post-processing
The PMMA particles are tracked during their motion via Particle Tracking Velocimetry (PTV) to extract their trajectories. Images are recorded at strain increments of γ_rec ≲ 0.6% to ensure high temporal resolution. In each image, the center and radius of each particle are detected via the Circular Hough Transform [START_REF] Peng | Detect circles with various radii in grayscale image via hough transform[END_REF][START_REF] Duda | Use of the hough transformation to detect lines and curves in pictures[END_REF]. Due to the small strain difference between two consecutive images, and consequently the small displacement of the PMMA particles, the same particles can be identified and labeled in both images. Applying this methodology to all images, we obtain the particle trajectories.
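A minimal Python analogue of this detection-and-linking step is sketched below (our illustration; the cited Circular Hough Transform routine is MATLAB-based). Circle detection uses a Circular Hough Transform, and frame-to-frame linking relies on the small inter-frame displacement; the pixel radius range is an illustrative assumption.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

# Minimal sketch: detect the two PMMA particles in each frame with a
# Circular Hough Transform and link detections between consecutive
# frames by nearest-neighbor matching. 'frames' is assumed to be a list
# of 2D grayscale images.

def detect_particles(image, radii=np.arange(38, 46)):
    edges = canny(image, sigma=2.0)
    hough = hough_circle(edges, radii)
    # keep the two strongest circles (two particles in the field of view)
    _, cx, cy, r = hough_circle_peaks(hough, radii, total_num_peaks=2)
    return np.column_stack([cx, cy, r]).astype(float)

def link(prev, curr):
    """Label particles in 'curr' by the closest detection in 'prev'."""
    order = [int(np.argmin(np.hypot(curr[:, 0] - p[0],
                                    curr[:, 1] - p[1]))) for p in prev]
    return curr[order]

def track(frames):
    detections = [detect_particles(f) for f in frames]
    trajectories = [detections[0]]
    for d in detections[1:]:
        trajectories.append(link(trajectories[-1], d))
    return np.stack(trajectories)   # shape: (n_frames, 2 particles, 3)
```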
Particle Image Velocimetry (PIV) is employed to measure the local velocity field from successive images recorded from the flow field. It is worth mentioning that in this method we calculate the two dimensional projection of the velocity field in the plane of shear (xy plane).
We have used the MatPIV routine with minor modifications in order to analyze the PIV image pairs [START_REF] Sveen | An introduction to matpiv v. 1.6. 1[END_REF]. Each image is divided into multiple overlapping sub-images, also known as interrogation windows. The PIV algorithm goes through three iterations of FFT-based cross-correlation between corresponding interrogation windows in two successive images in order to calculate the local velocity field. The velocity field measured in each iteration is used to improve the accuracy of the next iteration, where the interrogation window size is reduced by one half. Window sizes of 64 × 64, 32 × 32 and 16 × 16 pixels (≈ a/9) with an overlap of 50% are selected during the first, second and third iterations, respectively. Following each iteration, spurious vectors are identified by different filters, such as signal-to-noise ratio, global histogram and local median filters. Spurious vectors are then replaced via linear interpolation between the surrounding vectors. Since less than 3.1% of our data is affected, we do not expect a significant error due to the interpolation process. The size independence of the velocity measurements is verified by comparing the results with those obtained when we increase the interrogation window size to 32 × 32 pixels (≈ a/4.5).
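The core of each MatPIV iteration is an FFT-based cross-correlation of interrogation-window pairs. The sketch below is a stripped-down Python illustration of a single pass; it omits the sub-pixel peak fitting and the outlier filters that the actual routine applies, and the window size and step mirror the final 16 × 16 pixel, 50%-overlap pass.

```python
import numpy as np

# Minimal sketch (not MatPIV itself): one FFT-based cross-correlation
# pass of a PIV analysis. Each pair of corresponding interrogation
# windows from two successive images yields one displacement vector.

def window_displacement(win_a, win_b):
    """Window displacement (dy, dx) from the cross-correlation peak."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.fftshift(np.real(
        np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center        # displacement in pixels

def piv_pass(img_a, img_b, win=16, step=8):   # step = win/2 -> 50% overlap
    ny = (img_a.shape[0] - win) // step + 1
    nx = (img_a.shape[1] - win) // step + 1
    u = np.zeros((ny, nx)); v = np.zeros((ny, nx))
    for i in range(ny):
        for j in range(nx):
            y0, x0 = i * step, j * step
            dy, dx = window_displacement(img_a[y0:y0+win, x0:x0+win],
                                         img_b[y0:y0+win, x0:x0+win])
            v[i, j], u[i, j] = dy, dx     # v: gradient dir., u: flow dir.
    return u, v
```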
Experimental results
Establishing a linear shear flow in the absence of particles
The first step is to establish a linear shear flow field within the experimental set-up. Any deviation from the linear velocity profile across the gap of the Couette-cell affects the flow field around one particle, or the interaction of two particles. Our Couette-cell has finite dimensions, bounded by a wall at the bottom, an acrylic window at the top and two rotating cylinders on the sides (see Fig. 1). It is essential to show that a linear shear flow is achievable in the middle of the set-up and is not affected by the boundaries. The Reynolds number is defined as:
Re = 4ρ(2U/H)a²/µ (2)
which is of the order O(10⁻⁵) in our experiments, implying that inertial effects are negligible. Here, a and H are the particle radius and the gap width, respectively, U is the maximum velocity across the gap, ρ is the density and µ is the viscosity of the fluid. Moreover, according to the aspect ratio of the Couette-cell (50 cm long versus 2 cm wide), the central region where measurements are made is far from the shafts. In the absence of inertia and boundary effects, the solution to the momentum equations gives a linear velocity profile in our configuration, independent of the rheology of the test fluids. In this section, we present our experimental results showing how a linear shear flow field is established within the Couette-cell for the different suspending fluids, including Newtonian, yield stress and shear thinning fluids.
In the case of the Newtonian fluid, Fig. 6a shows the velocity profile across the gap for different shear rates imposed at the belt. The velocity field is averaged along the x-direction (flow direction). We normalize the velocity with the maximum velocity across the PIV window, u_c, and show that all velocity profiles collapse onto a master curve (see Fig. 6b), confirming that a linear shear flow is established in the set-up with the Newtonian fluid.
When we deal with a yield stress test fluid, additional dimensionless numbers arise beyond the bulk Reynolds number, including the Bingham number (B), which is the ratio of the yield stress (τ_Y) to the viscous stress (K γ̇^n) in the flow:
B = τ_Y / (K γ̇^n) (3)
Another important dimensionless number is the Deborah number, which is the ratio of the material time scale to the flow time scale. For elastoviscoplastic materials, the relaxation time λ, the elastic modulus G′ and the apparent plastic viscosity η_p are related via η_p = λG′, where the so-called plastic viscosity is defined as follows [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF]:
η_p = (τ − τ_Y) / γ̇ (4)
Comparing Eq. (4) with Eq. (1), we conclude η_p = K γ̇^(n−1). Therefore, the Deborah number is:
De = λγ̇ = K γ̇^n / G′ (5)
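For a given run, these dimensionless groups follow directly from the fluid parameters and the applied shear rate. A minimal Python sketch is given below; all numerical values are illustrative placeholders (not the entries of Table 1), and the effective viscosity τ/γ̇ of the Herschel-Bulkley fluid is used in place of µ in Eq. (2), which is an assumption of this sketch.

```python
# Minimal sketch: evaluate the dimensionless groups of Eqs. (2), (3)
# and (5). Numerical values are illustrative placeholders.

rho = 1188.0        # fluid density matched to PMMA [kg/m^3]
a = 1.0e-3          # particle radius [m]
H = 2.0e-2          # gap width [m]
U = 1.0e-3          # belt velocity [m/s] (placeholder)
tau_y = 1.0         # yield stress [Pa] (placeholder)
K = 3.0             # consistency [Pa.s^n] (placeholder)
n = 0.45            # power-law index (placeholder)
G_p = 40.0          # shear storage modulus G' [Pa] (placeholder)

gamma_dot = 2.0 * U / H                               # applied shear rate [1/s]
mu_eff = tau_y / gamma_dot + K * gamma_dot**(n - 1)   # effective viscosity

Re = 4.0 * rho * gamma_dot * a**2 / mu_eff   # Eq. (2), with mu -> mu_eff
B = tau_y / (K * gamma_dot**n)               # Eq. (3)
De = K * gamma_dot**n / G_p                  # Eq. (5)

print(f"gamma_dot = {gamma_dot:.3f} 1/s, "
      f"Re = {Re:.2e}, B = {B:.2f}, De = {De:.3f}")
```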
Velocity fields obtained via PIV measurements are averaged along the flow direction. Fig. 7a shows the measured velocity profiles across the gap, normalized by the maximum velocity across the PIV window, u_c. Next, shear rate profiles are calculated from the averaged velocity profiles according to Eq. (6) and are used to calculate the shear stress profiles via the Herschel-Bulkley model (shown in Figs. 7b and 7c, respectively). The shear rate profiles are normalized by the average shear rate across the gap, γ̇_c, while the stress profiles are normalized by the average stress across the gap, τ_c.
γ̇_loc = √[ 2(∂u_d/∂x)² + 2(∂v_d/∂y)² + (∂u_d/∂y + ∂v_d/∂x)² ] (6)
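On the discrete PIV grid, Eq. (6) reduces to finite differences of the two measured velocity components. A minimal Python version, assuming u and v are stored as 2D arrays on a uniform grid (an assumption about the data layout, not prescribed by the paper), could read:

```python
import numpy as np

# Minimal sketch: evaluate the local shear rate of Eq. (6) on a gridded
# PIV velocity field. 'u' and 'v' are 2D arrays of the two velocity
# components on a uniform grid with spacings dx and dy.

def local_shear_rate(u, v, dx, dy):
    dudy, dudx = np.gradient(u, dy, dx)   # rows ~ y, columns ~ x
    dvdy, dvdx = np.gradient(v, dy, dx)
    return np.sqrt(2.0 * dudx**2 + 2.0 * dvdy**2 + (dudy + dvdx)**2)
```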
It is evident that as we increase the Bingham number, the velocity profile deviates from a linear shape, and consequently, the shear rate is not constant. This is quite a unique observation for a yield stress fluid, and the rheology of the fluid can explain this puzzle. Let us take a closer look at the variation of stress with respect to shear rate shown in Fig. 2 for the yield stress test fluids used in the experiments. We can see that at low shear rates (i.e. high Bingham numbers), such as 0.01 < γ̇ < 0.1 sec⁻¹, a small variation in the shear stress projects to a large variation in the shear rate. On the contrary, at higher shear rates, γ̇ > 1 sec⁻¹ (i.e. low Bingham numbers), the same amount of stress variation corresponds to a significantly smaller variation in the shear rate. Fig. 7c shows that the variation of stress across the gap is of the same order for all Bingham numbers, while the resulting shear rate profiles are significantly different in terms of inhomogeneity. This implies that a small stress inhomogeneity due to any imperfection of the set-up or the test fluid (finite dimensions of the set-up, slight inhomogeneity of the test fluid, etc.) projects into a larger shear rate inhomogeneity as we increase the Bingham number. This stress inhomogeneity is estimated from Fig. 7c to be ≈ 2% in our set-up.
Both the characteristic length of the inhomogeneity and its amplitude increase as the Bingham number increases. Our results show that for B ≲ 2 the shear rate inhomogeneity is minimal (comparable to that of the Newtonian test fluid), and we can establish a linear velocity profile in the set-up for the case of a yield stress fluid. Therefore, all the experiments in this work are performed at B < 2.
One particle in a linear shear flow
This section is aimed at studying a linear shear flow around one particle in the limit of zero Re when we have different types of fluids including Newtonian, yield stress and shear thinning. A theoretical solution is available for a particle in a Newtonian fluid subjected to a linear shear flow field. We use the theoretical solution to validate our experimental results. The effect of a non-Newtonian fluid on the flow field around one particle is then investigated experimentally. Studying the disturbance fields around one particle is key to understanding the hydrodynamic interaction of two particles, and consequently, the bulk behavior of suspensions of noncolloidal particles in non-Newtonian fluids.
Stokes flow around one particle in a linear shear flow of a Newtonian fluid: comparison of theory and experiment

First, we compare our PIV measurements with the available theoretical solution for the Stokes flow around one particle in a linear shear flow of a Newtonian fluid [START_REF] Leal | Advanced transport phenomena: fluid mechanics and convective transport processes[END_REF]. The normalized velocity field obtained via the theoretical solution is illustrated in Fig. 8a, along with the velocity field measured via PIV in Fig. 8b, which is normalized by the velocity at the belt. A quantitative comparison is given in Figs. 8d-8f, where dimensionless velocity profiles are compared at cross sections located at different distances from the particle center, x/a = 2.5, 1, 0.
It is noteworthy that the PIV measurements are available at distances r/a ≳ 1.1, where r is the distance from the particle center; the offset of ≈ 0.1 is set by the resolution of the PIV interrogation window. The close agreement between our velocity measurements and the theoretical prediction allows us to employ our method for the case of yield stress fluids, where a theoretical solution is unavailable. Our experimental data can then be used as a benchmark for these fluids.
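For completeness, the theoretical reference field of Fig. 8a follows from the classical closed-form Stokes solution for a torque-free rigid sphere in unbounded simple shear. A compact Python evaluation is sketched below (our illustration, with a and γ̇ normalized to one); it combines the stresslet (a³) and finite-size (a⁵) contributions.

```python
import numpy as np

# Minimal sketch: classical disturbance velocity of a freely suspended,
# torque-free rigid sphere (radius a) in an unbounded simple-shear flow
# u_inf = (gamma_dot * y, 0, 0). On the sphere surface the disturbance
# cancels E.r, so the total velocity reduces to rigid-body rotation.

def disturbance_velocity(x, y, z, a=1.0, gamma_dot=1.0):
    E = 0.5 * gamma_dot * np.array([[0.0, 1.0, 0.0],
                                    [1.0, 0.0, 0.0],
                                    [0.0, 0.0, 0.0]])   # rate of strain
    r = np.array([x, y, z])
    rn = np.linalg.norm(r)
    Er = E @ r
    rEr = r @ Er
    # u_d = -(a/r)^5 E.r - (5/2) (a^3/r^5) (1 - a^2/r^2) (r.E.r) r
    return (-(a / rn)**5 * Er
            - 2.5 * a**3 / rn**5 * (1.0 - (a / rn)**2) * rEr * r)
```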
Creeping flow around one particle in a linear shear flow: Newtonian and non-Newtonian suspending fluids
We present our PIV measurements of creeping flows around one particle in linear shear flows of Newtonian, shear thinning (guar gum solution) and yield stress (Carbopol gel) suspending fluids. About 100 PIV measurements (i.e., 100 PIV image pairs) are averaged to reduce the noise. The origin of the coordinate system, (x, y, z), is fixed at the center of the particle and translates with it (non-rotating). We subtract the far-field velocity profile from the experimentally measured velocity field in order to calculate the disturbance velocity field around one particle:
u_d = (u_d, v_d) = u − u^∞ (7)
where u_d and v_d are the components of the disturbance velocity vector along the flow and gradient directions, respectively. The disturbance velocity field is then normalized by the maximum disturbance velocity in the PIV window. Fig. 9 shows the normalized disturbance velocity field around one particle in linear shear flows of a Newtonian fluid (theory: Fig. 9a and experiment: Fig. 9b), a yield stress fluid (experiment with Carbopol gel: Fig. 9c) and a shear thinning fluid (experiment with guar gum solution: Fig. 9d). The shear flow is established as u^∞ = (γ̇y, 0, 0) with γ̇ > 0. The disturbance velocity field is normalized by the maximum disturbance velocity in the field. Although the theoretical solution for a single rigid sphere in a simple-shear flow of a Newtonian fluid exists, there is no theoretical solution for the case of a yield stress fluid. Therefore, our experimental measurements shown in Fig. 9c serve as the first set of information about simple-shear flow around a spherical particle in such a fluid. Fig. 10 shows the colormaps of shear rate around one particle in linear shear flows of a Newtonian fluid (theory: Fig. 10a and experiment: Fig. 10b), a yield stress fluid (experiment with Carbopol gel: Fig. 10c) and a shear thinning fluid (experiment with guar gum solution: Fig. 10d). The magnitudes of the local shear rates are calculated by taking the spatial derivative of the disturbance velocity fields based on Eq. (6). Although taking the derivative of experimental data (i.e., PIV measurements of the velocity field) amplifies the noise, averaging over more than 100 PIV measurements reduces the noise and allows us to see the qualitative features.
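As a concrete illustration of Eq. (7), the following minimal Python sketch (an assumption about the data layout, not code from the paper) subtracts the far field u^∞ = γ̇y from a gridded PIV measurement and normalizes the result:

```python
import numpy as np

# Minimal sketch: construct the normalized disturbance field of Eq. (7)
# from a measured PIV field. 'u' and 'v' are 2D velocity arrays on a grid
# whose row coordinate 'y' is measured from the particle center; the far
# field is the undisturbed shear u_inf = gamma_dot * y.

def disturbance_field(u, v, y, gamma_dot):
    u_d = u - gamma_dot * y[:, None]      # subtract far-field shear, Eq. (7)
    v_d = v.copy()                        # far field has no y-component
    scale = np.nanmax(np.hypot(u_d, v_d)) # max |u_d| in the PIV window
    return u_d / scale, v_d / scale
```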
For the Newtonian fluid, our experimental results shown in Fig. 9b are in very close agreement with the theoretical solution illustrated in Fig. 9a. We can see that the disturbance velocity has fore-aft symmetry and decays as we move away from the particle surface. Unlike in the Newtonian fluid, the fore-aft symmetry is broken for our non-Newtonian test fluids (see Figs. 9c and 9d). The fore-aft asymmetry is significantly larger for the Carbopol gel (Fig. 9c). As mentioned in Section 1, the loss of fore-aft symmetry is not predicted for the flow field around one particle if we use ideal visco-plastic constitutive models, e.g. the Herschel-Bulkley and Bingham models [START_REF] Beris | Creeping motion of a sphere through a bingham plastic[END_REF][START_REF] Liu | Convergence of a regularization method for creeping flow of a bingham material about a rigid sphere[END_REF][START_REF] Blackery | Creeping motion of a sphere in tubes filled with a bingham plastic material[END_REF][START_REF] Beaulne | Creeping motion of a sphere in tubes filled with herschel-bulkley fluids[END_REF]. However, practically speaking, both the guar gum solution and the Carbopol gel are polymer-based solutions with slight elasticity, and consequently they are not ideal visco-plastic fluids. Elastic effects are thus responsible for the fore-aft asymmetry observed in Figs. 9c and 9d. For viscoelastic fluid flows, uniqueness and nonlinearity are present, but symmetry and reversibility are lost. We should mention that by adopting an appropriate pre-shear procedure in our experiments (described in Section 2.4), we eliminated possible effects due to the shear history.
Despite the loss of fore-aft symmetry, which is evident in Figs. 9c and 9d, we note that the velocity disturbance field is symmetric with respect to the center of the particle (symmetric with respect to a point). This is indeed expected. Assume two fluid elements are moving towards the particle, located at the top left and bottom right of the flow field but at the same vertical distance from the particle. Both fluid elements experience the same shear history during their motion (e.g., compression, extension, rotation), resulting in a flow field that is symmetric with respect to the center of the particle. The Deborah number is calculated based on the values of the shear storage moduli measured at frequency ω = 1 rad·sec⁻¹ with low strain amplitude, γ₀ = 0.25 % (see Table 1). In the experiment with the Carbopol gel, YS1 (Fig. 9c), the Deborah number is De = 0.15, while it is De = 1.03 in the case of the guar gum solution, ST (Fig. 9d).
Although the Deborah number is relatively small in our experiments, it clearly affects the flow field around the particles. This is consistent with the results of Fraggedakis et al. [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF], who observed that slight elasticity in a yield stress fluid has a significant effect on the flow field around a single particle settling in a stationary column of a yield stress fluid. Despite the smaller value of the De number for the Carbopol gel compared to the guar gum solution, we see that the fore-aft asymmetry is larger. This may be due to the interplay between plastic and elastic effects in the Carbopol gel, which is an elastoviscoplastic material. Further investigation is required to reveal the roles of plastic and elastic effects, individually and mutually, in establishing the flow field over a wide range of Bingham and Deborah numbers. This can be explored via a computational study, since practical limitations exist in tackling this problem experimentally. For example, it is not possible to change the Deborah number in our experiments independently of other parameters such as the Bingham number. Also, it is not feasible to increase the Deborah number significantly with conventional yield stress fluids such as Carbopol gels.
The variation of the disturbance velocity around one particle at fixed distances from the particle center (fixed r) is illustrated in Figs. 11a and 11b. It shows more clearly the fore-aft asymmetry in the Carbopol gel compared to the Newtonian fluid. The velocity is normalized by its maximum value at each distance, u_c,r, in Figs. 11a and 11b.
The disturbance field shows how the regions around a particle are affected by its presence. Where the disturbance velocity is zero or very small, the region lies outside the zone influenced by the particle. Studying the disturbance field around one particle is thus essential to predict the interaction of two particles, and consequently, the bulk behavior of dilute suspensions. The extent of the disturbance is better seen on the velocity profiles. Figs. 11c and 11d show the variation of the disturbance velocity around one particle along different directions (fixed θ), normalized by the maximum disturbance velocity along each direction, u_c,θ. It is evident that the disturbance velocity decays more rapidly in the yield stress fluid and the shear thinning fluid.
The maximum decay occurs in the flow of Carbopol gel around one particle. This means that two particles feel each other at a larger distance in a Newtonian fluid than in a generalized Newtonian fluid.
Interaction of two particles in a linear shear flow
In this section we study experimentally the interaction of two spherical PMMA particles in a linear shear flow of Newtonian, yield stress and shear thinning fluids. First, we compare our experimental results for the case of a Newtonian suspending fluid with the existing models [START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF] and analytical solutions [START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF] describing the relative motion of two particles in a linear shear flow in the absence of inertia. We then proceed to study the non-Newtonian effects on the interaction of particles in a linear shear flow.
Interaction of two particles in a linear shear flow of a Newtonian fluid: theory and experiment

Fig. 12 shows the schematic of a particle trajectory around a reference particle in a linear shear flow. Depending on the initial offset, y₀/a, the particles follow different trajectories. If the initial offset is small enough, the two particles collide and separate further apart on the recession side (the symmetry is broken). However, if the initial offset is large enough that they do not make contact, the corresponding trajectory is expected to be symmetric due to the symmetry of the Stokes equations. It is noteworthy that in the case of smooth particles with no surface roughness, contact is not possible due to the divergence of the lubrication forces. In practice, however, contact occurs due to the unavoidable roughness at the surface of the particles. For more details see the theoretical [START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF][START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF] and experimental works [START_REF] Darabaner | Particle motions in sheared suspensions xxii: Interactions of rigid spheres (experimental)[END_REF][START_REF] Blanc | Kinetics of flowing dispersions. 9. doublets of rigid spheres (experimental)[END_REF][START_REF] Rampall | The influence of surface roughness on the particle-pair distribution function of dilute suspensions of noncolloidal spheres in simple shear flow[END_REF][START_REF] Blanc | Experimental signature of the pair trajectories of rough spheres in the shear-induced microstructure in noncolloidal suspensions[END_REF].
The interaction of two particles can be described at different ranges of separation by accurate hydrodynamic functions based on the works of Batchelor [START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF] and Da Cunha [START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF]. It is assumed that inertial and Brownian effects are negligible, and that the particles are neutrally buoyant and spherical. The appropriate set of hydrodynamic functions must be chosen according to the separation of the two particles, r, and the roughness, ε. Using the aforementioned hydrodynamic functions, we calculated the relative trajectories of two particles with a 4th-order Runge-Kutta scheme to march in time. The results are plotted in Fig. 13a. The trajectories fall into two categories, asymmetric and symmetric, depending on whether or not a contact occurs.
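A minimal sketch of this trajectory computation is given below. The far-field decomposition into strain and vorticity and the RK4 marching follow the structure just described, while the mobility functions A(r) and B(r) are left as placeholders to be replaced by the tabulated hydrodynamic functions; the snippet is an illustration, not the code used to produce Fig. 13a.

```python
import numpy as np

def relative_velocity(r_vec, A, B, gamma_dot=1.0):
    """Relative velocity of two equal spheres in the simple shear
    u = (gamma_dot*y, 0, 0); A(r) and B(r) are the two-sphere mobility
    functions (placeholders here)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    E = 0.5 * gamma_dot * np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
    W = 0.5 * gamma_dot * np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])
    Er = E @ r_vec
    rEr = rhat @ Er                       # scalar: rhat . E . r
    return W @ r_vec + Er - A(r) * rEr * rhat - B(r) * (Er - rEr * rhat)

def rk4_step(r_vec, dt, **kw):
    k1 = relative_velocity(r_vec, **kw)
    k2 = relative_velocity(r_vec + 0.5 * dt * k1, **kw)
    k3 = relative_velocity(r_vec + 0.5 * dt * k2, **kw)
    k4 = relative_velocity(r_vec + dt * k3, **kw)
    return r_vec + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

A = lambda r: 0.0   # replace with tabulated near/far-field functions
B = lambda r: 0.0
r = np.array([-6.0, 0.5, 0.0])            # initial offset y0/a = 0.5
for _ in range(4000):
    r = rk4_step(r, 1e-2, A=A, B=B, gamma_dot=1.0)
```

In the rough-sphere model, contact is mimicked by additionally preventing the centre-to-centre separation from decreasing below 2a(1 + ε).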
Here, we present our experimental results for two particles suspended in a linear shear flow of a Newtonian fluid. The experimental trajectory map of two particles is shown in Fig. 14. In addition, we have compared the experimental trajectory map with those calculated from the theoretical solutions in Fig. 13b. The best match is achieved by manually setting the roughness to ε_theo = 5.5 × 10⁻⁴ in the model, which is close to the peak value of roughness, ε_exp ≃ 3 × 10⁻⁴, reported by Pham in [START_REF] Pham | Particle dispersion in sheared suspensions: Crucial role of solid-solid contacts[END_REF] for particles from the same batch. We find very good agreement between the theoretical and experimental trajectory maps. The relative trajectories are symmetric with respect to the y axis between the approach and the recession side if the two particles do not contact. However, at lower initial offsets, when the particles come into contact due to the unavoidable roughness at their surfaces, the two particles separate further apart on their recession. Consequently, the particle trajectories are fore-aft asymmetric. All trajectories along which the particles come into contact collapse onto each other downstream after separation.
Particles are tracked via PTV and the flow field is investigated via PIV simultaneously. We can therefore link the particle trajectories to the information obtained from the flow field. Fig. 15 illustrates a typical example of a trajectory line with its corresponding velocity and local shear rate colormaps at different points along the trajectory line for two particles in a linear shear flow of a Newtonian fluid. The second particle approaches the reference particle from x/a < 0. When the particles are far from each other, the distribution of shear rate around them resembles that of a single particle, i.e., the particles do not see each other. The particles interact as they approach, and the shear rate distribution and velocity field around them change correspondingly. After they come into contact, they seem to get locked together and rotate like a single body (between points B and D in Fig. 15), then separate from each other. Shear rate fields are normalized by the far-field shear rate.
3.3.2. Interaction of two particles in a linear shear flow of a yield stress fluid: experiment
In this section we present our experimental results on the interaction of two PMMA spherical particles in a linear shear flow of Carbopol gel, which is a yield stress fluid (see Sections 2.3.2 and 2.4). In this case, a theoretical solution does not exist due to the nonlinearity of the governing equations of motion, even in the absence of inertia. While the majority of the experimental works and simulations have focused on the settling of particles in yield stress fluids, there is no simulation or experimental work on the interaction of two particles in a linear shear flow of a yield stress fluid in the literature. However, a numerical 2D study of the interaction of pairs of particles in an ideal Bingham fluid is under review concurrently with the present paper; our experimental results will be compared qualitatively to these simulation results when relevant [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF].
In the absence of inertia, the knowledge of the roughness and of the initial offset is sufficient to predict the interaction, and consequently the relative trajectory, of two particles in a Newtonian fluid. However, more parameters influence the interaction of two particles in a yield stress fluid. In particular, we expect the value of the Bingham number to strongly affect the relative motion of the two particles.
Moreover, viscoelastic effects are not always negligible when dealing with non-ideal yield stress fluids and their contribution must be evaluated (see [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF][START_REF] Fraggedakis | Yielding the yield stress analysis: A thorough comparison of recently proposed elasto-visco-plastic (evp) fluid models[END_REF]). According to the range of Deborah numbers in our experiments, De ∈ [0.04, 1.3], we believe that viscoelastic effects can play an important role, which is consistent with [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF].
In addition, shear history is another parameter which affects the interaction of two particles, due to strain hardening in the non-ideal yield stress test fluids. As discussed earlier in Sec. 2.4, for a sample of Carbopol gel, the material undergoes different transient flow states depending on the applied shear history. Our results show that when the material is pre-sheared in the negative direction, the trajectories experience a relatively longer transient regime (results not included). This is consistent with our results in Fig. 3, which suggest that the material reaches a steady state at larger strains under negative pre-shear. In the course of this study, we apply the same shear history in all of the experiments by adopting the positive pre-shear procedure, in order to avoid strain hardening and to be as close as possible to a model plastic behavior. We should mention, however, that the dimension of our Couette-cell is large enough to allow us to apply sufficient amounts of pre-strain to reach a steady state, regardless of the shear history. While shearing the material we study the interaction of particles and the flow field by performing PTV and PIV respectively. Fig. 16 shows the trajectory map of particles in a Carbopol gel at γ̇ = 0.34 s⁻¹, B = 1.23 and De = 0.15.
Two features are evident. First, the fore-aft asymmetries exist for all the trajectories, including those with no collision of particles. When the initial offset is large enough that there is no contact, particles experience a negative drift along the y-direction after passing each other (i.e., y_f − y₀ < 0). We think that this pattern can be attributed to the elasticity of the test fluid, since no such behavior is observed in simulations when the fluid is considered ideal visco-plastic (e.g., the Bingham model) [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF]. Second, for trajectories with small initial offsets, the second particle moves downward along the velocity gradient direction on the approach side while it moves upward on the recession side. The same pattern is observed in the simulations by Fahs et al. in [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF] for yield stress fluids as well as Newtonian fluids. These local minima in the trajectories disappeared in their results for the Newtonian fluid when the domain size was increased from 24a × 12a to 96a × 48a. For the yield stress fluid (with B = 10), however, this pattern only disappeared at a larger domain size, 192a × 96a. Hence, we can conclude that it might be due to the interplay of wall effects and non-Newtonian behavior.
Fig. 17 shows trajectories of two particles in a Carbopol gel at two different Bingham numbers, starting from approximately equal initial offsets. As expected, the particle trajectories strongly depend on the Bingham number. As we increase the Bingham number, the second particle approaches the reference particle more closely and separates with a larger upward drift. This can be related to the stronger decay of the disturbance velocity around a single particle at larger Bingham numbers (see Sec. 3.2). This feature, which has also been observed in the simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF], implies a larger asymmetry in the PDF, and consequently larger normal stress differences in yield stress suspensions, as the Bingham number increases.
Fig. 18 shows a typical example of a trajectory line with its corresponding velocity and local shear rate colormaps at different points along the trajectory line for two particles in a linear shear flow of a yield stress fluid. Shear rate fields are normalized with the applied shear rate at the belt. The second particle approaches the reference particle from x/a < 0. We see that the particles interact as they approach, and the shear rate distribution and velocity field around them change (see the colormaps associated with point A, Figs. 18b and c). After they come into contact they seem to get locked together and rotate like a single body (between points B and C in Fig. 18). They then separate from each other on their recession.
3.3.3. Interaction of two particles in a linear shear flow of a shear thinning fluid: experiment
A Carbopol gel exhibits both yield stress and shear thinning effects. In order to investigate the effect of each non-Newtonian behavior individually, we perform similar experiments with a shear thinning test fluid without a yield stress. We use a Hydroxypropyl Guar solution, which is transparent with negligible thixotropy at low concentrations (see Sections 2.3 and 2.4).
The relative trajectory map of two particles in a linear shear flow of the guar gum solution, ST, is illustrated in Fig. 19. Unlike for yield stress suspending fluids, the trajectories do not exhibit downward and upward motions in the approach and recession zones respectively. A slight asymmetry exists when the particles do not come into contact, but it is much smaller than for yield stress suspending fluids. When a contact occurs, the trajectories are all asymmetric.
Fig. 20 illustrates a sample trajectory with its corresponding velocity and shear rate fields at different points along the trajectory line for two particles in the guar gum solution ST (see Table 1). The second particle approaches the reference particle from x/a < 0. Shear rate fields are normalized with the applied shear rate at the belt.
Particle trajectories versus streamlines
As mentioned earlier in Section 3.2.2, the disturbance velocity decays more rapidly in the non-Newtonian fluids considered in this study. In other words, the influence zone around a single particle is smaller when dealing with yield stress and shear thinning fluids compared to the Newtonian fluid. In Fig. 21 we compared the trajectories of two particles subjected to a shear flow with the streamlines around a single particle (experimental velocity field). We can see that they overlap up to closer distances in the Carbopol gel and guar gum solution.
The streamlines around one particle can be viewed as the limiting form of the pair trajectories when the two particles are far apart, or when one particle is much smaller than the other. The discrepancy between the fluid element streamlines and the trajectories is related to the lubrication and contact of the particles. Fig. 21 shows that this discrepancy is minimal when the initial offset is large, meaning that pairwise interaction does not occur. Further computational and theoretical investigations are needed to build trajectory maps of particle pairs in complex fluids from the flow field around a single particle in shear flows of complex fluids.
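For illustration, streamlines such as those in Fig. 21 can be generated by integrating tracer paths through the (frozen) measured velocity field; the Python sketch below assumes the PIV data are available as gridded arrays, which is a simplification of the actual post-processing.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

def streamline(x, y, u, v, seed, t_max=200.0):
    """Integrate a streamline of the frozen field (u, v) from `seed`."""
    fu = RegularGridInterpolator((y, x), u, bounds_error=False, fill_value=0.0)
    fv = RegularGridInterpolator((y, x), v, bounds_error=False, fill_value=0.0)
    rhs = lambda t, p: [float(fu((p[1], p[0]))), float(fv((p[1], p[0])))]
    sol = solve_ivp(rhs, (0.0, t_max), seed, max_step=0.05)
    return sol.y   # (2, n) array of (x, y) positions along the streamline
```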
Discussion and conclusions
In this work, we have developed an accurate experimental technique to study the interaction of two spherical particles in linear shear flows of Newtonian, yield stress and shear thinning fluids. We have made use of PIV and PTV techniques to measure the velocity fields and particle trajectories respectively. Rheometry is employed in order to characterize the behavior of our test fluids.
We showed in Section 3.1 that we can establish a linear velocity profile in our Newtonian and non-Newtonian test fluids. In addition, for yield stress fluids, we observed that a stress inhomogeneity (naturally present due to any imperfection in the set-up or the test fluid) is amplified into a larger shear rate inhomogeneity as the Bingham number increases. By restricting the range of Bingham numbers to B ≤ 2, we managed to eliminate this effect and achieve a linear shear flow in the Couette device.
Next, we studied the flow around one particle subjected to a linear shear flow. Our results are in very close agreement with the theoretical solution for a Newtonian suspending fluid. Moreover, the length scale of variation of the disturbance velocity is significantly smaller in yield stress fluids than in Newtonian fluids. This affects the interaction of two particles, and consequently the bulk rheology of suspensions of noncolloidal particles in shear thinning and yield stress fluids.
We provided the first direct experimental measurement of the flow disturbance around a sphere in a yield stress fluid. This can serve as a benchmark for simulations dealing with suspensions of noncolloidal particles in yield stress fluids. Our study shows that the Carbopol gel exhibits significant viscoelastic behavior which affects the particle interactions. We observed that even the disturbance field around a single particle in a shear flow cannot be explained without considering viscoelastic effects. Hence, employing elastoviscoplastic (EVP) constitutive models [START_REF] Saramito | A new constitutive equation for elastoviscoplastic fluid flows[END_REF] [47] is necessary when accurate simulations are considered [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF]. Due to the experimental limits, further theoretical and computational studies are required to characterize the contribution of elastic and plastic effects in establishing the flow field around a single particle.
In the next step, we studied the interaction of a pair of neutrally buoyant particles in linear shear flows of Newtonian, yield stress and shear thinning fluids. In the case of Newtonian suspending fluids, we observed a very close agreement between our measurements and the available theoretical solution, which shows the merit of our experimental method. Subsequently, the same method was employed to study the problem with yield stress and shear thinning suspending fluids, for which no theoretical solutions are available. As is evident in Fig. 22, the fore-aft asymmetry is enhanced for trajectories of particles in yield stress fluids (also observed in the simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF]) and shear thinning fluids. Even a slight asymmetry has been observed in trajectories with no collision. These observations imply a greater asymmetry in the PDF and stronger normal stress differences in yield stress suspensions.
It is worth mentioning that for yield stress suspending fluids, in the absence of inertia, the interaction of particles depends on various parameters such as the Bingham number, Deborah number, shear history, initial offset and roughness. Hence, obtaining the entire trajectory space is not feasible experimentally for yield stress fluids. However, overall trends and patterns can be understood by investigating a limited number of systematic measurements. The effect of the different parameters on the interaction of particles is investigated in this study.
As mentioned in Section 3.2.2, in the guar gum solution and Carbopol gel, variations along the trajectory lines are confined to a closer neighborhood of the particle. We can link this observation to the variation of the disturbance velocity field around one particle in yield stress fluids, where the length scale of the decay is smaller than in Newtonian suspending fluids (see Figs. 9, 11c and 11d). This feature has been observed in the numerical simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF]. It means that two particles feel each other's presence at closer distances, and when they do, the interactions are more severe. One can conclude that short-range interactions are more important when dealing with yield stress suspending fluids. Due to the limited resolution of the experimental measurements close to the particles, especially when they are touching or very close (separations of the order of the size of the interrogation window), accurate simulations with realistic constitutive models are required to understand and characterize the short-range hydrodynamic interactions, particularly the lubrication forces.
Another distinct feature observed during the motion of two particles in a yield stress fluid is the downward and upward motion of the second particle along the velocity gradient direction during approach and recession. This phenomenon could affect the microstructure, and consequently, the PDF of yield stress suspensions. This pattern has been observed experimentally for shear thinning suspending fluids in [START_REF] Snijkers | Hydrodynamic interactions between two equally sized spheres in viscoelastic fluids in shear flow[END_REF]. Also, similar behavior is observed for both Newtonian and yield stress fluids in the simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF]. By increasing the gap size, w/a, the downward and upward motion disappeared in their results for the Newtonian fluid. For the yield stress fluid, however, this behavior only disappeared at larger gap sizes. We have not observed this feature during the motion of two particles in the shear thinning fluid in the course of this project, but this is perhaps because the behavior is present only at initial offsets smaller than the range covered in our experiments. Confinement effects might be responsible for this behavior, and their extent could be amplified in yield stress fluids. Further investigations are needed to properly understand the underlying mechanisms.
Figure 1: Schematic of the planar Couette-cell and the imaging system: the left shaft is driven by a precision rotation stage while the right shaft rotates freely. Walls are made from transparent acrylic, which allows the laser to illuminate the flow field (styled after Fig. 1 of [74]).
Figure 2: Stress versus shear rate for a cycle of logarithmic shear rate ramps applied on samples of YS1 (a) and ST (b): increasing ramps, decreasing ramps and the corresponding Herschel-Bulkley fits described in Table 1. The inset of (b) presents the variation of viscosity versus shear rate for ST.
Figure 3: Normalized stress versus strain for samples of yield stress and shear thinning test fluids under a constant shear rate with different shear histories: (a) YS1 at γ̇ = 0.129 s⁻¹, (B, De) = (2.0, 0.09); (b) ST at γ̇ = 0.26 s⁻¹, De = 1.03. Triangle markers represent negative pre-shear while square markers indicate positive pre-shear.
Figure 4: Elastic and viscous moduli with respect to strain amplitude in strain amplitude sweep tests with an angular frequency of 1 rad·s⁻¹ on samples of YS1 (a) and ST (b). G′ starts to decrease beyond a critical strain below which it is nearly constant.
Figure 5: Dynamic moduli, G′ (left column) and G″ (right column), for samples of YS1 (first row) and ST (second row) during frequency sweeps from 0.1 to 100 rad/s. Different markers correspond to different strain amplitudes, γ₀ = 1, 5, 20, 50, 100 %.
Figure 6: (a) Velocity profiles averaged along the x-direction for the Newtonian fluid when subjected to different shear rates of γ̇ = 0.18, 0.26, 0.35, 0.44, 0.52, 0.61, 0.70, 0.79 s⁻¹. (b)
Figure 7: (a) Normalized velocity profiles across the gap when YS2 undergoes shear flows at different Bingham numbers, (B, De) = (4.6, 0.05), (3.2, 0.07), (2.3, 0.10), (2.2, 0.10), (2.0, 0.11), compared to that of the Newtonian fluid, NWT. (b) The corresponding dimensionless shear rate profiles and (c) stress profiles.
Figure 8: (a) Normalized velocity field obtained via the theoretical solution for a Newtonian fluid. (b) Normalized velocity field for the Newtonian fluid NWT measured via PIV at γ̇ = 0.27 s⁻¹. (c) Schematic of the particle and locations where velocity profiles are compared with the theory. (d-e) Comparison between velocity profiles obtained from theory and experimental measurements at different locations.
Figure 9: Normalized disturbance velocity fields around one particle in the shear flow of different fluids: (a) theoretical solution for a Newtonian fluid, (b) experimental results for a Newtonian fluid at γ̇ = 0.27 s⁻¹, (c) experimental results for the Carbopol gel, YS1, at γ̇ = 0.34 s⁻¹, (B, De) = (1.23, 0.15), (d) experimental results for the guar gum solution, ST, at γ̇ = 0.26 s⁻¹, De = 1.03.
Figure 10: Normalized shear rate fields around one particle in the shear flow of different fluids: (a) theoretical solution for a Newtonian fluid, (b) experimental results for a Newtonian fluid at γ̇ = 0.27 s⁻¹, (c) experimental results for the Carbopol gel, YS1, at γ̇ = 0.34 s⁻¹, (B, De) = (1.23, 0.15), (d) experimental results for the guar gum solution, ST, at γ̇ = 0.26 s⁻¹, De = 1.03.
Figure 11: Variation of the disturbance velocity at fixed distances ((a) r/a = 1.8, (b) r/a = 2.3) around one particle in different test fluids: NWT at γ̇ = 0.27 s⁻¹, YS1 at γ̇ = 0.34 s⁻¹, (B, De) = (1.23, 0.15), and ST at γ̇ = 0.26 s⁻¹, De = 1.03. Variation of the disturbance velocity along different directions in the different test fluids: (c) θ = 45°, (d) θ = 135°.
Figure 12: A schematic of two particles subjected to a shear flow and the general shapes of their trajectory: (a) trajectory when two particles pass each other with no collision; (b) trajectory when two particles collide.
Figure 13: (a) Relative trajectory map calculated via Da Cunha's model [20], ε = 5.5 × 10⁻⁴. (b) Relative trajectories obtained from the theoretical solution compared with those measured in the experiment (dashed colored lines) with the same initial offsets (y/a with x/a < 0).
Figure 14: Trajectory map of two particles in a linear shear flow of the Newtonian fluid. The reference particle is located at the origin and the second particle is initially at x/a < 0.
Figure 15: (a) Trajectory line of two particles in the Newtonian fluid subjected to a shear rate of γ̇ = 0.27 s⁻¹. (b-k) Left column: velocity fields at different points marked along the trajectory line (A to E); right column: the corresponding normalized shear rate fields.
Figure 16: Trajectory map of two particles in a shear flow of the Carbopol gel YS1 at γ̇ = 0.34 s⁻¹, (B, De) = (1.23, 0.15).
Figure 17: Relative trajectories of two particles in the Carbopol gel YS1 with similar initial offsets at two different Bingham numbers. The dashed line corresponds to γ̇ = 1.70 s⁻¹, (B, De) =
Figure 18: (a) Trajectory line of two particles in the Carbopol gel, YS1, at γ̇ = 0.34 s⁻¹, (B, De) = (1.23, 0.15). (b-k) Left column: velocity fields at different points marked on the trajectory line (A to E); right column: the corresponding normalized shear rate fields.
Figure 19: Trajectory map of two particles subjected to a shear flow of ST at (γ̇, De) = (0.26 s⁻¹, 1.03).
Figure 20: (a) Trajectory line of two particles in the guar gum solution, ST, at γ̇ = 0.26 s⁻¹, De = 1.03. (b-k) Left column: velocity fields at different points marked on the trajectory line (A to E); right column: the corresponding normalized shear rate fields.
Figure 21: Two-particle trajectories (solid lines) compared with the streamlines around one particle (dashed lines) in shear flows of different fluids: (a) NWT at γ̇ = 0.27 s⁻¹, (b) YS1
Figure 22: (a-d) Relative trajectories of two particles in shear flows of different test fluids with similar initial offsets: y₀/a = 0.63 (a), 0.75 (b), 1.05 (c), 2.12 (d). Test fluids include NWT at γ̇ = 0.27 s⁻¹, YS1 at γ̇ = 0.34 s⁻¹, (B, De) = (1.23, 0.15), and ST at γ̇ = 0.26 s⁻¹, De = 1.03.
Table 1: Composition, pH and rheological properties of the test fluids used in this study: NWT (Newtonian fluid), YS1-2 (yield stress fluids) & ST (shear thinning fluid). Dynamic moduli G′ and G″ are measured at ω = 1 rad·s⁻¹, γ₀ = 0.25 %.
                           Test fluids
Materials (% wt.)     ST       YS1      YS2      NWT
Water                 71.764   71.969   75.004    9.31
Glycerol              27.707   27.876   24.766    -
Carbopol 980          -         0.116    0.170    -
Jaguar HP-105          0.529   -        -         -
Sodium hydroxide      -         0.039    0.060    -
Triton X-100          -        -        -        76.20
Zinc chloride         -        -        -        14.35
Hydrochloric acid     -        -        -         0.14
pH                    -         7.40     7.44     -
τ_y (Pa)               0        3.3     46.6      0
K (Pa·s^n)             6.7      4.6     18.7      4.6
n                      0.46     0.50     0.30     1
G′ (Pa)                3.5     17.9    213.5     -
G″ (Pa)                3.5      3.3     18.9     -
Acknowledgments
This research was supported by the National Science Foundation (Grant No. CBET-1554044-CAREER) via the research award (S.H.).
Mathieu Souzy
Imen Zaier
Henri Lhuissier
Tanguy Le Borgne
Bloen Metzger
Mixing lamellae in a shear flow
Introduction
Mixing is a key process in many industrial applications and natural phenomena [START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF]. Practical examples include glass manufacture, processing of food, micro-fluidic manipulations, and contaminant transport in the atmosphere, oceans and hydrological systems. During the last decades, substantial progress has been made in the description of mixing in systems as complex as turbulent flows [START_REF] Warhaft | Passive scalars in turbulent flows[END_REF][START_REF] Shraiman | Scalar turbulence[END_REF][START_REF] Falkovich | Particles and fields in fluid turbulence[END_REF] (Duplat & Villermaux 2008) [START_REF] Kalda | Simple model of intermittent passive scalar turbulence[END_REF], oceanic and atmospheric flows [START_REF] Rhines | How rapidly is a passive scalar mixed within closed streamlines?[END_REF], the earth mantle [START_REF] Allègre | Implications of a two-component marble-cake mantle[END_REF], porous media [START_REF] Dentz | Mixing, spreading and reaction in heterogeneous media: A brief review[END_REF][START_REF] Villermaux | Mixing by porous media[END_REF][START_REF] Borgne | The lamellar description of mixing in porous media[END_REF] and sheared particulate suspensions [START_REF] Souzy | Stretching and mixing in sheared particulate suspensions[END_REF]. In particular, the conceptualization of scalar mixtures as ensembles of 'lamellae' evolving through stretching, diffusion and aggregation [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF] (Villermaux & Duplat 2003) has allowed the derivation of accurate quantitative theoretical predictions for the evolution of the full concentration Probability Density Functions (PDF) for a broad range of flows. Within this framework (valid for lamellae thinner than the smallest scale of the flow), mixing is driven by three processes occurring simultaneously: i) stretching of the scalar field by the flow, creating elongated lamellar structures, ii) diffusion and compression, which compete to set the concentration gradients, and iii) lamella coalescence, leading ultimately to the final homogeneity of the system (Duplat & Villermaux 2008).
In complex configurations such as those listed above, the evolution of the concentration distribution is sensitive to a number of characteristics of the system: the distribution of stretching rates, the macroscopic dispersion rate, the scalar molecular diffusion coefficient and the rate at which lamellae aggregate. While their influence on the dynamics of mixing is clear from a theoretical point of view, they can rarely be observed independently. Moreover, in spite of the numerous experimental studies on mixing, the question of the spatial resolution required to quantify the evolution of a concentration field has received little attention. This latter point is crucial since under-resolved images, by artificially broadening the concentration distribution of isolated lamellae or, conversely, by sharpening the concentration distribution of bundles of adjacent lamellae, lead to an erroneous appreciation of the mixing rate.
Here, we present highly resolved experiments where the basic mechanisms governing mixing are quantified individually: the formation of elongated lamellae by stretching, the establishment of a Batchelor scale by competition between compression and diffusion, the enhancement of diffusive mixing by stretching, and the diffusive aggregation of lamellae. We consider for this a lamella formed by photo-bleaching of a fluorescent dye in a laminar shear flow, where the concentration distribution can be quantified using a well-controlled experimental set-up built specifically to resolve small length scales. This benchmark experiment was chosen precisely because it is well established theoretically. Unambiguous conclusions can therefore be drawn regarding the spatial resolution required to capture the evolution of the concentration distribution of an isolated lamella and, more generally, within any mixing protocol. Last, we investigate the coalescence of two nearby lamellae, focusing specifically on its impact on the evolution of the concentration distribution. The theoretical prerequisites are recalled in § 2. After presenting the experimental set-up in § 3, the measurements are reported in § 4. Conclusions are drawn in § 5.
Mixing in a laminar shear flow
General picture
We consider a lamella of dye of length l₀ and half width s₀, initially positioned perpendicular to the flow as illustrated in Figure 1.a. As the lamella is advected by a laminar shear flow with shear rate γ̇, its length increases as l(t) = l₀√(1 + (γ̇t)²). Thus, considering the sole effect of the advection field (for now neglecting that of molecular diffusion), the half width of the lamella s_A(t) decreases following s_A(t) = s₀/√(1 + (γ̇t)²) (see Figure 1.b) since, in this two-dimensional flow, mass conservation prescribes s₀l₀ = s_A(t)l(t). The effect of the shear flow can thus be quantified by a compression rate −(1/s_A)(ds_A/dt), which describes how fast the lamella transverse dimension thins down owing to its stretching by the advection field. Conversely, molecular diffusion tends to broaden the lamella with a diffusive broadening rate D₀/s_A², where D₀ is the molecular diffusion coefficient of the scalar. Balancing these two rates, −(1/s_A)(ds_A/dt) ∼ D₀/s_A², and assuming γ̇t ≫ 1, naturally defines a time scale, also called the Batchelor time, t_B ∼ Pe^{1/3}/γ̇, where Pe = γ̇s₀²/D₀ denotes the Péclet number. This time corresponds to the onset of the homogenization of the concentration levels within the system, beyond which the concentration levels within the lamella start to decay significantly.
Exact solution
The complete description of the evolution of the lamella of dye can be found by directly solving the full advection-diffusion equation
Figure 1. a) Schematic of a lamella of dye of initial length l₀ and half width s₀ advected in a laminar shear flow. b) Effect of the advection field alone: the strain γ̇t has stretched the lamella and thinned down its transverse dimension to 2s_A(t). c) Effect of both advection and molecular diffusion: at the same strain, the half width of the lamella is denoted s_AD(t). Inset: schematic of the Gaussian concentration field of the lamella with its concentration profiles along the flow, C(x, t), and transverse to the lamella, C(n, t).
∂C/∂t + u·∇C = D₀∇²C, (2.1)
where u = (γ̇y, 0, 0) denotes the advection field and C the concentration field. This equation can be simplified [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF][START_REF] Rhines | How rapidly is a passive scalar mixed within closed streamlines?[END_REF][START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF][START_REF] Meunier | How vortices mix[END_REF] if written in a moving frame (n, z) aligned with the directions of maximal compression and stretching of the lamella (see Figure 1.c), and by using Ranz's transform [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF], which uses warped time units, τ = ∫₀ᵗ D₀/s_A²(t′) dt′, and distances normalized by the lamella transverse dimension, ξ = n/s_A(t). Equation (2.1) then reduces to a simple diffusion equation
∂C/∂τ = ∂²C/∂ξ², (2.2)
whose solution for an initial Gaussian concentration profile with maximum C₀ is
C(ξ, τ) = C₀/√(1 + 4τ) exp(−ξ²/(1 + 4τ)). (2.3)
The maximum concentration of the lamella thus decays with time according to
C_max(t) = C₀/√(1 + 4τ). (2.4)
Since the half width of the lamella in Ranz's units is σ_ξ = (1 + 4τ)^{1/2} (see equation 2.3), the transverse dimension of the lamella s_AD(t), accounting for the effects of both advection and diffusion, and expressed in standard units of length, is
s_AD(t) = σ_ξ s_A(t) = s₀(1 + 4τ)^{1/2}/√(1 + (γ̇t)²). (2.5)
The latter expression gives access to the half width of the lamella along the flow (x-direction),
σ_x(t) = s_AD(t)/cos θ = σ₀(1 + 4τ)^{1/2}, (2.6)
with σ₀ = s₀ and cos θ = 1/√(1 + (γ̇t)²) (see Figure 1).
We now have access to a more accurate estimate of the Batchelor time, which by definition is reached when τ = 1 [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF]. Since τ = D₀(t + γ̇²t³/3)/s₀², the Batchelor time, for Pe ≫ 1, is
t_B ≈ (3Pe)^{1/3}/γ̇. (2.7)
At this time, the transverse dimension of the lamella, from now on referred to as the Batchelor scale [START_REF] Batchelor | Small-scale variation of convected quantities like temperature in a turbulent fluid. part 1. general discussion and the case of small conductivity[END_REF], is found to be
s_AD(t_B) = s₀√5 (3Pe)^{−1/3}. (2.8)
Last, the evolution of the concentration distribution P(C, t) of the lamella can be derived. Using the change of variables P(C)dC = P(x)dx yields P(C) = P(x)|dC/dx|⁻¹, which can easily be expressed from
C(x, t) = C_max(t) exp(−x²/(σ₀²(1 + 4τ))), (2.9)
and the uniformity of P(x). Considering the range of concentrations C_th ≤ C ≤ C_max(t), above any arbitrary threshold concentration C_th (larger than the experimental background noise), one obtains
P(C, t) = 1/(βC√(ln(C_max/C))), (2.10)
where β is a normalizing prefactor ensuring that ∫_{C_th}^{C_max(t)} P(C, t) dC = 1.
The above set of equations fully describes the evolution of a lamella of dye advected in a laminar shear flow.
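For reference, this set of predictions can be evaluated numerically in a few lines. The Python sketch below simply transcribes equations (2.4)-(2.8); the parameter values are those quoted in § 3 and are used here only for illustration.

```python
import numpy as np

def lamella_theory(t, gamma_dot, s0, D0, C0=1.0):
    """Evaluate the Gaussian-lamella predictions in a simple shear flow."""
    tau = D0 * (t + gamma_dot**2 * t**3 / 3.0) / s0**2     # warped time
    C_max = C0 / np.sqrt(1.0 + 4.0 * tau)                  # eq. (2.4)
    sigma_x = s0 * np.sqrt(1.0 + 4.0 * tau)                # eq. (2.6)
    s_AD = sigma_x / np.sqrt(1.0 + (gamma_dot * t)**2)     # eq. (2.5)
    return C_max, sigma_x, s_AD

gamma_dot, s0, D0 = 0.01, 25e-6, 2.66e-13
Pe = gamma_dot * s0**2 / D0
t_B = (3.0 * Pe)**(1.0 / 3.0) / gamma_dot                  # eq. (2.7)
s_B = s0 * np.sqrt(5.0) * (3.0 * Pe)**(-1.0 / 3.0)         # eq. (2.8)
t = np.logspace(-1, 2, 200) * t_B
C_max, sigma_x, s_AD = lamella_theory(t, gamma_dot, s0, D0)
```

The minimum of s_AD(t) is reached near t = t_B and is close to the Batchelor scale s_B, as discussed in § 4.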
Experimental set-up
The experimental set-up is shown in Figure 2. It consists of a shear-cell made of two parallel plexiglass plates sandwiching a fluid cell 85 mm long, 25 mm wide and of height h = 3 mm. The fluid is sheared by setting the two plates into opposite motion with two high-resolution translation stages (not shown on the schematic). The travel range of the plates enables a maximum strain of 30. The cell is sealed at both ends by two PTFE plates (grey in Figure 2), and on the sides by two transparent side walls (not shown on the schematic) mounted on the bottom moving plate.
The fluid is a Newtonian mixture of Triton X-100 (77.4 wt%), zinc chloride (13.4 wt%) and water (9.2 wt%) with density ρ = 1.19 g·cm⁻³ and viscosity η = 4.2 Pa·s, which ensures laminar flow conditions (over the full range of shear rates investigated, Re = ργ̇h²/η ≤ 10⁻³). Prior to filling the cell, fluorescent dye (Rhodamine 6G) is thoroughly mixed into the fluid at a concentration of 2 × 10⁻⁶ g·mL⁻¹. The molecular diffusion coefficient of this dye was measured in the cell with the fluid at rest using a technique similar to that reported in [START_REF] Souzy | Super-diffusion in sheared suspensions[END_REF]. We find D₀ ≈ 2.66 × 10⁻¹³ m²·s⁻¹ at a temperature of 22 °C.
The initial lamella is generated using fluorescence recovery after photo-bleaching [START_REF] Axelrod | Mobility measurement by analysis of fluorescence photobleaching recovery kinetics[END_REF]. A high-power laser-diode beam (Laser Quantum Gem512-2.5W) is shaped into a thin laser sheet using the combination of lenses set in their 'bleaching' configuration, as shown in Figure 2.a. This sheet, oriented in the yz plane (perpendicular to the flow) and used at full intensity, locally changes the conformation of the rhodamine molecules, which irreversibly become unable to fluoresce.
Then, the set-up is switched to its 'visualization' configuration by rotating the cylindrical lens by 90°: this orients the laser sheet along the xy plane (along the flow). The lamella then appears as a thin dark vertical line, see Figure 2.b. Note that the initial lamella is in fact a transverse sheet rather than just a line. This avoids diffusion in the z-direction and enables us to study a purely 2D process. This photo-bleaching technique, compared to a direct injection of fluorescent dye, also ensures well-controlled initial conditions: the lamella is uniform and its initial concentration profile is Gaussian owing to the Gaussian nature of the impacting laser beam.
Immediately after bleaching, the fluid is sheared at the desired shear rate and images of the evolution of the lamella are acquired using a camera (Basler Ace2000-50gm, 2048×1080 pixel², 12 bit) coupled to a high-resolution macroscope (Leica Z16 APO ×1). The set-up was designed to provide an image resolution (0.8 µm/pixel) small enough to resolve the Batchelor scale (equation 2.8). For instance, with γ̇ = 0.01 s⁻¹ and s₀ = 25 µm, we have s_AD(t_B) = s₀√5 (3γ̇s₀²/D₀)^{−1/3} ≈ 10 µm.
A high-pass filter (590 nm) is positioned between the sample and the camera to eliminate direct light reflections. To avoid photobleaching during the image acquisition, the intensity of the laser is lowered to 100 mW and the image acquisition is synchronized with a shutter opening only during acquisition times. Note that all experiments are performed at T = 22 ± 0.05 °C by setting the temperature of the water running through the bottom moving plate with a cryo-thermostat.
Experimental results
A single lamella
Figure 3.a shows successive pictures of a lamella undergoing a laminar shear (see also supplementary movie 1). Initially vertical and highly contrasted, the lamella progressively tilts under the effect of the shear flow while blurring under the effect of molecular diffusion. Accurate measurements of the lamella's concentration profile along the flow (x-direction) are obtained by averaging over all horizontal lines of pixels after translating these lines to make their maximum concentration coincide. The resulting average concentration profile of the lamella is shown in Figure 3.b for successive strains: the maximum concentration decays while the width increases. These trends are well captured by fitting each concentration profile with a Gaussian of the form C(x, t) = C_max(t) exp(−x²/σ_x²(t)) (see Figure 3.b).
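The profile extraction and fit can be summarised by the sketch below, where the image array, its normalisation and the initial guesses of the fit are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, C_max, sigma_x, x0):
    return C_max * np.exp(-((x - x0) / sigma_x)**2)

def fit_profile(image):
    """Align each pixel row on its maximum, average the rows,
    then fit the mean profile with a Gaussian."""
    shift = image.shape[1] // 2
    rows = [np.roll(row, shift - int(np.argmax(row))) for row in image]
    profile = np.mean(rows, axis=0)
    x = np.arange(profile.size, dtype=float)
    p0 = (profile.max(), 10.0, float(np.argmax(profile)))
    (C_max, sigma_x, _), _ = curve_fit(gaussian, x, profile, p0=p0)
    return C_max, abs(sigma_x)
```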
The resulting maximum concentration C_max(t) and width σ_x(t) are plotted in Figures 3.c and 3.d versus time for experiments performed at different Péclet numbers (4.5 ≤ Pe ≤ 1190). The Péclet number was varied by repeating the experiment at various shear rates, γ̇ = [6 × 10⁻⁴ – 0.3] s⁻¹. The agreement with equations 2.4 and 2.6 is very good for both C_max(t) and σ_x(t). Note that in both cases, γ̇, s₀ and D₀ are fixed by the experimental conditions; there is thus no adjustable parameter. When plotted as a function of the dimensionless time t/t_B, where t_B = (3Pe)^{1/3}/γ̇ is the Batchelor time, these data are found to collapse, for all Pe, on the same master curve, see Figures 3.e and f. For t < t_B, C_max and σ_x remain constant. Then, when the effect of molecular diffusion becomes significant, i.e. for t > t_B, C_max (respectively σ_x) starts to decrease (respectively increase) following the power law t^{−3/2} (respectively t^{3/2}), consistently with the long-time trends of equations 2.4 and 2.6. These measurements clearly illustrate how mixing is accelerated by imposing an external macroscopic shear: larger applied shear rates (larger Péclet numbers) result in earlier mixing times.
We have so far probed the lamella along the direction of the flow. However, further insight into the mixing process, specifically into the advection-diffusion coupling presented above, is provided by probing the lamella width along its transverse direction (along n, see Figure 1). Figure 4.a shows the evolution of s_AD(t) measured experimentally. At intermediate times, the thickness of the lamella is found to decrease like t⁻¹. After reaching a minimum, it increases like t^{1/2}. These trends precisely illustrate the expected interplay between advection and diffusion. The lamella width initially decreases as imposed by the kinematics of the flow, following the intermediate-time trend (for t < t_B) of equation 2.5, s_AD(t) ∼ s₀(γ̇t)⁻¹. However, this compression of the lamella progressively steepens its concentration gradients, which, beyond the Batchelor time, eventually makes the broadening effect of molecular diffusion dominant. The transverse dimension of the lamella then re-increases diffusively like t^{1/2}. At the Batchelor time t_B, the lamella typically reaches its minimum thickness, which is equal to the Batchelor scale s_AD(t_B) (within 3%). As shown in Figure 4.b, this direct measurement of the Batchelor scale obtained for various Péclet numbers matches the expected prediction s_AD(t_B) = s₀√5 (3Pe)^{−1/3}, see equation 2.8.

To fully describe the mixing process, we also measure the evolution of the lamella's concentration distribution P(C, t). The distribution P(C) is obtained from the histogram of intensities collected from all the pixels constituting the lamella which are above a threshold C_th = 0.1C₀. This discards the background image noise, enabling us to focus on the peak of interest, that of high concentration. Changing the value of the threshold above the background noise changes the extent of the spatial domain over which P(C) is computed, but it does not affect its shape. We obtain concentration distributions which have a characteristic U-shape [START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF], see Figure 4.c. The distribution's maximum probability, initially located at C = C₀, progressively drifts to lower values as the lamella is stretched and diffuses. The prediction provided by equation 2.10 is in good agreement with the measured concentration distributions obtained for all Péclet numbers and, again, without adjustable parameter.
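A minimal transcription of this measurement, together with the numerically normalised prediction of equation 2.10, could read as follows; the variable names and the binning are illustrative choices.

```python
import numpy as np

def concentration_pdf(image, C0, C_th=0.1, bins=100):
    """P(C/C0) from all pixels above the threshold C_th*C0."""
    c = image.ravel() / C0
    c = c[c > C_th]
    pdf, edges = np.histogram(c, bins=bins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), pdf

def pdf_theory(C, C_max, C_th):
    """Equation (2.10), with beta computed numerically on [C_th, C_max]."""
    grid = np.linspace(C_th, 0.999 * C_max, 2000)
    beta = np.trapz(1.0 / (grid * np.sqrt(np.log(C_max / grid))), grid)
    return 1.0 / (beta * C * np.sqrt(np.log(C_max / C)))
```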
The distributions shown above are well resolved since the Batchelor scale is always larger than 10 pixels. However, in most studies on mixing, such a high resolution is not achieved. Moreover, when mixing is investigated in complex systems such as turbulent flows [START_REF] Villermaux | Coarse grained scale of turbulent mixtures Physical review letters[END_REF], porous media (Le [START_REF] Borgne | The lamellar description of mixing in porous media[END_REF]) or, recently, sheared particulate suspensions [START_REF] Souzy | Stretching and mixing in sheared particulate suspensions[END_REF], the initial lamella of dye does not evolve towards a single lamella with uniform thickness but instead towards a large number of lamellae having widely distributed thicknesses. In such situations, it is important to know which part of the distribution of lamellae can be resolved experimentally.
To address this point, we systematically investigate how the set-up spatial resolution can bias the measured concentration distribution of a single lamella. This can be achieved by a simple coarsening protocol: the reference concentration distribution (blue dotted line in Figure 5) is obtained from a highly resolved image where the lamella half width spans 8 pixels. When coarsening the pixels of this reference image (merging 2×2 pixels into one, 4×4 pixels into one and so on), we find that the concentration distribution remains unchanged for images having 4 and 2 pixels across the lamella half width (green dotted lines in Figure 5). Conversely, larger coarsening, i.e. images having 1 pixel or less across the lamella half width, yields erroneous results: the concentration distribution departs from the reference one (red dotted lines in Figure 5). The limit of 1 pixel per lamella half width is surprisingly small and comes as encouraging news for experimentalists. This limit holds as long as the concentration profile is smooth (e.g. Gaussian) and there is a finite tilt between the lamella and the lines of pixels. In that case, the progressive drift of the lamella position relative to that of the pixels scans all the possible concentration levels, thereby providing results consistent with the fully resolved PDF.
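The coarsening operation amounts to block-averaging the image; a sketch of the protocol is given below (the cropping handles image dimensions that are not multiples of the block size, and the PDF routine of the previous sketch is reused).

```python
import numpy as np

def coarsen(image, k):
    """Merge k x k blocks of pixels into one by averaging."""
    ny, nx = (image.shape[0] // k) * k, (image.shape[1] // k) * k
    blocks = image[:ny, :nx].reshape(ny // k, k, nx // k, k)
    return blocks.mean(axis=(1, 3))

# e.g. compare P(C) of the reference image and its coarsened versions:
# for k in (1, 2, 4, 8, 16):
#     C_bins, pdf = concentration_pdf(coarsen(img, k), C0)
```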
Two Lamellae
As mentioned in the introduction, mixing in complex systems generally involves coalescence between dispersed lamellae. This latter step is crucial for the system to reach its final homogeneity. Here, the photo-bleaching technique is used to produce two adjacent lamellae and investigate a single coalescence event. Figure 6.a shows images of two parallel lamellae captured at successive strains (see also supplementary movie 2). The two lamellae are initially distinct from each other and, as they are stretched by the flow and diffuse, they progressively merge to eventually form a single entity. The evolution of their concentration profiles measured along the flow x-direction is shown in Figure 6.b. The two maxima of concentration, corresponding to each lamella, decay (blue arrows) while the minimum located in between increases (green arrow). This goes on until the two lamellae merge into one single lamella whose maximum subsequently decreases (yellow arrow). This evolution can easily be predicted owing to the linearity of the diffusion equation: the concentration profile of a set of two lamellae, with individual profiles C₁(x, t) and C₂(x, t), is simply obtained from the summation C(x, t) = C₁(x, t) + C₂(x, t) [START_REF] Fourier | Théorie Analytique de la Chaleur[END_REF]. Using, for each lamella, equation 2.9 with the initial experimental maximum intensity and width, we thereby obtain the concentration profiles shown in Figure 6.d.
To anticipate the impact of this coalescence on the evolution of the concentration distribution, let us recall that P(C) ∼ |dC/dx|⁻¹. Thus, each value of C(x, t) where the concentration profile presents a horizontal slope gives rise to a peak in the concentration distribution. This can be clearly observed on the experimental concentration distributions shown in Figure 6.c. As long as the two lamellae are distinct, one observes not only two peaks at large concentrations (corresponding to the maximum concentration of each lamella), but also one peak at small concentration (corresponding to the minimum concentration located in between the two lamellae). Then, as the lamellae are stretched and diffuse, the two peaks corresponding to the two concentration maxima move to the left (blue arrow in Figure 6.c). Conversely, as the concentration in between the two lamellae increases, the small-concentration peak moves to the right (green arrow in Figure 6.c). Coalescence occurs when the three peaks collide; the distribution eventually recovers its characteristic U-shape once the lamellae have coalesced into a single entity. The same phenomenology can be observed on the concentration distributions obtained by numerically computing the slope of the predicted concentration profiles, see Figure 6.e.
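These predicted profiles and distributions can be generated with a short script: the sketch below sums two Gaussian lamellae (equation 2.9) and builds P(C) from uniform sampling in x, exploiting P(C) ∼ |dC/dx|⁻¹; all parameter values are illustrative.

```python
import numpy as np

def two_lamellae_profile(x, t, lamellae, gamma_dot, D0):
    """Sum of Gaussian lamellae (linearity of the diffusion equation).
    Each lamella is given as (C0, s0, x_center); equation (2.9) per lamella."""
    C = np.zeros_like(x)
    for C0, s0, xc in lamellae:
        tau = D0 * (t + gamma_dot**2 * t**3 / 3.0) / s0**2
        C += C0 / np.sqrt(1 + 4 * tau) \
             * np.exp(-((x - xc)**2) / (s0**2 * (1 + 4 * tau)))
    return C

x = np.linspace(-5e-4, 5e-4, 200001)
C = two_lamellae_profile(x, t=500.0,
                         lamellae=[(1.0, 25e-6, -5e-5), (1.0, 25e-6, 5e-5)],
                         gamma_dot=0.01, D0=2.66e-13)
# uniform sampling in x => histogram of C approximates P(C) ~ |dC/dx|^-1
pdf, edges = np.histogram(C[C > 0.1], bins=100, density=True)
```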
Discussion and conclusions
We have explored experimentally the basic mechanisms of mixing in stirred flows by thoroughly investigating the evolution of an initially segregated scalar (a lamella of fluorescent dye) stretched within a laminar shear flow. A high-resolution set-up, using a photo-bleaching technique to generate and control the shape of the initial lamella, was built in order to resolve the length scales at which diffusion plays a significant role. Our measurements of the evolution of the lamella concentration profiles and concentration distributions are, without adjustable parameter, in excellent quantitative agreement with the theoretical predictions for Pe ∈ [4.5, 1190].
We also investigated the evolution of the lamella's transverse dimension, which conspicuously illustrates the advection-diffusion coupling yielding, at intermediate times, a t⁻¹ compression of the lamella dominated by the kinematics of the flow, followed, after the Batchelor time, by a t^{1/2} broadening dominated by molecular diffusion. The Batchelor scale, which to our knowledge was experimentally observed only once [START_REF] Meunier | Transport and diffusion around a homoclinic Point[END_REF], was here measured systematically for various Péclet numbers and found to follow the expected behavior, s_AD(t_B)/s₀ = √5 (3Pe)^{−1/3}.

Most importantly, through a coarsening protocol, we determined the minimal experimental spatial resolution required to resolve the concentration distribution of a single lamella: its half width must be larger than 1 pixel. This requirement is general and constrains the measurement of any mixing protocol. Indeed, for all stretching protocols, lamellae reach their minimum thickness at the Batchelor time while they are still isolated individual entities. Resolving P(C) at this time, which requires resolving each individual lamella, is therefore the most demanding in terms of spatial resolution: the half width of the lamella at the Batchelor time, s_AD(t_B), shall at least span 1 pixel. Note that measuring P(C) at longer times can be less stringent. Indeed, beyond the Batchelor time, lamellae diffuse sufficiently that, when their spacing becomes sufficiently small, nearby lamellae start merging together. The concentration gradients then no longer vary over the lamellae transverse dimension but instead over a larger length scale which reflects the size of the bundles of merging lamellae. In the context of turbulent mixtures for instance, this 'coarse-grained scale' was proposed to follow η = L Sc^{−2/5}, where L denotes the stirring length and Sc the Schmidt number [START_REF] Villermaux | Coarse grained scale of turbulent mixtures Physical review letters[END_REF]. However, the latter length scale is only relevant after lamellae have merged, which occurs after the Batchelor time. Therefore, resolving P(C) at all times requires an experimental spatial resolution satisfying s_AD(t_B) > 1 pixel.
Finally, we have investigated the coalescence between two lamellae and its impact on the evolution of the concentration distribution. The overlap of the lamellae gives rise to a non-trivial peak at low concentration. This observation is important as it may be relevant to the interpretation of more complex situations.
To conclude, the high-resolution experimental techniques developed for the present study, and the determination of their limitations, open promising perspectives for future studies on mixing.
Figure 2. (color version online) a) Schematic of the set-up. b) Typical image of a lamella obtained by photo-bleaching.
Figure 3. (color version online) a) Successive images of a lamella undergoing shear (see also supplementary movie 1). b) Corresponding averaged concentration profiles along the flow (x-direction). The black lines are fitting Gaussian profiles. c) Normalized maximum concentration C_max/C₀ and d) normalized half width of the concentration profiles σ_x/σ₀ versus time for experiments performed at different Péclet numbers. The black lines correspond to equation 2.4 in c) and equation 2.6 in d). In both cases, γ̇, s₀ and D₀ are set and fixed by the experimental conditions. e) and f) Same data plotted versus t/t_B.
Figure 4. (color version online) a) Evolution of the transverse dimension of the lamella s_AD(t)/s₀ versus time for experiments performed at different Péclet numbers. The black lines correspond to equation 2.5. The dotted line corresponds to the solution in the absence of shear, i.e. in the pure diffusion limit (Pe = 0). b) Corresponding Batchelor scale s_AD(t_B)/s₀ versus Péclet number; the black line corresponds to equation 2.8. c) Concentration distribution P(C/C₀) measured at successive strains, γ̇t, for a lamella sheared at Pe = 4.5. The black lines correspond to equation 2.10. In all cases, γ̇, s₀ and D₀ are fixed by the experimental conditions.
Figure 5. (color version online) Distributions of concentration P(C/C_max) obtained from the progressive digital coarsening of an experimental image.
Figure 6. (color version online) a) Coalescence of two nearby lamellae advected in a laminar shear flow (see also supplementary movie 2). Evolution of the corresponding experimental b) concentration profiles and c) concentration distributions obtained at successive strains γ̇t. Predictions for the evolution of d) the concentration profiles and e) the concentration distributions at the same strains. Again, γ̇, s₀ and D₀ are fixed by the experimental conditions.
We thank E. Villermaux and P. Meunier for their inspiring comments and suggestions. We would also like to thank Sady Noel for machining the experimental set-up. This work was supported by ANR JCJC SIMI 9, by the Labex MEC ANR-10-LABX-0092, by ANR-11-IDEX-0001-02 and by ERC consolidator grant ReactiveFronts 648377.
M Souzy
H Lhuissier
E Villermaux
B Metzger
Stretching and mixing in sheared particulate suspensions
Introduction
Sheared particulate suspensions represent a quasi-unique system where efficient dispersion spontaneously occurs even under low Reynolds number flow conditions. For instance, the transfer of heat [START_REF] Sohn | Heat Transfer enhancement in laminar slurry pipe flows with power law thermal conductivities[END_REF] (Metzger 2013) or mass [START_REF] Wang | Hydrodynamic diffusion and mass transfer across a sheared suspension of neutrally buoyant spheres[END_REF][START_REF] Wang | Augmented Transport of Extracellular Solutes in Concentrated Erythrocyte Suspensions in Couette Flow[END_REF][START_REF] Souzy | Super-diffusion in sheared suspensions[END_REF] across a suspension of non-Brownian particles is significantly enhanced when the suspension is submitted to a macroscopic shear. This would not happen in a pure Newtonian fluid, where the laminar streamlines remain perpendicular to the scalar (heat or concentration) gradients. In a sheared suspension, the macroscopic stationary imposed shear results at the particle scale in an unstationary flow: particles constantly collide with one another, change streamlines and thus generate disturbances within the fluid which promote the dispersion of the scalar, a prelude to its subsequent mixing. Two mechanisms have been identified to explain the origin of the transfer enhancement. First, the particle translational shear-induced diffusivity, a phenomenon which has been widely investigated over the last decades [START_REF] Eckstein | Self-diffusion of particles in shear flow of a suspension[END_REF][START_REF] Arp | The Kinetics of Flowing Dispersions IX. Doublets of Rigid Spheres (Experimental)[END_REF][START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF][START_REF] Breedveld | Measurement of the full shear-induced self-diffusion tensor of noncolloidal suspensions[END_REF][START_REF] Sierou | Shear-induced self-diffusion in non-colloidal suspensions[END_REF] (Metzger 2013). Second, the particle rotation, whose impact is particularly important at the boundaries, where particles disrupt the diffusive boundary layer by a 'rolling-coating' effect [START_REF] Souzy | Super-diffusion in sheared suspensions[END_REF]. These studies mainly focused on the rate of transfer across sheared suspensions, which is customarily characterized by an effective diffusion coefficient much larger than the scalar molecular diffusivity. Another aspect of transport enhancement concerns the mixing properties of the system, namely its ability, starting from a given spatial scalar distribution, to reach homogeneity. Figure 1 shows how a blob of dye with initial size s₀ diffuses while it is deformed by the complex flow in the interstitial fluid of a suspension. The important question which naturally arises is to understand how this initially segregated system reaches homogeneity, and particularly how long this process takes. By essence, it involves both advection by the flow and molecular diffusion of the scalar. Such a problem has been studied in a wide range of situations involving a single fluid phase such as shear flows [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF], vortex flows (Meunier 2003), turbulent jets [START_REF] Duplat | A nonsequential turbulent mixing process[END_REF], or flows in porous media (Le [START_REF] Borgne | The lamellar description of mixing in porous media[END_REF]).
These studies all underline the crucial importance of the rate at which fluid material lines are elongated by the flow [START_REF] Villermaux | On dissipation in stirred mixtures[END_REF]. The knowledge of these 'stretching laws' makes it possible to estimate the mixing time: the time when the scalar concentration fluctuations start to significantly decay [START_REF] Batchelor | Small-scale variation of convected quantities like temperature in a turbulent fluid. part 1. general discussion and the case of small conductivity[END_REF]. For instance, in a simple shear flow with rate γ, material lines grow as γt. In the limit of large Péclet number Pe = γs₀²/D, the mixing time for a scalar blob of initial size s₀ is t_mix ∼ γ⁻¹Pe^{1/3}, where D denotes the molecular diffusivity of the dye. In chaotic flows, where the stretching rate is maintained, material lines stretch exponentially, as e^{γt}, and t_mix ∼ γ⁻¹ ln Pe.
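To make the contrast between the two scalings concrete, the following minimal Python sketch evaluates both estimates for the orders of magnitude of the present experiment; all O(1) prefactors are set to one, so the outputs are indicative only.

import numpy as np

def t_mix_simple_shear(gamma_dot, Pe):
    # Material lines grow ~ gamma_dot*t, giving t_mix ~ gamma_dot**-1 * Pe**(1/3).
    return Pe**(1.0/3.0) / gamma_dot

def t_mix_chaotic(gamma_dot, Pe, kappa=1.0):
    # Exponential stretching exp(kappa*gamma_dot*t) gives
    # t_mix ~ gamma_dot**-1 * ln(Pe), up to O(1) factors involving kappa.
    return np.log(Pe) / (kappa * gamma_dot)

gamma_dot, Pe = 0.15, 1.0e6          # orders of magnitude of the experiment
print(gamma_dot * t_mix_simple_shear(gamma_dot, Pe))  # strain ~ 100
print(gamma_dot * t_mix_chaotic(gamma_dot, Pe))       # strain ~ 14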
In spite of their crucial importance for mixing issues, stretching laws in particulate suspensions have never been studied experimentally, nor has the general question of the mixing time in such a system been addressed. Stretching in particulate suspensions has been addressed indirectly, using numerical simulations, through the measurement of the suspension's largest Lyapunov exponent [START_REF] Dasan | Stress fluctuations in sheared Stokesian suspensions[END_REF][START_REF] Drazer | Deterministic and stochastic behaviour of non-Brownian spheres in sheared suspensions[END_REF], Metzger 2012, Metzger 2013). In such a chaotic system, the mean stretching rate of fluid elements can be assimilated to the largest Lyapunov exponent. The reported positive Lyapunov exponents indicate that the stretching laws must be exponential. Stretching has also been explored theoretically, with the motivation of understanding the rheology of such systems when the suspending fluid is viscoelastic. It was shown that the expected exponential stretching of the polymers should affect the pressure drop in fixed beds of spheres or fibres [START_REF] Shaqfeh | Polymer stretch in dilute fixed beds of fibres or spheres[END_REF] or the viscosity of freely suspended fibres in a simple shear flow [START_REF] Harlen | Simple shear flow of a suspension of fibres in a dilute polymer solution at high Deborah number[END_REF].
In this paper, we specifically address the question of the stretching kinematics by performing experiments on non-Brownian spherical particles suspended in a viscous Newtonian fluid that is steadily and uniformly sheared. In this limit, the flow kinematics is independent of both the shear rate γ and the molecular diffusivity. The sole parameter expected to affect the stretching process is the particulate volume fraction φ. We investigate the stretching laws in particulate suspensions varying the volume fraction over the wide range 20% ≤ φ ≤ 55%, for which collective effects between particles are present but the suspension still flows easily, since it is still far from jamming. After presenting the experimental set-up in § 2, we first compare the evolution of a blob of dye sheared in a pure fluid (without particles) to that of a blob sheared in a suspension (§ 3). This experiment illustrates the complexity of the advection field induced by the presence of the particles. Then, following the Diffusive Strip Method of Meunier 2010, accurate velocity field measurements of the fluid phase (§ 2.3) are used to determine the stretching laws. Material lines are found to stretch, on average, exponentially with time (§ 4), at a rate which agrees with the largest Lyapunov exponents reported in 3D Stokesian dynamics simulations [START_REF] Drazer | Deterministic and stochastic behaviour of non-Brownian spheres in sheared suspensions[END_REF][START_REF] Dasan | Stress fluctuations in sheared Stokesian suspensions[END_REF]. Beyond the mean, we tackle the complete statistics of stretching, that is to say, the distributions of elongation as a function of strain and particle volume fraction, which are found to converge towards log-normal distributions. In § 5, we present a model, based on a multiplicative stretching process, which explains quantitatively the experimental distributions of the material line elongation and their dependence on γt and φ. Finally, the crucial implications of these findings for scalar mixing are developed and discussed in § 6, before we conclude in § 7.
Experimental set-up
The experimental set-up is shown in figure 2. It aims at steadily and uniformly shearing a viscous particulate suspension, injecting a small blob of dyed fluid, and observing both the flow and the mixing of the dye. The set-up consists of a transparent cell in which a transparent mylar belt is tightly mounted at the top of the cell on two cylinders and at the bottom on two ball bearings. One cylinder is entrained by a rotating stage (M-061.PD from PI Piezo-Nano Positioning) with high angular resolution (3 × 10⁻⁵ rad). The motion of the belt generates in its central region a linear shear flow. The suspension is allowed to flow below the cylinders and a constant spacing between the belt and the inner wall of the cell is maintained all around the cell. This specific design, which is an evolution of that used in Metzger 2012, minimizes secondary flows and ensures a velocity profile with constant shear rate within the belt.
Particles and liquid
The particles and the liquid are carefully chosen to allow the visualization of the dye and of the flow inside the suspension, as well as to ensure a purely viscous flow without buoyancy effects. This requires using a transparent medium, matching both the density and the refractive index of the particles, and using a fairly viscous liquid.
To fulfill the above requirements, we use mono-disperse spherical particles (PMMA from Engineering Laboratories Inc.) with density ρ = 1.18 g cm⁻³ and diameter d = 2 mm, especially chosen for their smooth surface and good transparency. The liquid is a Newtonian mixture of Triton X-100 (77.4 wt%), zinc chloride (13.4 wt%) and water (9.2 wt%), with viscosity η = 3 Pa s and the same density as the particles at room temperature. Its composition is optimized to match both the refractive index and the density of the particles. A small amount of hydrochloric acid (≈ 0.05 wt%) is added to the solution to prevent the formation of zinc hypochlorite precipitate, thereby significantly improving the optical transparency of the solution. Last, to finely tune the index matching between the particles and the liquid, the temperature of the set-up is adjusted with a water bath surrounding the shear cell.
The solid volume fraction φ of the suspension is varied between 20 and 55%. To ensure that inertial effects are negligible, the shear rate γ is set to typically 0.15 s⁻¹, which corresponds to a particulate Reynolds number ργd²/η ∼ 10⁻⁴.
Imaging
The suspension is observed in the flow-gradient plane (xy plane): a slice of suspension is illuminated by a laser sheet across the transparent belt and imaged from the top (see figure 2).
The laser sheet is formed by reflecting a laser beam (2 W, 532 nm) on a standard laser-printer mirror (rotating at ∼ 10000 rpm). This technique was found to produce a light sheet with a better spatial homogeneity than that obtained with classical cylindrical or Powell lens techniques. The sheet is collimated and focused to a thickness of ∼ 60 µm with the help of two perpendicular plano-convex lenses. Last, a high-pass filter (590 nm) eliminates direct light reflections. The suspension is imaged with a high-resolution camera (Basler Ace2000-50gm, 2048 × 1080 pixels, 12 bit) coupled to a high-quality magnification lens (Sigma APO-Macro-180 mm-F3.5-DG). To prevent the particles from distorting the free surface of the suspension, through which the visualization is realized, a small plexiglass window is positioned on the free surface, above the region of interest, which locally ensures a flat interface. The window has a small hole allowing the injection of a blob of dyed fluid with a syringe.
Velocity field measurements
The velocity field in the suspending liquid is measured in the plane of the laser sheet (xy), at half-distance between the bottom and the free surface (see figure 3), performing particle image velocimetry (PIV), which yields the two-dimensional velocity field {u, v}; note that this in-plane field does not necessarily satisfy incompressibility. To perform PIV, the liquid is seeded with small passive fluorescent tracers (3.23 µm PMMA B-particles from MF-Rhodamine) at a very low volume fraction (∼ 10⁻⁵ φ). These small and dilute tracers do not affect the flow but allow its visualization and quantification, as shown in figure 3 and Movie 1. The large (2 mm) particles of the suspension do not interact with the laser sheet and appear as black discs. Note that all the particles have the same size; the apparent size differences arise from their different vertical positions relative to the laser sheet plane. The PIV routine is adapted from a Matlab code developed by Meunier 2003. Images are captured every 0.1 s, which corresponds to a strain increment of 0.015. To perform PIV, the images are divided into equally spaced and overlapping sub-images with a typical size of d/20 (32 pixels). The local velocity field is computed by cross-correlating successive sub-images. The presence of a particle in a sub-image is detected with the help of two filters (on the maximum of correlation and on the standard deviation of the sub-images), in which case the corresponding velocity vector is not used (see figure 3b). For each volume fraction, three independent runs over a strain of 20 are performed.
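For readers wishing to reproduce this processing chain, a minimal Python sketch of the cross-correlation step could read as follows; the rejection thresholds are illustrative placeholders, not the values used in the actual Matlab routine.

import numpy as np

def piv_vector(win_a, win_b):
    # FFT-based circular cross-correlation of two sub-images; the peak location
    # gives the mean tracer displacement (in pixels) between the two frames.
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    corr = np.fft.fftshift(corr)
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = iy - corr.shape[0]//2, ix - corr.shape[1]//2
    # Illustrative versions of the two rejection filters (correlation peak and
    # sub-image standard deviation), flagging windows covered by a particle:
    valid = corr.max() > 0.5*np.abs(corr).sum()/corr.size and win_a.std() > 1e-3
    return dx, dy, valid

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(piv_vector(img, np.roll(img, (3, -2), axis=(0, 1))))   # (-2, 3, True)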
The independence of the measured velocity field on the PIV sub-image size was verified by decreasing the latter to ∼ d/40 (16 pixels). Besides increasing the data noise, no significant effect was found on the measured velocities.
Molecular diffusivity measurements
The molecular diffusion coefficient of the dye (rhodamine 6G) is measured by observing the spreading, in the absence of flow, of a slice of liquid depleted in dye. A small Hele-Shaw cell (100 µm thick) is filled with dye-doped suspending liquid without particles. A thin slice of liquid is initially depleted in dye by bleaching the dye with a high-power laser sheet across the cell (see figure 4). The depleted slice appears as a dark line with a gaussian profile, which diffuses with the diffusion coefficient of the dye. The spatial variance χ² of the gaussian profile is measured over one day, and the diffusivity is determined from D = [χ²(t) − χ²(0)]/2t ≈ 1.44 × 10⁻¹³ m² s⁻¹. This value is consistent with that of 4.14 × 10⁻¹⁰ m² s⁻¹ found by [START_REF] Culbertson | Diffusion coefficient measurements in microfluidic devices[END_REF] for the diffusivity of the same dye in water, given that water is 3000 times less viscous than the suspending liquid and that, according to the Einstein-Sutherland law, D ∝ 1/η.
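A minimal sketch of this linear fit, with synthetic variance data standing in for the measured profiles, is given below; the four times match those of figure 4, but the χ² values are illustrative.

import numpy as np

# Linear fit of the measured variance growth chi2(t) = chi2(0) + 2*D*t.
t    = np.array([0.0, 4800.0, 21600.0, 64800.0])        # s
chi2 = 1.0e-8 + 2.0 * 1.44e-13 * t                      # m^2 (demo data)
D = np.polyfit(t, chi2, 1)[0] / 2.0
print(D)    # 1.44e-13 m^2/s, the value quoted in the text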
General observations
To illustrate the influence of particles on mixing in a shear flow, we first compare the evolution of a blob of dye sheared in a pure fluid (without particles) to that of a blob sheared in the suspension of particles, see Movie 2. A cylindrical blob of dyed fluid is injected, at rest and at t = 0, in the middle of the shear cell. Initially, the blob has a diameter s₀ ≈ 2 mm, is aligned with the vorticity direction and is centered on the neutral velocity plane. This results in a macroscopically two-dimensional initial configuration, and ensures that the blob does not drift with the flow but only deforms. Figure 5 shows, for a Péclet number Pe ≈ 10⁶, how mixing proceeds in the two sheared media, from the initial segregated state up to a strain γt = 20. In the pure liquid, the blob of dye stretches homogeneously. Its length increases linearly with time and the blob transverse dimension thus decreases as 1/t. In the suspension, the situation is markedly different: the fluctuations induced by the particles in the fluid phase strongly impact the evolution of the blob. Several conspicuous features deserve to be highlighted: i) the dispersion and the unfolded length of the blob are significantly enhanced by the particles; ii) both the translational diffusivity (transverse undulations of the blob, see figure 5b) and the rotation (blob winding around particles, see figure 5c) of the particles contribute to these enhancements; iii) the blob stretching is highly inhomogeneous: at some locations, its transverse dimension becomes much thinner, and at others larger, than in the pure fluid case, revealing regions of enhanced stretching and regions of compression; iv) at large strains (figure 5d), the blob has separated into several filaments, which means that some regions of the blob have already mixed, while in the pure liquid (without particles) mixing has not occurred yet; v) in some regions, the blob evolves into bundles composed of several nearly overlapping filaments [START_REF] Duplat | Mixing by random stirring in confined mixtures[END_REF]. This suggests an underlying stretching/folding mechanism similar to the well-known baker's transform [START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF].
The above features are generic to the flow of a viscous suspension at large Péclet number. Since inertial effects are negligible, these features are independent of the rate γ at which the suspension is sheared. Similarly, the value of the Péclet number does not influence the general stretching pattern of the blob, but only prescribes the strain γt at which diffusion starts to become effective.
This direct comparison clearly illustrates how the liquid velocity fluctuations generated by the particles dramatically accelerate the blob deformation and dispersion. This acceleration is apparent here from the beginning of the shear, because the blob size s₀ is similar to the particle size d. It is however crucial to realize that the strain at which this acceleration sets in is expected to depend on the ratio s₀/d. If the initial size of the blob s₀ is larger than d, the blob is essentially not stretched by the particle fluctuation motions. It is thus essentially stretched by the linear macroscopic shear until the blob transverse size has thinned down to d, after a typical strain s₀/d. From that strain on, the particulate fluctuations are expected to contribute directly to the blob stretching.
In stirred flows, such as the case considered here, mixing results from the coupling between advection and molecular diffusion. In the experiment described above, the blob of dye is stretched by the local velocity field: the blob is stretched along its own longitudinal direction and conversely compressed along its transverse direction. The blob thus evolves towards a topology constituted of sheets, or filaments [START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF][START_REF] Buch | Experimental study of the fine-scale structure of conserved scalar mixing in turbulent shear flows[END_REF]. Conversely to the effect of advection, molecular diffusion tends to broaden the filaments. This diffusive broadening will at some point counter-balance the rate of compression of the blob caused by the advection. As we already mentioned, this naturally sets a time scale called the mixing time, t_mix, beyond which the concentration levels drop significantly. The mixing time, a key element to understand the overall mixing process, can be estimated from the sole knowledge of the dye molecular diffusion coefficient and from the history of the transverse dimension of the blob. If one assumes that the flow is two-dimensional (this assumption is discussed in § 5.3), incompressibility and mass conservation relate at any time the transverse size of the blob to its length l through s₀l₀ = s(t)l(t). The mixing time can therefore be estimated from the characterization of the evolution of l(t). Our goal in the following is thus to determine the so-called 'stretching laws', i.e., the time dependence of l in sheared particulate suspensions.
Experimental stretching laws
Our first attempt to measure the unfolded length of the blob l(t) was naturally to perform direct image analysis on images such as those shown in Figure 5. However, the intrinsic dispersion process rapidly distorts the blob into bundles of very close (sometimes merging) filaments, which renders image analysis ineffective above strains of typically 5.
To overcome these limitations, we adopted a different approach, inspired from the Diffusive Strip Method. This method happens to be a very powerful experimental tool allowing the determination of the stretching laws over unlimited strains. The key idea is to use the experimental fluid velocity field to numerically advect passive material lines representing portions of the blob. The lines are initially composed of three passive tracers separated by a distance d/20, which discretize a fluid material line. The lines are randomly located in the two-dimensional velocity field with a random orientation (see figure 6a). Each tracer, with coordinate x, is advected independently of the others according to the local fluid velocity v(x) (obtained by linear interpolation of the instantaneous PIV velocity field) as x(t + ∆t) = x(t) + v(x)∆t, where ∆t is the time between consecutive measurements of the velocity field. As a material line is advected, it is refined by adding more tracers when its length increases or when its local curvature becomes too large (see Meunier 2010 for a detailed description of the refinement procedure).
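A bare-bones Python version of this advection-refinement loop is sketched below, with a plain shear standing in for the measured PIV frames; with this synthetic field the elongation grows linearly (∼ γt), whereas feeding in the measured suspension fields produces the exponential growth reported below. All names and values are illustrative.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic stand-in for one PIV frame (plain shear u = y, v = 0); in practice
# U and V are the measured interstitial velocity grids of a given frame.
x = np.linspace(-1.0, 1.0, 64)
y = np.linspace(-1.0, 1.0, 64)
U = np.tile(y[:, None], (1, x.size))        # u(y, x) = y
V = np.zeros_like(U)
u_itp = RegularGridInterpolator((y, x), U)
v_itp = RegularGridInterpolator((y, x), V)

def advect(pts, dt):
    # Forward-Euler step x(t + dt) = x(t) + v(x) dt; pts holds (y, x) pairs.
    return pts + dt * np.column_stack([v_itp(pts), u_itp(pts)])

def refine(pts, dmax=0.05):
    # Insert a midpoint wherever two consecutive tracers separate too much.
    out = [pts[0]]
    for p, q in zip(pts[:-1], pts[1:]):
        if np.linalg.norm(q - p) > dmax:
            out.append(0.5*(p + q))
        out.append(q)
    return np.array(out)

line = np.column_stack([np.linspace(0.0, 0.1, 3), np.zeros(3)])   # 3 tracers
for _ in range(80):                          # 80 steps of dt = 0.1: strain 8
    line = refine(advect(line, dt=0.1))
length = np.sum(np.linalg.norm(np.diff(line, axis=0), axis=1))
print(length / 0.1)    # ~8: linear growth in this particle-free shear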
Figure 6 shows the evolution of two material lines up to a strain of 15, see also Movie 3. The red line successively stretches and folds very similarly to what is observed in the blob experiments (figure 5). Interestingly, the blue line behaves very differently. Although it sustains the same macroscopic strain as the red one, it experiences a much softer stretching, only because it started from a different initial location. These different stretching histories reveal the stochastic nature of the stretching induced within particulate suspensions. The stretching laws therefore have to be sought in a statistical sense, by repeating the advection procedure over a large number of independent material lines. However, as the material lines lengthen, they may reach the boundaries of the measured velocity field, which limits the maximum strain that can practically be investigated (typically γt < 10). This problem is easily circumvented by realizing that, as far as the stretching laws are concerned, the object of interest is not the material line as a whole but rather the small segments which compose this line and which all stretch differently from each other. We thus perform a new set of calculations focusing on segments: i) initial segments (composed of two tracers) with length d/20 are positioned and oriented randomly in the flow; ii) each time the length of a segment doubles, it is split into two individual segments that are subsequently advected independently; iii) if a segment reaches the boundary of the velocity field, it is re-injected near the center of the velocity field; iv) when a segment overlaps with a particle, where the velocity field is undefined (as can happen due to the finite time ∆t), it is frozen until the particle has moved away. Owing to these rules, virtually unlimited strains can be considered, and the stretching history of each segment that has been created over this strain can be determined. We define the elongation of these segments as the ratio
ρ(t) ≡ δl(t)/δl₀    (4.1)
of their current length δl(t) to their initial length δl₀, where δl₀ = (d/20)/2ⁿ, with n the number of times the sub-segment was split in two. Note that, to compute the distributions of elongations we present below, the contribution of each segment is weighted by its initial length. Note also that times during which a segment is frozen are not counted. The distribution of elongations at time t therefore represents the portion of the blob that has reached a given elongation after being advected for a duration t. It was built from the stretching histories of 25000 segments advected over 3 independent experimental velocity fields, each of them recorded over a total strain of 20 (typically 4000 images).
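The bookkeeping behind this weighting can be sketched as follows; the split counts and elongations below are synthetic stand-ins for the tracked segment histories.

import numpy as np

# Each segment carries its current length dl and its initial length
# dl0 = (d/20)/2**n after n splits, so that rho = dl/dl0; weighting the
# histogram by dl0 makes each piece of the original blob count in
# proportion to the material it represents.
def elongation_pdf(dl, dl0, nbins=40):
    lnrho = np.log(dl / dl0)
    pdf, edges = np.histogram(lnrho, bins=nbins, weights=dl0, density=True)
    return pdf, 0.5*(edges[:-1] + edges[1:])

d = 2e-3
rng = np.random.default_rng(2)
n = rng.integers(0, 8, size=10000)                    # number of splits (demo)
dl0 = (d / 20) / 2.0**n
dl = dl0 * np.exp(rng.normal(2.0, 1.0, size=10000))   # demo elongations
pdf, centers = elongation_pdf(dl, dl0)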
Figure 7 shows the experimental stretching laws obtained for a suspension with φ = 35%, which is generic of the volume fraction range 20 to 55% investigated. It presents the mean value ⟨ρ⟩ and the standard deviation σ_ρ ≡ (⟨ρ²⟩ − ⟨ρ⟩²)^{1/2} of the elongation for strains up to 20. At γt = 20, the segments have on average lengthened by typically 10³, which is about one hundred times larger than in the case of a pure liquid. The striking result is that the presence of particles in a shear flow changes the very nature of the stretching laws from linear to exponential. Indeed, the elongation of a material line in a simple shear (without particles) follows
ρ_lin(θ, t) = (1 + 2 cos θ sin θ γt + sin²θ γ²t²)^{1/2},    (4.2)
where θ denotes the angle between the line initial orientation and the flow direction. On averaging ρ²_lin(θ, t) over all possible orientations, we obtain
⟨ρ_lin⟩(t) ≃ (1 + γ²t²/2)^{1/2},    (4.3)
which is only of order 10 for a strain of 20, and increases linearly with time for large strains. Equation (4.3) is plotted in figure 7 to illustrate the contrast with the elongations actually measured in particulate suspensions: the mean elongation in suspensions is different both in magnitude and in law. Moreover, by contrast with the pure fluid case, the stretching variability of individual material lines is very broad, as evidenced by the exponential growth of the standard deviation σ_ρ. These results corroborate the preliminary blob visualizations where, in the suspension, many filaments having very different transverse thicknesses can be observed, while the pure fluid case solely exhibits one uniform thickness (figure 5d). More precisely, figure 7b shows the distributions of the relative elongations P(ρ/⟨ρ⟩) at successive strains. The distribution of elongations broadens rapidly, such that at a strain γt = 20 it spans more than eight decades. At that strain, the right tail of the distribution contains segments elongated 10⁴ times relative to the average ⟨ρ⟩, which corresponds to an absolute elongation of ρ ∼ 10⁷, in stark contrast with the uniform average elongation of 10 obtained in a simple shear. As figure 7b shows, these distributions are found to be well fitted by log-normal distributions (shown as dashed lines). Note that the apparent absence of data on the left-hand side of the distributions is fully consistent with log-normal distributions. Indeed, for broad distributions, the statistical weight of the left-hand side of the distribution vanishes. Our data thus fully resolve the meaningful part of the distribution.
The advective strip method presented above was repeated with velocity fields measured in suspensions with different volume fractions φ ranging from 20% to 55%. The same trends as those detailed for φ = 35% are systematically observed. As shown in figure 8, it is moreover found that larger particulate volume fractions increase both the growth rate of the average elongation ⟨ρ⟩ and that of the standard deviation σ_ρ. This indicates that a larger volume fraction results in larger fluid disturbances which, in turn, induce a faster and more random elongation of the fluid material lines. Fitting these curves with exponential growths in strain, e^{κγt}, yields κ_ρ = 0.09 + 0.74φ for ⟨ρ⟩, and κ_σρ = 0.12 + 1.03φ for σ_ρ. In the range of volume fraction investigated, the growth rates are found to increase linearly with φ. No measurements could be performed above 55%, as the large normal stress built in the suspension starts to deform the belt.
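The growth rates κ are obtained by fitting ln⟨ρ⟩ linearly against strain; a minimal sketch, here checked on synthetic data built with the reported rate at φ = 35%, is:

import numpy as np

def growth_rate(strain, rho_mean):
    # Linear fit of ln<rho> versus strain gives the rate kappa of exp(kappa*gamma_dot*t).
    return np.polyfit(strain, np.log(rho_mean), 1)[0]

strain = np.linspace(0.0, 20.0, 200)
kappa = 0.09 + 0.74*0.35                    # reported fit of kappa_rho at phi = 35%
print(growth_rate(strain, np.exp(kappa*strain)))   # recovers ~0.35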
To summarize, by kinematically advecting passive segments using the experimental velocity fields of the fluid phase, we measured the elongation of fluid material lines in sheared particulate suspensions. Two important features characterize these elongations: i) the mean and the standard deviation grow exponentially, ii) the distribution converges to a log-normal. In the following, using two measurable properties of the fluid velocity field, namely the local shear rate distribution and the Lagrangian correlation time, we present a mechanism accounting for these observations.
Origin of the stretching laws
Principle
We consider the elementary component of a fluid material line: the segment (see figure 9). At that scale, much smaller than the particle size, the local shear rate γ_loc is uniform.
Considering the broad distribution of the segment orientations, we assume that the local shear rate has a random orientation with respect to the segment. Therefore, as long as the local shear rate γ_loc persists, the average elongation of the segment is (see equation (4.3))
⟨ρ⟩ = (1 + γ_loc²t²/2)^{1/2}.    (5.1)
Note that an individual segment can be stretched or compressed depending on whether it is located in a diverging or a compressive region of the flow, respectively. However, once averaged over all possible orientations, the segment net elongation is strictly larger than unity. Two questions then naturally emerge: what are the local shear rates, and how long do these shear rates persist? In the following two sections, we address these questions by providing information about the local shear rates and the Lagrangian correlation time of the velocity field.
Local shear rate
We measure the local shear rate from the experimental two-dimensional velocity fields. To this end, we define the local shear rate as the norm of the symmetric part of the velocity gradient tensor:
γ_loc = [2(∂u/∂x)² + 2(∂v/∂y)² + (∂u/∂y + ∂v/∂x)²]^{1/2},    (5.2)
where {u, v} are the {x, y} components of the velocity field. This definition disregards the rotational part of the velocity gradient. For a simple shear, one has γ_loc = const = γ. Figure 10a shows a typical local shear rate map, obtained in a suspension with volume fraction φ = 35%. The color scale represents the amplitude of γ_loc normalized by the applied macroscopic shear rate γ, that is to say, the amplification of the shear due to the presence of the particles. The local shear rate is highly non-uniform and its value can greatly exceed the macroscopic shear rate. Interestingly, large local shear rates occur preferentially in the vicinity of the particles; however, there is no apparent correlation between large local shear rates and small inter-particle distances.
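Equation (5.2) translates directly into a few lines of Python operating on the gridded PIV fields; the sanity check below uses a plain shear, for which γ_loc must return the macroscopic shear rate.

import numpy as np

def local_shear_rate(u, v, h):
    # Eq. (5.2): norm of the symmetric part of the velocity gradient tensor,
    # evaluated on a uniform PIV grid of spacing h; axis 0 is y, axis 1 is x.
    dudy, dudx = np.gradient(u, h)
    dvdy, dvdx = np.gradient(v, h)
    return np.sqrt(2*dudx**2 + 2*dvdy**2 + (dudy + dvdx)**2)

# Sanity check on a plain shear u = gamma_dot*y, v = 0 -> gamma_loc = gamma_dot:
h = 1.0 / 49
Y, X = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50), indexing='ij')
print(local_shear_rate(0.15 * Y, np.zeros_like(Y), h).mean())   # ~0.15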
More quantitatively, we report in figure 10b the distribution of normalized local shear rates obtained for various volume fractions. Clearly, the local shear rate exceeds most of the time the imposed macroscopic shear rate, sometimes by one order of magnitude, and this trend accentuates with increasing volume fraction. The mean normalized value ⟨γ_loc⟩/γ is plotted versus φ in the inset of figure 10c. It is found to fit well ⟨γ_loc⟩/γ = A/(φ_c − φ)^δ. Fixing φ_c = 0.58, this yields A ≈ 0.56 and δ ≈ 0.7. Note that the last point, corresponding to φ = 55%, was not included in the fitting procedure since we suspect that it is biased by the deflection of the belt mentioned above. Note also that PIV using smaller boxes resulted in very similar local shear rate distributions, with less than 6% difference on the average. The trends discussed above may also be interpreted in terms of a macroscopic viscosity. In such a case, the relevant quantity to investigate is the second moment of the local shear rate distribution, ⟨γ²_loc⟩ [START_REF] Chateau | Homogenization approach to the behavior of suspensions of noncolloidal particles in yield stress fluids[END_REF], Lerner 2012, Dagois-Bogy 2015). Values of this quantity have recently been obtained by Trulsson et al. from numerical simulations of dense frictional suspensions [START_REF] Trulsson | Effect of Friction on Dense Suspension Flows of Hard Particles[END_REF]. They report that ⟨γ²_loc⟩^{1/2}/γ ∼ (J/µ)^{−1/3}, where J = γη_f/P is the viscous number, with P the confining pressure and µ the suspension macroscopic friction coefficient. Since η_s/η_f = σ/γη_f = µ/J, this results in ⟨γ²_loc⟩^{1/2}/γ ∼ (η_s/η_f)^{1/3}. Combining this with η_s/η_f ∼ (φ_c − φ)^{−2} and using φ_c = 0.58 [START_REF] Boyer | Unifying suspension and granular rheology[END_REF]) leads to ⟨γ²_loc⟩^{1/2}/γ ∼ (φ_c − φ)^{−2/3}, which is in fairly good agreement with the measured scaling, as figure 10c shows.
Lagrangian correlation time
The second important quantity of the suspending liquid flow is the persistence time of the velocity fluctuations induced by the particles. Figures 11a and b show the transverse Lagrangian velocity V (perpendicular to the flow) of a passive tracer advected by the fluid, at a low and a large volume fraction, respectively. Consistently with the magnitude of the local shear, more concentrated particulate suspensions develop velocity fluctuations with larger amplitudes. However, these fluctuations are found to persist over a much shorter time as φ increases. The duration t_c over which a segment is coherently stretched by the flow is directly prescribed by this persistence time, which we define from the Lagrangian velocity auto-correlation functions. As shown in figure 11c, these functions decorrelate exponentially with strain. In the range of volume fraction investigated, the dimensionless correlation time γτ, inferred from this exponential decay, decreases linearly with φ as
γτ ≈ 0.62 − 1.08φ    (5.3)
(see figure 11d). We expect t_c to be of the order of τ and thus write
t_c = ατ,    (5.4)
with α an order-one constant. Note that, as shown in figure 7, this persistence time (≲ γ⁻¹) is much shorter than the observation period (≳ 10γ⁻¹).
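A minimal sketch of this measurement, tested on a synthetic Ornstein-Uhlenbeck signal with a known correlation strain (the real input being the tracer velocities of figures 11a-b), is:

import numpy as np

def correlation_strain(V, strain_step):
    # Normalized autocorrelation of the transverse Lagrangian velocity; the
    # strain at which it first drops below 1/e estimates gamma_dot*tau.
    V = V - V.mean()
    c = np.correlate(V, V, mode='full')[V.size - 1:]
    c = c / c[0]
    return np.argmax(c < np.exp(-1.0)) * strain_step

# Synthetic Ornstein-Uhlenbeck signal with known correlation strain 0.2:
rng = np.random.default_rng(0)
ds, n, tau = 0.01, 5000, 0.2
V = np.zeros(n)
for i in range(1, n):
    V[i] = V[i-1]*(1 - ds/tau) + rng.normal(0.0, np.sqrt(2*ds/tau))
print(correlation_strain(V, ds))   # ~0.2, up to statistical noise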
Multiplicative stretching process
With information about the local shear rates and their persistence time at hand, we now explain the elongations of fluid material lines as a sequence of uncorrelated cycles of stretching. During the first cycle of duration t_c, a given segment of a material line is elongated by the local shear rate γ_loc,1, resulting in a stretching
⟨Δρ_1⟩ = (1 + (γ_loc,1 t_c)²/2)^{1/2},    (5.5)
where γ_loc,1 is a local shear rate whose probability is prescribed by the distribution P(γ_loc/γ), cf. figure 10b. After the duration t_c, the local velocity field de-correlates and the local shear rate map is entirely redistributed. The segment then experiences a new local shear rate γ_loc,2, which at t = 2t_c yields ⟨ρ⟩ = ⟨Δρ_1⟩(1 + (γ_loc,2 t_c)²/2)^{1/2}, and so on. The total elongation at time t, after N = t/t_c cycles, is the product of all the elementary elongations occurring at each cycle, ρ(t) = ∏_{i=1}^{N=t/t_c} Δρ_i. The logarithm of this expression can be written as a sum
ln ρ ≡ Σ_{i=1}^{t/t_c} ln Δρ_i = (1/2) Σ_{i=1}^{t/t_c} ln[1 + (γ_loc,i t_c)²/2].    (5.6)
Since the elementary stretchings are independent, the distribution of ln ρ is expected, by virtue of the central limit theorem, to be normal. This multiplicative stretching model thus predicts ρ to converge, after a few t/t_c cycles, to a log-normal distribution. This prediction is in agreement with the experimental results shown in figure 7b. The distribution of ln ρ, i.e. the normal distribution, writes
P(x = ln ρ) = (1/√(2πσ²)) exp[−(x − µ)²/(2σ²)],    (5.7)
with a non-zero mean
µ ≡ ⟨ln ρ⟩ = (⟨ln Δρ⟩/γt_c) γt,    (5.8)
and variance
σ² ≡ ⟨ln²ρ⟩ − µ² = ((⟨ln²Δρ⟩ − ⟨ln Δρ⟩²)/γt_c) γt.    (5.9)
Both the mean and the variance of the distribution of ln ρ increase linearly with time.
They also vary with the particulate volume fraction, due to the φ-dependence of γ_loc and t_c. This variation with φ is better appreciated by recasting equations (5.8) and (5.9) into
µ = f(φ) γt,    (5.10)
σ² = g(φ) γt,    (5.11)
with f(φ) ≡ ⟨ln Δρ⟩/γt_c and g(φ) ≡ (⟨ln²Δρ⟩ − ⟨ln Δρ⟩²)/γt_c only depending on φ. Note that f(φ) and g(φ) are crucial quantities: since the time dependence is known, they contain all the information about the asymptotics of the stretching laws in suspensions. The multiplicative stretching model not only explains the origin of the log-normal distributions of elongations measured experimentally, but also the exponential increase of the mean elongation ⟨ρ⟩ and variance σ²_ρ shown in figure 7. Indeed, the mean and variance of the (log-normal) distribution of ρ can be deduced from the mean and the variance of the (normal) distribution of ln ρ following
⟨ρ⟩ = e^{(f+g/2) γt},    (5.12)
and
σ²_ρ = (e^{g γt} − 1) e^{(2f+g) γt} ≃ e^{2(f+g) γt},    (5.13)
the last simplification in σ²_ρ becoming valid after a few t_c. Furthermore, the particulate volume fraction dependence of f(φ) and g(φ) can be computed from the persistence time t_c and the distribution of local shear rates, using equations (5.10) and (5.11) together with equations (5.8) and (5.9). In the experimental range 20% < φ < 55%, this yields
f(φ) ≃ 0.104 + 0.298 φ,    (5.14)
and
g(φ) ≃ −0.069 + 0.810 φ,    (5.15)
with the structure constant α set, once and for all φ, to 0.3 for computing µ, and to 3.9 for computing σ². These rates f and g both increase with φ, in agreement with the experimental trends. Note that this dependence on the volume fraction is non-trivial, since f and g result from the product of γ_loc and t_c, which have opposite trends with φ: the former increases whereas the latter decreases with increasing φ.
The predictions of the multiplicative stretching model are compared to the experimental stretching laws obtained by the Diffusive Strip Method in figure 12. The agreement is good for all volume fractions and all strains, which suggests that the multiplicative stretching model presented above captures the relevant mechanisms at the origin of the stretching laws.
Comments on the stretching process
Stretching of material elements in nature, be they passive as in the present case, or with internal restoration forces like polymers [START_REF] Shaqfeh | Polymer stretch in dilute fixed beds of fibres or spheres[END_REF][START_REF] Afonso | Nonlinear elastic polymers in random flow[END_REF], may have different origins. The stochastic models used to describe them usually present a net drift and a random noise term. The relative amplitudes of these two contributions are, in our analysis, given by f(φ) and g(φ), respectively (see equations (5.10) and (5.11)). The first term sets the growth of ⟨ln ρ⟩, while the second sets that of ⟨ln²ρ⟩ − ⟨ln ρ⟩².
Figure 12. Comparison between the experimental stretching (extracted from figure 8) and the multiplicative stretching model. (a-b) Mean elongation ⟨ρ⟩ and standard deviation σ_ρ: the values from the advective strip method are plotted versus those predicted by the multiplicative stretching model. (c) Comparison between the exponential rates of the mean elongation κ_ρ obtained from the advective strip method (see figure 8a) and the model prediction f + g/2 (5.12). (d) Comparison between the exponential rates of the variance of the elongation 2κ_σρ (see figure 8b) and the model prediction 2(f + g) (5.13).
At a
microscopic level, the growth of a given material line depends on its orientation relative to that of the local velocity gradient. The line length l(t) may increase or decrease depending on whether it is aligned with a diverging or a compressive region of the flow. For instance, in a flow corresponding to the pure Brownian motion limit [START_REF] Cocke | Turbulent Hydrodynamic Line-stretching: The Random Walk Limit[END_REF], for which ρ̇/ρ = B(t) with B(t) a zero-mean, delta-correlated noise, i.e. ⟨B(t)⟩ = 0 and ⟨B(t′)B(t″)⟩ = (1/τ₀)δ(t′ − t″), these two contributions are balanced and the net line growth ⟨d ln ρ(t)/dt⟩ = ⟨l̇(t)/l(t)⟩ is, on averaging over all directions, identically zero. In that case, representative of t_c → 0, the material lines only grow through the contribution of the fluctuations of B(t), which results in d⟨ln ρ⟩/dt = 0 and ⟨(ln ρ)²⟩ ∼ 2t/τ₀: the logarithm of the elongation diffuses.
In particulate suspensions, the situation is different since, if the direction of the stretch indeed changes at random, it has a finite persistence time. In such a case, it has been shown that material lines tend to preferentially align in the direction of elongation (see [START_REF] Cocke | Turbulent Hydrodynamic Line Stretching: Consequences of Isotropy[END_REF][START_REF] Orszag | Comments on 'Turbulent Hydrodynamic Line Stretching: Consequences of Isotropy[END_REF][START_REF] Girimaji | Material-element Deformation in Isotropic Turbulence[END_REF][START_REF] Duplat | Persistence of Material Element Deformation in Isotropic Flows and Growth Rate of Lines and Surfaces[END_REF] and also equation (5.5)). Thus, over an observation period larger than the (non-zero) correlation time t_c, we expect ⟨ln ρ⟩ to grow: as shown in figure 13(b), on average, the logarithm of the elongations ln ρ increases with time. Material lines in particulate suspensions thus grow from the contribution of both a drift and a noise. The stretching process thus corresponds to a noisy multiplicative sequence of correlated motions, like the random Sine Flow [START_REF] Meunier | The diffusive strip method for scalar mixing in two dimensions[END_REF] or porous media flows (Le Borgne 2015). Porous media and sheared particulate suspensions have similar exponential stretching laws. This is true in 3D systems, as in both cases the fluid trajectories are chaotic. Note however that for 2D systems the implications of steadiness change the picture qualitatively. In a 2D porous medium, the flow is steady and there are only two degrees of freedom: the flow is thus not chaotic. The elongation of material lines in 2D synthetic porous media has been shown to grow algebraically rather than exponentially (Le [START_REF] Borgne | The lamellar description of mixing in porous media[END_REF]. Conversely, in 2D sheared suspensions, the time dependence of the flow allows the system to be chaotic (Metzger 2013). One therefore expects to observe exponential stretching laws in sheared particulate suspensions also in purely 2D configurations.
Further remarks
We would like to point out certain limitations of the present study. First, the present findings and their analysis are restricted to the particulate volume fraction range 20% ≤ φ ≤ 55%, for which material lines in the suspending liquid stretch exponentially with strain. This is not necessarily the case outside of this range. In particular, as φ → 0, this exponential trend must cross over to linear, since the elongation of material lines in a simple shear is linear in strain. We however anticipate that the exponential trend could hold down to fairly low volume fractions but only emerge after increasingly large strains, since the velocity correlation time in the dilute limit should follow τ ∼ (γφ)⁻¹ and diverge at low φ. Further investigations are needed to characterize this dilute regime (φ < 20%). Second, the PIV measurements performed here are two-dimensional and provide the fluid velocity projected in the (xy) plane only. They therefore neglect part of the stretching of the material lines, namely that involving deformations in the vorticity direction (z). However, we believe that they resolve the stretching mechanism and most of its magnitude, for the following reasons: i) these measurements resolve the fluid displacements in the gradient direction (y), which is the only direction for which displacements couple with the main shear flow to produce an enhanced stretching. The fluctuations in the vorticity direction are thus expected to produce less stretching than those occurring in the gradient direction. ii) Particles in a shear flow rotate mainly about the vorticity axis, thereby inducing fluid disturbances mostly in the velocity-gradient plane, which we consider. Here again, the effects of the velocity disturbances induced by the particle rotation should be smaller in the vorticity direction than in the velocity-gradient plane. iii) More quantitatively, the stretching rates f + g/2 predicted by the present model based on 2D data are in good agreement with the largest Lyapunov exponents obtained from 3D Stokesian simulations, see figure 14. From the above considerations, it is likely that the mechanisms at the origin of the scalar dispersion, stretching and subsequent mixing are well characterized by the present measurements, even though those are limited to the information contained in the xy plane. Third, as already mentioned in section 3, the stretching of material lines is exponential at every scale, but the stretching of a material blob with thickness s₀ is expected to follow that of material lines only if its thickness is smaller than the correlation scale of the fluid motion, which is of order d (in the opposite case, the blob is first essentially stretched by the macroscopic shear γ until s ≈ d). In the following, we will therefore only consider the relevant case s₀ ≲ d.
The latter considerations have important consequences for the estimation of the blob thickness s, hence for the mixing time that we address in the next section. For an arbitrary elongation w/w₀ in the vorticity direction (z), mass conservation gives s₀l₀w₀ = s(t)l(t)w(t). However, in light of the above discussion, the flow is assumed to be two-dimensional, with w/w₀ ≪ l/l₀. Mass conservation thus results in
s₀l₀ = s(t)l(t).    (5.17)
Using direct image analysis, we have checked that this is experimentally verified: a blob with initial surface s₀l₀, converted into a strip with length l(t) and thickness s(t), indeed obeys equation (5.17) before it starts mixing, suggesting that the flow is indeed area preserving.
Implications for mixing
In such an area-preserving flow, the thickness s(t) of a distorted blob decreases in inverse proportion to its length l(t), according to equation (5.17). As recalled in the introduction, the mixing time for a given blob portion of thickness s is reached when its compression rate ṡ/s is balanced by its rate of diffusive broadening D/s². At that time, called the mixing time, the scalar concentration carried by that portion of the blob starts to significantly decay, i.e., mix. Since in particulate suspensions ρ = l(t)/l₀ = e^{κγt}, the mixing time writes t_mix ≈ γ⁻¹ ln(κPe)/(2κ). We also found that the logarithm of the elongations of an ensemble of such material lines is normally distributed, with a mean and a variance growing linearly with time following µ = ⟨ln ρ⟩ = f(φ)γt and σ² = g(φ)γt (see equations (5.10) and (5.11), respectively). These results are illustrated in figure 15a. Since, similarly to the logarithm of the elongations, the stretching rates κγ = ln ρ/t are normally distributed, the median mixing time, obtained for the mean stretching rate, i.e. for κγ = ⟨ln ρ⟩/t = µ/t = f(φ)γ, is
t_mix^med ≈ (1/(2f(φ)γ)) ln(f(φ)Pe).    (6.1)
Considering a blob distorted in such a way that it samples all the possible elongations of the global statistics, the above estimate provides the time at which half of the blob has reached its mixing time. The logarithmic dependence of the mixing time on the Péclet number differs from that obtained in a simple shear flow (without particles), for which ρ ∼ γt yields t_mix ∼ γ⁻¹Pe^{1/3}. Introducing particles in a viscous fluid therefore becomes more and more efficient at reducing the mixing time as the Péclet number increases. In the present study, the Péclet number is Pe ∼ 10⁶. The median mixing time for φ = 35% is thus t_mix^med ≈ 30/γ, which has to be compared with t_mix ≈ 100/γ in a pure shear flow. Note that varying the volume fraction from 20% to 55% increases f(φ) only by a typical factor of 2, which decreases the median mixing time by about the same moderate factor.
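Numerically, equation (6.1) combined with the fit (5.14) gives, for the conditions of the experiment:

import numpy as np

def t_mix_median(phi, gamma_dot, Pe):
    # Eq. (6.1) with the fitted f(phi) of eq. (5.14).
    f = 0.104 + 0.298 * phi
    return np.log(f * Pe) / (2.0 * f * gamma_dot)

gd = 0.15
print(gd * t_mix_median(0.35, gd, 1.0e6))   # ~29, i.e. t_mix ~ 30/gamma_dot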
In practical situations, mixing half of the scalar may not be the relevant target, precisely because in particulate suspensions elongations are, as seen in figure 5, broadly distributed; so are the mixing times. To address this point, we estimate, for the same conditions as previously, the mixing times of the portions of the blob that undergo the largest and the lowest stretching rates, respectively, i.e. the mixing times corresponding to both tails of the distribution (highlighted in figure 15a). The 3% most strongly stretched portions of the blob are bounded by ln ρ = µ + 2σ. The expression ṡ/s = D/s² then results in 2f(φ)γt + 4√(g(φ)γt) = ln[(f(φ) + √(g(φ)/γt))Pe], which yields the mean-field mixing time t_mix^3% ≈ 14/γ. On the other end of the distribution, the least stretched portions of the blob, bounded by ln ρ = µ − 2σ, reach their mixing time at t_mix^97% ≈ 64/γ, later than if they were sheared in a pure fluid. In figure 15b, the median (blue line), the most stretched t_mix^3% (green line), and the least stretched t_mix^97% (red line) dimensionless mixing times are plotted as a function of the Péclet number. This shows that if the concern is to mix essentially all the scalar, large Péclet numbers (≳ 10⁵) are required before mixing in a suspension becomes more efficient than in a pure fluid: persistent, poorly stretched regions are detrimental. The relative width σ/µ of the stretching rate distribution decreases in time like t^{−1/2}, but this only mildly reduces the spread of the mixing times as Pe increases, since t_mix ∝ ln Pe. For instance, at Pe = 10²⁰, the mixing times remain fairly distributed, with t_mix^97%/t_mix^3% > 2. Finally, the results obtained on the stretching laws must be related to the overall dispersion of the blob. In a random flow, line stretching and dispersion are two different things: because the extent of the area occupied by the blob grows more slowly than the area over which the scalar constitutive of the blob is dispersed, the blob will at some point unavoidably reconnect and merge by overlapping onto itself [START_REF] Duplat | Mixing by random stirring in confined mixtures[END_REF]. Let us see how: after the mixing time, a scalar blob with length growing like l(t) = l₀e^{γt} has a transverse concentration profile whose width is confined to the Batchelor scale √(Dt). The area A occupied by the scalar is thus A = √(Dt) l₀e^{γt}, growing exponentially in time. Meanwhile, the spatial support of the blob undergoes a dispersion induced by the particle effective dispersion coefficient D_eff ∼ γd² [START_REF] Eckstein | Self-diffusion of particles in shear flow of a suspension[END_REF]. The total area explored by the blob of dye, within which the blob folds, is typically (see also [START_REF] Taylor | Dispersion of soluble matter in solvent flowing slowly through a tube[END_REF] in a related, but different, context) Σ ∼ (l₀ + √(D_eff t)) × (s₀ + √(D_eff t)) γt ∼ d²(γt)², growing algebraically in time. Because an exponential always beats a power law, there will necessarily be a time at which the area occupied by the scalar overcomes that visited by the blob (i.e. Σ/A < 1), and from that instant, overlaps of the folded scalar filaments become unavoidable. Such an event is illustrated in figure 16. These overlaps locally delay the mixing process and therefore affect the whole route of the mixture towards homogenization.
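An order-of-magnitude estimate of the strain at which this overlap sets in can be obtained by solving Σ = A numerically; all O(1) prefactors are dropped in the sketch below, so the value returned is indicative only.

import numpy as np
from scipy.optimize import brentq

# Solve A(t) = Sigma(t) with A ~ sqrt(D*t)*l0*exp(gd*t) and
# Sigma ~ d^2*(gd*t)^2, written in log form for numerical safety.
D, d, l0, gd = 1.44e-13, 2e-3, 2e-3, 0.15
f = lambda t: 0.5*np.log(D*t) + np.log(l0) + gd*t - 2*np.log(d) - 2*np.log(gd*t)
t_star = brentq(f, 1.0, 200.0)
print(gd * t_star)    # strain ~ 11 for these order-of-magnitude inputs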
This aspect, and more generally all aspects regarding the concentration content of the mixture and its evolution, are left for future research.
Conclusions
Motivated by the need to understand on a firm basis the mixing properties of particulate flows, we have provided a complete characterization of the kinematics of stretching and the consecutive elongations of material lines in non-Brownian particulate suspensions under a simple macroscopic shear. Our observations rely on high-resolution PIV measurements of the interstitial fluid velocity field, and our findings are as follows:
i) Following the Diffusive Strip Method of Meunier 2010, we used the experimentally measured velocity fields to numerically advect passive segments in order to reconstruct the stretching histories of fluid material lines. In agreement with previous theoretical predictions and simulation results, we observe that adding particles in a shear flow changes the very nature of the stretching laws from linear to exponential in strain. The growth rate of the mean elongation is found to closely agree with the largest Lyapunov exponent obtained from 3D numerical simulations [START_REF] Drazer | Deterministic and stochastic behaviour of non-Brownian spheres in sheared suspensions[END_REF][START_REF] Dasan | Stress fluctuations in sheared Stokesian suspensions[END_REF]. Besides the mean, our analysis also provides the full statistics of the material line elongations: the variances of the elongations also grow exponentially in strain, and the distributions of elongations converge toward log-normals. These statistics of elongation were characterized for a large range of volume fractions, 20% ≤ φ ≤ 55%.
ii) Using the same velocity fields, we determined the distribution of local shear rate intensities and their persistence time. From these, we have shown how the fluid material lines undergo a multiplicative stretching process consisting of a noisy multiplicative sequence of correlated motions. We also discussed the important role of the finite correlation time of the velocity field. The model quantitatively predicts the evolution of the mean and the variance of the elongations of the fluid material lines, as well as their evolution towards a log-normal distribution.
iii) We have discussed the importance of this characterization of the flow kinematics to understand how mixing proceeds in sheared particulate suspensions. The exponential stretching results in a mixing time increasing logarithmically with the Péclet number. Moreover, the broad distribution of stretching rates implies a broad distribution of mixing times. The stochastic nature of the stretching process thus allows stretching rates that are smaller than in a pure shear flow. However, our analysis shows that the occurrence of such events becomes negligible at large Péclet number (≳ 10⁵), as mixing occurs at larger deformations.
The present study opens the way for a complete description of the mixing process occurring in sheared particulate suspensions. In particular, it allows the prediction of the evolution of the concentration distribution P(C, t) [START_REF] Duplat | A nonsequential turbulent mixing process[END_REF]. A quantitative verification of these predictions requires a specific experimental device that resolves the Batchelor scale s(t_mix), which corresponds to the transverse dimension of the filaments at the time when diffusion significantly modifies the concentration levels. Such challenging measurements will be addressed in future studies.
We would like to thank P. Meunier for letting us use his DSM code, S. Dagois Bohy, S. Gallier, E. DeGiuli, M. Wyart and O. Pouliquen for thoughtful discussions, and P. Cervetti, S. Noel and S. Martinez for helping us build the experimental set-up. This work was supported by ANR JCJC SIMI 9 and by the Labex MEC ANR-10-LABX-0092 and ANR-11-IDEX-0001-02.
Figure 1. Some dye, initially confined to a small blob in a flowing particulate suspension (a), mixes with the rest of the suspension (b) by diffusing while the blob is stretched in the complex micro-flow generated by the particles.
Figure 2. Schematics of the set-up.
Figure 3. a) Flow streamlines in the bulk of a suspension. b) Slice of a sheared suspension illuminated by a laser sheet. The small fluorescent tracers seeding the suspending fluid appear as bright whereas the particle intersections with the laser sheet appear as dark, see also Movie 1. c) Magnified view of the suspending liquid velocity field obtained from the PIV (the velocity is not computed in the particles).
Figure 4. a) Schematics of the set-up used to measure the molecular diffusivity D of the dye (rhodamine 6G) in the suspending liquid (Triton X-100 + ZnCl2 + H2O). b) Diffusive thickening of the bleached line at t = 0, 4800, 21600 and 64800 s (the image width is 5 mm). c) Concentration profiles at successive times (0 s < t < 64800 s). d) Increase of the spatial variance of the concentration χ²(t) − χ²(0) versus time. Its fit to 2Dt yields D = 1.44 ± 0.2 × 10⁻¹³ m² s⁻¹.
Figure 5. Comparison of the stretching processes of a blob of dye sheared at high Péclet (∼ 10⁶) and low Reynolds numbers (∼ 10⁻⁴), in a pure fluid (top), and in a particulate suspension with volume fraction φ = 35% (bottom). The dye appears as dark, and the beads appear as bright, see also Movie 2.
Figure 6. Example of stretching for two material lines numerically advected using the experimental fluid velocity field, see also Movie 3.
Figure 7. Stretching laws measured for a suspension with volume fraction φ = 35%. a) Mean value ⟨ρ⟩ and standard deviation σ_ρ = (⟨ρ²⟩ − ⟨ρ⟩²)^{1/2} of the distribution of elongations versus macroscopic strain, in a semilogarithmic representation. The dashed line corresponds to the mean elongation in a pure fluid, ⟨ρ_lin⟩(t) = (1 + γ²t²/2)^{1/2}. b) Distribution of the normalized elongations ρ/⟨ρ⟩ at different strains. The dashed curves are log-normal distributions built from the mean value ⟨ρ⟩ and standard deviation σ_ρ of the experimental elongation distributions.
Figure 8. Mean elongation ⟨ρ⟩ (a) and standard deviation σ_ρ (b) versus strain for increasing volume fractions ranging from 20 to 55%. Insets: growth rate κ of the exponential fit e^{κγt} to the main curves, as a function of φ. The lines show κ_ρ = 0.09 + 0.74φ (a), and κ_σρ = 0.12 + 1.03φ (b).
Figure 10. a) Typical local shear rate map for a suspension with volume fraction φ = 35%. b) Experimental distributions of normalized local shear rate P(γ_loc/γ) for different volume fractions (the solid line is not a fit but the experimental data; sparse markers are used for the sake of clarity). c) ⟨γ²_loc⟩^{1/2}/γ versus φ_c − φ. The best fit ⟨γ²_loc⟩^{1/2}/γ ∼ (φ_c − φ)^{−β}, with φ_c = 0.58, yields β = 0.601 (see text). Inset: mean normalized local shear rate ⟨γ_loc⟩/γ versus φ. The line is the best fit by A/(φ_c − φ)^δ.
Figure 11. (a-b) Lagrangian velocity transverse to the flow, V, of a tracer passively advected by the suspending liquid, as a function of the strain γt: a) φ = 20% and b) φ = 50%. c) Average Lagrangian velocity auto-correlation function ⟨V V⟩ obtained for different volume fractions versus strain. The velocity auto-correlation functions fit well e^{−γt/γτ}, where τ denotes the correlation time. d) Correlation strain γτ versus φ and corresponding linear fit γτ = 0.62 − 1.08φ.
Figure 13. a) Mean logarithm of the material line elongations, ⟨ln ρ⟩, versus γt for a suspension of volume fraction φ = 35%. b) PDF of the logarithm of the material line elongations P(ln ρ) at successive times.
Figure 14. Comparison between the stretching rates obtained in the present study and the largest Lyapunov exponent obtained from 3D Stokesian dynamics simulations [START_REF] Dasan | Stress fluctuations in sheared Stokesian suspensions[END_REF][START_REF] Drazer | Deterministic and stochastic behaviour of non-Brownian spheres in sheared suspensions[END_REF]).
Figure 15. a) Evolution of the distribution P(ln ρ) of the logarithm of the material line elongations in a particulate suspension. The mean of the distribution µ ∼ t and its standard deviation σ ∼ √t. b) Dimensionless mixing times γt_mix in a suspension (φ = 35%) as a function of Pe. The median (blue line), the most stretched t_mix^3% (green line), and the least stretched t_mix^97% (red line) dimensionless mixing times can be compared to the dimensionless mixing time ∼ Pe^{1/3} expected in a pure fluid (dashed line).
Figure 16. Picture illustrating the complexity of folding of the stretched blob of dye and the potential interaction (merging) of nearby filaments. |
01768677 | en | [
"sdv",
"sdv.mhep"
] | 2024/03/05 22:32:16 | 2017 | https://amu.hal.science/hal-01768677/file/096368916x693329.pdf | Sylvie Cointe
Éric Rhéaume
Catherine Martel
Olivier Blanc-Brude
Evemie Dubé
Florence Sabatier
Françoise Dignat-George
Jean-Claude Tardif
Arnaud Bonnefoy
Thrombospondin-1-Derived Peptide RFYVVMWK Improves the Adhesive Phenotype of CD34 + Cells from Atherosclerotic Patients with Type 2 Diabetes
Keywords: Thrombospondin-1 (TSP-1), Atherosclerosis, Type 2 diabetes (T2D), CD34, CD47
INTRODUCTION
CD34 is a marker of hematopoietic stem cells (HSCs) that is also expressed on several non-HSCs including endothelial and epithelial progenitors, embryonic fibroblasts, multipotent mesenchymal stromal cells (MSCs), and interstitial dendritic cells. The plasticity of CD34 + cells and their paracrine-stimulating properties on the endothelium during hypoxia make these cells potential candidates for cell transplant therapies combating ischemic diseases such as cardiac failure 1 .
Circulating progenitor cells are known to contribute to neoangiogenesis during numerous processes including healing, lower limb ischemia, vascular graft endothelialization, atherosclerosis, post-myocardial infarction, lymphoid organ neovascularization, and tumoral growth 2 . Clinical trials have demonstrated that intracoronary administration of bone marrow-derived mononuclear cells (BM-MNCs) from autologous progenitor cells or peripheral blood (PB) CD34 + cells mobilized by granulocyte colony-stimulating factor improves the left ventricular ejection fraction and reduces the size of the infarcted area [START_REF] Caballero | Ischemic vascular damage can be repaired by healthy, but not diabetic, endothelial progenitor cells[END_REF][START_REF] Blanc-Brude | Abstract 895: CD47 activation by thrombospondin peptides enhances bone marrow mononuclear cell adhesion, recruitment during thrombosis, endothelial differentiation and stimulates pro-angiogenic cell therapy[END_REF] . However, the benefit of cell therapy is dampened by the negative impact of cardiovascular risk factors, such as atherosclerosis, obesity, and type 2 diabetes (T2D), on the number and function of progenitor cells, thereby jeopardizing their use for autologous treatment 5,6 . Interestingly, a reduced progenitor cell adhesion capacity has been reported in T2D 5,7 . Moreover, strategies aiming at preconditioning CD34 + cells or endothelial progenitor cells prior to injection in animal models of ischemia were reported to improve revascularization in vivo, together with increased adhesion capacity of cells in in vitro models 8 .
The ability of CD34 + cells to adhere and engraft onto damaged vessel walls is crucial to initiate neovascularization 9 . During this process, activated platelets are instrumental in targeting CD34 + cell recruitment to injured vessels via stromal cell-derived factor-1 (SDF-1α) secretion and chemotaxism 10 . Platelets also stimulate the "homing" of CD34 + cells, via CD62-CD162 interaction 11 , and their differentiation into mature endothelial cells 8 . One of the most abundant proteins secreted by activated platelets is thrombospondin-1 (TSP-1), which is a multifunctional matricellular glycoprotein bearing proatherogenic, prothrombotic, as well as both pro- and antiangiogenic properties 12 . The interaction of TSP-1, through its COOH-terminal RFYVVMWK sequence, with the transmembrane protein CD47 [integrin-associated protein (IAP)] occurs following a conformational reorganization of the C-terminal domain of TSP-1 13 . This interaction positively modulates the function of several integrins including CD51/CD61, CD41/CD61, and CD49b/CD29, thereby modulating cellular functions including platelet activation and adhesion, leukocyte adhesion, migration, and phagocytosis through heterotrimeric Gi protein signaling [14][15][16][17] .
We have previously observed that TSP-1-deficient mice exhibit a significant drop in the vessel wall recruitment of BM-MNCs in a FeCl 3 -induced intravascular thrombosis mouse model 18 . We also found that ex vivo RFYVVMWK preconditioning of mouse BM-MNCs stimulates their recruitment to sites of intravascular thrombosis induced by FeCl 3 19 . Indeed, RFYVVMWK increased BM-MNC-to-vessel wall interactions and decreased their rolling speed along the damaged vessel wall, leading to a 12-fold increase in permanent cell engraftment 19 .
The goal of the present study was to analyze the proadhesive effects of RFYVVMWK preconditioning on CD34 + progenitor cells isolated from PB of atherosclerotic patients with T2D. We first explored their "proengraftment" phenotype through the measurement of a panel of biomarkers including cell adhesion receptors, platelet/CD34 + conjugates, and apoptotic markers. We next investigated whether this preconditioning could improve their capacity to adhere to stimulated endothelial cells and subendothelial components.
MATERIAL AND METHODS
Patients
Blood samples were drawn from participants after obtaining informed consent as part of a protocol approved by the ethics committee of the Montreal Heart Institute and in accordance with the recommendations of the Helsinki Declaration. A total of 40 adult males (>18 years old) with stable coronary artery disease or stable angina documented by angiography, all treated with antiplatelet agents and statins, were included in the study. Among these patients, 20 had T2D (T2D group) and 20 were nondiabetic (non-T2D group). The patients were predominantly hypertensive (n = 27), dyslipidemic (n = 38), overweight (n = 25), and with a smoking history (n = 27). Diabetic patients received biguanide (metformin) monotherapy (n = 10), biguanide + sulfonylureas (glyburide or glimepiride) bitherapy (n = 6), biguanide + sulfonylureas (glyburide) + DPP-4 inhibitor (gliptin) tritherapy (n = 2), or no medication (diabetes was controlled by diet). Exclusion criteria were acute coronary syndrome (ACS) or stroke within the past 6 months, treatment with insulin, treatment with peroxisome proliferator-activated receptors (PPARs; pioglitazone and rosiglitazone), extra cardiac inflammatory syndromes, surgery within the last 8 weeks, kidney or liver failure, use of systemic corticosteroids, cancer in the last 5 years, chronic anticoagulation, heart failure [NYHA class 3 or 4 and/or left ventricular ejection fraction (LVEF) <40%], and hemoglobin <100 g/L. Six healthy adult males [healthy donors (HD)] who showed no cardiovascular disease or known T2D were also recruited if they had not taken any medication during the past 15 days before blood sampling. All samples were analyzed in a single-blind manner with respect to the group (T2D or non-T2D).
Isolation of CD34 + and CD34 -Peripheral Blood Mononuclear Cells (PBMCs)
One hundred milliliters of blood was collected by venipuncture into syringes containing ethylenediaminetetraacetic acid (EDTA; 1.8 mg/ml of blood) (Sigma-Aldrich, St. Louis, MO, USA), dispensed into 50-ml conical tubes, and centrifuged at 400 × g for 15 min at 20°C, to remove a maximum quantity of platelets while minimizing PBMC loss. EDTA was used throughout the isolation process to avoid platelet binding to CD34 + cells. The platelet-rich plasma (PRP; upper phase) was removed, and the remaining blood components were diluted 1:1 in phosphate-buffered saline (PBS) containing 2 mM EDTA and 0.5% fetal bovine serum (FBS) (Sigma-Aldrich) (PBS/EDTA/FBS). Ficoll at a density of 1.077 g/ml (Amersham Biosciences, Little Chalfont, UK) was added to samples in a ratio of 1:3 and centrifuged at 400 × g for 40 min at 20°C (without brakes). The resulting mononuclear cell ring was collected at the Ficoll/plasma interface. Cells were then washed twice with PBS/EDTA/FBS and incubated for 10 min at 4°C with 100 µl of FcR blocking reagent (Miltenyi Biotec, Bergisch Gladbach, Germany) to remove FcR-specific binding antibodies. Cells were then incubated for 30 min at 4°C with 100 µl of magnetic beads bearing anti-CD34 monoclonal antibodies (Microbead; Miltenyi Biotec). After washing with PBS/EDTA/FBS, cells were filtered (30-µm nylon cell strainer; Miltenyi Biotec) to remove cell aggregates or other large contaminants and loaded on a MACS magnetic column (Miltenyi Biotec). Unbound CD34 - cells were collected, while CD34 + PBMCs were retained on the column. After three washes with PBS/EDTA/FBS, CD34 + cells were recovered in 1 ml of PBS/EDTA/FBS. To increase the purity of CD34 + cells, this step was repeated once on a new column with the retained fraction. Finally, cell viability was measured with trypan blue (Sigma-Aldrich).
Cell Preconditioning With TSP-1-Derived Peptides
CD34 + and CD34 -cells were diluted either at a concentration of 1,000 cells/µl for adhesion assays or at a concentration of 4,000 cells/µl for flow cytometry assays. Cells were then preincubated with either 30 µM of the CD47 interacting peptide RFYVVMWK (amino acid sequence: Arg-Phe-Tyr-Val-Val-Met-Trp-Lys) (4N1-1; Bachem, Bubendorf, Switzerland), 30 µM of the RFYVVM truncated peptide devoid of CD47-binding activity (Arg-Phe-Tyr-Val-Val-Met) (4N1-2; Bachem), or saline (vehicle) for 30 min at 37°C.
Phenotyping of Preconditioned Cells
The phenotype of preconditioned cells (with TSP-1 peptides or the vehicle, as previously described) was analyzed by flow cytometry using fluorescent-labeled antibodies directed against biomarkers grouped in four panels: panel 1 with CD47 (clone B6H12; R&D Systems, Minneapolis, MN, USA) and TSP-1 (clone A4.1; Santa Cruz Biotechnology, Santa Cruz, CA, USA); panel 2 with the adhesion molecules CD29 (clone TS2/16; eBioscience, San Diego, CA, USA), CD51/CD61 (clone 23C6, eBioscience), and CD162 (clone KPL-1; BD Biosciences, Franklin Lakes, NJ, USA); panel 3 with CD62P (clone P.seK02.22; BD Biosciences); and panel 4 with the apoptosis and cell death markers phosphatidylserine (annexin V labeling), 4′,6′-diamidino-2-phenylindole (DAPI), and propidium iodide (PI) (BD Biosciences). Each panel also included antibodies against CD34 (clone 581; BD Biosciences), CD42b (platelet marker; clone HIP1; BioLegend, San Diego, CA, USA), and DAPI to discriminate living cells.
Cell suspension (4 × 10³ cells/µl) was incubated with each antibody panel (previously centrifuged at 2 × 10³ × g for 2 min to remove aggregates of antibodies) for 30 min at room temperature in the dark. Immunophenotyping of CD34 + cells was performed on an LSR II flow cytometer (BD Biosciences) and analyzed with Kaluza software (Beckman Coulter, Miami, FL, USA).
Detection of Integrin Polarization and Platelet/CD34 + Conjugates
CD29 and CD51/CD61 distribution on cell surfaces and platelet (CD42b + )/CD34 + cell conjugates was visualized by confocal microscopy (Zeiss Observer Z1 equipped with a Yokogawa CSU-X1 confocal head QuantEM 512SC camera; Intelligent Imaging Innovations, Denver, CO, USA).
Cell Adhesion Onto Collagen-Vitronectin Matrices
Ninety-six-well plates (Sarstedt, Nümbrecht, Germany) were coated overnight at 4°C in PBS containing a mixture of 0.3 µg/ml vitronectin (Sigma-Aldrich) and 1 µg/ml type I collagen (Sigma-Aldrich). The wells were then saturated with 0.1% gelatin [American Type Culture Collection (ATCC), Manassas, VA, USA] for 1 h at room temperature and washed with PBS. Twenty thousand cells in 200 µl of endothelial basal medium-2 (EBM-2; Lonza, Walkersville, MD, USA), pretreated with either the vehicle, RFYVVMWK, or RFYVVM, were seeded per well, and plates were centrifuged for 5 min at 150 × g at room temperature to quickly spin down the cells onto the matrix. Plates were then incubated for 30 min at 37°C and gently washed with EBM-2. Finally, 100 µl of 2% paraformaldehyde (PFA) and 100 µl of DAPI were sequentially added. Nuclei were counted using an inverted epifluorescence microscope (Axiovert 200M, camera AxioCam MRm; Zeiss, Stockholm, Sweden) coupled with the image analysis software ImageJ [National Institutes of Health (NIH), Bethesda, MD, USA]. Results were expressed as the number of adherent cells per 20 × 10³ cells originally loaded per well.
Cell Adhesion Onto HUVEC Monolayers
Human umbilical vein endothelial cells (HUVECs; PromoCell, Heidelberg, Germany) between passage 4 and 8 were seeded into 96-well plates for 48 h at a density of 25 × 10³ cells/well. After 36 h at 37°C, 5% CO 2 , and 95% relative humidity, HUVECs were stimulated for 18 h with 1 ng/ml tumor necrosis factor-α (TNF-α; R&D Systems) or 10 ng/ml interleukin-1β (IL-1β; Sigma-Aldrich). Cells were then washed twice with Hank's balanced salt solution (HBSS). To differentiate PBMCs from HUVECs during cell counting, PBMCs were prelabeled with 0.5 µg/ml calcein-AM (Sigma-Aldrich) in EBM-2 for 1 h at 37°C and then washed and resuspended in EBM-2 before seeding onto HUVEC monolayers. As for matrix adhesion assays, microplates were centrifuged for 5 min at 150 × g at room temperature to quickly spin down the cells onto the HUVECs and then incubated for 1 h at 37°C. After two washes with HBSS, the cells were fixed with 2% PFA. Calcein-AM-labeled PBMCs were then counted by fluorescence microscopy and ImageJ software. Results were expressed as the number of adherent cells per 10 × 10³ cells originally loaded in each well. All experiments were performed in duplicate.
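Adherent cells in both assays were counted on fluorescence images with ImageJ. Purely as an illustration of what the equivalent automated step could look like (this is not the authors' pipeline; the file name, blur width, and minimum object size below are placeholder assumptions), a DAPI nucleus count can be scripted with scikit-image:

```python
# Hypothetical automated count of DAPI-stained nuclei, sketching the
# counting step performed with ImageJ in this study (illustration only).
from skimage import io, filters, measure, morphology

img = io.imread("dapi_field.tif", as_gray=True)     # placeholder image file

smooth = filters.gaussian(img, sigma=2)             # assumed blur to suppress noise
mask = smooth > filters.threshold_otsu(smooth)      # separate nuclei from background
mask = morphology.remove_small_objects(mask, min_size=50)  # assumed min area (px)

# Connected components; touching nuclei would need watershed splitting,
# omitted here for brevity.
n_cells = measure.label(mask).max()

seeded = 20_000  # cells loaded per well in the matrix assay
print(f"{n_cells} adherent cells per {seeded} seeded cells")
```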
Statistics
Analyses were performed using the GraphPad Prism software v.5.01 (GraphPad Software, San Diego, CA, USA). Data were expressed as mean ± standard error of the mean (SEM). The Kruskal-Wallis nonparametric test was used to compare the three preconditioning treatments (vehicle, RFYVVMWK, and RFYVVM) in cell adhesion assays and the Mann-Whitney nonparametric test to compare biomarkers. Values of p < 0.05 were considered statistically significant.
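As a sketch of how these test choices map onto code (the numbers below are synthetic, not study data; SciPy's kruskal, mannwhitneyu, and spearmanr are the nonparametric equivalents of the GraphPad Prism procedures used here and in the Discussion):

```python
# Synthetic illustration of the statistical comparisons described above.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu, spearmanr

rng = np.random.default_rng(0)

# Kruskal-Wallis: compare the three preconditioning treatments
# (e.g., adherent cell counts per well).
vehicle  = rng.normal(60, 10, 20)
rfyvvmwk = rng.normal(280, 40, 20)
rfyvvm   = rng.normal(62, 11, 20)
h_stat, p_kw = kruskal(vehicle, rfyvvmwk, rfyvvm)

# Mann-Whitney: compare a biomarker between T2D and non-T2D groups.
t2d     = rng.normal(5.0, 1.0, 20)
non_t2d = rng.normal(7.0, 1.0, 20)
u_stat, p_mw = mannwhitneyu(t2d, non_t2d, alternative="two-sided")

# Spearman: CD34+ cell count vs. triglycerides (cf. the Discussion).
cd34 = rng.normal(200, 50, 11)
tg   = rng.normal(1.5, 0.4, 11)
rho, p_sp = spearmanr(cd34, tg)

for name, p in (("Kruskal-Wallis", p_kw), ("Mann-Whitney", p_mw), ("Spearman", p_sp)):
    print(f"{name}: p = {p:.4f} -> {'significant' if p < 0.05 else 'NS'} at alpha = 0.05")
```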
RESULTS
Patient Characteristics
Both T2D and non-T2D participants had similar demographic characteristics (Table 1). Almost all participants (95%) had dyslipidemia and high cholesterol levels. The majority was also overweight (BMI > 27 kg/m²), with a smoking history (n = 31). Apart from hypoglycemic therapy in T2D, there were no significant differences in drug regimen, with all patients being treated with antiplatelet drugs and statins. As expected, blood glucose (+40%; p < 0.001) and glycated hemoglobin (+19%; p < 0.001) were significantly higher in T2D participants. T2D participants also had higher triglyceride levels compared to non-T2D participants (+70%; p < 0.002).
Stimulation of Platelet/CD34 + Conjugate Formation by RFYVVMWK
Circulating CD34 + and CD34 - PBMCs were isolated from HD, T2D, and non-T2D blood by immunomagnetic separation and quantified. The purity of the isolated CD34 + cells was 92 ± 4%. No significant difference was found in total PBMCs (167.7 ± 50.2 × 10⁶ PBMC/100 ml blood in T2D vs. 141.5 ± 32.5 × 10⁶ in non-T2D and 136.4 ± 38.8 × 10⁶ in HD; p = 0.2) and CD34 - cells (77.2 ± 3.1 × 10⁶ CD34 - cells/100 ml blood in T2D vs. 62.0 ± 2.3 × 10⁶ in non-T2D and 66.6 ± 2.5 × 10⁶ in HD; p = 0.15). However, twice the amount of CD34 + PBMCs were retrieved from the blood of T2D participants compared to non-T2D and HD participants [218.9 ± 124.1 × 10³ cells per 100 ml of blood vs., respectively, 101.6 ± 29.0 × 10³ cells (p < 0.001) and 117.5 ± 49.8 × 10³ (NS)]. The CD34 + /total PBMC ratio was also significantly higher in cell fractions isolated from T2D participants [0.13 ± 0.06% in T2D vs. 0.075 ± 0.03% in non-T2D (p = 0.0011), and 0.1 ± 0.07% (p = 0.06) in HD]. Although a double enrichment process and extensive washes were used during the purification of CD34 + cells, platelets were still detectable in the final cell preparations (average of 23.5 × 10³ platelets/10³ CD34 + cells). Using flow cytometry, we measured the extent of platelet/CD34 + cell conjugate formation (hereafter referred to as CD42b + /CD34 + conjugates) in samples and the effect of TSP-1 peptide preconditioning. RFYVVM preconditioning had no significant effect on conjugate formation, with 1.4% CD42b + /CD34 + conjugates in T2D and 2% in the non-T2D participants (NS). RFYVVMWK increased the percentage of CD42b + /CD34 + conjugates up to 11% in T2D (p < 0.0001 vs. RFYVVM) and 9% in non-T2D participants (p < 0.0001 vs. RFYVVM). Progenitor cell-platelet conjugate formation following RFYVVMWK treatment was confirmed by assessing the expression of CD42b and CD34 antigens by confocal microscopy (Fig. 1A).
Expression of CD47 and Its Ligand TSP-1
CD47 expression was not modified by the RFYVVMWK preconditioning compared with the nonactive RFYVVM control peptide (Fig. 1B). We observed a 30% lower expression on T2D CD34 + cells compared to non-T2D cells (p < 0.01). CD47 expression was higher on CD42b + /CD34 + conjugates compared to CD42b - /CD34 + cells, probably due to the presence of CD47 on platelets (p < 0.05). TSP-1 was barely expressed on CD42b + /CD34 + conjugates and CD42b - /CD34 + cells preincubated with RFYVVM (Fig. 1C). RFYVVMWK induced a high expression of TSP-1 on both T2D and non-T2D CD34 + cells, in the presence or absence of platelets (all p < 0.001).
Expression of CD62P and CD162
RFYVVMWK induced a significant increase in CD62P on T2D and non-T2D CD42b + /CD34 + conjugates [+146% (p < 0.001) and +129% (p < 0.001) vs. RFYVVM, respectively], and a smaller increase on CD42b - /CD34 + cells [+26% (p < 0.01) vs. +25% (p < 0.001)] (Fig. 1D). By contrast, CD162 expression remained unchanged (Fig. 1E).
Expression of Adhesion Receptors CD29 and CD51/CD61
RFYVVMWK preconditioning of CD34 + cells significantly increased the expression of CD29 in both T2D and non-T2D participants [74% (p < 0.01) and 42% (p < 0.05) in CD42b + /CD34 + conjugates, respectively] (Fig. 2A). A strong increase in CD51/CD61 expression was also measured in T2D and non-T2D CD34 + cells [+2,715% (p < 0.001) and +3,260% (p < 0.001) in CD42b + /CD34 + conjugates, respectively] (Fig. 2B). Similar results were observed with CD42b -/CD34 + cells. Integrin polarization and clustering, which are indicators of integrin activation state, were also detected in RFYVVMWK stimulated cells by confocal microscopy (Fig. 2C).
Effect of RFYVVMWK on CD34 + Cell Viability
RFYVVMWK induced increased phosphatidylserine exposure (PI -/annexin V + cells) in both non-T2D CD34 + (7.2% vs. 0.15% with RFYVVM, p < 0.001) and T2D CD34 + cells (12.4% vs. 0.04% with RFYVVM, p < 0.001) (Fig. 3A). The percentage of PI + /annexin V + cells in response to RFYVVMWK was also significantly higher compared to RFYVVM but remained negligible (0.25% in non-T2D and 0.17% in T2D, p < 0.001 vs. RFYVVM) (Fig. 3B).
Cell Adhesion to Vitronectin-Collagen Matrix
CD34 + cells preincubated with vehicle (saline) adhered with values reaching 59.9 ± 8 cells/20 × 10³ seeded cells, compared to 18.8 ± 2.9 CD34 - cells (p < 0.0001) (Fig. 4A). RFYVVMWK strongly increased the adhesion of CD34 + and CD34 - cells, respectively, by +368% and +468%. The RFYVVM peptide had no significant effect, giving results comparable to vehicle. T2D CD34 + cells showed 67% less basal adherence (30 ± 4 cells) compared to non-T2D cells (90 ± 14 cells, p < 0.0001) (Fig. 4B). RFYVVMWK strongly increased the adhesion of T2D and non-T2D CD34 + cells by +786% (266 ± 54 cells) and +232% (296 ± 31 cells), respectively (p < 0.0001 compared to vehicle-treated cells).
Cell Adhesion to HUVEC Monolayers Stimulated With TNF-α or IL-1β
We next measured the adhesion of CD34 + cells on HUVEC monolayers prestimulated with TNF-α or IL-1β. T2D and non-T2D CD34 + cells equally adhered to prestimulated HUVEC monolayers. Neither RFYVVMWK nor RFYVVM had a significant effect on cell adhesiveness (Fig. 4C and D).
DISCUSSION
Proangiogenic cell therapy offers many potential applications in regenerative medicine for the treatment of patients with ischemic diseases, particularly in cardiology. Promising preclinical studies have prompted the initiation of numerous clinical trials based on administration of progenitor cells. Several cellular functions are involved in the process of neoangiogenesis such as homing and recruitment of cells, proliferation, endothelial differentiation, and survival. However, cardiovascular risk factors such as T2D were associated with dysfunctional progenitor cells, including impaired adhesiveness 5,6 , which undermines their therapeutic value in autologous cell therapies 20 . CD34 + PBMC recruitment to damaged vessels is a crucial step to initiate the process of vascular repair and neovascularization 9 . In ex vivo settings, we observed a lower basal adhesion of T2D CD34 + cells to vitronectin-collagen matrix compared to the non-T2D CD34 + cells. We thus sought to investigate whether stimulating the adhesiveness of CD34 + PBMCs was feasible in an attempt to improve cell therapy efficiency in T2D.
The transmembrane protein CD47, TSP-1 receptor, associates with CD51/CD61, CD41a/CD61, CD49d/CD29, and CD49b/CD29 integrins to mediate cell adhesion and motility 17 . Herein we provide evidence that prestimulating CD34 + PBMCs with RFYVVMWK, a TSP-1-related peptide that activates CD47, restores and amplifies their adhesiveness to vitronectin-collagen matrix beyond the basal adhesion values obtained in non-T2D patients. In addition, we showed a strong increase in surface expression of CD29 and CD51/CD61 integrins following RFYVVMWK stimulation, thereby providing a possible mechanism to the increased adhesion of CD34 + cells to the subendothelial matrix components. Confocal microscopy strengthened this hypothesis by revealing polarization of integrin at the cell surface, consistent with the clustering process occurring during integrin activation.
The endothelial expression of CD51/CD61 (α v β 3 integrin) and its interaction with extracellular matrix components are crucial during angiogenesis 21 . This interaction triggers vascular endothelial growth factor (VEGF)-A-mediated full activation of VEGF receptor 2 (VEGFR-2), but also yields a strong antiapoptotic effect through the suppression of p53 and the p53-inducible cell cycle inhibitor p21WAF1/CIP1 activities and an increase in the Bcl-2/Bax ratio 22,23 . Consistent with the latter functions of CD51/CD61, RFYVVMWK priming did not compromise CD34 + cell survival, as assessed by annexin V/PI labeling. CD29 (β 1 ) integrin subsets diversely contribute to angiogenesis. Li and collaborators have associated CD29 expression levels with the rate of implantation and colonization of ischemic limbs with bone marrow-derived endothelial precursors, which is of critical importance for inducing therapeutic angiogenesis by cell implantation 24 .
Although standardized CD34 + PBMC isolation and purification techniques were used in this study, there were still platelet remnants in the positive fraction, as recurrently reported in the literature addressing CD34 + cell isolation and enrichment 25 . We observed a significant increase in platelet-CD34 + conjugate formation upon RFYVVMWK stimulation, along with an increased expression of CD62P restricted to platelet-CD34 + conjugates. These results are consistent with the previously reported activating effect of RFYVVMWK on platelets 26 . This activation, concomitant with CD34 + cell stimulation, induces platelet secretion and surface CD62P expression, thereby enabling platelets to interact with CD162 (PSGL-1) on CD34 + cells. As previously described by others, platelets are instrumental in neovascularization by targeting CD34 + cell recruitment to injured vessels and promoting their homing and maturation 8,10,11 .
Consistent with this rationale, RFYVVMWK stimulation had no significant effect on CD34 + PBMC adhesion on HUVEC monolayers stimulated with either TNF-α or IL-1β. These results echo our observations in a TSP-1 knockout mouse model of FeCl 3 -induced intravascular thrombosis, in which we observed that TSP-1 was essential for bone marrow cell (BMC) recruitment to vascular injury sites 18 . We also reported that CD47 preactivation with RFYVVMWK strongly stimulated BMC adhesion and specific recruitment to sites of thrombosis in vivo 19 . The present findings suggest that stimulation by RFYVVMWK confers to CD34 + PBMCs an increased adhesiveness restricted to the most damaged and de-endothelialized vascular areas exposing the matrix components, with limited stickiness to healthier areas.
It has previously been suggested that increased expression of adhesion molecules [including CD11a/CD18 (LFA-1), CD49d/CD29 (VLA-4), CD54 (ICAM-1), CD51/CD61, and CD162] on CD34 + or endothelial progenitor cells and/or increased adhesiveness in vitro could translate into enhanced endothelial repair or neovascularization capacity in vivo [27][28][29] . In line with these observations, we have previously reported that priming BM-MNCs with RFYVVMWK results in increased proangiogenic activity in a mouse model of hindlimb ischemia and cell therapy 19 . However, additional studies are required to demonstrate whether the priming of CD34 + cells isolated from PB improves vascularization in vivo.
In a recent study, Albiero et al. suggested that increased adhesiveness of stem cells may hamper their ability to be mobilized from the bone marrow 30 . Our results are in line with the prospect of using the peptide ex vivo as a pretreatment strategy prior to administration of an autologous cell-based therapy product, rather than using RFYVVMWK in vivo. Thus, we anticipate that the endogenous mobilization of stem cells would not be affected. In addition, since the majority of current cell-based therapy strategies use local injection in ischemic areas, it is unlikely that RFYVVMWK preconditioning of cells would favor homing of injected CD34 + cells into the bone marrow.
RFYVVMWK induced surface expression of TSP-1. This neo-expression was observed even in CD42b - /CD34 + cells. The timeline of our experimental conditions suggests that TSP-1 originated from exocytosis or platelet secretion rather than from neosynthesis per se. The consequences of TSP-1 expression on CD34 + cells are difficult to anticipate, as TSP-1 induces both positive and negative modulation of endothelial cell adhesion, motility, and growth through its interaction with a plethora of cell adhesion receptors, including CD47, CD36, CD51/CD61, and CD29 integrins, and syndecan 12 .
CD47 expression was not modulated upon RFYVVMWK stimulation. CD47 interaction with signal regulatory protein α (SIRP-α), expressed on macrophages and dendritic cells, negatively regulates phagocytosis of hematopoietic cells 31 . Interestingly, we observed that T2D CD42b + /CD34 + conjugates express significantly less CD47 on their surface compared to non-T2D cells. This lower expression of CD47 may contribute to a higher susceptibility of T2D CD34 + cells to phagocytosis in vivo. Yet, we could not demonstrate lower amounts of CD34 + cells in T2D PB, as previously observed by others 32 . Surprisingly, using a single-blinded counting approach, we measured significantly higher levels of CD34 + cells recovered from T2D patients (n = 20) compared to non-T2D (n = 20). This could be due to the fact that counting of CD34 + cells was performed after enrichment of cells with an immunomagnetic CD34 antibody column rather than on the PB of patients, which may have introduced an unexpected bias in the quantification. In addition, several studies have suggested that glycemic control could impact circulating progenitor cell levels in diabetic patients. Indeed, oral antidiabetics were shown to attenuate the quantitative deficit and improve the angiogenic function of progenitor cells in diabetics 33,34 . The underlying mechanisms probably involve reduction in inflammation, oxidative stress, and insulin resistance. Furthermore, a recent study reported a positive correlation between circulating CD34 + cell count and serum triglycerides in nonhypertensive elderly Japanese men, suggesting that triglycerides may stimulate an increase in circulating CD34 + cells by inducing vascular disturbance 35 . In our patient cohort, triglycerides were significantly higher in T2D patients despite statin treatment. In agreement with the study by Shimizu and collaborators 35 , CD34 + cell count significantly correlated with triglyceride levels in nonhypertensive patients (n = 11; Spearman test; r = 0.81; p < 0.004), but also, to a lesser extent, in the hypertensive group (n = 27; Spearman test; r = 0.43; p < 0.03).
In conclusion, priming CD34 + PBMCs from T2D patients with the TSP-1 carboxy-terminal peptide RFYVVMWK restores and amplifies their adhesion properties without compromising their viability. These findings may be instrumental to improve proangiogenic autologous cell therapy in several disease settings such as T2D.
Figure 1. (A) Examples of CD42b + (red)/CD34 + (green) conjugates formed after stimulation with RFYVVMWK observed by confocal microscopy. Scale bars: 5 µm. (B-E) Expression of CD47 (B), TSP-1 (C), CD62P (D), and CD162 (E) on CD34 + /CD42b + conjugates and CD34 + /CD42b - cells after RFYVVM or RFYVVMWK preconditioning. TSP-1, thrombospondin-1; MFI, mean fluorescence intensity; T2D, type 2 diabetes (gray bars); non-T2D, nondiabetic (white bars); NS, not significant. *p < 0.05; **p < 0.01; ***p < 0.001 [analysis of variance (ANOVA)].
Figure 2. Expression of CD29 (A) and CD51/CD61 (B) on CD34 + /CD42b + conjugates and CD34 + /CD42b - cells after RFYVVM or RFYVVMWK preconditioning. (C) Examples of CD29 (top) and CD51/CD61 (bottom) distribution on RFYVVM (left)- and RFYVVMWK (right)-stimulated cells observed by confocal microscopy are shown. MFI, mean fluorescence intensity; T2D, type 2 diabetes (gray bars); non-T2D, nondiabetic (white bars); NS, not significant. *p < 0.05; **p < 0.01; ***p < 0.001 (ANOVA).
Figure 3. Percentage of CD34 + annexin V + /PI - and annexin V + /PI + cells after preconditioning with RFYVVM or RFYVVMWK. T2D, type 2 diabetes (gray bars); non-T2D, nondiabetic (white bars); AnV, annexin V; PI, propidium iodide. **p < 0.01; ***p < 0.001 (ANOVA).
Figure 4. Effect of peptide preconditioning on the adhesion of CD34 - and CD34 + cells onto vitronectin-collagen matrix. (A) All patients (diabetic plus nondiabetic). (B) Diabetic (gray bars) versus nondiabetic patients (white bars). Results are expressed as number of adherent cells per 2 × 10³ seeded cells. ***p < 0.001 (Kruskal-Wallis test). Effect of preconditioning with TSP-1 peptides on the adhesion of CD34 + (nonhashed bars) and CD34 - cells (hashed bars) in diabetic (gray bars) versus nondiabetic patients (white bars) on HUVEC monolayers prestimulated by TNF-α (C) or IL-1β (D). Results are expressed as number of adherent cells per 10³ seeded cells (p > 0.05, Kruskal-Wallis test). HUVEC, human umbilical vein endothelial cell; TNF, tumor necrosis factor; TSP, thrombospondin; T2D, type 2 diabetes; non-T2D, nondiabetic; NS, not significant. ***p < 0.001 (ANOVA).
Table 1. Patient Characteristics

Characteristics | Nondiabetics (n = 20) | Type 2 Diabetics (n = 20) | p
Age (years) | 70.1 ± 1.9 | 69 ± 1.6 | 0.34
BMI (kg/m²) | 29 ± 1.1 | 30 ± 1.6 | 0.2
Hypertension (%) | 15 (75) | 12 (60) | 0.5
Dyslipidemia (%) | 19 (95) | 19 (95) | 1
Former smoking (%) | 12 (60) | 15 (75) | 0.46
Active smoking (%) | 2 (10) | 2 (10) | 1
Blood glucose (mmol/L) | 5.61 ± 0.1 | 7.9 ± 0.5 | <0.001
HbA1c (mmol/mol) | 39 ± 0.9 | 51 ± 3.6 | <0.001
Total cholesterol (mmol/L) | 3.8 ± 0.2 | 3.7 ± 0.2 | 0.7
LDL cholesterol (mmol/L) | 2 ± 0.2 | 1.8 ± 0.2 | 0.08
HDL cholesterol (mmol/L) | 1 ± 0.1 | 1.1 ± 0.1 | 0.8
Triglycerides (mmol/L) | 1 ± 0.1 | 1.8 ± 0.2 | 0.002
Platelets (G/L) | 199 ± 11.4 | 181 ± 6.3 | 0.3
Statins (%) | 20 (100) | 20 (100) | NS
Antiaggregant (%) | 20 (100) | 20 (100) | NS
Oral antidiabetics (%) | 0 | 18 (90) | <0.001
Results are expressed as means ± SEM. BMI, body mass index; HbA1c, glycosylated hemoglobin; LDL, low-density lipoprotein; HDL, high-density lipoprotein.
ACKNOWLEDGMENTS: This work was supported in part by the Agence Nationale de la Recherche (Grant No. ANR-07-PHYSIO-025-02). The authors declare no conflicts of interest. |
01768739 | en | [
"shs.scipo"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01768739/file/doc00028506.pdf | Odile Heddebaut
email: [email protected]
Floridea Di Ciommo
email: [email protected]
Cambiamo | Changing Mobility
City-Hubs for Smarter Cities. The case of Lille "Euraflandres" Interchange. 1
Keywords: Multimodal interchanges, smart cities, stakeholder involvement, socio-economic impacts, urban planning, urban landmark, Euraflandres
Good planning and organization of communication and transport networks contributes to the development of cities that are more fluid, user-friendly and sustainable. All these attributes are included in the concepts underlying "smart cities". The efficient operation of the nodes of these transport networks is a condition for making the city smarter. As part of the European FP7 research project "City-HUB", 27 interchanges were studied in nine European countries. The key question was how they function in terms of planning governance and the organisation of facilities. One of the main results of this research project has been the elaboration of a City-HUB typology, which is used here to analyse the upcoming "Euraflandres" interchange transport node within the Métropole Européenne de Lille (MEL). The two adjacent interchanges, "Gare Lille Flandres Interchange" and "Gare Lille Europe Interchange", will be integrated into a single City-HUB named "Euraflandres". We demonstrate that the combination of transport and urban planning policies within a transport node could stimulate local development and contribute to a sustainable and user-friendly city. The new pole of urban exchanges "Euraflandres" would go beyond the pure role of transport infrastructure to become a "place" of life and an urban landmark for a smart city within the Métropole Européenne de Lille.
INTRODUCTION & BACKGROUND
In the call for papers for the European Transport Research Review (ETRR) special issue on "smart cities and transport infrastructures", the editors stipulate that "smart cities are concerned with new consideration towards environment, such as new ways for consuming and producing clean energies through mobility, oriented with new uses of information, but also better interconnection of networks, including transport means and infrastructure" [START_REF] Carnis | European Transport Research Review, Proposal for a topical collection on "Smart cities and transport infrastructures[END_REF]. This article tries to answer a key question: how does an integrated, landmark transport interchange contribute to a smarter, more sustainable and friendlier city? What is the role of public transport network connections in increasing the use of eco-friendly transport modes? Within this context, the paper focuses on the conception of intermodal transport infrastructures such as city-hubs and on their contribution to enhancing urban mobility, which in turn affects the environmental and quality-of-life dimensions of smart cities. After a literature review on smart city concepts from technological, people and community points of view (section 2), section 3 presents the evolution of the definition of interchanges and their spatial organisation. We develop the idea that City-HUB interchanges induce new and smarter practices of transport infrastructures. In particular, the organisation of transport networks in city-hubs, associated with their fluidity, their comfort and their eco-friendly characteristics, plays a key role within a smart city. This section also presents how the City-HUB interchange typology works to rank transport interchanges and create a smart and eco-friendly urban environment. Section 4 is dedicated to the interchange case study of the Métropole Européenne de Lille (MEL), where two adjacent interchanges, "Gare Lille Flandres Interchange" and "Gare Lille Europe Interchange", are becoming a potential unique interchange named "Euraflandres". We describe its location as a node within the regional, national and international railway networks, its socio-economic and spatial organisation, and its governance. In the conclusion (section 5) we present the main findings of our research work.
LITERATURE REVIEW ON SMART CITIES AND CONSIDERATION OF TRANSPORT DIMENSION
The smart city concept from a technological point of view
At the end of the 1990s, the emphasis was put on technical aspects, and particularly on ICT (Information and Communication Technologies), to define smart cities (Harrison, Donelly, 2011).
Technology for smart cities is described by [START_REF] Washburn | Helping CIOs Understand "Smart City" Initiatives: Defining the Smart City, Its Drivers, and the Role of the CIO[END_REF] as "smart computing". Smart computing refers to "a new generation of integrated hardware, software, and network technologies that provide IT systems with real-time awareness of the real world and advanced analytics to help people make more intelligent decisions about alternatives and actions that will optimize business processes and business balance sheet results". [START_REF] Bertossi | Villes intelligentes, «smart», agiles, Enjeux et stratégies de collectivités françaises[END_REF] stressed the role of transport infrastructures in their capacity to contribute to the smart city as a "fluid", "intelligent" and "convivial" city. Intermodal city hubs are specific infrastructures that can combine these concepts by enhancing mobility fluidity and the use of connected tools, and by developing new urban neighbourhoods through the combination of transport and urbanism policies, seeking conviviality and trying to enhance urban amenities for citizens.
As explained by [START_REF] Albino | Smart Cities: Definitions, Dimensions, Performance, and Initiatives[END_REF], "cities worldwide have started to look for solutions which enable transportation linkages, mixed land uses, and high-quality urban services with long-term positive effects on the economy. For instance, high-quality and more efficient public transport that responds to economic needs and connects labour with employment is considered a key element for city growth".
In its paper on smart cities, the European Parliament (2014) counts smart mobility as a fundamental component of smart cities, alongside smart governance, smart economy, smart environment, smart people and smart living. "By Smart Mobility we mean ICT supported and integrated transport and logistics systems. For example, sustainable, safe and interconnected transportation systems can encompass trams, buses, trains, metros, cars, bicycles and pedestrians in situations using one or more modes of transport". Moreover, it insists on the interaction between stakeholders: "Smart Mobility prioritises clean and often non-motorised options. Relevant and real-time information can be accessed by the public in order to save time and improve commuting efficiency, save costs and reduce CO2 emissions, as well as to network transport managers to improve services and provide feedback to citizens. Mobility system users might also provide their own real-time data or contribute to long-term planning" (European Parliament, 2014, p. 28).
The necessity of combining urban planning and smart city initiatives is also stressed by Anthopoulos and Vakali (2016), who examine the interrelations and reciprocities between these policies.
The smart city concept from the point of view of people and communities
Transport infrastructures are also seen as a means of making a city smarter and providing its inhabitants with a better quality of life. The literature review by [START_REF] Albino | Smart Cities: Definitions, Dimensions, Performance, and Initiatives[END_REF] shows that there is a need to reintroduce organisation and to "look at people and community needs".
For [START_REF] Chourabi | Understanding smart cities: an integrative framework, 45 th Hawaii International Conference on System Science.Batty et al; Smart cities of the future Eur[END_REF], infrastructures that contribute to smarter cities are seen as technical ones, such as wireless infrastructure (fibre optic channels, Wi-Fi networks, wireless hotspots) and service-oriented information systems. But they identify eight critical factors, such as management and organisation; technology; governance; policy context; people and communities; built infrastructure; and natural environment, for understanding smart cities. When describing people and communities, a particularly important factor, they include accessibility among the elements with an impact on citizens' quality of life. Nam and Pardo (2011a) first reviewed the literature to understand the common multidimensional components underlying the smart city concept. They describe a smart city as "an organic connection among technological, human and institutions components". They provide a scheme linking these three factors and nourishing the vision of a smart city that includes smart transportation. They affirm that "social factors other than smart technologies are central to smart cities". When defining smart city concepts from a technological point of view, they say that "ITS can help people make more intelligent decisions about alternatives".
In a new publication, [START_REF] Nam | Smart city as urban innovation: Focusing on Management, Policy and context[END_REF] insist on the fact that "smart is more user-friendly than intelligent which is limited to having a quick mind and being responsive to feedback. A smart city is required to adapt itself to the user needs and to provide customised interfaces". They conclude that "a smart city is not only a technological concept but a socioeconomic development one, service oriented. It is a "new concept of partnership and governance developed through electronic linkage of multi-level, multi-jurisdictional governments and all non-governmental stakeholders such as firms ... and citizens".
METHODS TO DETECT CITY-HUBS ROLE IN A SMART CITY
Evolution of interchanges' definitions
As seen above, smart mobility and the good use of transport infrastructures contribute to making the city smarter. Intermodal transport interchanges are specific infrastructures enabling this smart mobility.
In this section we demonstrate that the definition of interchanges has evolved from a purely functional one, describing the ease of movement inside and outside the interchange, towards its integration into a more complex vision of its interactions with transport, service and city functions. Furthermore, it describes how interchanges provide a better quality of life for citizens by linking institutional, governance and socioeconomic development within the urban context.
A clear definition of interchange was elaborated by the Madrid Regional Transport Authority in 1985, with a vision of making interchanges accessible, functional and convenient, i.e. an "Area whose purpose is to minimize the inevitable sensation of having to change from one mode of transport to another, and efficiently using the inevitable waiting time" [START_REF] Crtm | Plan de Intercambiadores Madrid[END_REF].
Public transport hubs in many European cities are often designed for different scale functions. [START_REF] Richer | L'émergence de la notion de « pôle d'échanges » : entre interconnexion des réseaux et structuration des territoires[END_REF] describes the three functions of an interchange. He associates the transport function, enhancing mobility fluidity through smart transport services, with the city function, combining city and land planning, neighbourhood development and new territorial polarisation, and providing city services. The service function covers services that span the domains of the previous functions. (Figure: the three functions of an interchange. Source: [START_REF] Richer | L'émergence de la notion de « pôle d'échanges » : entre interconnexion des réseaux et structuration des territoires[END_REF].) Interchanges can also provide new functions and determine new roles for national rail and road network accessibility, creating new hierarchies within cities. They can have a very important function within the regional planning context, providing new urban centralities as explained in section 5.1. Multimodal poles are also integrated into urban and local land planning. They can produce urban regeneration of some areas and be part of a transit-oriented development (TOD) policy [START_REF] Calthorpe | The Next American Metropolis: Ecology, Community, and the American Dream[END_REF][START_REF] Cervero | Transit-oriented development and joint development in the United States: A literature review[END_REF]. The ultimate function of an interchange is to allow easy transfer from one mode of transport to another. The main idea is to facilitate intermodal transfers, increase the use of sustainable transport modes, and reduce the total journey time, improving the quality of service. Interchange nodes are oriented to coordinate various private and public modes.
Spatial organisation within the interchanges
Di Ciommo et al. [START_REF] Di Ciommo | Using hybrid latent class model for city-HUBs´users behaviour analysis[END_REF] show that users identify the improvement of city-hubs with the quality of time spent inside. The current challenge of interchanges is to facilitate the transfer from the use of private motorized vehicles to a shared use of cars (i.e. car sharing or carpooling), to the use of public transport, and to non-motorized modes. It is, in a certain way, a planning principle. A comfortable and practicable connection by platforms, information systems, bike-and-ride options, and the organization of pedestrian flows around an interchange will be the pivot for designing, constructing and renewing interchange spaces. Travel intermodality could become a real policy goal to provide passengers with seamless journeys even when they use a combined trip chain.
It is essential to make interchanges attractive places in order to reach or maintain a good level of public transport use. As travel patterns become more complex, currently many public transport users have to make transfers between different transport modes to complete their daily journeys. In this respect, measures oriented to improve public transport service quality are required, such as reducing the transfer inconvenience and providing a seamless travel experience. Moreover, total travel time directly influences trip choices. Good connectivity at public transport stops and stations is therefore critical to overall transport network effectiveness [START_REF] Iseki | Style versus Service? An Analysis of User Perceptions of Transit Stops and Stations[END_REF]. Urban transport interchanges play a key role within transport network since they allow that different modes can be used in an integrated manner. However, transport stations in general must be considered multimodal facilities where travellers are not only passing through; they are also spending time there [START_REF] Van Hagen | Waiting experience at train stations[END_REF]. For this reason, public transport users are particularly affected by the quality of the service provided.
The European "City-HUB" project investigated how transport interchanges work from the point of view of governance and the organization of facilities. This project has determined a number of relationships between transport infrastructure such as multimodal interchanges and their environment. A good link between transport functions and urbanism planning provides efficient use of urban spaces, especially in city centres where there is scarcity of space and where multiple types of trips coexist. The social link represents the necessity to deploy inclusive mobility where persons with special needs could access the transport modes at a fair price. Developing strategic governance between planners, policy makers, operators and the business world ensure a good functioning of these city interchanges. The technical link develops innovations and can change user's habits by implementing intelligent transport systems (ITS) and nomad technologies like itinerary devices on smart phones.
The aim of an interchange will generally be oriented to improve the quality of public transport services and support seamless door-to-door travel. But nowadays an interchange is more than a simple node in a transport network; it includes many elements. Research literature shows that the benefits of urban interchanges relate to time savings, better use of waiting times, urban integration, and improved operational business models [START_REF] Di Ciommo | L'accessibilité: l'enjeu prioritaire de la nouvelle politique des transports publics à Naples, in Bernard Jouve, Les politiques de déplacements urbains en Europe[END_REF]. Besides accessibility improvements, management, and innovation, an efficient use of interchanges should also be considered. It also concerns the urban environment, both through the interchange's impacts on land use and through the constraints imposed by the land use around the interchange. On this basis, we defined a typology of interchanges to classify these interchanges and select the key elements to improve interchange location, construction, and organisation.
Scoring interchange weight
These different functions, as described above, can be applied to interchanges of particular dimensions and sizes. On the basis of the 27 interchanges analysed in the City-HUB project, we have established a typology capturing different interchanges and a scheme for scoring their characteristics in terms of functional and logistic dimensions (demand, number of transport modes, services and facilities, location in the city) and their local constraints [START_REF] Di Ciommo | Interchange place in Monzon-de[END_REF]. In particular, the first group of aspects (Dimension A) is related to the internal functions and logistics of an interchange, including the transport elements of the interchange and the services and facilities necessary to fulfil the transfer functions properly. This dimension determines the size of the terminal building. The second group (Dimension B) includes the external aspects of the city environment that affect how the building could be in reality. This dimension includes the location of the interchange within the city, whether or not the interchange plan conflicts with the existing land uses in the surrounding area, and whether a specific development plan exists for the area of the interchange or for the interchange itself. The values given in Dimension A determine the need for space, i.e., the interchange size. A total score lower than 4 calls for a Small interchange, scores of 5-7 indicate the need for a Medium one, while a score higher than 8 means that the interchange should be rather big, becoming an urban Landmark. Dimension B aspects can be negative, positive or neutral, modifying the previous scores and thus the type of interchange required.
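To make the scoring rule concrete, the sketch below classifies an interchange from its Dimension A sub-scores and a Dimension B modifier. The sub-score names and ranges, the ±1 modifier, and the treatment of the boundary scores 4 and 8 (left ambiguous in the text) are illustrative assumptions, not the official City-HUB grid.

```python
# Illustrative implementation of the City-HUB typology scoring.
# Dimension A (internal functions and logistics) determines the size;
# Dimension B (city environment) may shift the score up or down.

def interchange_type(demand: int, modes: int, services: int, location: int,
                     dimension_b: int = 0) -> str:
    """Classify an interchange as 'Small', 'Medium' or 'Landmark'.

    demand, modes, services, location: assumed Dimension A sub-scores
    (0-3 each), covering demand, number of transport modes, services
    and facilities, and location in the city.
    dimension_b: -1, 0 or +1 for a penalising, neutral or favourable
    urban context (assumed encoding of the negative/neutral/positive
    Dimension B aspects).
    """
    score = demand + modes + services + location + dimension_b
    if score <= 4:      # "lower than 4": Small (score 4 assigned here by assumption)
        return "Small"
    if score <= 7:      # scores 5-7: Medium
        return "Medium"
    return "Landmark"   # "higher than 8": urban Landmark (8 included by assumption)

# A high-demand, central node with many modes and services in a
# favourable planning context scores as a Landmark.
print(interchange_type(demand=3, modes=3, services=2, location=3, dimension_b=1))
```

Under these assumptions, a configuration like the one described for "Euraflandres" in section 4 (very high demand, many connected modes and services, central location, supportive planning framework) would fall in the Landmark class.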
This typology is applied to analyse the "Euraflandres" interchange case study in section 4.5 when questioning its potential to become an urban landmark.
The interchange governance framework
Once the typology of an interchange and the urban planning have been defined, the second relevant aspect is its management and the regulation behind its governance. Following the analysis developed for the City-HUB case studies, the governance framework is specified through semi-structured interviews with key interchange actors (Popelier et al., 2016).
The co-ordination of modes is related to the involvement of different stakeholders in a common governance framework to plan interchange practices and urban space with a sustainable urban scope. The friendly use of these interchanges by citizens is also a goal to attain, and new technologies can be deployed in the context of smart cities, such as real-time information and free-access networks for mobility purposes, while also providing new public spaces and urban facilities.
Despite existing barriers (complex governance framework, physical barriers, functions and logistics to revise, local constraints), all the stakeholders are willing to improve the visibility and the functionality of these interchanges. The City-HUB project shows how the urban interchange has the role of developing activities and regenerating the urban environment by transforming the features of the surrounding area. All this will make cities more convivial and fluid, thereby answering at least two key aspects of smart cities: fluidity and conviviality (see the definition of a smart city as "fluid", "intelligent" and "convivial" by [START_REF] Bertossi | Villes intelligentes, «smart», agiles, Enjeux et stratégies de collectivités françaises[END_REF]).
THE CASE STUDY OF "EURAFLANDRES": A POTENTIAL URBAN LANDMARK?
In this section, we first need to clarify the names used. "Gare Lille Flandres" means the "Gare Lille Flandres" railway station. "Gare Lille Flandres Interchange" means the interchange composed of the "Gare Lille Flandres" railway station and the metro and tramway stations bearing the same name, linked with the other public transport services such as buses, the public bike-sharing scheme named V'Lille and the public car-sharing system named Lilas. The same distinction applies to the "Gare Lille Europe", which is the international railway station, and the "Gare Lille Europe Interchange", which combines all the urban public transport services, including international trains, and the future private coach station (Eurolines, SNCF interurban coaches, ...).
"Euralille" is the name of the shopping mall that is located between the "Gare Lille Flandres Interchange" and the "Gare Lille Europe Interchange".
"Euralille" is also the name of the new business centre created at the end of the 90' at the occasion of the Northern TGV network achievement. In the literature it is also named "Euralille 1" or the CIAG (Centre International d'Affaires des Gares). We will call it the "Euralille CBD (central business district)".
"Euralille 3000 project" is the name of the future "Euralille CBD" development project until the year 2030.
"Euralille spl" is the name of the company in charge of the "Euralille CBD" development.
"Euraflandres" is the future name of the bigger interchange. It will include the "Gare Lille Flandres Interchange", the "Gare Lille Europe Interchange", the "Euralille" shopping mall and the "Euralille CBD".
The place of the "Euraflandres" interchange as a node on networks
The main reason for developing a joint interchange as a unique node is to reduce car dependency and increase public transport use. Indeed, we describe in section 4.5 how the "Euralille CBD" is the result of the political willingness to combine a railway infrastructure investment, urban stations on a high-speed railway network, with the creation of a completely new business and commercial district.
In February 2016, the MEL published its new planning and sustainable land use development document, the Scheme of Territorial Coherence (SCoT). For the first time, the SCoT introduces the city-HUB of "Euraflandres" as the linkage of the two interchanges "Gare Lille Flandres Interchange" and "Gare Lille Europe Interchange" in relation to the new district of the Euralille CBD [START_REF] Mel | Projet d'aménagement et de développement durables du SCOT de Lille Métropole[END_REF]. In January 2017, various interviews were conducted with "Euraflandres" stakeholders in order to understand the transformations and evolutions projected within the MEL interchanges "Gare Lille Flandres Interchange" and "Gare Lille Europe Interchange" on their way to becoming "Euraflandres".
In this SCoT planning document, the name "Euraflandres" appears and is put forward to affirm its role as the gateway to the MEL: "In the centre of the MEL, the railway stations "Gare Lille Europe" and "Gare Lille Flandres" associated with urban transport constitute a real transport hub. "Euraflandres" is a key nerve centre for travel and must be an attractive, readable and radiant pole serving the users and the overall image of the territory" (MEL, 2016).
"Euraflandres" is located at a hub on the regional, national and international high speed train rail network linking France to the United Kingdom, the Netherlands, Germany, and Belgium. This new position on the international high speed railway network created new centrality for Lille. Effectively, Lille, which was previously at the end of the French networks, being placed at the crossroad of the Northern high speed train network changed its role. Lille has become more central within the European transport network, in connection with different sub-regional areas with access to regional trains and intercity buses (figure 3).
As explained in the SCoT document, "Euraflandres" will be composed of the two main interchanges in the MEL: the "Gare Lille Flandres interchange" and the "Gare Lille Europe interchange". They are located 500 metres from each other in the "railway stations triangle" and offer transfer possibilities from urban public transport to rail services, mainly at the regional level for the "Gare Lille Flandres" railway station and mainly at the national and international levels for the "Gare Lille Europe" railway station. A big shopping mall is located between these two interchanges (figure 4). It lies in the centre of the new business district of "Euralille CBD" and bears the same name, Euralille. The Euralille shopping mall covers 67,000 m² and includes 164 shops and services, one of which is a hypermarket.
4.2 Spatial organisation within the future city-HUB of "Euraflandres"
The "Gare Lille Flandres interchange" is composed of the railway, metro and tramway stations bearing the same name of "Gare Lille Flandres". The "Gare Lille Flandres" railway station is an old construction inside the city, very close to the main square and near the old part of Lille. The station opened in 1848. Its façade is the former façade of the Parisian "Gare du Nord", which was reconstructed in Lille in 1867. It is the second busiest regional train station in France after Lyon. The "Gare Lille Flandres" railway station serves the regional towns with regional trains named TER (Express Regional Trains). It also links Lille to Paris in one hour with direct TGV trains.
The "Gare Lille Flandres" railway station has 17 platforms for more than 500 trains per day and a traffic of 20 million passengers per year (2012), with 110,000 daily users, of whom 70,000 take the train and 40,000 only cross the station, use its services or visit its shops. It has many connections with the urban transport network: the VAL metro (light automated vehicle) lines 1 and 2 (on the second underground level) and two tramway lines towards Roubaix and Tourcoing (at an intermediate level) (see figure 1). Since the refurbishment of the "Gare Lille Flandres" railway station, other shops have opened, such as a sports and bike shop, a supermarket and restaurants.
The "Gare Lille Europe interchange" is composed of a very modern railway station and metro and tramway stations bearing the same name of "Gare Lille Europe". The railway station was constructed to host the Northern high-speed trains (French TGVs, Eurostars and Thalys) on the high-speed railway network (see figure 1), together with an urban public transport station. It also serves regional high-speed trains named TER-GV, which run on the high-speed tracks to link the "Côte d'Opale" coastal area and the cities of Calais, Dunkirk and Boulogne-sur-Mer, as far as Rang-du-Fliers in the south-west of the region. It opened in 1994 with the opening of the Channel tunnel and the Northern TGV network. It also serves the other French regions by TGV trains that go directly to the South East (Lyon, Marseille, Nice) or the East (Strasbourg), and it connects the western parts of France (Brittany with Nantes and Rennes, the South West with Bordeaux, …) by means of trains that bypass Paris before heading westwards. All the TGVs stop at the international airport of Roissy Charles de Gaulle when going southwards. The "Gare Lille Europe" railway station has 4 platforms and 2 central railway lines for Eurostar trains coming from Paris and going directly to London. It has a daily traffic of 8,500 passengers. It connects Lille to Paris by TGV (1 hour), and to Brussels (38 minutes) and London (1 hour 20 minutes) with the Eurostar trains. Since 2014, works have been undertaken on the departure concourse for the cross-Channel trains (Eurostar): they will provide new control desks and a bigger boarding area.
In order to facilitate moving between different means of transport for passengers, a new colour code is used to display information in the two railway stations: in blue, information about railway services (platform numbers); in green, information about the intermodal transport modes (metro, tramway, buses, taxis, self-service bikes, car-sharing, …); in yellow, information about services (ticket sales counters, waiting rooms, meeting points, toilets, …). It was tested for the first time in the "Gare Lille Europe" railway station and is now applied in all French railway stations in order to provide the same travel information and ease passengers' trips inside the interchanges. This new colour code can be seen inside the two railway stations in figure 4.
Innovations have been made for fares and ticketing information with the new "PassPass" smart card. Real-time information displays for train departures are located in the railway stations and in the "Gare Lille Flandres" metro station, as well as in the O'Conway pub and in coffee shops in the "Gare Lille Europe" railway station. Nevertheless, there is nearly no transport information in the Euralille shopping mall. A new "mobility agency", the "Pass Pass Boutique", gives information on the public transport network and sells special subscription cards (for students, schoolchildren, etc.) at the "Gare Lille Flandres" railway station. It also sells tickets for urban trips on the urban public transport network and for railway trips on the TER regional passenger network. This "Pass Pass Boutique" is the result of good governance between the different stakeholders, allowing the sharing of transport data.
Intelligent transport supports the improvement of the quality of life in the city, as it offers tools for traffic monitoring, measurement and optimization (Anthopoulos and Vakali, 2016). An application provides real-time information inside the "Gare Lille Flandres" railway station for all MEL urban bus departures. This necessitates good coordination and data sharing between the railway and urban public transport operators, and it illustrates the fluidity of travel and the passenger comfort that contribute to the concept of smart cities.
The smart use of these interchanges is supported inside them. Information to travellers is provided on the Internet and on smartphones; there is an intermodal map with the location of each transport mode, and free-of-charge, unlimited Wi-Fi access. Inside the two interchanges there are facilities such as new ticket machines, new waiting lounges with special lounges for "grand voyageur" loyalty programme members, coffee shops and pubs, press kiosks, a left-luggage service and toilets.
They also offer new facilities: Wi-bike (biking plugs), a children's area in the concourse and free-to-use pianos that contribute to a pleasant ambiance. In figure 4 (left) we see that the interchange is used by people who are not travelling but want to benefit from facilities such as tables equipped with plugs and free Wi-Fi; it has a "café" ambience, with shops surrounding the travel facilities. This shows that interchanges are becoming places to meet and live in the city.
4.3 Spatial organisation outside the "Gare Lille Flandres Interchange" and the "Gare Lille Europe interchange"
The "Gare Lille Europe Interchange" and the "Gare Lille Flandres Interchange" offer numerous transport modes to access and egress the two railway stations bearing the same names. They are close to regular bus routes, and one specific bus route, the Citadine, serves them on a circular route inside Lille. Twelve bus routes stop at the "Gare Lille Flandres interchange" exits, and seven bus routes and long-distance coaches stop at the "Gare Lille Europe interchange". Self-service bikes in free-access stations named V'Lille are placed near the two interchanges' main exits, and a supervised bike garage, free of charge for public transport passengers and train users, is located at the "Gare Lille Flandres interchange". Paid parking facilities exist for both interchanges.
Located between the two interchanges, a special shuttle bus links the area to Lille airport and another one to Charleroi airport in Belgium. Car-sharing is offered at the "Lille Flandres Interchange". The "Gare Lille Flandres" railway station passengers mainly come from the metro (44%) or tramway (4%), with flows mainly coming from the underground access, or arrive walking or cycling (35%). The fact that there is a supervised, free-of-charge garage for private bikes eases the use of this non-motorised mode. The "Gare Lille Europe" railway station passengers mainly come by car and taxi (42%) or metro and tramway (31%). In figure 5 we have grouped the metro and tramway figures because tramway access trips to the railway stations are very low. We notice the differences in access modes between the two railway stations, which could be explained by travel motives. Indeed, in the "Gare Lille Flandres" railway station there is more travel for work or study motives, and the free-access garage for private bikes could explain the cycling practice. In the "Gare Lille Europe" railway station there is more travel for leisure or business motives, which could partly explain the importance of access by car or taxi. Daily travellers taking the TGV for work can park their cars underneath this railway station and access the platforms directly to take their trains.
A major issue for these interchanges is to make the modal shift to public transport modes possible.
"Euraflandres": a city-HUB typology implementation
Urban policy plays an important role in shaping and changing the regional, national and even global linkages of cities. As described by [START_REF] Nam | Smart city as urban innovation: Focusing on Management, Policy and context[END_REF], the coordination of policies (across a variety of spatial scales, across organizational practices, and across all levels of governance) is of vital importance to innovation in a city. They remark that "integration is not merely for technologies, systems, infrastructure, services or information but for policies". Governance involves the implementation of processes to share information and to set up policies and actions to develop common projects [START_REF] Chourabi | Understanding smart cities: an integrative framework, 45 th Hawaii International Conference on System Science.Batty et al; Smart cities of the future Eur[END_REF].
Within the Lille metropolitan area, the implementation of transport services is under the responsibility of different transport authorities at different institutional scales (national, regional and local). The regional express train (TER) supply (investments, timetables, frequency, quality of service, …) is ordered and financed by the Hauts-de-France Regional Council; it is composed of TER trains in the "Gare Lille Flandres" railway station and TER-GV (high-speed TER) in the "Gare Lille Europe" railway station. The urban public transport modes are operated by Transpole (Keolis), and the MEL is the public authority for urban transport. The MEL elaborates sustainable urban mobility plans (SUMP) to enhance its inhabitants' mobility [START_REF] Lmcu | Euralille 3000 -Bilan de la concertation préalable-arrêt du projet et lancement de la procédure de déclaration de projet[END_REF]. The railway stations are run by Gares & Connexions, and "SNCF-mobilités" is the rail operator. The tracks are owned by the former RFF (Réseau Ferré de France), which became "SNCF Réseau" on 1 July 2015. In the "Gare Lille Europe" railway station, national trains are scheduled and operated by "SNCF-mobilités" and international trains belong to the Railteam company (SNCF, Eurostar, Thalys, …).
All these different stakeholders meet regularly in common committees in order to provide better supply and services to passengers and citizens. As seen above, the joint "Pass Pass Boutique" is the result of good cooperation and governance between transport authorities and operators at these different scales in order to share data, give information and offer purchase possibilities at the same site; this contributes to easing and facilitating passengers' trips. Joint committees are also organised between these stakeholders and other policy makers in the urbanism and business domains. All this shows that the governance of the possible "Euraflandres" interchange is complex both to understand and to manage operationally.
We have conducted interviews in 2017 to understand the "Euraflandres" interchange's development as a MEL landmark.
We met the political and decision makers and the MEL's transport, roads and urbanism directors, who are in charge of planning transport investments and organisation within the MEL urban public transport territory. They consider the achievement of the future "Euraflandres" interchange a priority to ease the travels and accessibility of inhabitants and visitors. The Hauts-de-France TER service manager plans and organises the TER supply and is in charge of investments in the regional railway stations; he supervises the future "Euraflandres" interchange's accessibility by train.
The City of Lille's director for traffic told us that a new circulation organisation plan was implemented in August 2016, with three loops that serve the "Euralille CBD" while preventing cars from crossing it.
The services of Transpole, the Keolis operator, told us how they were reorganising urban public transport access in the vicinity of the future "Euraflandres" interchange.
The regional railway stations director received us to show the transformations undertaken inside the "Gare Lille Flandres" and "Gare Lille Europe" railway stations in cooperation with the other stakeholders involved in the future "Euraflandres" development.
The "Euralille spl" director told us that they are willing to develop the "Euralille CBD" with greater consideration for the neighbourhood's quality of life and to give its inhabitants more urban facilities such as restaurants, cafés and leisure facilities. The facility/estate owners and/or estate companies in charge of the construction and maintenance of the interchange were also questioned about the role of interchanges in local economies and their potential impact on them.
All of them work together on the renewal of the two interchanges. Within the "Gare Lille Flandres interchange", a first refurbishment was made in 2008 for the "Gare Lille Flandres" metro and tram stations. The "Gare Lille Flandres" railway station has been under refurbishment since 2013, with works on the tracks for a better train supply. Since 2014, further infrastructural works have been undertaken to provide a new concourse and a new joint public transport and rail ticket office (purchase of rail and/or public transport tickets, joint information, network maps, etc.). These works will also give wider access to public transport, linking the ground-floor level with bus access to the underground levels where the "Gare Lille Flandres" tramway station is located and, underneath it, the "Gare Lille Flandres" metro station. The first two floors of the railway station will be converted into a new 1,300 m² business centre with commercial and office spaces. The refurbishment budget is €18 million (in €2013), shared between SNCF Gares & Connexions (€14,019,862), the Nord Pas-de-Calais Region (€2,000,000), the European Union (€1,320,138), the State (€660,000) and the City of Lille (€115,500).
According to the stakeholders' interviews, the refurbishment of the "Gare Lille Flandres interchange" and the "Gare Lille Europe interchange" occurred after identifying the needs of the operators and clients; the association of persons with reduced mobility was also consulted. Moreover, the "Gare Lille Flandres" railway station stakeholders have set up discussion groups for the operation of the 13 regional TER routes, involving elected members, operators and passengers, to resolve problems and enhance travel quality.
The SNCF director in charge of the two "Gare Lille Flandres" and "Gare Lille Europe" railway stations said during our interview that "before, railway stations were considered to be places where we took the train and that was all, with just a waiting room and a few services. There is nowadays a strong expectation from customers to be able to get more services and shops before taking the train. The second aspect is to include the railway station in the city. From this point of view, there are several functions which the railway station must fulfil: there is a function of connecting to different modes of transport. There is also an ambition to offer shops to the inhabitants of the district, and a range of restaurants. So it's really a willingness to see the railway station as being an address in the city, really turned outwards and not towards the platforms".
4.5 The future "Euraflandres" interchange as a city-HUB landmark
Based on the typology described in Section 3.3, the "Euraflandres" interchange gets a score of 9 because its demand is higher than 120,000 daily passengers, it includes 13 public and private transport modes (several public transport modes (metro, tram and urban buses), long-distance coaches, car and bike sharing, taxis and even bike taxis), and it is located in the inner city centre, in the new "Euralille CBD". It is the result of reflections included in local urban development plans such as the SUMP and SCoT documents and the TOD definition. The "Euraflandres" interchange has all the characteristics to become an urban landmark for the city of Lille and the MEL.
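To make the typology explicit, here is a minimal sketch of how the Dimension A score of Table 1 (reproduced at the end of this article) can be computed; the thresholds and level names follow the table, while the function names and the exact demand figure used for "Euraflandres" are illustrative assumptions (demand is only known to exceed 120,000 users per day).

```python
def demand_score(users_per_day: int) -> int:
    """Score 1-3 for daily demand, following Table 1."""
    if users_per_day < 30_000:
        return 1
    if users_per_day <= 120_000:
        return 2
    return 3

def modes_score(modes: str) -> int:
    """Score 1-3 for transport modes: 'bus', 'rail' or 'several'."""
    return {"bus": 1, "rail": 2, "several": 3}[modes]

def facilities_score(level: str) -> int:
    """Score 1-3 for services: 'kiosks', 'shops' or 'mall'."""
    return {"kiosks": 1, "shops": 2, "mall": 3}[level]

# "Euraflandres": > 120,000 users/day, several modes, integrated shopping mall.
score = demand_score(130_000) + modes_score("several") + facilities_score("mall")
print(score)  # 9 -> the highest "need for space" category
```

Dimension B of Table 1 then modulates this score qualitatively (location, surrounding activities, development plan), all of which are rated "More (+)" for "Euraflandres".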
As highlighted by [START_REF] Banister | Transport investment and the promotion of economic growth[END_REF], three conditions must be present in order to induce economic development.
1) The first condition is the existence of political willingness to implement complementary policies in order to provide a better environment, boost the transport investment and obtain economic development.
2) The second condition is a transport investment of significant size: a new interchange must provide new accessibility and new connections between transport modes. 3) The third condition is the economic context: it must offer a high-quality labour force and present underlying dynamic economic externalities at the local, regional or national level.
These complementary conditions can be useful to implement a transport hub as part of an overall, larger integrated policy and/or plan aimed at (re)developing linked economic activities and urban functions. Moreover, the authors conclude that "policy design also has a crucial role in influencing and strengthening the potential impact of transport infrastructure investment on local economic development."
For the "Euraflandres" interchange, a huge urban development project "Euralille CBD" was created.
The political willingness, the first condition for economic development, can be illustrated by Pierre Mauroy, President of the MEL and former Prime Minister. His ambition was to modernise the Lille city centre by constructing an entirely new district, the "Euralille CBD", which he called the "tertiary turbine". It was constructed on a non aedificandi zone corresponding to the ancient walls of the city and military lands (Prevot, 2011). A political decision was taken to create a completely new neighbourhood named "Euralille", seen as an important factor of the new social and economic policy of the MEL. This new neighbourhood was firstly developed as a "complex of economic functions rather than a neighbourhood" [START_REF] Moulaert | Euralille: Large-Scale Urban Development and Social Polarization[END_REF].
The second condition for economic development and the landmark city-HUB label is a transport investment of significant size. In the context of the Channel tunnel (1986)(1987)(1988)(1989)(1990)(1991)(1992)(1993)(1994) and the Northern high speed train network (1987)(1988)(1989)(1990)(1991)(1992)(1993) constructions, important urban investments were realised in Lille [START_REF] Heddebaut | La stratégie d'accompagnement transmanche dans le Kent et le Nord-Pas de Calais : entre contextes institutionnels spécifiques et temporalités décalées[END_REF].
As seen above, the "Gare Lille Europe" railway station, of international dimension, represents an investment of €146 million (in 1994) and was planned to host the Eurostar trains from London to Paris and Brussels. It also hosts the French TGV high-speed trains linking the main French cities in different regions: eastwards (Strasbourg), southwards (Lyon and Marseille), westwards (Bordeaux and Toulouse) and Brittany (Nantes and Rennes). All these TGVs serve and stop at the international Charles de Gaulle airport [START_REF] Heddebaut | Does the "tunnel effect" still remains in 2016?[END_REF].
The third condition for obtaining economic development is the high quality of the labour force. The new jobs created in the "Euralille CBD" correspond to these criteria: they all belong to the tertiary sector, with a high representation of commerce (46%) and highly skilled jobs in data processing (18%), insurance (13%) and finance (12%) (Figure 6). Based on the [START_REF] Banister | Transport investment and the promotion of economic growth[END_REF] three conditions to induce economic development, we can summarise by saying that the strong political will, the first condition, was personified by Pierre Mauroy, who convinced all the regional political stakeholders to develop a new neighbourhood in Lille. The sufficient size of the transport investment, the second condition, is represented by the new "Gare Lille Europe" railway station and the successive refurbishments of the "Gare Lille Flandres interchange" (railway, tramway and metro stations). The high quality of the labour force, the third condition, is illustrated by the distribution of jobs in this new "Euralille CBD" neighbourhood.
Different phases have been realised, providing this new "Euralille CBD" neighbourhood with housing for the local population, local culture, conviviality, etc. The Euralille shopping mall was created between the "Gare Lille Flandres interchange" and the "Gare Lille Europe interchange". New office buildings and housing have been constructed in the vicinity of the "Euraflandres" interchange.
Further extensions are foreseen to enlarge the "Euralille CBD" under the "Euralille 3000 project". The findings from the different interviews with the stakeholders show that there is a greater ambition in Lille: to link transport and land-use planning, all of the planning stakeholders are involved in a metropolitan development view (the regional council in charge of regional planning and regional express train development, the City of Lille, the Nord Département, the MEL as the urban transport authority, the SNCF and Transpole as the transport operators, and Euralille Spl in charge of developing this area) [START_REF] Heddebaut | Multimodal city-hubs and their impact on economic and land use planning[END_REF].
Three development areas are still under construction within the actual "Euralille CBD": Euralille 1, or the CIAG international business centre, included in the future "Euraflandres" interchange, and two other development zones a little further away but linked to the two railway stations by metro line 2, as explained during the Euralille Spl director's interview.
Two other development projects are under construction [START_REF] Estienne | Euralille, fiche projet, extrait du tome 3, POPSU, Plate-forme d'observation des projets et stratégies urbaines[END_REF]. The first one, of 22 hectares, constructed on the wastelands of the former Lille International Fair and called "Euralille 2", is close to the "Euralille CBD" and will create a neighbourhood called "inhabited wood" with a 600-housing programme, offices and activities, the extension of Lille Grand Palais (the international congress centre), the headquarters of the Region, and 13,000 m² of public facilities (schools, nurseries, kindergartens, sports equipment). The second one, called "Porte de Valenciennes", is an 18-hectare programme creating 1,000 housing units, including 360 social housing units (120 social rental and 240 free rental and accession), carried by CMH and LMH, two companies for social housing development in the Lille metropolis. 30,000 m² of offices and 6,600 m² of businesses and shops are being constructed, and part of this land will be converted into public spaces and squares.
New coordinated actions are being undertaken within the further enlargement of the "Euralille CBD" under the "Euralille 3000 project". The new interviews conducted in 2017 explained how "Euraflandres" is becoming a city-HUB landmark, taking into account the communities' and residents' desire for a liveable neighbourhood.
"Euraflandres" is part of the "Euralille 3000 project". Two periods of consultation took place in 2013 and 2015 with the involvement and the participation of the MEL's citizens and "Euralille CBD" residents prior to the launch of the "Euralille 3000 project" in 2016. A MEL decision (2015) taken after this consultation stipulates that "the mobility project will aim to reorganise the flow of travel through traffic loops that will improve access conditions to "Euraflandres". It will allow the development of soft modes and protect the city centre from car flow. The future "Euraflandres" interchange will host new programs and refurbishment to make it more comfortable for residents and users of the various modes of transport present on the sector" (MEL, 2015).
There are nowadays 6,000 persons per day walking between the "Gare Lille Flandres interchange" and the "Gare Lille Europe interchange". More pedestrian and bike lanes will be built, new wayfinding signage will be implemented, and urban parks will be created to offer better-connected neighbourhood networks.
Indeed, "this gateway must be prepared to accommodate 50% of additional passenger flow in the coming years. The urban project must also accompany these developments. The hub must maintain or improve its effectiveness: not only accommodate more travellers, but welcome them in better conditions" (Euralille SPL, 2015).
All this contributes to making the city smarter by providing facilities to city-hub users as passengers, but also to the MEL citizens and the "Euralille CBD" inhabitants. The implementation of the "Euraflandres" interchange is creating economic development in the MEL and moreover, following the city-HUB typology, it is becoming an urban landmark that makes the city of Lille smarter.
CONCLUSION
We have seen that the opportunity of joining the two "Gare Lille Flandres" and "Gare Lille Europe" interchanges together in a unique interchange under the name of "Euraflandres" will contribute to building a great urban interchange. It will increase accessibility to all destinations at the Lille metropolitan and regional levels, but also at the national and international levels, thanks to the French TGVs running on the national network and the Railteam high-speed trains such as Eurostar and Thalys.
Each type of interchange, according to the identified functions and local constraints, should require the involvement of different interchange stakeholders. The management of "Euraflandres", with its stakeholders' committees, seems oriented towards finding agreements that reduce conflicts, in order to plan outcomes better and to allow communities to have an influence over the future shape of the places where they live. The community-led participation intended here [START_REF] Spl | Les échos de la concertation Euralille 3000[END_REF] is the first step to identify operators' and users' requirements and needs (i.e. transport activities including services and facilities); users will perceive "Euraflandres" both as a transport node where they have access to their mobility mode and as a place to carry out other activities during their waiting or leisure time.
Following the Stated Preference results of the City-HUB project [START_REF] Di Ciommo | Using hybrid latent class model for city-HUBs´users behaviour analysis[END_REF], a good interchange could increase the use of intermodal and public transport modes by between 7% and 20%. The development of "Euraflandres" could attract additional public and active mode users, with a real decrease in car use that contributes to an eco-friendly city. Further research is however required to measure the current participation of "Euraflandres" customers and "Euralille CBD" residents in smart city development, even at the metropolitan level of the MEL.
Actually, the smart aspects of a transport interchange are deeply related to the modal shift, the environmental and health impacts, and the potential use of ITS for smooth intermodal changes. All these aspects are key characteristics of a city-HUB, together with schedule consistency, wayfinding, the use of waiting time and the comfort during that time; all of them make a city-HUB smarter and contribute to a smart city. This article has shown how "Euraflandres" potentially includes all these smart elements.
We have demonstrated that the "Euraflandres" interchange is able to induce economic development, but also to play the role of developing activities and regenerating the urban environment by transforming the features of the surrounding area. The extension of the "Euralille CBD" is part of the Lille urban regeneration and is still under construction. It will provide new housing, including social housing for low-income households, and new city amenities, transforming it into a new place to live. The transformation of the two currently separated interchanges into "Euraflandres" will achieve a landmark interchange with a higher share of sustainable and affordable public transport modes. All this will make Lille more convivial and fluid, two key aspects of the Smart City.
Figure 1: The three functions of a city-hub.
Figure 2: Place of the "Euraflandres" interchange in Lille within the French and international railway network.
Figure 3: "Euraflandres": linking the "Gare Lille Flandres Interchange" and the "Gare Lille Europe Interchange".
Figure 4: Inside the "Gare Lille Flandres interchange" (left) and the "Gare Lille Europe interchange" (right).
Figure 5: Modal share of both "Gare Lille Flandres" and "Gare Lille Europe" railway stations.
Figure 6: Distribution of jobs according to the sectors in the "Euralille CBD" area.
Table 1: Interchange dimensions: function and logistics, local constraints.

Dimension A: Function and logistics (need for space; score)
  Demand (users/day):      < 30,000 -> Low (1); 30,000-120,000 -> Medium (2); > 120,000 -> High (3)
  Modes of transport:      Dominant bus -> Low (1); Dominant rail -> Medium (2); Several modes and lines -> High (3)
  Services and facilities: Kiosks, vending machines -> Low (1); Several shops and basic facilities -> Medium (2); Integrated shopping mall with all facilities -> High (3)

Dimension B: Local constraints (upgrading level; value)
  Location in the city:      Suburbs -> Less (-); City access -> Neutral (O); City centre -> More (+)
  Surrounding area features: Non-supporting activities -> Less (-); Supporting activities -> Neutral (O); Strongly supporting activities -> More (+)
  Development plan:          None -> Less (-); Existing -> Neutral (O); Existing and including intermodality in the area -> More (+)

Source: Di Ciommo et al., 2016.
Table 2: Surfaces constructed and functions in the "Euralille CBD" (m² of SHON, net floor area).

           Euralille 1   Further extensions
Housing    138,000       +75,000
Offices    240,000       +140,000
Retail     110,000       +30,000
Hotels     28,000        +5,000
This article is part of the Topical Collection on Smart Cities and transport infrastructures.
Marie Tahon
Agnès Delaborde
Claude Barras
Laurence Devillers
A corpus for identification of speakers and their emotions
This paper deals with a new corpus, called corpus IDV for "Institut De la Vision", collected within the framework of the project ROMEO (Cap Digital French national project founded by FUI6). The aim of the project is to construct a robot assistant for dependent person (blind, elderly person). Two of the robot functionalities are speaker identification and emotion detection. In order to train our detection system, we have collected a corpus with blind and half-blind person from 23 to 79 years old in situations close to the final application of the robot assistant. This paper explains how the corpus has been collected and shows first results on speaker identification.
Introduction
The aim of the ROMEO project (a Cap Digital French national project funded by FUI6, http://www.projetromeo.com) is to design a robotic companion (1.40 m tall) which can play different roles: a robot assistant for dependent persons (blind or elderly persons) and a game companion for children. The functionalities that we aim to develop are speaker identification (one speaker among N, with impostor rejection) and emotion detection in everyday speech. The main challenge is to develop a robust speaker detection system for emotional speech and an emotion detection system that knows the speaker. All our systems are supposed to run in real time.
In the final demonstration, the robot assistant will have to execute some tasks defined in a detailed scenario. The robot is in an apartment with its owner, an elderly and blind person. During the whole day, the owner will have some visitors, and the robot will have to recognize who the different characters are: his little children (two girls and a boy), the doctor, the house-keeper and an unknown person. In the scenario, the robot will also have to recognize emotions. For example, Romeo should be able to detect how the owner feels when he wakes up (positive or negative) and to detect anger in the little girl's voice.
To improve our detection systems (speaker and emotion)
we need different corpora, the closer to final demonstration they are, the better the results will be. We focused on blind or half-blind speakers (elderly and young person) and children voices while they interact with a robot [START_REF] Delaborde | A Wizard-of-Oz game for collecting emotional audio data in a children-robot interaction[END_REF] in order to have real-life conditions. However, emotions in real-life conditions are complex and the different factors involved in the emergence of an emotional manifestation are strongly linked together [START_REF] Scherer | Vocal communication of emotion: a review of research paradigms[END_REF].
In this paper, we describe the IDV corpus, which was collected with blind and half-blind persons: the acquisition protocol and the scenarii involved. Then we explain the annotation protocol and, in Section 4, we give our first results on speaker identification (identifying a speaker from a set of known speakers).
IDV corpus
The part of the final scenario that concerns the IDV corpus, which we aim to demonstrate at the end of the project, consists in: identifying a speaker from a set of known speakers (children or adults); recognizing a speaker as unknown and, in this case, providing his or her category (child, adult, elderly) and gender (for adults only); and detecting positive or negative emotions.
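As a rough illustration of these tasks, closed-set identification can be extended to reject unknown speakers with a simple likelihood threshold. The sketch below is only a hedged illustration: the threshold value and the model interface are assumptions, not the system actually developed in the project.

```python
def identify_open_set(segment_features, speaker_models, threshold=-55.0):
    """Open-set identification sketch: pick the most likely known speaker,
    or answer "unknown" when even the best model scores too low.
    `speaker_models` maps speaker names to fitted GMMs exposing a
    scikit-learn-style score() (mean log-likelihood per frame); the
    threshold is purely illustrative and would be tuned on held-out data."""
    scores = {name: gmm.score(segment_features)
              for name, gmm in speaker_models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else "unknown"
```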
Speaker identification and emotion detection are real-time tasks. For that objective, we have collected a first corpus, called the IDV corpus, with blind and half-blind French people from 23 to 79 years old. This corpus has been collected without any robot, but with a Wizard-of-Oz setup which simulates an emotion detection system. The corpus is not fully recorded yet; further recordings are scheduled with the IDV. A second corpus will be collected in the context of the scenario: at the IDV (Institut de la Vision in Paris) with the robot ROMEO.
Corpus characteristics
So far, we recorded 10h48' of French emotional speech.
28 speakers (11 males and 17 females) were recorded with a lapel-microphone at 48kHz. In accordance with the Romeo Project target room, the recordings took place in an almost empty studio (apart from some basic pieces of furniture), which implies a high reverberation time.
The originality of this corpus lies in the selection of speakers: for the same scientifically controlled recording protocol, we can compare young voices (from 20 years old) with the voices of older persons (so far, the oldest in this corpus is 89).
Acquisition protocol
Before the recording starts, the participant is asked for some profile data (sex, location, age, type of visual deficiency, occupation and marital status). An experimenter from the LIMSI interviews the volunteer following the three sequences described below in 2.3. Some parasitic noises happened to be audible in the studio (a guide dog walking around, people working outside, talking, moving in the corridor, …). When they overlapped the speaker's speech, these parts were discarded.
Sequences description
Each recording is divided into three sequences.
The first one is an introduction to the Romeo project: we explain to the participant that we need him or her to provide us with emotional data, so that we can improve the emotion detection system of a future robot. We take advantage of this sequence to calibrate the participant's microphone.
Since there is no experimental control over the emotions that could be expressed by the participant, this part is discarded in the final corpus and will not be annotated.
In the second sequence, called "words repetition" (Table 1), the experimenter asks the participant to repeat after him orders that could be given to the robot. The participant is free to choose the intonation and expression of his or her production. This sequence gives us a sub-corpus where the lexicon is determined and the emotions are mainly neutral.
Viens par ici! (Come here!)
Mets le plat au four! (Put the dish in the oven!)
Arrête-toi! (Stop here!)
Descends la poubelle! (Bring down the bin!)
Stop! (Stop!)
Va chercher le courrier! (Bring back the mail!)
Ecoute-moi! (Listen to me!)
Va chercher à boire! (Bring back some water!)
Approche! (Come near!)
Aide-moi à me lever! (Help me to get up!)
Va-t-en! (Go away!)
Aide-moi à marcher! (Help me to walk!)
Donne! (Give it!)
Roméo, réveille-toi! (Romeo, wake up!)
Ramasse ça! (Pick it up!)
Table 1: List of words and expressions in French
In the third sequence, called "scenarii", the experimenter presents six scenarii (see Table 2) in which the participant has to pretend to be interacting with a domestic robot called Romeo. For each presented scenario, the experimenter asks the participant to act a specific emotion linked to the context of the scenario: for instance Joy, "Your children come to see you and you appreciate that; tell the robot that everything is fine for you and you don't need its help", or Stress, "You stand up from your armchair and hit your head on the window; ask Romeo to come for help", or Sadness, "You wake up and the robot comes to ask about your health. You explain to it that you're depressed". The participant has to picture himself or herself in this context and to speak in such a way that the emotions are easily recognizable. He or she knows that the lexicon used is not taken into account; the emotion has to be heard in his or her voice.
At the end of each performance, the experimenter runs a Wizard-of-Oz emotion detection tool that tells the recognized emotion aloud. The system is presented as being under development, and most of the time it does not correctly recognize the emotion: it can recognize an emotion of the opposite valence to what the participant was supposed to express (the experimenter selects Anger when Joy has been acted); it can recognize no emotion at all (the experimenter selects Neutral when a strong Anger was expressed, or when the emotion has not been acted intensely enough); or it can recognize an emotion that is close to what is expected, but too strong or too weak (Sadness instead of Disappointment). The participant is asked to act the emotion again, either until it is correctly recognized by the system, or until the experimenter feels that the participant is tired of the game.

Emotional data acquired through acting games obviously do not reflect real-life emotional expressions. However, the strategies used through our Wizard-of-Oz emotion detection tool allow us to elicit emotional reactions in the participant. An example: the participant is convinced that he expressed Joy, but the system recognizes Sadness. The participant's emotional reactions are amusement, frustration, boredom or irritation. Our corpus is then made up of both acted emotions and spontaneous reactions to controlled triggers. The distinction between acted and spontaneous expressions is marked in our annotations; this distinction is really important to estimate how natural the corpus is [START_REF] Tahon | Acoustic measures characterizing anger across corpora collected in artificial or natural context[END_REF].

We can also question the relevancy of having the participant imagine the situation instead of having him or her live it in an experimental setting. We should note that, for obvious ethical reasons, we cannot put participants in a situation of emergency such as "being hurt and asking for immediate help": we can only have them pretend it. Another obvious reason for this kind of limited protocol is the credibility of the setting: currently, the only available prototype does not fit the target application characteristics (Nao is fifty centimetres high, and its motion is still under development).
Corpus annotations
Emotion labels
Segmentation and annotation of the data are done with the Transcriber annotation tool¹ on the scenario sequences. The participant's utterances are split into emotional segments. These segments mark the boundaries of the emotion: when a specific emotion expression starts, and when it comes to an end. On each segment, three labels describe the emotion. The first label corresponds to the most salient perceived emotion.
IDV emotional content
As the emotional annotation of the IDV corpus is not finished yet, all results on emotion annotation are based on a set of 15 speakers.
The IDV corpus is divided into two different corpora, spontaneous and acted, according to the task (as defined in Section 3). The results of the emotion scores are reported in Table 4.
The spontaneous corpus contains 736 instances of 0.5 s to 5 s. The most frequent emotional label is "interest" (51%), which corresponds to the volunteer's agreement with what the interviewer asked him or her to do. Positive emotions (18%) are more numerous than negative emotions (6%): the volunteer has accepted to be recorded, so he is not supposed to express displeasure and will more probably be nice to the LIMSI team. The macro-class "fear" is also quite frequent (10%); it corresponds to embarrassment or anxiety, since playing the actor is not an easy task.
The acted corpus contains 866 instances of 0.5s to 6s.
The results correspond to what was expected: the main emotions are well represented. Positive emotions (21%, mainly "satisfaction"), negative emotions (24%, mainly "irritation"), fear (10%, mainly anxiety) and sadness (8%, "deception" and "sadness").
Table 4: Emotion label scores for the spontaneous and acted corpora.
IDV first results
In this section, speaker identification scores are presented. All the results presented here were obtained with the same method based on GMM (Gaussian Mixture Models) speaker models [START_REF] Reynolds | Speaker verification using adapted Gaussian mixture models[END_REF].
First, we have studied the different parameters of the GMM model, then the evolution of the scores as a function of the sex and age of the speakers.
Global speaker identification scores
This section aims at choosing the experimental setup for studying the influence of age, gender and emotional expression. Experiments are performed with the "repeating words" sequence of the corpus, which contains 458 audio segments of varied duration. 26-dimensional acoustic features (13 MFCC and their first-order temporal derivatives) are extracted from the signal every 10 ms using a 30 ms analysis window. For each speaker, a training set is constructed by the concatenation of segments up to a requested duration Ntrain; a Gaussian mixture model (GMM) with diagonal covariance matrices is then trained on this data through maximum likelihood estimation with 5 EM iterations. The remaining segments, truncated to a duration Ntest, are used for the tests. For a given duration, the number of available segments is limited by the number of segments already used for training and by the minimal test duration required (the longer the duration, the fewer audio files there are). For each test segment, the most likely speaker is selected according to the likelihood of the speaker models. In order to optimize the number of files for training and testing, we have chosen the following set of parameters: a test duration of 1 s (225 files), a train duration of 10 s (179 files), and a speaker model with a mixture of 6 Gaussians. The error rate is 34.7% (+/-6.5%) when recognizing one speaker among 28. This extremely short test segment duration is due to constraints on segment counts in the database; the improvement of the performance as a function of the segment length will be studied later in the course of the project.
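As an illustration of this protocol, here is a minimal sketch of such a GMM identification pipeline. The use of librosa and scikit-learn is our own assumption for the example, not necessarily the tooling used in the project.

```python
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path: str) -> np.ndarray:
    """13 MFCC + first-order deltas, 30 ms window / 10 ms hop (26 dims)."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=int(0.03 * sr),
                                hop_length=int(0.01 * sr))
    feats = np.vstack([mfcc, librosa.feature.delta(mfcc)])
    return feats.T  # shape (frames, 26)

def train_speaker_model(train_wavs) -> GaussianMixture:
    """Diagonal-covariance GMM trained with a few EM iterations."""
    X = np.vstack([mfcc_features(w) for w in train_wavs])
    gmm = GaussianMixture(n_components=6, covariance_type="diag", max_iter=5)
    return gmm.fit(X)

def identify(test_wav: str, models: dict) -> str:
    """Closed-set identification: return the most likely known speaker."""
    X = mfcc_features(test_wav)
    return max(models, key=lambda name: models[name].score(X))
```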
Age influence
In this part, we show that speaker identification is easier on elderly persons' voices than on young voices. Two subcorpora of the IDV corpus, composed respectively of the 8 older volunteers (4 male, 4 female, from 52 to 79 years old) and the 8 younger volunteers (4 male, 4 female, from 23 to 46 years old), are studied separately. Of course, the number of segments is quite low, which may bias the experiment. The results are reported in Table 5: error rate, number of test segments and trust interval (binomial distribution test). As a result, speaker identification (one speaker among N) is better with elderly persons' voices. Our hypothesis is that voice qualities differ much more among elderly persons' voices than among young voices. In Figure 1, we have plotted a per-speaker Gaussian model of the 4th MFCC coefficient for the first four older persons (blue) and the first four younger persons (red): while the red curves are quite similar, the blue ones are more clearly separated from one another.
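A figure of this kind can be produced along the following lines; this is only a sketch under the assumption that per-speaker MFCC feature matrices are available, and the display range is arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def plot_coeff_gaussians(features_by_speaker, coeff=3):
    """Fit a 1-D Gaussian to one MFCC coefficient per speaker and plot its
    density, as in Figure 1. `features_by_speaker` maps a speaker name to a
    (frames, 26) feature matrix; `coeff` selects the coefficient (0-based)."""
    x = np.linspace(-40.0, 40.0, 400)  # arbitrary display range
    for name, feats in features_by_speaker.items():
        c = feats[:, coeff]
        plt.plot(x, norm.pdf(x, loc=c.mean(), scale=c.std()), label=name)
    plt.xlabel("MFCC coefficient value")
    plt.ylabel("density")
    plt.legend()
    plt.show()
```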
Sex influence
The results below are based on the "repeating words" corpus, which contains 28 speakers. Using the whole IDV corpus, we compute the confusion matrix sorted by sex, without taking the age of the speakers into account (Figure 2). A female voice is correctly recognized in 96% of cases and a male voice in 82% of cases: female voices obtain better identification scores.
Emotional speech influence
The results presented in this part are based on both the "repeating words" and "scenario" sequences, with the 15 speakers for whom the emotional annotation of the "scenario" sequence is available. Table 6 below shows the error rate for speaker identification (1 among 15) across the 3 corpora: "repeating words", "scenario spontaneous" and "scenario acted". The parameters chosen for the Gaussian model are the following: 5 Gaussians, a train duration of 10 s and a test duration of 1 s. Identification scores are better with the "words" corpus (lexically controlled) than with the "acted" corpus; the "spontaneous" corpus gives intermediate results. The scores are always better when the train and the test are made on the same corpus. Speaker models were tested directly in mismatch conditions without any specific adaptation. The very high error rates observed are of course due to the very short train and test duration constraints in our experiments, but they also highlight the necessity of adapting the speaker models to the emotional context, which will be explored during the ROMEO project.
Conclusion
The IDV corpus is interesting for many reasons. First, as it presents a sequence of words, lexically determined by the protocol and quite neutral, and a sequence of emotional speech, produced by the same speakers and recorded in the same audio conditions, it allows us to compare speaker identification scores between neutral and emotional speech. Secondly, the corpus has been collected with blind and half-blind volunteers from 23 to 79 years old, so we can compare scores across speaker age. Moreover, we have the opportunity to work with elderly persons, who often have specific voice qualities.
Figure 1: Distribution of the 4th MFCC coefficient according to a Gaussian model for old (blue, plain) and young (red, dashed) speakers.
Figure 2: Confusion matrix between male (1) and female (2) speakers.
Table 2: The six scenarii and the emotions requested from the participant.

Scenario                     Emotions
Medical emergency            Pain, stress
Suspicious noises            Fear, anguish, anxiety
Awaking (good mood)          Satisfaction, joy
Awaking (bad health)         Pain, irritation, anger
Awaking (bad mood)           Sadness, irritation
Visit from close relations   Joy
Table 5: Speaker identification, age influence: error rate, number of test segments and trust interval (binomial distribution test).

                      Old persons   Young persons
Error rate            17.00%        38.00%
Number of segments    66            63
Trust interval        9.18%         12.24%
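These trust intervals are consistent with a normal approximation of the binomial confidence interval, whose half-width is z·sqrt(p(1-p)/n) with z = 1.96. The quick check below is our own computation, not taken from the paper; the small residual differences suggest a slightly different interval formula was used.

```python
from math import sqrt

def binomial_ci_halfwidth(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the ~95% normal-approximation binomial interval."""
    return z * sqrt(p * (1.0 - p) / n)

print(round(100 * binomial_ci_halfwidth(0.17, 66), 2))  # 9.06, vs 9.18 reported
print(round(100 * binomial_ci_halfwidth(0.38, 63), 2))  # 11.99, vs 12.24 reported
```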
Table 6: Error rates for speaker identification across the three corpora (train condition in rows, test condition in columns; X marks untested conditions).

TRAIN \ TEST     "Words"   "Spontaneous"   "Acted"
"Words"          28.60%    78.60%          88.00%
"Spontaneous"    X         45.10%          60.20%
"Acted"          X         X               56.30%

1 http://trans.sourceforge.net/en/presentation.php
Marie Tahon
email: [email protected]
Agnes Delaborde
Laurence Devillers
Corpus of Children Voices for Mid-level Markers and Affect Bursts Analysis
Keywords: Audio Signal Processing, Emotion Detection, Human-Robot Interaction
This article presents a corpus featuring children playing games in interaction with the humanoid robot Nao: children have to express emotions in the course of a storytelling by the robot. This corpus was collected to design an affective interactive system driven by an interactional and emotional representation of the user. We evaluate here some mid-level markers used in our system: reaction time, speech duration and intensity level. We also question the presence of affect bursts, which are quite numerous in our corpus, probably because of the young age of the children and the absence of predefined lexical content.
Introduction
In the context of Human-Robot Interaction, the robot usually evolves in real-life conditions and then faces a rich multimodal contextual environment. While spoken language constitutes a very strong communication channel in interaction, it is known that lots of information is conveyed nonverbally simultaneously to spoken words [START_REF] Campbell | On the use of nonverbal speech sounds in human communication[END_REF]. Experimental evidence shows that many of our social behaviours and actions are mostly determined by the display and interpretation of nonverbal cues without relying on speech understanding. Among social markers, we can consider three main kinds of markers: interactional, emotional and personality markers. Generally-speaking, social markers are computed as long-term markers which include a memory management of the multi-level markers during interaction. In this paper, we focus on specific mid-level and short-time acoustic markers: affect bursts, speech duration, reaction time and intensity level which can be used for computing the interactional and emotional profile of the user. In a previous study, we have collected a realistic corpus (Delaborde, 2010a) of children interacting with the robot Nao (called NAO-HR1). In order to study social markers, we have recorded a second corpus (called NAO-HR2), featuring children playing an emotion game with the robot Nao. The game is called interactive story game (Delaborde, 2010b). So far, there exist few realistic children voices corpora. The best known being the AIBO corpus [START_REF] Batliner | You stupid tin box"children interacting with the AIBO robot: A cross-linguistic emotional speech corpus[END_REF], in which children give orders to the Sony's pet robot Aibo. Two corpora were collected for studying speech disorders in impaired communication children [START_REF] Ringeval | Automatic prosodic disorders analysis for impaired communication children, 1st Workshop on Child, Computer and Interaction (WOCCI)[END_REF]. In both studies, there are no spoken dialogs with robots; only the children are speaking.
Many previous studies focus on one of the three social markers. Interactional markers can be prosodic as in [START_REF] Breazeal | Recognition of affective communicative intent in Robot-Directed speech[END_REF]: five different pitch contours (praise, prohibition, comfort and attentional bids and neutral) learnt from infant-mother interaction are recognised by the Kismet robot. Mental state markers can also be only linguistic as the number of words, the speech rate (Kalman, 2010). Personality markers can be linguistic and prosodic cues [START_REF] Mairesse | Using linguistic cues for the automatic recognition of personality in conversation and text[END_REF]. Emotional markers can be prosodic, affect bursts and also linguistic. The concept of "affect bursts" has been introduced by Scherer. He defines them as "very brief, discrete, nonverbal expressions of affect in both face and voice as triggered by clearly identifiable events" [START_REF] Scherer | Affect Bursts, in Emotions[END_REF]. Affect bursts are very important for real-life interactions but they are not well recognized by emotion detection systems because of their particular temporal pattern. [START_REF] Schröder | Experimental study of affect bursts[END_REF] shows that affect bursts have a meaningful emotional content. Our hypothesis is that non verbal events and specific affect bursts production are important social cues during a spontaneous Human-Robot Interaction and probably even more with young children.
Section 2 presents the protocol for collecting our second children emotional voices corpus. The content of the corpus NAO-HR2 is described in Section 3: affect bursts, speakers and other interactional information. Section 4 summarizes the values we can expect for some mid-level social cues. Finally, Section 5 presents our conclusion and future work.
Data collection
2.1 Interactive Story Game
We have collected the voices of children playing with the robot Nao, recorded with lapel microphones. Nao told a story, and the two children in front of it were supposed to act the emotions expected in the course of the story. A game session consists of 3 phases: first the robot explains the rules and suggests some examples, the second part is the game itself, and the last part is a questionnaire proposed by an experimenter. The children are presented with a board on which words or concepts are drawn and written (such as "house" or "poverty"). Emotion tags are written in correspondence with each of these words. Player number one knows that, for example, if the notion "poverty" occurs in the course of the story, he will have to express sadness. He can express it the way he wants: he can speak sadly, or act as though he were weeping; children were free to interpret the rules as they wanted. Once the rules are understood by the two players, Nao starts to tell the story. When it stops speaking, one of the players is supposed to have spotted a concept in the previous sentence and is expected to play the corresponding emotion. If the robot detects the right emotion, the child wins one point.
Semi-automatic Human-Robot Interaction System
The behaviour of the robot changes in the course of the game. It can be neutral, just saying "Your answer is correct", or "not correct". It can also be empathic "I know this is a hard task", etc. Fuzzy logic rules select the most desirable behaviour for the robot, according to the emotional and interactional profile of each child, and their sex. This profile is built according to another set of fuzzy logic rules which process the emotional cues provided manually by the Wizard experimenter. The latter provides the system with the emotion expressed by the child (a label such as "Happiness", "Anger", "Sadness", etc.), the strength of the emotion (low, average or high activation), the elapsed time between the moment when the child is expected to speak and the time he starts speaking, and the duration of the speaking turn (both in seconds). From these manually captured cues, the Human-Robot Interaction system builds automatically an emotional and interactional representation of each child, and the behaviour of the robot changes according to this representation.
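The paper does not detail the fuzzy rules themselves; the toy sketch below only illustrates the general idea of mapping the Wizard's cues and the child's profile to a robot behaviour with hand-written rules. All labels, thresholds and behaviour names here are hypothetical, and a real fuzzy system would use graded memberships rather than crisp thresholds.

```python
def select_behaviour(emotion: str, activation: str,
                     reaction_time_s: float, self_confidence: float) -> str:
    """Toy rule base for behaviour selection (illustrative only)."""
    if emotion == "Sadness" or self_confidence < 0.3:
        return "empathic"      # e.g. "I know this is a hard task."
    if reaction_time_s > 5.0 and activation == "low":
        return "encouraging"   # prompt the hesitant child again
    return "neutral"           # e.g. "Your answer is correct / not correct."
```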
The dynamic adaptation of the behaviour of the robot and the design of the profile, based on a multi-level processing of the emotional audio cues, are explained in (Delaborde, 2010b). Table 1 gives an overview of the different levels of processing of the emotional audio signal: from low-level cues computed from the audio signal to high-level markers such as emotions, emotional tendencies, and interactional tendencies. The collected audio data is subsequently processed by expert labellers. On each speaker's track, we define speaker turns called instances. The annotation protocol is described in detail in (Delaborde, 2010b). The annotation scheme consists of emotional information (labels, dimensions and affect bursts), but also mental-state and personality information based on different time windows. In this paper, we focus on the study of affect bursts and other mid-level markers such as reaction time and duration, but also the low-level marker intensity.
Contents of NAO-HR2 corpus
Description of the corpus
The NAO-HR2 corpus is made up of 603 emotional segments for a total duration of 21 mn 16 s. Twelve children (from six to eleven years old) and four adults have been recorded (five boys, seven girls, one woman and three men). For this study, we have selected only the speech instances which occur during the story game (not during the questionnaire). Consequently, we obtain 20 emotional answers per gaming session: 10 emotional answers per speaker. In that way the number of speaker turns is quite similar from one speaker to another.
Affect bursts
An annotation tag indicates the presence or absence of an affect burst in each instance. We notice that a large majority of the corpus is made up of affect bursts. Table 2 summarizes the number of affect bursts (AB) over the total number of instances (TT) for each group of speakers. We have separated the children into two groups of 5 according to their age: the younger are from 6 to 7 years old, the older over 8 years old.

Table 2: Number of affect bursts (AB) over the total number of instances (TT) per speaker group.

From these results we can conclude that asking a participant to express an emotion without any predefined lexical content leads to a high number of affect bursts.
Children seem to use affect bursts more often than adults, and young children even more. It seems that they are not at ease with finding words to express an emotion. Both children and adults express happiness by laughing, but only children use "grr" affect bursts for anger in our corpora. Expressions of fear are more often affect bursts for children than for adults. Affect bursts usually contain only a single phoneme; it is therefore not possible to compute a speaking rate easily.
Results on Social Markers
In this section, we have manually measured the different markers in all game sessions.
An example is shown in Figure 1. Nao says: "a lot of sadness"; the word "sadness" is one of the keywords written on the board, and the child has to express the corresponding emotional state, which is sadness. The four social markers we are studying are represented in red: reaction time is 4.42 s, speech duration is 2.17 s, mean intensity is 52.83 dB (after normalization: 28.67 dB) and mean Harmonics-to-Noise Ratio is 10.95 dB. The reaction time is high for this turn; the mean value for this 10-year-old boy is 3.07 s. Intensity and HNR are also lower than the mean values obtained on his whole session (mean intensity is 32.43 dB and mean HNR is 12.56 dB). Intensity and HNR values correspond to what is expected when acting sadness; a high reaction time probably means that the boy was not at ease with this specific turn.
Reaction Time
The reaction time (RT) represents the interval between the time when the speaker is expected to speak (when Nao stops telling the story), and the time he indeed starts to speak. In the context of our game, the children were not supposed to call up their knowledge, or to think about the best answer. They were supposed to act the emotion written on the board. The longer the reaction time, the more the speaker postpones the time of his oral production. This parameter is one of the parameters used for the definition of the dimension "self-confidence" of the emotional profile. The shorter the reaction time, the more the speaker tends to be self-confident. Table 3 presents the mean and standard deviation of mean reaction times for each child.
Mean RT (s)   Std RT (s)
4.62          2.00

Table 3: Reaction Time

Some children are not at ease with the game, and their RT is much higher than the others' (RT = 7.73 s for child n°12, 6 years old). When the RT value is this high, it often means that the child did not find any answer to give to Nao in the allotted time (if the child has not answered after 12.5 s, the robot continues the story). Hesitation is frequent among children who have a high RT.
Estimation of Speech Duration
The speech duration (SD) is another parameter used for the emotional profile of the speaker. It corresponds to the duration of speech of the speaker, for each speaking turn. Children included small pauses (from 850 ms to 1.40 s) in their speech. These short silences are not considered as ends of speaking turns: the speaker can be breathing, hesitating or thinking, and then resumes speaking.

Mean SD (s)   Std SD (s)
2.01          1.30

Table 4: Speech Duration for each turn

We notice in Table 4 that the mean SD is generally quite short. The turns are mostly composed of one single syllable. As we have seen before, the proportion of affect bursts is quite important and most of them have short durations. As the players do not have any lexical support except what Nao has just said, they are not stimulated to speak a lot.
Estimation of Intensity
For each session, both children were recorded with separate microphones, each with its own gain. We therefore compute the mean intensity (Int) normalized to the noise value for each session. It is also possible to estimate the HNR value on voiced parts only. Hesitation is often expressed with a lower intensity: on hesitation turns, mean intensity is from 45% to 70% lower than the mean intensity for the same child. Figure 2 shows that mean intensity seems to decrease with RT and HNR to increase with RT. As we have said, a small RT generally signifies good self-confidence; our data show that it is correlated with a high intensity and a small HNR. When the child is at ease, he speaks loudly.
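For reference, such mid-level cues can also be estimated automatically from a mono audio track. The sketch below is only indicative (our measurements were made manually with Praat); the frame length, the noise-floor estimate and the voicing threshold are assumptions of this sketch, not values from our protocol.

```python
import numpy as np

def frame_intensity_db(signal, rate, frame_ms=40):
    """Mean RMS intensity per frame, in dB (relative to full scale)."""
    size = int(rate * frame_ms / 1000)
    frames = signal[:len(signal) // size * size].reshape(-1, size)
    rms = np.sqrt((frames.astype(float) ** 2).mean(axis=1)) + 1e-12
    return 20 * np.log10(rms)

def cues(signal, rate, robot_end_s, frame_ms=40, threshold_db=6.0):
    """Reaction time, speech duration and mean intensity of one speaking turn.

    robot_end_s is the time at which Nao stops speaking; speech frames are
    those whose intensity exceeds the estimated noise floor by threshold_db.
    """
    db = frame_intensity_db(signal, rate, frame_ms)
    noise_floor = np.percentile(db, 10)          # crude noise-floor estimate
    voiced = np.where(db > noise_floor + threshold_db)[0]
    if len(voiced) == 0:
        return None                              # no answer in this turn
    step = frame_ms / 1000
    start, end = voiced[0] * step, (voiced[-1] + 1) * step
    return {"reaction_time_s": start - robot_end_s,
            "speech_duration_s": end - start,
            "mean_intensity_db": db[voiced].mean() - noise_floor}
```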
Conclusion and Future Works
The NAO-HR2 children voices corpus is composed of French emotional speech collected in the course of a game between two children and the robot Nao. A semi-automatic Human-Robot Interaction system built the emotional and interactional representation of each child and selected the behaviour of the robot, based on the emotions captured manually by an experimenter. The data we collected allow us to study some of the parameters which take part in building the emotional and interactional profile. We have analysed some of the mid-level cues which are used in our Human-Robot Interaction system. Among those cues, reaction time, intensity level and speech duration do make sense in our child-robot interaction game, but speaking rate does not seem to be relevant in that particular context. Indeed, as the children are quite young (from six to eleven years old), and as they are not given any predefined lexical content, they usually express their emotions with affect bursts. The younger the child, the more he/she will use affect bursts. In future work, we will also study the speaking rate in longer turns of child speech. For the needs of our data collection, the affective interactive system was used in Wizard-of-Oz mode (an experimenter captured the emotional inputs manually); in a future collection, we will use it with automatic detection of the emotions in speech, and then collect more data to confirm our analysis.
Acknowledgement
This work is financed by national funds FUI6 under the French ROMEO project labelled by CAP DIGITAL competitive centre (Paris Region).
Figure 1: An example of social markers during the story game; the markers are collected with Praat.

Figure 2: Intensity and HNR as a function of the reaction time for the 12 children.
Table 1: Multi-level cues and social markers

Low-level Cues: Intensity level; Prosody; Spectral envelope.
Mid-level Cues: Affect bursts (laughs, hesitation, 'grr'); Speech duration; Reaction time; Speaking rate.
High-level Social Markers: Emotion (label, dimension); Interactional tendencies (e.g. dominance); Emotional tendencies (e.g. extraversion).
Table 5: Intensity and HNR means and std.
It has been proved in [15] that f(S) is not algebraic in the set of generators of S.
In 1978, H. S. Wilf proposed a conjecture suggesting a regularity in the set N \ S. It says the following:

f(S) + 1 ≤ ν(S) n(S),

where n(S) denotes the number of elements of S that are smaller than f(S).
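For small examples, the inequality is easy to test by brute force. The sketch below (our own illustration, with a heuristic search bound that is ample for these small generators) computes f(S), n(S) and ν(S) for S = <5, 7, 9> and checks it.

```python
def members(gens, bound):
    """Characteristic vector of S = <gens> on [0, bound]."""
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for x in range(1, bound + 1):
        in_S[x] = any(x >= g and in_S[x - g] for g in gens)
    return in_S

gens = [5, 7, 9]                     # minimal generators, gcd = 1
in_S = members(gens, max(gens) ** 2)
f = max(x for x, ok in enumerate(in_S) if not ok)   # Frobenius number f(S) = 13
n = sum(in_S[:f])                                   # n(S) = |{s in S : s < f}| = 6
nu = len(gens)                                      # embedding dimension = 3
print(f + 1 <= nu * n)                              # True: Wilf holds for this S
```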
Although many efforts have been made, only special cases have been solved and the conjecture remains wide open. In [9], D. Dobbs and G. Matthews proved Wilf's conjecture for numerical semigroups with ν ≤ 3.

Definition 1.0.9. Let n ∈ S*. We define the Apéry set of S with respect to n, denoted by Ap(S, n), to be the set Ap(S, n) = {s ∈ S : s − n ∉ S}.
Remark 1.0.10. Given a non zero integer n and two integers a and b, we write a ≡ b mod (n) to denote that n divides a -b. We denote by b mod n the remainder of the division of b by n.
From Definition 1.0.9, we can easily see the following.
Lemma 1.0.11. Let S be a numerical semigroup and let n ∈ S*. For all 1 ≤ i ≤ n − 1, let w(i) be the smallest element of S such that w(i) ≡ i mod (n). We have the following:
Ap(S, n) = {0, w(1), . . . , w(n -1)}.
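Lemma 1.0.11 translates directly into an algorithm: scan the elements of S and keep the first one met in each residue class modulo n. A toy sketch (the search bound is again a heuristic that suffices for this example):

```python
def apery(gens, n, bound=200):
    """Ap(S, n): smallest element of S in each residue class mod n (Lemma 1.0.11)."""
    in_S = [False] * (bound + 1)     # brute-force membership test for S = <gens>
    in_S[0] = True
    for x in range(1, bound + 1):
        in_S[x] = any(x >= g and in_S[x - g] for g in gens)
    w = {}
    for s in range(bound + 1):
        if in_S[s] and s % n not in w:
            w[s % n] = s             # first (hence smallest) element in this class
    return sorted(w.values())

print(apery([5, 7, 9], 5))           # [0, 7, 9, 16, 18]
```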
Proposition 1.0.12. Let S be a numerical semigroup. Let n ∈ S* and let Ap(S, n) = {w_0 < w_1 < . . . < w_{n−1}} be the Apéry set of S with respect to n. We have the following:
Proposition 1.0.13. (See Lemma 2.6 in [18]) Let S be a numerical semigroup and let n ∈ S * . For all s ∈ S, there exists a unique (k, w) ∈ N × Ap(S, n) such that s = kn + w.
As a consequence of Proposition 1.0.13, we obtain the following property.
Corollary 1.0.14. (Theorem 2.7 in [18]) Let S be a numerical semigroup. Then, S is finitely generated.
Definition 1.0.15. Let S be a numerical semigroup and let A ⊆ S * . We say that A is a minimal set of generators of S if S =< A > and for all x ∈ A, x cannot be written as a linear combination with nonnegative integer coefficients of other elements in A.
Corollary 1.0.16. (See Corollary 2.8 in [18]) Let S be a numerical semigroup. Then, S has a minimal set of generators. This set is finite and unique.
Definition 1.0.17. Let S be a numerical semigroup. We define the following invariants :
• The embedding dimension of S denoted by ν(S), or ν for simplicity, is the cardinality of the minimal set of generators of S.
• The multiplicity of S denoted by m(S), or m for simplicity, is the smallest non zero element of S.
Lemma 1.0.18. (See Proposition 2.10 in [18]) Let S be a numerical semigroup with multiplicity m and embedding dimension ν. We have ν ≤ m.
Let us recall some basic and important invariants of numerical semigroups.
Definition 1.0.19. Let S be a numerical semigroup. We introduce some invariants associated to a numerical semigroup S :
• We define the Frobenius number of S, denoted by f or f (S) to be max (Z \ S).
• We define the conductor of S, denoted by c or c(S) to be f (S) + 1.
Definition 1.0.22. Let S be a numerical semigroup. We say that x ∈ N is a pseudo-Frobenius number if x / ∈ S and x + s ∈ S for all s ∈ S * . We denote by P F (S) the set of all pseudo-Frobenius numbers of S. We denote the cardinality of P F (S) by t(S) and we call it the type of S. It results from the definition of f (S) that f (S) ∈ P F (S), and also f (S) = max (P F (S)).
Corollary 1.0.23. (See Theorem 20 in [11]) Let S be a numerical semigroup with Frobenius number f(S), type t(S) and n(S) = |{s ∈ S : s < f(S)}|. Then, we have

f(S) + 1 ≤ n(S)(t(S) + 1).
For some families of numerical semigroups Wilf's conjecture is known to be true, but the general case remains unsolved.

Remark 1.0.24. By Corollary 1.0.23, if t(S) ≤ ν(S) − 1, then f(S) + 1 ≤ n(S)ν(S), hence S satisfies Wilf's conjecture.

Definition 1.0.25. Let a, b ∈ N. We define ≤_S as follows: a ≤_S b if and only if b − a ∈ S.
Remark 1.0.26. As S is a numerical semigroup, it easily follows that ≤_S is an order relation over S (reflexive, transitive and antisymmetric).
Definition 1.0.27. Let S be a numerical semigroup and n ∈ S*. Let Ap(S, n) = {w_0 = 0 < w_1 < w_2 < . . . < w_{n−1}} be the Apéry set of S with respect to n. Then, define the following sets:

• min_{≤_S}(Ap(S, n)), the set of elements of Ap(S, n) that are minimal with respect to ≤_S;
• max_{≤_S}(Ap(S, n)), the set of elements of Ap(S, n) that are maximal with respect to ≤_S.
Lemma 1.0.28. (See Lemma 6 in [11]) Let S be a numerical semigroup, n ∈ S * and Ap(S, n) be the Apéry set of S with respect to n. Let w ∈ Ap(S, n) and u ∈ S. If there exist v ∈ S such that w = u + v, then u ∈ Ap(S, n).
Corollary 1.0.29. Let S be a numerical semigroup, n ∈ S* and x ∈ Ap(S, n)*. We have the following:

• x ∈ min_{≤_S}(Ap(S, n)) if and only if x ≠ w_i + w_j for all w_i, w_j ∈ Ap(S, n)*.
• x ∈ max_{≤_S}(Ap(S, n)) if and only if w_i ≠ x + w_j for all w_i, w_j ∈ Ap(S, n)*.
Proof. Let x ∈ Ap(S, n)*.

• Let x ∈ min_{≤_S}(Ap(S, n)). Suppose by the way of contradiction that x = w_i + w_j for some w_i, w_j ∈ Ap(S, n)*. Then x = w_i + w_j with w_i ∈ Ap(S, n) and w_j ∈ S*, which implies that x ∉ min_{≤_S}(Ap(S, n)), a contradiction. Conversely, suppose that x ≠ w_i + w_j for all w_i, w_j ∈ Ap(S, n)*, and suppose by the way of contradiction that x ∉ min_{≤_S}(Ap(S, n)). Then there exist w_i ∈ Ap(S, n) and s ∈ S* such that x = w_i + s. By Lemma 1.0.28, it follows that s ∈ Ap(S, n). Thus x = w_i + s with w_i, s ∈ Ap(S, n)*, which gives a contradiction.

• Let x ∈ max_{≤_S}(Ap(S, n)). Suppose by the way of contradiction that w_i = x + w_j for some w_i, w_j ∈ Ap(S, n)*. Then w_i = x + w_j with w_i ∈ Ap(S, n) and w_j ∈ S*, which implies that x ∉ max_{≤_S}(Ap(S, n)), a contradiction. Conversely, suppose that w_i ≠ x + w_j for all w_i, w_j ∈ Ap(S, n)*, and suppose by the way of contradiction that x ∉ max_{≤_S}(Ap(S, n)). Then there exist w_i ∈ Ap(S, n) and s ∈ S* such that w_i = x + s. By Lemma 1.0.28, it follows that s ∈ Ap(S, n). Thus w_i = x + s with s ∈ Ap(S, n)*, which gives a contradiction.
Thus, the proof is complete.
Proposition 1.0.30. (See Lemma 3.2 in [7]) Let S be a numerical semigroup with multiplicity m and embedding dimension ν and let n ∈ S*. Let Ap(S, n) be the Apéry set of S with respect to n and let {g_1 < g_2 < . . . < g_ν} be the minimal set of generators of S. We have the following:

• max_{≤_S}(Ap(S, n)) = {x + n : x ∈ PF(S)}.
• For n = m, min_{≤_S}(Ap(S, m))* = {g_2, . . . , g_ν}.

From Proposition 1.0.30, it follows Corollary 1.0.31.

Corollary 1.0.31. Let S be a numerical semigroup with multiplicity m, embedding dimension ν and {g_1 = m, g_2, . . . , g_ν} the minimal system of generators of S. Let n ∈ S* and Ap(S, n) be the Apéry set of S with respect to n. We have the following:

• t(S) = |max_{≤_S}(Ap(S, n))|.
• m − ν = |Ap(S, m)* \ min_{≤_S}(Ap(S, m))|.
We introduce in Definitions 1.0.32 and 1.0.33 special kind of numerical semigroups and give some properties of this kind in Lemma 1.0.34. Definition 1.0.32. A numerical semigroup is said to irreducible if and only if S cannot be expressed as the intersection of two numerical semigroups S 1 , S 2 such that S ⊂ S 1 , S ⊂ S 2 . Definition 1.0.33. Let S be a numerical semigroup. We have the following :
• S is said to be symmetric if and only if S is irreducible and f (S) is odd.
• S is said to be pseudo-symmetric if and only if S is irreducible and f (S) is even.
Lemma 1.0.34. (See Corollary 4.5 in [18]) Let S be a numerical semigroup with Frobenius number f(S) and genus g(S). We have the following:

• S is symmetric if and only if g(S) = (f(S) + 1)/2.
• S is pseudo-symmetric if and only if g(S) = (f(S) + 2)/2.
Introduction
Let N denote the set of natural numbers, including 0. A semigroup S is an additive submonoid of (N, +), that is, 0 ∈ S and if a, b ∈ S, then a + b ∈ S. A numerical semigroup S is a submonoid of N of finite complement, i.e., N \ S is a finite set. It can be shown that a submonoid of N is a numerical semigroup if and only if the group generated by S in Z (namely the set of elements $\sum_{i=1}^{s} \lambda_i a_i$, λ_i ∈ Z, a_i ∈ S) is Z. There are many invariants associated to a numerical semigroup S. The Apéry set of S with respect to an element a ∈ S is defined as Ap(S, a) = {s ∈ S ; s − a ∉ S}.
The elements of N \ S are called the gaps of S. The largest gap is denoted by
f = f (S) = max(N \ S)
and is called the Frobenius number of S. The number f(S) + 1 is known as the conductor of S and denoted by c or c(S). The number of gaps g = g(S) = |N \ S| is known as the genus of S. The smallest non zero element m = m(S) of S is called the multiplicity of S, and the cardinality of {s ∈ S ; s < f(S)} is denoted by n(S). Every numerical semigroup S is finitely generated, i.e., S is of the form S = <g_1, . . . , g_ν> = Ng_1 + . . . + Ng_ν for suitable unique coprime integers g_1, . . . , g_ν. The cardinality of the minimal set of generators of S is denoted by
ν = ν(S)
and is called the embedding dimension of S. An integer x ∈ N \ S is called a pseudo-Frobenius number if x + (S \ {0}) ⊆ S. The type of the semigroup, denoted by t(S), is the cardinality of the set of pseudo-Frobenius numbers. We have formulas linking these invariants.

Frobenius in his lectures proposed the problem of giving a formula for the largest integer that is not representable as a linear combination with nonnegative integer coefficients of a given set of positive integers whose greatest common divisor is one. He also raised the question of determining how many positive integers do not have such a representation. This problem is known as the Diophantine Frobenius Problem. Using the terminology of numerical semigroups, the problem is to give a formula, in terms of the elements in a minimal system of generators of a numerical semigroup S, for the greatest integer not in S. This problem, introduced and solved by Sylvester for the case ν = 2 (see [Sylvester]), has been widely studied. For ν = 3, in 1962 Brauer and Shockley found a formula for the Frobenius number, but their solution was not a polynomial in the generators and it involved magnitudes which could not be expressed by the generators. Later on, more solutions to this case were found by using different methods (see for example [Selmer]). However, none of these methods gives an explicit formula of the Frobenius number in terms of the generators. In 1978, H. S. Wilf conjectured that f(S) + 1 ≤ ν(S)n(S) for every numerical semigroup S. This conjecture has been proved for ν ≤ 3 (see [9]). In [14], N. Kaplan proved it for c ≤ 2m, and in [Eliahou] S. Eliahou extended Kaplan's work to c ≤ 3m.
In Chapter 1, we recall some basics about numerical semigroups that will be used through the thesis.
In Chapter 2, we generalize the case covered by A. Sammartano in [19], who showed that Wilf's conjecture holds for 2ν ≥ m and for m ≤ 8, based on the idea of counting the elements of S in some intervals of length m. We use different intervals in order to get an equivalent form of Wilf's conjecture and then we prove it in some relevant cases. In particular, our calculations cover the case where 2ν ≥ m, proved by Sammartano in [19].

Here are a few more details on the contents of this Chapter. Section 2.1 is devoted to giving some notations that will enable us, in the same Section, to give an equivalent form of Wilf's conjecture. In Section 2.2, we give some technical results needed in the Chapter. Let Ap(S, m) = {0 = w_0 < w_1 < · · · < w_{m−1}}. In Section 2.3, first, we show that Wilf's conjecture holds for numerical semigroups that satisfy w_{m−1} ≥ w_1 + w_α and (2 + (α−3)/q)ν ≥ m for some 1 < α < m − 1, where c = qm − ρ for some q ∈ N, 0 ≤ ρ ≤ m − 1. Then, we prove Wilf's conjecture for numerical semigroups with m − ν ≤ 4 in order to cover the case where 2ν ≥ m. We also show that a numerical semigroup with m − ν = 5 verifies Wilf's conjecture, in order to prove the conjecture for m = 9. Finally, we show in this Section, using the previous cases, that Wilf's conjecture holds for numerical semigroups with (2 + 1/q)ν ≥ m. In Section 2.4, we prove Wilf's conjecture for numerical semigroups with w_{m−1} ≥ w_{α−1} + w_α and ((α+3)/3)ν ≥ m for some 1 < α < m − 1. In Section 2.5, we show that Wilf's conjecture holds for numerical semigroups with w_{m−1} − m ≥ w_x + w_y, under a lower bound of the same type on ν. The last Section 2.6 aims to verify the conjecture in the case m − ν > (n−2)(n−3)/2 and also in the case n ≤ 5.
Exact determination of Ap(S, m), f(S), g(S) and PF(S) is a difficult problem. When S is generated by an arithmetic sequence <m, m+1, . . . , m+l>, Brauer gave a formula for f(S). Roberts [17] extended this result to generators in arithmetic progression (see also [3], [Bras-Amorós]). Selmer [Selmer] and Grant [Bateman] generalized this to the case S = <m, hm+d, hm+2d, . . . , hm+ld>. In [16], the case of semigroups generated by {m, m+d, . . . , m+ld, c} (called almost arithmetic semigroups) was considered, where a method was given to determine Ap(S, m) and also the symmetric almost arithmetic semigroups. In [García-Marco], pseudo-symmetric almost arithmetic semigroups have been characterized. In Chapter 3, we focus our attention on the numerical semigroup consisting of all non-negative integer linear combinations of relatively prime positive integers m, m+1, . . . , m+l, k(m+l)+r, where k, m, l, r are positive integers and r ≤ (k+1)l + 1. We give formulas for Ap(S, m), f(S), g(S) and PF(S). We also determine the symmetric and the pseudo-symmetric numerical semigroups of this form. Note that our semigroups <m, m+1, . . . , m+l, k(m+l)+r> are almost arithmetic semigroups. The advantage is that for this class of semigroups we are able to determine all the invariants with simple formulas. Good references on numerical semigroups are [18] and [1].
Basics and notations
Definition 1.0.1. Let S be a subset of N. We say that S is a submonoid of (N, +) if the following holds:

• 0 ∈ S.
• If a, b ∈ S, then a + b ∈ S.

{0} and N are trivially submonoids of N. Let d be an element of N; the set dN = {da : a ∈ N} is a submonoid of N.

Definition 1.0.4. Let S be a submonoid of N. If N \ S is a finite set, then S is said to be a numerical semigroup.
We have the following characterization of numerical semigroups:

Proposition 1.0.5. (See Lemma 2.1 in [18]) Let S ≠ {0} and S ≠ N be a semigroup of N and let G be the subgroup of Z generated by S, i.e., G = {$\sum_{i=1}^{s} \lambda_i a_i$ : s ∈ N, λ_i ∈ Z, a_i ∈ S}. Then, S is a numerical semigroup if and only if G = Z, i.e., gcd(S) = 1.

Proposition 1.0.6. (See Proposition 2.2 in [18]) Let S be a semigroup of N. Then, S is isomorphic to a numerical semigroup.

Definition 1.0.7. Let S be a numerical semigroup and let A ⊆ S. We say that S is generated by A, and we write S = <A>, if for all a ∈ S there exist a_1, . . . , a_r ∈ A and λ_1, . . . , λ_r ∈ N such that a = $\sum_{i=1}^{r} \lambda_i a_i$. We say that S is finitely generated if S = <A> with A ⊆ S and A a finite set.

Remark 1.0.8. Throughout this thesis, X* will stand for X \ {0}.
Next, we introduce an important tool associated to a numerical semigroup.
• We define the set of gaps of S, denoted by G(S) to be N \ S.
• We define the genus of S, denoted by g(S) to be the cardinality of G(S).
• We denote by n(S), the cardinality of {s ∈ S : s ≤ f (S)}.
Remark 1.0.20. Note that f(S) ≥ 1 for all non trivial numerical semigroups.

Lemma 1.0.21. (See [5], [20]) Let S be a numerical semigroup and let n ∈ S*. Then,

• f(S) = max(Ap(S, n)) − n.
• g(S) = (1/n) $\sum_{w \in Ap(S,n)} w$ − (n − 1)/2.
• S is symmetric if and only if g(S) = (f(S) + 1)/2.
• S is pseudo-symmetric if and only if g(S) = (f(S) + 2)/2.
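These formulas are straightforward to verify numerically. The following toy check (for S = <5, 7, 9> and n = 5; the search bound is a heuristic that is ample here) recomputes f(S) and g(S) from Ap(S, 5) and compares them with a direct gap count:

```python
def members(gens, bound):
    """Characteristic vector of S = <gens> on [0, bound]."""
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for x in range(1, bound + 1):
        in_S[x] = any(x >= g and in_S[x - g] for g in gens)
    return in_S

gens, n, bound = [5, 7, 9], 5, 200
in_S = members(gens, bound)
w = {}
for s in range(bound + 1):           # Apery set via residue classes (Lemma 1.0.11)
    if in_S[s] and s % n not in w:
        w[s % n] = s
Ap = sorted(w.values())              # [0, 7, 9, 16, 18]

gaps = [x for x in range(bound + 1) if not in_S[x]]
f = max(Ap) - n                      # Frobenius number from the Apery set
g = sum(Ap) / n - (n - 1) / 2        # genus from the Apery set
print(f == max(gaps), g == len(gaps))   # True True
```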
Remark 1.0.35. Consider the following notation that will be used through this thesis:

• We denote by floor(x) = ⌊x⌋ the largest integer less than or equal to x.
• We denote by ceil(x) = ⌈x⌉ the smallest integer greater than or equal to x.
Wilf's conjecture
In this chapter, we give an equivalent form of Wilf's conjecture in terms of the elements of the Apéry set of S, embedding dimension and the multiplicity. We also give an affirmative answer to Wilf's conjecture in some cases.
Equivalent form of Wilf's conjecture
Let the notations be as in the introduction. For the sake of clarity we shall use the notations ν, f, n, c... for ν(S), f (S), n(S), c(S).... In this Section, we will introduce some notations and family of numbers that will enable us to give an equivalent form of Wilf's conjecture at the end of this Section.
Notation. Let S be a numerical semigroup with multiplicity m and conductor c = f + 1. Denote by q = ⌈c/m⌉. Thus qm ≥ c and c = qm − ρ with 0 ≤ ρ < m.

Given a non negative integer k, we define the kth interval of length m,

I_k = [km − ρ, (k+1)m − ρ) = {km − ρ, km − ρ + 1, . . . , (k+1)m − ρ − 1}.

We denote by n_k = |S ∩ I_k|.
For j ∈ {1, . . . , m − 1}, we define η_j to be the number of intervals I_k with n_k = j.

Proposition 2.1.1. Under the previous notations, we have:
i) 1 ≤ n_k ≤ m − 1 for all 0 ≤ k ≤ q − 1.
ii) n_k = m for all k ≥ q.
iii) $\sum_{k=0}^{q-1} n_k = n(S) = n$.
iv) $\sum_{j=1}^{m-1} \eta_j = q$.
v) $\sum_{j=1}^{m-1} j\eta_j = \sum_{k=0}^{q-1} n_k = n$.
Proof.

i) We can easily verify that if S contains m consecutive elements a, a + 1, . . . , a + m − 1, then n ∈ S for all n ≥ a + m. Since (q − 1)m − ρ < f < qm − ρ and f ∉ S, it follows that n_k ≤ m − 1 for all 0 ≤ k ≤ q − 1. Moreover, km ∈ S ∩ I_k for all 0 ≤ k ≤ q − 1, thus n_k ≥ 1.

ii) We have f = qm − ρ − 1 ∈ I_{q−1}. From the definition of the Frobenius number, it follows that n_k = m for all k ≥ q.

iii) $\sum_{k=0}^{q-1} n_k$ is nothing but the cardinality of S ∩ [0, f], which is n(S) = n by definition.

iv) We have 1 ≤ |S ∩ I_k| ≤ m − 1 if and only if 0 ≤ k ≤ q − 1. This implies our assertion.

v) The sum $\sum_{j=1}^{m-1} j\eta_j$ is nothing but the cardinality of $\cup_{k=0}^{q-1} (S \cap I_k)$, which equals n. This proves our assertion.

Thus, the proof is complete.
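These interval statistics are easy to compute for a concrete semigroup. A toy check for S = <5, 7, 9> (our own illustration; the brute-force membership test is repeated so the snippet runs on its own):

```python
import math

def members(gens, bound):
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for x in range(1, bound + 1):
        in_S[x] = any(x >= g and in_S[x - g] for g in gens)
    return in_S

gens, bound = [5, 7, 9], 200
in_S = members(gens, bound)
m = min(gens)
f = max(x for x, ok in enumerate(in_S) if not ok)
c = f + 1                                  # conductor c = 14
q = math.ceil(c / m)                       # q = 3
rho = q * m - c                            # rho = 1
n_k = [sum(in_S[max(0, k * m - rho):(k + 1) * m - rho]) for k in range(q)]
print(q, rho, n_k)                         # 3 1 [1, 2, 3]
print(sum(n_k) == sum(in_S[:f]))           # True: property iii)
assert all(1 <= x <= m - 1 for x in n_k)   # property i)
```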
Next, we will express η_j in terms of the Apéry set.

Proposition 2.1.2. Let Ap(S, m) = {w_0 = 0 < w_1 < w_2 < . . . < w_{m−1}}. Under the previous notations, for all 1 ≤ j ≤ m − 1, we have

\[\eta_j = \Big\lfloor\frac{w_j+\rho}{m}\Big\rfloor - \Big\lfloor\frac{w_{j-1}+\rho}{m}\Big\rfloor.\]
Proof. Fix 0 ≤ k ≤ q − 1 and let 1 ≤ j ≤ m − 1. We will show that the interval I_k contains exactly j elements of S if and only if w_{j−1} < (k+1)m − ρ ≤ w_j.

Suppose that I_k contains j elements of S and, by the way of contradiction, that w_{j−1} ≥ (k+1)m − ρ. Then w_{m−1} > . . . > w_{j−1} ≥ (k+1)m − ρ, so none of w_{j−1}, . . . , w_{m−1} lies in $\cup_{t=0}^{k} I_t$. Since every element of S ∩ I_k is of the form w_i + λm with w_i ∈ Ap(S, m) and λ ∈ N, and since I_k contains exactly one integer in each residue class modulo m, the interval I_k contains at most j − 1 elements of S (namely w_0 + km = km, w_1 + k_1m, . . . , w_{j−2} + k_{j−2}m for some k_1, . . . , k_{j−2}). This contradicts the fact that I_k contains exactly j elements of S. Hence w_{j−1} < (k+1)m − ρ.

If w_j < (k+1)m − ρ, then w_0 < . . . < w_j < (k+1)m − ρ. For each 0 ≤ i ≤ j, the unique integer of I_k congruent to w_i modulo m is of the form w_i + λ_i m with λ_i ≥ 0, hence belongs to S. Then I_k contains at least j + 1 elements of S, which again contradicts the hypothesis. Hence w_j ≥ (k+1)m − ρ. Consequently, if I_k contains exactly j elements of S, then w_{j−1} < (k+1)m − ρ ≤ w_j.

Conversely, suppose that w_{j−1} < (k+1)m − ρ ≤ w_j. As above, w_0, . . . , w_{j−1} < (k+1)m − ρ implies that I_k contains at least j elements of S (the representatives of w_0, . . . , w_{j−1} modulo m in I_k), and w_{m−1} > . . . > w_j ≥ (k+1)m − ρ implies that I_k contains at most j elements of S. Hence I_k contains exactly j elements of S, and this proves our assertion. Consequently, η_j is the number of integers k + 1 with

\[\frac{w_{j-1}+\rho}{m} < k+1 \le \frac{w_j+\rho}{m},\]

that is,

\[\eta_j = \Big\lfloor\frac{w_j+\rho}{m}\Big\rfloor - \Big\lfloor\frac{w_{j-1}+\rho}{m}\Big\rfloor.\]
Thus, the proof is complete.
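The formula can be checked against a direct count. Continuing the toy example S = <5, 7, 9> (helpers repeated so the snippet runs on its own):

```python
import math

def members(gens, bound):
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for x in range(1, bound + 1):
        in_S[x] = any(x >= g and in_S[x - g] for g in gens)
    return in_S

gens, bound = [5, 7, 9], 200
in_S = members(gens, bound)
m = min(gens)
f = max(x for x, ok in enumerate(in_S) if not ok)
q = math.ceil((f + 1) / m)
rho = q * m - (f + 1)
w = {}
for s in range(bound + 1):                 # Ap(S, m), sorted: w[0] = 0 < w[1] < ...
    if in_S[s] and s % m not in w:
        w[s % m] = s
w = sorted(w.values())
n_k = [sum(in_S[max(0, k * m - rho):(k + 1) * m - rho]) for k in range(q)]
eta_direct = [n_k.count(j) for j in range(1, m)]
eta_formula = [(w[j] + rho) // m - (w[j - 1] + rho) // m for j in range(1, m)]
print(eta_direct == eta_formula)           # True, as Proposition 2.1.2 predicts
```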
Proposition 2.1.3 gives an equivalent form of Wilf's conjecture using Propositions 2.1.1 and 2.1.2.
Proposition 2.1.3. Let S be a numerical semigroup with multiplicity m, embedding dimension ν and conductor f + 1 = qm − ρ for some q ∈ N and 0 ≤ ρ ≤ m − 1. Let w_0 = 0 < w_1 < w_2 < . . . < w_{m−1} be the elements of Ap(S, m). Then, S satisfies Wilf's conjecture if and only if

\[\sum_{j=1}^{m-1}\Big(\Big\lfloor\frac{w_j+\rho}{m}\Big\rfloor - \Big\lfloor\frac{w_{j-1}+\rho}{m}\Big\rfloor\Big)(j\nu - m) + \rho \ \ge\ 0.\]
Proof. By Proposition 2.1.1, we have

\[f + 1 \le n\nu \iff qm - \rho \le \nu\sum_{k=0}^{q-1} n_k \iff \sum_{k=0}^{q-1}(n_k\nu - m) + \rho \ \ge\ 0.\]

Grouping the intervals according to the value of |S ∩ I_k|, this is equivalent to

\[\sum_{j=1}^{m-1}\eta_j(j\nu - m) + \rho \ \ge\ 0.\]

By applying Proposition 2.1.2, this is in turn equivalent to

\[\sum_{j=1}^{m-1}\Big(\Big\lfloor\frac{w_j+\rho}{m}\Big\rfloor - \Big\lfloor\frac{w_{j-1}+\rho}{m}\Big\rfloor\Big)(j\nu - m) + \rho \ \ge\ 0.\]
Thus, the proof is complete.
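Numerically, the left-hand side of this inequality equals nν − (f + 1) exactly, which makes the equivalence transparent. A toy check (same running example, helpers repeated so the snippet runs on its own):

```python
import math

def members(gens, bound):
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for x in range(1, bound + 1):
        in_S[x] = any(x >= g and in_S[x - g] for g in gens)
    return in_S

gens, bound = [5, 7, 9], 200
in_S = members(gens, bound)
m, nu = min(gens), len(gens)
f = max(x for x, ok in enumerate(in_S) if not ok)
q = math.ceil((f + 1) / m)
rho = q * m - (f + 1)
w = {}
for s in range(bound + 1):
    if in_S[s] and s % m not in w:
        w[s % m] = s
w = sorted(w.values())
lhs = sum(((w[j] + rho) // m - (w[j - 1] + rho) // m) * (j * nu - m)
          for j in range(1, m)) + rho
n = sum(in_S[:f])                          # n(S)
print(lhs, n * nu - (f + 1))               # 4 4: equal, so lhs >= 0 iff Wilf holds
```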
Technical results
Let S be a numerical semigroup and let the notations be as in Section 2.1. In this Section, we give some technical results that will be used through the Chapter. For brevity, for 0 ≤ j ≤ m − 1 we write F_j = ⌊(w_j + ρ)/m⌋, where Ap(S, m) = {w_0 = 0 < w_1 < . . . < w_{m−1}}.
Remark 2.2.1. Let Ap(S, m) = {w_0 = 0 < w_1 < . . . < w_{m−1}}. The following technical remarks will be used through the Chapter:

i) F_0 = ⌊(w_0 + ρ)/m⌋ = 0.
ii) For all 1 ≤ i ≤ m − 1, we have F_i ≥ 1.
iii) For all 1 ≤ i ≤ m − 1, we have either F_i = ⌊w_i/m⌋ or F_i = ⌊w_i/m⌋ + 1.
iv) If F_i = ⌊w_i/m⌋ + 1, then F_i ≥ 2 and ρ ≥ 1.
v) For all 0 ≤ i < j ≤ m − 1, we have F_i ≤ F_j.
vi) F_{m−1} = q.
Proof.

i) This is because w_0 = 0 and 0 ≤ ρ < m.

ii) We have m < w_i for all 1 ≤ i ≤ m − 1. This implies the result since ρ ≥ 0.

iii) For all 1 ≤ i ≤ m − 1, let w_i = q_i m + r_i with q_i, r_i ∈ N and r_i < m, so that ⌊w_i/m⌋ = q_i. Therefore F_i = ⌊(q_i m + r_i + ρ)/m⌋ = q_i + ⌊(r_i + ρ)/m⌋. Since 0 ≤ ρ, r_i < m, it follows that 0 ≤ ⌊(r_i + ρ)/m⌋ ≤ 1. Hence ⌊w_i/m⌋ ≤ F_i ≤ ⌊w_i/m⌋ + 1.

iv) Suppose that F_i = ⌊w_i/m⌋ + 1. Since w_i > m, we have ⌊w_i/m⌋ ≥ 1, hence F_i ≥ 2. Moreover, if ρ = 0 then F_i = ⌊w_i/m⌋; hence ρ ≥ 1.

v) This follows from w_i ≤ w_j.

vi) We have w_{m−1} = f + m = qm − ρ − 1 + m, hence w_{m−1} + ρ = (q+1)m − 1 and F_{m−1} = q.

Thus, the proof is complete.
Let 1 < α < m − 1. Using Remark 2.2.1, we get the following inequalities, which will be used later in the Chapter. Telescoping and using F_0 = 0,

\[\sum_{j=1}^{\alpha}(F_j - F_{j-1})(j\nu - m) = F_\alpha(\alpha\nu - m) - \sum_{j=1}^{\alpha-1}F_j\,\nu = F_\alpha(\alpha\nu - m) - F_1\nu - \sum_{j=2}^{\alpha-1}F_j\,\nu.\]

From Remark 2.2.1 (v), we have F_j ≤ F_α for all 2 ≤ j ≤ α − 1. Hence

\[\sum_{j=1}^{\alpha}(F_j - F_{j-1})(j\nu - m) \ \ge\ F_\alpha(\alpha\nu - m) - F_1\nu - (\alpha - 2)F_\alpha\nu = -F_1\nu + F_\alpha(2\nu - m). \tag{2.2.1}\]

Moreover, since jν − m ≥ (α+1)ν − m for j ≥ α + 1 and F_j − F_{j−1} ≥ 0, the sum telescopes to

\[\sum_{j=\alpha+1}^{m-1}(F_j - F_{j-1})(j\nu - m) \ \ge\ \big((\alpha+1)\nu - m\big)\sum_{j=\alpha+1}^{m-1}(F_j - F_{j-1}) = \big(F_{m-1} - F_\alpha\big)\big((\alpha+1)\nu - m\big). \tag{2.2.2}\]
The following technical Lemma will be used through the Chapter :
Lemma 2.2.2. Let Ap(S, m) = {w_0 = 0 < w_1 < . . . < w_{m−1}} and suppose that w_i ≥ w_j + w_k. We have the following:

i) F_i ≥ F_j + F_k − 1.
ii) If F_i − F_j − F_k = −1, then F_j = ⌊w_j/m⌋ + 1, F_k = ⌊w_k/m⌋ + 1 and ρ ≥ 1. In particular, F_j ≥ 2, F_k ≥ 2 and ρ ≥ 1.
Proof.

i) Assume that w_i ≥ w_j + w_k. Then w_i + ρ ≥ w_j + w_k + ρ, so

\[F_i = \Big\lfloor\frac{w_i+\rho}{m}\Big\rfloor \ \ge\ \Big\lfloor\frac{w_j+w_k+\rho}{m}\Big\rfloor \ \ge\ \Big\lfloor\frac{w_j+\rho}{m}\Big\rfloor + \Big\lfloor\frac{w_k}{m}\Big\rfloor = F_j + \Big\lfloor\frac{w_k}{m}\Big\rfloor.\]

By Remark 2.2.1 (iii), ⌊w_k/m⌋ ≥ F_k − 1. Hence F_i ≥ F_j + F_k − 1.

ii) Suppose that w_i ≥ w_j + w_k and that F_i − F_j − F_k = −1. Suppose by the way of contradiction that F_j = ⌊w_j/m⌋ or F_k = ⌊w_k/m⌋ or ρ = 0 (by Remark 2.2.1 (iii) and ρ ≥ 0, these are the only alternatives). In each of these cases the chain of part i) (applied with the roles of j and k exchanged if necessary) gives F_i ≥ F_j + F_k, which contradicts the hypothesis. Hence F_j = ⌊w_j/m⌋ + 1, F_k = ⌊w_k/m⌋ + 1 and ρ ≥ 1, and Remark 2.2.1 gives F_j ≥ 2 and F_k ≥ 2.
Thus, the proof is complete.
Numerical semigroups with w_{m−1} ≥ w_1 + w_α and (2 + (α−3)/q)ν ≥ m

In this Section, we show that Wilf's conjecture holds for numerical semigroups in the following cases:

1. w_{m−1} ≥ w_1 + w_α and (2 + (α−3)/q)ν ≥ m for some 1 < α < m − 1.
2. m − ν ≤ 5. (Note that the case m − ν ≤ 3 results from the fact that Wilf's conjecture holds for 2ν ≥ m; this case has been proved in [19]. However, we shall give a proof in order to cover it through our techniques.)

Then, we deduce the conjecture for m = 9 and for (2 + 1/q)ν ≥ m. Next, we will show that Wilf's conjecture holds for numerical semigroups with w_{m−1} ≥ w_1 + w_α and (2 + (α−3)/q)ν ≥ m.
Theorem 2.3.1. Let S be a numerical semigroup with multiplicity m, embedding dimension ν and conductor f + 1 = qm − ρ for some q, ρ ∈ N, 0 ≤ ρ ≤ m − 1. Let w_0 = 0 < w_1 < w_2 < . . . < w_{m−1} be the elements of Ap(S, m). Suppose that w_{m−1} ≥ w_1 + w_α for some 1 < α < m − 1. If (2 + (α−3)/q)ν ≥ m, then S satisfies Wilf's conjecture.
Proof. We are going to use the equivalent form of Wilf's conjecture given in Proposition 2.1.3. Since w_{m−1} ≥ w_1 + w_α, by Lemma 2.2.2 it follows that F_{m−1} ≥ F_1 + F_α − 1. Let x = F_{m−1} − F_1 − F_α. Then x ≥ −1 and F_1 + F_α = F_{m−1} − x = q − x (Remark 2.2.1 (vi)). By (2.2.1) and (2.2.2),

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ -F_1\nu + F_\alpha(2\nu-m) + (F_{m-1}-F_\alpha)\big((\alpha+1)\nu-m\big)+\rho.\]

Writing −F_1ν = F_1(αν − m) − F_1((α+1)ν − m) and F_1(αν − m) = F_1(2ν − m) + F_1(α − 2)ν, this equals

\[(F_1+F_\alpha)(2\nu-m) + F_1(\alpha-2)\nu + (F_{m-1}-F_1-F_\alpha)\big((\alpha+1)\nu-m\big)+\rho = (q-x)(2\nu-m) + F_1(\alpha-2)\nu + x\big((\alpha+1)\nu-m\big)+\rho. \tag{2.3.1}\]

Since x ≥ −1, we have two cases:

• If x = −1, then by Lemma 2.2.2 (ii), F_1 ≥ 2 and ρ ≥ 1. Hence (2.3.1) gives

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ (q+1)(2\nu-m)+2(\alpha-2)\nu-\big((\alpha+1)\nu-m\big)+\rho = \nu(2q+\alpha-3)-qm+\rho = q\Big(\nu\Big(2+\frac{\alpha-3}{q}\Big)-m\Big)+\rho \ \ge\ 0,\]

by the hypothesis (2 + (α−3)/q)ν ≥ m.

• If x ≥ 0, then F_1 ≥ 1 (Remark 2.2.1 (ii)) and (2.3.1) gives

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ (q-x)(2\nu-m)+(\alpha-2)\nu+x\big((\alpha+1)\nu-m\big)+\rho = \nu\big(2q+(\alpha-2)(x+1)+x\big)-qm+\rho \ \ge\ \nu(2q+\alpha-3)-qm+\rho = q\Big(\nu\Big(2+\frac{\alpha-3}{q}\Big)-m\Big)+\rho \ \ge\ 0,\]

as x ≥ 0. In both cases, using Proposition 2.1.3, S satisfies Wilf's conjecture. Thus, the proof is complete.

Example 2.3.2. Let S be a numerical semigroup with m = 19, ν = 6, q = 4, w_1 = 21, w_14 = 56 and w_{m−1} = w_18 = 83, so that w_{m−1} ≥ w_1 + w_14. Note that 3ν < m. With α = 14, we have (2 + (α−3)/q)ν = (2 + 11/4)·6 ≥ 19 = m. Thus, the conditions of Theorem 2.3.1 are valid.
In the following we shall deduce some cases where Wilf's conjecture holds. We start with the following technical Lemma.
Lemma 2.3.3. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. Let w_0 = 0 < w_1 < w_2 < . . . < w_{m−1} be the elements of Ap(S, m). If m − ν > $\binom{\alpha}{2}$ = α(α−1)/2 for some α ∈ N*, then w_{m−1} ≥ w_1 + w_α.

Proof. Suppose by contradiction that w_{m−1} < w_1 + w_α. Let w ∈ Ap(S, m)* \ min_{≤_S}(Ap(S, m)). Then w ≤ w_{m−1} and, by Corollary 1.0.29, w = w_i + w_j for some w_i, w_j ∈ Ap(S, m)*. Hence w ≤ w_{m−1} < w_1 + w_α, so the only possible values for w are included in {w_i + w_j ; 1 ≤ i ≤ j ≤ α − 1}. By Corollary 1.0.31, we have m − ν = |Ap(S, m)* \ min_{≤_S}(Ap(S, m))|. Therefore m − ν ≤ $\binom{\alpha}{2}$ = α(α−1)/2, which is impossible. Hence w_{m−1} ≥ w_1 + w_α.
Thus, the proof is complete.
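Both Corollary 1.0.31 and Lemma 2.3.3 are easy to observe on the running example S = <5, 7, 9> (helpers repeated so the snippet runs on its own; by Corollary 1.0.29, the non-minimal elements of Ap(S, m)* are exactly those that decompose as a sum of two nonzero Apéry elements):

```python
def members(gens, bound):
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for x in range(1, bound + 1):
        in_S[x] = any(x >= g and in_S[x - g] for g in gens)
    return in_S

gens, bound = [5, 7, 9], 200
in_S = members(gens, bound)
m, nu = min(gens), len(gens)
w = {}
for s in range(bound + 1):
    if in_S[s] and s % m not in w:
        w[s % m] = s
Ap = sorted(w.values())                    # [0, 7, 9, 16, 18]
Apstar = set(Ap) - {0}
decomposable = {a for a in Apstar if any(a - b in Apstar for b in Apstar if b < a)}
print(len(decomposable) == m - nu)         # True: Corollary 1.0.31
alpha = 2                                  # here m - nu = 2 > alpha(alpha-1)/2 = 1
print(Ap[m - 1] >= Ap[1] + Ap[alpha])      # True: w_{m-1} >= w_1 + w_alpha (Lemma 2.3.3)
```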
Next, we will deduce Wilf's conjecture for numerical semigroups with m − ν > α(α−1)/2 and (2 + (α−3)/q)ν ≥ m. It will be used later to show that the conjecture holds for those with (2 + 1/q)ν ≥ m, and also in order to cover the result in [19] saying that the conjecture is true for 2ν ≥ m.

Corollary 2.3.4. Let S be a numerical semigroup with multiplicity m, embedding dimension ν and conductor f + 1 = qm − ρ for some q ∈ N, 0 ≤ ρ ≤ m − 1. Suppose that m − ν > $\binom{\alpha}{2}$ = α(α−1)/2 for some 1 < α < m − 1. If (2 + (α−3)/q)ν ≥ m, then S satisfies Wilf's conjecture.

Proof. It follows from Lemma 2.3.3 that if m − ν > α(α−1)/2, then w_{m−1} ≥ w_1 + w_α. Now, use Theorem 2.3.1. Thus, the proof is complete.
As a direct consequence of Theorem 2.3.1, we get the following Corollary.
Corollary 2.3.5. Let S be a numerical semigroup with a given multiplicity m and conductor f + 1 = qm − ρ for some q ∈ N, 0 ≤ ρ ≤ m − 1. Let w_0 = 0 < w_1 < . . . < w_{m−1} be the elements of Ap(S, m). If w_{m−1} ≥ w_1 + w_α for some 1 < α < m − 1 and m ≤ 8 + 4(α−3)/q, then S satisfies Wilf's conjecture.

Proof. By Theorem 2.3.1, we may assume that (2 + (α−3)/q)ν < m. Therefore,

\[\nu < \frac{qm}{2q+\alpha-3} \le \frac{8q+4\alpha-12}{2q+\alpha-3} = 4.\]

Hence ν < 4. Consequently, S satisfies Wilf's conjecture (see [9]). Thus, the proof is complete.
In the following Lemma, we will show that Wilf's conjecture holds for numerical semigroups with m − ν ≤ 3. This will enable us later to prove the conjecture for numerical semigroups with (2 + 1/q)ν ≥ m and to cover the result in [19] saying that the conjecture is true for 2ν ≥ m.

Lemma 2.3.6. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. If m − ν ≤ 3, then S satisfies Wilf's conjecture.
Proof. We may assume that ν ≥ 4 (ν ≤ 3 is solved [START_REF] Dobbs | On a question of wilf concerning numerical semigroups[END_REF]). We are going to show that S satisfies Wilf's conjecture by means of Proposition 2.1.3.
Case 1.
If m -ν = 0 (S is said to be a numerical semigroup with maximal embedding dimension). Then,
t(S) = m -1 = ν -1 (Corollary 3.2 [18]). Consequently, S satisfies Wilf's conjecture ( [9] Proposition 2.3).
Case 2. If m − ν = 1, then we may assume that m = ν + 1 ≥ 5 (ν ≥ 4). By taking α = 1 in (2.2.2), we get

\[\sum_{j=2}^{m-1}(F_j-F_{j-1})(j\nu-m) \ \ge\ (F_{m-1}-F_1)(2\nu-m). \tag{2.3.2}\]

Hence, using F_0 = 0 and writing F_1(ν − m) = F_1(3ν − 2m) − F_1(2ν − m),

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ F_1(\nu-m)+(F_{m-1}-F_1)(2\nu-m)+\rho = F_1(3\nu-2m)+(F_{m-1}-2F_1)(2\nu-m)+\rho = F_1(m-3)+(F_{m-1}-2F_1)(m-2)+\rho, \tag{2.3.3}\]

as m − ν = 1. Since m − ν = 1 > 0 = 1·0/2, by Lemma 2.3.3 (with α = 1) it follows that w_{m−1} ≥ w_1 + w_1. Consequently, by Lemma 2.2.2 (i), F_{m−1} ≥ 2F_1 − 1.

• If F_{m−1} − 2F_1 = −1, then by Lemma 2.2.2 (ii), F_1 ≥ 2. From (2.3.3),
\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ 2(m-3)-(m-2)+\rho \ \ge\ 0 \quad (\text{as } m \ge 5).\]

• If F_{m−1} − 2F_1 ≥ 0, then from (2.3.3),
\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ (m-3)+\rho \ \ge\ 0 \quad (\text{as } m \ge 5).\]
Using Proposition 2.1.3, we get that S satisfies Wilf's conjecture if m -ν = 1.
Case 3. If m − ν ∈ {2, 3}, then m − ν > 1 = 2·1/2. If (2 − 1/q)ν ≥ m, then by Corollary 2.3.4 (with α = 2) S satisfies Wilf's conjecture. Now, suppose that (2 − 1/q)ν < m. Since Wilf's conjecture holds for q ≤ 3 (see [14], [Eliahou]), we may assume that q ≥ 4.

• If m − ν = 2, then (2 − 1/q)ν < ν + 2, hence ν < 2q/(q−1) ≤ 8/3. By [9], S satisfies Wilf's conjecture.
• If m − ν = 3, then (2 − 1/q)ν < ν + 3, hence ν < 3q/(q−1) ≤ 4. By [9], S satisfies Wilf's conjecture.

Thus, Wilf's conjecture holds if m − ν ≤ 3. Thus, the proof is complete.
Corollary 2.3.7. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. If 2ν ≥ m, then S satisfies Wilf's conjecture.

Proof. If m − ν > 3 = 3·2/2 and 2ν ≥ m, then (2 + (3−3)/q)ν = 2ν ≥ m and, by Corollary 2.3.4 (with α = 3), Wilf's conjecture holds. If m − ν ≤ 3, by Lemma 2.3.6, S satisfies Wilf's conjecture. Thus, the proof is complete.
In the following Corollary we will deduce Wilf's conjecture for numerical semigroups with m − ν = 4. This will enable us later to prove the conjecture for those with (2 + 1/q)ν ≥ m.

Corollary 2.3.8. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. If m − ν = 4, then S satisfies Wilf's conjecture.
Proof. Since Wilf's conjecture holds for ν ≤ 3 (see [9]), we may assume that ν ≥ 4. Therefore ν ≥ m − ν, consequently 2ν ≥ m, and S satisfies Wilf's conjecture by Corollary 2.3.7. Thus, the proof is complete.
The following technical Lemma will be used through the Chapter.

Lemma 2.3.9. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. Let w_0 = 0 < w_1 < . . . < w_{m−1} be the elements of Ap(S, m). If m − ν ≥ $\binom{\alpha}{2}$ − 1 = α(α−1)/2 − 1 for some 3 ≤ α ≤ m − 2, then w_{m−1} ≥ w_1 + w_α or w_{m−1} ≥ w_{α−2} + w_{α−1}.

Proof. Suppose by the way of contradiction that w_{m−1} < w_1 + w_α and w_{m−1} < w_{α−2} + w_{α−1}. Let w ∈ Ap(S, m)* \ min_{≤_S}(Ap(S, m)). Then w ≤ w_{m−1} and w = w_i + w_j for some w_i, w_j ∈ Ap(S, m)* (Corollary 1.0.29). In this case, the only possible values of w are included in {w_i + w_j ; 1 ≤ i ≤ j ≤ α − 1} \ {w_{α−2} + w_{α−1}, w_{α−1} + w_{α−1}}. Consequently, m − ν = |Ap(S, m)* \ min_{≤_S}(Ap(S, m))| ≤ α(α−1)/2 − 2 < α(α−1)/2 − 1, which contradicts the hypothesis. Hence w_{m−1} ≥ w_1 + w_α or w_{m−1} ≥ w_{α−2} + w_{α−1}.
In the next theorem, we will show that Wilf's conjecture holds for numerical semigroups with m -ν = 5.
Theorem 2.3.10. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. If m -ν = 5, then S satisfies Wilf's conjecture.
Proof. Let m − ν = 5. Since Wilf's conjecture holds for 2ν ≥ m (Corollary 2.3.7), we may assume that 2ν < m. This implies that ν < m/2 = (ν+5)/2, i.e., ν < 5. Since the case ν ≤ 3 is known [9], we shall assume that ν = 4. This also implies that m = ν + 5 = 9.
Since m − ν = 5 = 4·3/2 − 1, by Lemma 2.3.9 (with α = 4), it follows that w_8 ≥ w_2 + w_3 or w_8 ≥ w_1 + w_4.

Case 1. If w_8 ≥ w_2 + w_3. By taking α = 3 in (2.2.2) (m = 9, ν = 4), we get

\[\sum_{j=4}^{8}(F_j-F_{j-1})(4j-9) \ \ge\ 7\,(F_8-F_3). \tag{2.3.4}\]

Hence, since F_0 = 0, telescoping the first three terms gives

\[\sum_{j=1}^{8}(F_j-F_{j-1})(4j-9)+\rho = -4F_1-4F_2+3F_3+\sum_{j=4}^{8}(F_j-F_{j-1})(4j-9)+\rho \ \ge\ -4F_1-4F_2+3F_3+7(F_8-F_3)+\rho,\]

using (2.3.4).
On the other hand, as F_1 ≤ F_2 and F_1 ≤ F_3, we have 4F_1 ≤ 3F_2 + F_3. Consequently,

\[\sum_{j=1}^{8}(F_j-F_{j-1})(4j-9)+\rho \ \ge\ -7F_2+2F_3+7(F_8-F_3)+\rho = 2F_3+7(F_8-F_2-F_3)+\rho. \tag{2.3.5}\]

Since w_8 ≥ w_2 + w_3, by Lemma 2.2.2 it follows that F_8 ≥ F_2 + F_3 − 1.

• If F_8 − F_2 − F_3 ≥ 0, then (2.3.5) gives $\sum_{j=1}^{8}(F_j-F_{j-1})(4j-9)+\rho \ge 0$.

• If F_8 − F_2 − F_3 = −1, then by Lemma 2.2.2 we have ρ ≥ 1. Since for q ≤ 3 Wilf's conjecture is solved (see [Eliahou], [14]), we may assume that q ≥ 4. Since F_2 ≤ F_3 and F_2 + F_3 = F_8 + 1 = q + 1, it follows that 2F_3 ≥ F_2 + F_3 = q + 1 ≥ 5, hence F_3 ≥ 3. Now (2.3.5) gives

\[\sum_{j=1}^{8}(F_j-F_{j-1})(4j-9)+\rho \ \ge\ 3\cdot 2 - 7 + 1 \ \ge\ 0.\]
Using Proposition 2.1.3, we get that S satisfies Wilf's conjecture in this case.
Case 2. If w_8 ≥ w_1 + w_4. We may assume that w_8 < w_2 + w_3, since otherwise we are back to Case 1. Hence the possible values of w ∈ Ap(S, 9)* \ min_{≤_S}(Ap(S, 9)) are included in {w_1 + w_j ; 1 ≤ j ≤ 7} ∪ {w_2 + w_2}.

• If Ap(S, 9)* \ min_{≤_S}(Ap(S, 9)) ⊆ {w_1 + w_j ; 1 ≤ j ≤ 7}. We have 5 = m − ν = |Ap(S, 9)* \ min_{≤_S}(Ap(S, 9))|, so five elements of Ap(S, 9)* are of the form w_1 + w_j. By Corollary 1.0.29, an element x of the Apéry set belongs to max_{≤_S}(Ap(S, 9)) if and only if w_i ≠ x + w_j for all w_i, w_j ∈ Ap(S, 9)*; hence at least five elements of Ap(S, 9)* are not maximal (five elements from {w_1, . . . , w_7}). Therefore

t(S) = |max_{≤_S}(Ap(S, 9)) − 9| ≤ 3 = ν − 1.

Consequently, S satisfies Wilf's conjecture (Proposition 2.3 [9]).

• If w_2 + w_2 ∈ Ap(S, 9)* \ min_{≤_S}(Ap(S, 9)), then w_2 + w_2 ∈ Ap(S, 9), namely w_8 ≥ w_2 + w_2. By Lemma 2.2.2, we have F_8 ≥ 2F_2 − 1; in particular,

\[F_2 \ \le\ \frac{q+1}{2}. \tag{2.3.6}\]

By taking α = 4 in (2.2.2) (m = 9, ν = 4), we get

\[\sum_{j=5}^{8}(F_j-F_{j-1})(4j-9) \ \ge\ 11\,(F_8-F_4). \tag{2.3.7}\]

Since F_0 = 0, telescoping the first four terms gives

\[\sum_{j=1}^{8}(F_j-F_{j-1})(4j-9)+\rho = -4F_1-4F_2-4F_3+7F_4+\sum_{j=5}^{8}(F_j-F_{j-1})(4j-9)+\rho \ \ge\ -4F_1-4F_2-4F_3+7F_4+11(F_8-F_4)+\rho,\]

by (2.3.7). Using (2.3.6) and F_3 ≤ F_4, and then writing −4F_1 = 7F_1 − 11F_1 and 7F_1 + 3F_4 = 3(F_1 + F_4) + 4F_1, we get

\[\sum_{j=1}^{8}(F_j-F_{j-1})(4j-9)+\rho \ \ge\ -4F_1-2(q+1)+3F_4+11(F_8-F_4)+\rho = 3(F_1+F_4)+4F_1-2(q+1)+11(F_8-F_1-F_4)+\rho. \tag{2.3.8}\]

We have w_8 ≥ w_1 + w_4, so by Lemma 2.2.2 (i), F_8 ≥ F_1 + F_4 − 1. Let x = F_8 − F_1 − F_4.

• If x ≥ 0, then F_1 + F_4 = F_8 − x = q − x (Remark 2.2.1 (vi)) and F_1 ≥ 1, so (2.3.8) gives

\[\sum_{j=1}^{8}(F_j-F_{j-1})(4j-9)+\rho \ \ge\ 3(q-x)+4-2(q+1)+11x+\rho = q+8x+2+\rho \ \ge\ 0.\]

• If x = −1, then F_1 + F_4 = q + 1 and, by Lemma 2.2.2, F_1 ≥ 2 and ρ ≥ 1. Since q ≥ 1 (S ≠ N, i.e., f ≥ 1), (2.3.8) gives

\[\sum_{j=1}^{8}(F_j-F_{j-1})(4j-9)+\rho \ \ge\ 3(q+1)+8-2(q+1)-11+1 = q-1 \ \ge\ 0.\]
Using Proposition 2.1.3, we get that S satisfies Wilf's conjecture in this case.
Thus, Wilf's conjecture holds if m -ν = 5. Thus, the proof is complete.
In the next corollary, we will deduce the conjecture for m = 9.
Corollary 2.3.11. If S is a numerical semigroup with multiplicity m = 9, then S satisfies Wilf's conjecture.
Proof. By Lemma 2.3.6, Corollary 2.3.8 and Theorem 2.3.10, we may assume that m -ν > 5, hence, ν < m -5 = 4. By [START_REF] Dobbs | On a question of wilf concerning numerical semigroups[END_REF], S satisfies Wilf's conjecture. Thus, the proof is complete.
The following Lemma will enable us later to show that Wilf's conjecture holds for numerical semigroups with (2 + 1/q)ν ≥ m.

Lemma 2.3.12. Let S be a numerical semigroup with multiplicity m, embedding dimension ν and conductor f + 1 = qm − ρ for some q ∈ N, 0 ≤ ρ ≤ m − 1. If m − ν = 6 and (2 + 1/q)ν ≥ m, then S satisfies Wilf's conjecture.

Proof. Since m − ν = 6 ≥ 4·3/2 − 1, by Lemma 2.3.9 (with α = 4) it follows that w_{m−1} ≥ w_1 + w_4 or w_{m−1} ≥ w_2 + w_3.

Case 1. If w_{m−1} ≥ w_1 + w_4. By the hypothesis (2 + 1/q)ν ≥ m and Theorem 2.3.1 (with α = 4), Wilf's conjecture holds in this case.

Case 2. If w_{m−1} ≥ w_2 + w_3. We may assume that w_{m−1} < w_1 + w_4, since otherwise we are back to Case 1. Hence Ap(S, m)* \ min_{≤_S}(Ap(S, m)) = {w_1 + w_1, w_1 + w_2, w_1 + w_3, w_2 + w_2, w_2 + w_3, w_3 + w_3} (as 6 = m − ν = |Ap(S, m)* \ min_{≤_S}(Ap(S, m))|). Consequently, since F_0 = 0 and using (2.2.2) with α = 3,

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho = -F_1\nu-F_2\nu+F_3(3\nu-m)+\sum_{j=4}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ -F_1\nu-F_2\nu+F_3(3\nu-m)+(F_{m-1}-F_3)(4\nu-m)+\rho.\]
On the other hand, as F_1 ≤ F_2 and F_1 ≤ F_3, we have 2F_1 ≤ F_2 + F_3. Consequently,

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ F_2\Big(-\frac{3\nu}{2}\Big)+F_3\Big(\frac{5\nu}{2}-m\Big)+(F_{m-1}-F_3)(4\nu-m)+\rho.\]

Writing −3ν/2 = (5ν/2 − m) − (4ν − m) and using m − ν = 6, this equals

\[(F_2+F_3)\Big(\frac{5\nu}{2}-m\Big)+(F_{m-1}-F_2-F_3)(4\nu-m)+\rho = (F_2+F_3)\Big(\frac{3\nu}{2}-6\Big)+(F_{m-1}-F_2-F_3)(3\nu-6)+\rho. \tag{2.3.9}\]

We have w_{m−1} ≥ w_2 + w_3, so by Lemma 2.2.2, F_{m−1} ≥ F_2 + F_3 − 1.

• If F_{m−1} − F_2 − F_3 ≥ 0, using ν ≥ 4 in (2.3.9) (ν ≤ 3 is solved [9]), we get $\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ge 0$.

• If F_{m−1} − F_2 − F_3 = −1, then

\[F_2 + F_3 = F_{m-1} + 1 = q + 1. \tag{2.3.10}\]

We have w_3 + w_3 ∈ Ap(S, m)* \ min_{≤_S}(Ap(S, m)), namely w_3 + w_3 ∈ Ap(S, m), so w_{m−1} ≥ w_3 + w_3. By Lemma 2.2.2, F_{m−1} ≥ 2F_3 − 1; in particular,

\[F_3 \ \le\ \frac{q+1}{2}. \tag{2.3.11}\]

From (2.3.10) and F_2 ≤ F_3 ≤ (q+1)/2, we get F_2 = F_3 = (q+1)/2; in particular q is odd, and since q ≤ 3 is solved we may assume q ≥ 5. Now, using (2.3.10), q ≥ 5 and the hypothesis (2 + 1/q)ν ≥ m = ν + 6 (which gives 6q ≤ qν + ν) in (2.3.9), we get

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ (q+1)\Big(\frac{3\nu}{2}-6\Big)-(3\nu-6)+\rho = \nu\Big(\frac{3q}{2}-\frac{3}{2}\Big)-6q+\rho \ \ge\ \nu\Big(\frac{3q}{2}-\frac{3}{2}\Big)-q\nu-\nu+\rho = \nu\Big(\frac{q}{2}-\frac{5}{2}\Big)+\rho \ \ge\ 0 \quad (\text{as } q \ge 5).\]
Using Proposition 2.1.3, we get that S satisfies Wilf's conjecture in this case.
Therefore, Wilf's conjecture holds if m − ν = 6 and (2 + 1/q)ν ≥ m. Thus, the proof is complete.

Next, we will generalize a result of Sammartano ([19]) and show that Wilf's conjecture holds for numerical semigroups satisfying (2 + 1/q)ν ≥ m, using Lemma 2.3.12.

Theorem 2.3.13. Let S be a numerical semigroup with multiplicity m, embedding dimension ν and conductor f + 1 = qm − ρ for some q ∈ N, 0 ≤ ρ ≤ m − 1. If (2 + 1/q)ν ≥ m, then S satisfies Wilf's conjecture.
Proof.
• If m -ν ≤ 3, then by Lemma 2.3.6 Wilf's conjecture holds.
• If m -ν = 4, then by Corollary 2.3.8 Wilf's conjecture holds.
• If m -ν = 5, then by Theorem 2.3.10 Wilf's conjecture holds.
• If m − ν = 6 and (2 + 1/q)ν ≥ m, then by Lemma 2.3.12 Wilf's conjecture holds.
• If m − ν > 6 and (2 + 1/q)ν ≥ m, then m − ν > 6 = 4·3/2 and (2 + (4−3)/q)ν = (2 + 1/q)ν ≥ m, so by Corollary 2.3.4 (with α = 4) Wilf's conjecture holds.

Thus, the proof is complete.

Example 2.3.14. Let S be a numerical semigroup with m = 13, ν = 6 and q = 4. Note that 2ν < m. We have (2 + 1/q)ν = (2 + 1/4)·6 ≥ 13 = m. Thus, the conditions of Theorem 2.3.13 are valid.
Corollary 2.3.15. Let S be a numerical semigroup with multiplicity m and conductor f + 1 = qm − ρ for some q ∈ N, 0 ≤ ρ ≤ m − 1. If m ≤ 8 + 4/q, then S satisfies Wilf's conjecture.

Proof. If ν < 4, then S satisfies Wilf's conjecture (see [9]). Hence, we can suppose that ν ≥ 4. Thus, (2 + 1/q)ν ≥ (2 + 1/q)·4 = 8 + 4/q ≥ m.
By using Theorem 2.3.13 S satisfies Wilf's conjecture. Thus, the proof is complete.
Numerical semigroups with w_{m−1} ≥ w_{α−1} + w_α and ((α+3)/3)ν ≥ m

In this Section, we will show that if S is a numerical semigroup such that w_{m−1} ≥ w_{α−1} + w_α and ((α+3)/3)ν ≥ m, then S satisfies Wilf's conjecture.

Theorem 2.4.1. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. Let w_0 = 0 < w_1 < w_2 < . . . < w_{m−1} be the elements of Ap(S, m). Suppose that w_{m−1} ≥ w_{α−1} + w_α for some 1 < α < m − 1. If ((α+3)/3)ν ≥ m, then S satisfies Wilf's conjecture.
Proof. We may assume that ρ ≥ (3−q)αm/(2α+6). Indeed, if 0 ≤ ρ < (3−q)αm/(2α+6), then q < 3, and Wilf's conjecture holds in this case (see [14]). We are going to show that S satisfies Wilf's conjecture by means of Proposition 2.1.3. Telescoping as in Section 2.2, using F_0 = 0 and F_j ≤ (F_{α−1} + F_α)/2 for 1 ≤ j ≤ α − 2 (Remark 2.2.1 (v)),

\[\sum_{j=1}^{\alpha}(F_j-F_{j-1})(j\nu-m) = F_\alpha(\alpha\nu-m) - F_{\alpha-1}\nu - \sum_{j=1}^{\alpha-2}F_j\,\nu \ \ge\ F_\alpha(\alpha\nu-m) - F_{\alpha-1}\nu - (\alpha-2)\,\frac{F_{\alpha-1}+F_\alpha}{2}\,\nu,\]

hence

\[\sum_{j=1}^{\alpha}(F_j-F_{j-1})(j\nu-m) \ \ge\ F_\alpha\Big(\frac{\alpha+2}{2}\nu-m\Big) - F_{\alpha-1}\,\frac{\alpha\nu}{2}. \tag{2.4.1}\]

By (2.2.2), $\sum_{j=\alpha+1}^{m-1}(F_j-F_{j-1})(j\nu-m) \ge (F_{m-1}-F_\alpha)((\alpha+1)\nu-m)$. Since w_{m−1} ≥ w_{α−1} + w_α, Lemma 2.2.2 gives F_{m−1} ≥ F_{α−1} + F_α − 1. Let x = F_{m−1} − F_{α−1} − F_α. Then x ≥ −1 and F_{α−1} + F_α = q − x (Remark 2.2.1 (vi)). Writing −F_{α−1}·(αν/2) = F_{α−1}(((α+2)/2)ν − m) − F_{α−1}((α+1)ν − m) and combining (2.4.1) with (2.2.2),

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ (F_{\alpha-1}+F_\alpha)\Big(\frac{\alpha+2}{2}\nu-m\Big) + x\big((\alpha+1)\nu-m\big)+\rho = \nu\Big(q+\frac{q\alpha}{2}+\frac{x\alpha}{2}\Big)-qm+\rho.\]

Now, using x ≥ −1, ρ ≥ (3−q)αm/(2α+6) and ((α+3)/3)ν ≥ m,

\[\sum_{j=1}^{m-1}(F_j-F_{j-1})(j\nu-m)+\rho \ \ge\ \nu\Big(q+\frac{q\alpha}{2}-\frac{\alpha}{2}\Big)-qm+\frac{(3-q)\alpha m}{2\alpha+6} = \Big(q+\frac{q\alpha}{2}-\frac{\alpha}{2}\Big)\,\frac{3}{\alpha+3}\,\Big(\frac{\alpha+3}{3}\nu-m\Big) \ \ge\ 0,\]

since q + qα/2 − α/2 > 0. Using Proposition 2.1.3, S satisfies Wilf's conjecture. Thus, the proof is complete.
m -ν ≥ α(α-1) 2 -1 for some 7 ≤ α ≤ m -2. If (2 + α-3 q )ν ≥ m, then S satisfies Wilf's conjecture. Proof. Since m -ν ≥ α(α-1)
2 -1, then by Lemma 2.3.9, we have
w m-1 ≥ w 1 + w α or w m-1 ≥ w α-2 + w α-1 . Suppose that w m-1 ≥ w 1 + w α . Since (2 + α-3
q )ν ≥ m, by applying Theorem 2.3.1, S satisfies Wilf's conjecture. Now, suppose that w m-1 ≥ w α-2 + w α-1 . We may assume that q ≥ 4 (q ≤ 3 is solved [14], [START_REF] Eliahou | Wilf's conjecture and macaulay's theorem[END_REF]). Then, for α ≥ 7, we have ( α-1+3
3
)ν ≥ (2 + α-3 q )ν. Consequently, ( α-1+3
3
)ν ≥ m. Next, by applying Theorem 2.4.1, S satisfies Wilf's conjecture. Thus, the proof is complete. As a direct consequence of Theorem 2.4.1, we get the following Corollary.
Corollary 2.4.4. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. Let w 0 = 0 < w 1 < w 2 < . . . < w m-1 be the elements of Ap(S, m). Suppose that w m-1 ≥ w α-1 + w α for some
1 < α < m -1. If m ≤ 4(α+3)
Proof. If ν < 4, then S satisfies Wilf's conjecture (see [START_REF] Dobbs | On a question of wilf concerning numerical semigroups[END_REF]). Hence, we can suppose that ν ≥ 4. Thus, ( α+33 )(ν) ≥ 4(α+3) 3 ≥ m. By applying Theorem 2.4.1 S satisfies Wilf's conjecture. Thus, the proof is complete. Proof. We are going to show that S satisfies Wilf's conjecture by means of Proposition 2.1.3. We have
Numerical semigroups with 2 +
x j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) = x j=1 w j + ρ m (jν -m) - x j=1 w j-1 + ρ m (jν -m) = x j=1 w j + ρ m (jν -m) - x-1 j=0 w j + ρ m (j + 1)ν -m = x-1 j=1 w j + ρ m (jν -m) + w x + ρ m (xν -m) - w 0 + ρ m (ν -m)- x-1 j=1 w j + ρ m (j + 1)ν -m = w x + ρ m (xν -m) - w 0 + ρ m (ν -m) - x-1 j=1 w j + ρ m ν = w x + ρ m (xν -m) - x-1 j=1 w j + ρ m ν (as w 0 +ρ m = 0) ≥ w x + ρ m (xν -m) - x-1 j=1 w x + ρ m ν (by Remark 2.2.1 (v)) = w x + ρ m (xν -m) - w x + ρ m (x -1)ν = w x + ρ m (ν -m).
Therefore,
x j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) ≥ w x + ρ m (ν -m).
(2.5.1)
In addition,
y j=x+1 ( w j + ρ m - w j-1 + ρ m )(jν -m) = y j=x+1 w j + ρ m (jν -m) - y j=x+1 w j-1 + ρ m (jν -m) = y j=x+1 w j + ρ m (jν -m) - y-1 j=x w j + ρ m (j + 1)ν -m = y-1 j=x+1 w j + ρ m (jν -m) + w y + ρ m (yν -m)- w x + ρ m (x + 1)ν -m - y-1 j=x+1 w j + ρ m (j + 1)ν -m = w y + ρ m (yν -m) - w x + ρ m (x + 1)ν -m - y-1 j=x+1 w j + ρ m ν ≥ w y + ρ m (yν -m) - w x + ρ m (x + 1)ν -m - y-1 j=x+1 w y + ρ m ν (using Remark 2.2.1 (v)) = w y + ρ m (yν -m) - w x + ρ m (x + 1)ν -m - w y + ρ m (y -x -1)ν = w y + ρ m (x + 1)ν -m - w x + ρ m (x + 1)ν -m .
Hence,
y j=x+1 ( w j + ρ m - w j-1 + ρ m )(jν -m) ≥ w y + ρ m (x + 1)ν -m - w x + ρ m (x + 1)ν -m . (2.5.2)
Moreover, we have
m-1 j=y+1 ( w j + ρ m - w j-1 + ρ m )(jν -m) ≥ m-1 j=y+1 ( w j + ρ m - w j-1 + ρ m ) (y + 1)ν -m (using Remark 2.2.1 (v)) = (y + 1)ν -m m-1 j=y+1 w j + ρ m - m-1 j=y+1 w j-1 + ρ m = (y + 1)ν -m m-1 j=y+1 w j + ρ m - m-2 j=y w j + ρ m = w m-1 + ρ m - w y + ρ m (y + 1)ν -m . Therefore, m-1 j=y+1 ( w j + ρ m - w j-1 + ρ m )(jν -m) ≥ w m-1 + ρ m - w y + ρ m (y + 1)ν -m . (2.5.3) Consequently, m-1 j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + ρ = x j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + y j=x+1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + m-1 j=y+1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + ρ ≥ w x + ρ m (ν -m)+ w y + ρ m (x + 1)ν -m - w x + ρ m (x + 1)ν -m +( w m-1 + ρ m - w y + ρ m ) (y + 1)ν -m + ρ (using (2.5.1), (2.5.2) and (2.5.3)) = w x + ρ m (-xν)+ w y + ρ m (x + 1)ν -m + w m-1 + ρ m - w y + ρ m (y + 1)ν -m + ρ = w x + ρ m -xν + (y + 1)ν -m -(y + 1)ν -m + w y + ρ m (x + 1)ν -m + w m-1 + ρ m - w y + ρ m (y + 1)ν -m + ρ = w x + ρ m (y -x + 1)ν -m + w y + ρ m (x + 1)ν -m + w m-1 + ρ m - w y + ρ m - w x + ρ m (y + 1)ν -m +ρ. Consequently, m-1 j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + ρ ≥ w x + ρ m (y -x + 1)ν -m + w y + ρ m (x + 1)ν -m + w m-1 + ρ m - w y + ρ m - w x + ρ m (y + 1)ν -m +ρ.
(2.5.4)
Since w m-1 -m ≥ w x + w y , it follows w m-1 + ρ m > w x + w y + ρ m .
(2.5.5)
Consider the following cases :
Case 1. If wx+ρ m = wx m + 1 and wy+ρ m = wy m + 1, then (2.5.5) gives w m-1 + ρ m ≥ w x + ρ m + w y + ρ m .
Then, from (2.5.4) and the hypothesis, we have
m-1 j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + ρ ≥ w x + ρ m (y -x + 1)ν -m + w y + ρ m (x + 1)ν -m + ρ = ( w x m + 1) (y -x + 1)ν -m) + ( w y m + 1) (x + 1)ν -m + ρ = w x m + w y m + 2 (2 + wx m (y -x -1) + (y -2) + wy m (x -1) wx m + wy m + 2 )ν -m + ρ ≥ 0.
By Proposition 2.1.3, we get that Wilf's conjecture holds in this case.
m-1 + ρ m > w x + ρ m + w y + ρ m .
Then, from (2.5.4) and the hypothesis, we have
m-1 j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + ρ ≥ w x + ρ m (y -x + 1)ν -m + w y + ρ m (x + 1)ν -m + (y + 1)ν -m + ρ = w x m (y -x + 1)ν -m + w y m + 1 (x + 1)ν -m + (y + 1)ν -m + ρ = w x m + w y m + 2 (2 + wx m (y -x -1) + (y -2) + wy m (x -1) + x wx m + wy m + 2 )ν -m + ρ > w x m + w y m + 2 (2 + wx m (y -x -1) + (y -2) + wy m (x -1) wx m + wy m + 2 )ν -m + ρ ≥ 0.
By Proposition 2.1.3, we get that Wilf's conjecture holds in this case.
w m-1 + ρ m > w x + ρ m + w y + ρ m .
Then, from (2.5.4) and the hypothesis, we have
m-1 j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + ρ ≥ w x + ρ m (y -x + 1)ν -m + w y + ρ m (x + 1)ν -m + (y + 1)ν -m + ρ = w x m + 1 (y -x + 1)ν -m + w y m (x + 1)ν -m + (y + 1)ν -m + ρ = P xy (2 + wx m (y -x -1) + (y -2) + wy m (x -1) + (y -x) wx m + wy m + 2 )ν -m + ρ ≥ P xy (2 + wx m (y -x -1) + (y -2) + wy m (x -1) wx m + wy m + 2 )ν -m + ρ ≥ 0,
where
P xy = w x m + w y m + 2 .
By Proposition 2.1.3, we get that Wilf's conjecture holds in this case.
m-1 + ρ m > w x + ρ m + w y + ρ m .
Then, from (2.5.4) and the hypothesis, we have
m-1 j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + ρ ≥ w x + ρ m (y -x + 1)ν -m + w y + ρ m (x + 1)ν -m + (y + 1)ν -m + ρ = w x m (y -x + 1)ν -m + w y m (x + 1)ν -m + (y + 1)ν -m + ρ.
Hence,
m-1 j=1 ( w j + ρ m - w j-1 + ρ m )(jν -m) + ρ ≥ w x m + w y m + 1 (2 + wx m (y -x -1) + (y -1) + wy m (x -1) wx m + wy m + 1 )ν -m + ρ = w x m + w y m + 1 (2 + wx m (y -x -1) + (y -2) + wy m (x -1) wx m + wy m + 2 )ν -m + ρ ≥ 0.
By Proposition 2.1.3, we get that Wilf's conjecture holds in this case. Thus, Wilf's conjecture holds in all cases. Thus, the proof is complete.
As a direct consequence of Theorem 2.5.1, we get the following Corollaries.
Corollary 2.5.2. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. Let By Theorem 2.5.1, S satisfies Wilf's conjecture. Thus, the proof is complete.
w 0 = 0 < w 1 < . . . < w m-1 be the elements of Ap(S, m). Suppose that w m-1 -m ≥ w x + w y , x ≥ α + 1, y -x ≥ α + 1 for some α ∈ N. If (2 + α)ν ≥ m,
Corollary 2.5.3. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. Let w 0 = 0 < w 1 < . . . < w m-1 be the elements of Ap(S, m). Suppose that w m-1 -m ≥ w x + w y , for some
x ≥ α + 1, y -x ≥ α + 1 and α ∈ N. If m ≤ 4(2 + α), then S satisfies Wilf's conjecture.
Proof. We may assume that ν ≥ 4 (ν ≤ 3 is solved [START_REF] Dobbs | On a question of wilf concerning numerical semigroups[END_REF]), then (2 + α)ν ≥ (2 + α)4 ≥ m. By applying Corollary 2.5.2, S satisfies Wilf's conjecture. Thus, the proof is complete. Thus, the conditions of Theorem 2.5.1 are valid.
Numerical semigroups with
m -ν > (n-2)(n-3) 2
In this Section, we show that Wilf's conjecture holds for numerical semigroups with m -ν > (n-2)(n-3) 2 and also the conjecture holds for those with n ≤ 5.
Lemma 2.6.1. Let S be a numerical semigroup with multiplicity m, embedding dimension ν, Frobenius number f and n = |{s ∈ S; s < f }|. If m -ν > α(α-1) 2 for some 0 ≤ α ≤ m -2, then w α < f . In particular, n ≥ α + 2.
Proof. We claim that w α < f . Suppose by the way of contradiction that w α > f (w α = f ), and let w ∈ Ap(S, m) * \min ≤ S (Ap(S, m)). Then, there exists w i , w j ∈ Ap(S, m) * such that w = w i + w j (Corollary 1.0.29). Suppose that at least one of the two indicies, let's say i, is greater than or equal to α. Then, w = w i +w j ≥ w α +m ≥ f +1+m. Hence, w-m ∈ S which contradicts the fact that w ∈ Ap(S, m). Consequently, the two indicies are necessarly less than or equal to α -
1. Since |Ap(S, m) * \min ≤ S (Ap(S, m))| = m -ν (Corollary 1.0.31), we deduce that m -ν ≤ α(α-1)
2 which is impossible. Consequently, w α < f . Therefore, we get that {0, m, w 1 , w 2 , . . . , w α } ⊆ {s ∈ S; s < f }. Hence, n ≥ α + 2.
Theorem 2.6.2. Let S be a numerical semigroup with multiplicity m, embedding dimension ν and n
= |{s ∈ S; s < f }|. If m -ν > (n-2)(n-3) 2 with 2 ≤ n ≤ m, then S satisfies Wilf's conjecture. Proof. By Lemma 2.6.1, the condition m -ν > (n-2)(n-3) 2 , gives that {0, m, w 1 , w 2 , . . . , w n-2 } ⊆ {s ∈ S; s < f }.
Therefore, {0, m, w 1 , w 2 , . . . , w n-2 } = {s ∈ S; s < f }. Hence, 2m > f . By [14], it follows that S satisfies Wilf's conjecture.
In [START_REF] Dobbs | On a question of wilf concerning numerical semigroups[END_REF], D. Dobbs and G. Matthews proved Wilf's conjecture for n ≤ 4 in a long technical proof. In [START_REF] Eliahou | Wilf's conjecture and macaulay's theorem[END_REF], S. Eliahou showed Wilf's conjecture has a positive answer for n ≤ 6. We are going to introduce a simpler proof for n ≤ 5 using the previous theorem (note that If n ≤ 5, then we can assume 2 ≤ n ≤ 5).
Corollary 2.6.3. Let S be a numerical semigroup with multiplicity m, embedding dimension ν and n = |{s ∈ S; s < f }|. If n ≤ 5, then S satisfies Wilf's conjecture.
Proof. By Theorem 2.3.6 and Theorem 2.3.10, we will assume that m -ν > 5 which is strictly greater than α(α-1) 2
for α ∈ {0, 1, 2, 3}. Hence, by applying Theorem 2.6.2 for n = α + 2 ∈ {2, 3, 4, 5} Wilf's conjecture holds.
3
Numerical semigroup of the form < m, m + 1, . . . , m + l, k(m + l) + r > Throughout this chapter we suppose that S is a numerical semigroup minimally generated by m, m + 1, . . . , m + l, k(m + l) + r with k, l, m, r ∈ N * and r ≤ (k + 1)l + 1. The aim of this chapter is to determine the Frobenius number f (S) and the genus g(S). Also, it aims to characterize those numerical semigroups which are symmetric (resp. pseudo-symmetric) and to determine the set of pseudo-Frobenius numbers P F (S). Definition 3.0.1. Let k, l, m, r ∈ N * . For every 1 ≤ i ≤ m -1, write, by the euclidean division, i = α i (kl + r) + β i l + t i with 0 ≤ β i l + t i < kl + r and 0 ≤ t i < l. In particular
α i = i kl + r , β i = i -α i (kl + r) l and t i = i -α i (kl + r) -β i l.
For the convenience of the statement we will use the following notation :
i = α i (kl + r) + β i l + i t i where i = 1 if t i = 0, 0 if t i = 0.
Clearly i t i = t i but we shall use i later in the notations.
Proposition 3.0.2 and 3.0.3 give some properties that will be used in this Chapter using the notations used in Definition 3.0.1.
Proposition 3.0.2. Let the notations be as in Definition 3.0.1 and suppose that r ≤ (k + 1)l + 1, we have
β i + i ≤ 2k + 1.
Proof. By using Definition 3.0.1 and r ≤ (k + 1)l + 1, it follows that
β i l + i t i ≤ kl + r -1 ≤ kl + (k + 1)l = (2k + 1)l.
Case 1. If i = 0. We have
β i l = β i l + i t i ≤ (2k + 1)l. Consequently, β i ≤ 2k + 1. Hence, β i + i ≤ 2k + 1.
Case 2. If i = 1. We have β i l + i t i ≤ (2k + 1)l and t i ≥ 1 (as i = 1). If β i ≥ 2k + 1, then β i l + i t i ≥ (2k + 1)l + 1, which gives a contradiction. Consequently, β i ≤ 2k. Then,
β i + i ≤ 2k + 1.
Proposition 3.0.3. Let the notations be as in Definition 3.0.1 and suppose that r ≤ (k + 1)l + 1, we may assume that r ≤ min((k + 1)l + 1, m + l -1).
Proof. We claim that we may assume that r < m + l. Indeed, if r ≥ m + l, then there exist q , r ∈ N such that r = q (m + l) + r with r < m + l. Let k = k + q , then k(m + l) + r = (k + q )(m + l) + r = k (m + l) + r with r < m + l and r = r -q (m + l) ≤ (k + 1)l + 1 ≤ (k + q + 1)l + 1 = (k + 1)l + 1. Hence, S is a numerical semigroup generated by m, m + 1, . . . , m + l, k (m + l) + r with r ≤ (k + 1)l + 1.
Consequently, we may assume that r ≤ m + l -1. By hypothesis, we have r ≤ (k + 1)l + 1. Hence, we get our assumption. Thus, the proof is complete.
Apéry set of S
Let Ap(S, m) = {0, w(1), . . . , w(m -1)} be the Apéry set of S with respect to m, where w(i) is the smallest element of S which is congruent to i mod m. The following theorem gives a formula for the Apéry set of S. For all 1 ≤ i ≤ m -1 where i is written as in Definition 3.0.1, we have :
w(i) = m(kα i + β i + i ) + i.
Proof. Let λ i = m(kα i + β i + i ) + i where i is written as in Definition 3.0.1. We are going to show that λ i = w(i) for all 1 ≤ i ≤ m -1. To this end, we will show that λ i ∈ S, λ i is congruent to i mod m and λ i -m / ∈ S. • We have λ i ∈ S and λ i is congruent to i mod m. It follows from
λ i = m(kα i + β i + i ) + i = m(kα i + β i + i ) + α i (kl + r) + β i l + i t i = α i (k(m + l) + r) + β i (m + l) + i (m + t i ). (3.1.1)
• We will prove that λ i -m / ∈ S by the way of contradiction. From (3.1.1), we have
λ i -m = α i (k(m + l) + r) + β i (m + l) + ( i -1)m + i t i .
Suppose by the way of contradiction that λ i -m ∈ S. By Definition 1.0.7, there exist x, x l , . . . , x 0 ∈ N such that
λ i -m = x(k(m + l) + r) + x l (m + l) + . . . + x 1 (m + 1) + x 0 m. Thus, α i (k(m + l) + r) + β i (m + l) + ( i -1)m + i t i = x(k(m + l) + r) + x l (m + l) + . . . + x 1 (m + 1) + x 0 m.
In particular, m(kα
i + β i + i -1) + α i (kl + r) + β i l + i t i = m(kx + x l + . . . + x 1 + x 0 ) + x(kl + r) + x l l + x l-1 (l -1) + . . . + x 1 . (3.1.2)
To show that λ i -m / ∈ S, we are going to show first that x -α i > 0, then we will show that x -α i ≥ 2 and conclude our assertion from (3.1.2). Since 1 ≤ i ≤ m -1 and i = α i (kl + r) + β i l + i t i , it follows that
α i (kl + r) + β i l + i t i ≤ m -1. (3.1.3)
We claim that
kα i + β i + i -1 ≥ kx + x l + x l-1 + . . . + x 1 + x 0 .
Suppose by the way of contradiction that
kα i + β i + i -1 < kx + x l + x l-1 + . . . + x 1 + x 0 .
Then, from (3.1.3), we get
m(kα i + β i + i -1) + α i (kl + r) + β i l + i t i ≤ m(kx + x l + x l-1 + . . . + x 1 + x 0 -1) + m -1 = m(kx + x l + x l-1 + . . . + x 1 + x 0 ) -1 ≤ m(kx + x l + . . . + x 1 + x 0 ) + x(kl + r) + x l l + x l-1 (l -1) + . . . + x 1 -1.
This contradicts (3.1.2). Consequently,
kα i + β i + i -1 ≥ kx + x l + x l-1 + . . . + x 1 + x 0 . ( 3
α i (kl + r) + β i l + i t i ≤ x(kl + r) + x l l + x l-1 (l -1) + . . . + x 1 . (3.1.5)
If we multiply (3.1.4) by l, we get
α i kl + β i l ≥ (kx + x l + x l-1 + . . . + x 1 + x 0 )l + (1 -i )l. ( 3
xkl + (x -α i )r + x l l + x l-1 (l -1) + . . . + x 1 -i t i ≥ α i kl + β i l ≥ xkl + (x l + . . . + x 0 )l + (1 -i )l.
Consequently, (x -α i )r ≥ x 0 l + x 1 (l -1) + . . .
+ x l-1 + (1 -i )l + i t i .
Since x 0 , . . . , x l-1 , r ∈ N, l ∈ N * and i ∈ {0, 1}, we get that
x -α i > 0. (3.1.7)
Next, we aim to show that x -α i ≥ 2. From (3.1.2), we have
m(β i ) = m(k(x -α i ) + 1 -i + x l + . . . + x 1 + x 0 ) + (x -α i )(kl + r) -β i l -i t i + x l l + x l-1 (l -1) + . . . + x 1 . (3.1.8) Then, (x -α i )(kl + r) -β i l -i t i + x l l + x l-1 (l -1) + . . . + x 1 is divisible by m. Since x -α i > 0 (by (3.1. 7
)), β i l + i t i < kl + r (by definition) and x 1 , . . . , x l ∈ N, then there exists p ∈ N * such that
(x -α i )(kl + r) -β i l -i t i + x l l + x l-1 (l -1) + . . . + x 1 = pm. (3.1.9)
By substituting (3.1.9) in (3.1.8), we get
m(β i ) = m(k(x -α i ) + 1 -i + x l + . . . + x 1 + x 0 + p).
Consequently,
β i = k(x -α i ) + 1 -i + p + x l + . . . + x 1 + x 0 . (3.1.10)
From (3.1.9), it follows that
β i l = (x -α i )(kl + r) -i t i + x l l + x l-1 (l -1) + . . . + x 1 -pm. (3.1.11)
By multiplying (3.1.10) by l and using (3.1.11), we get the following :
(x -α i )r = p(m + l) + (1 -i )l + i t i + x l-1 + . . . + x 1 (l -1) + x 0 l.
Since i ∈ {0, 1}, x l-1 , . . . , x 0 ∈ N, p ∈ N * and 0 < r < m + l (Proposition 3.0.3), we get that
x -α i ≥ 2.
From (3.1.2), we have
m(β i + i -1) + β i l + i t i = m (x -α i )k + x l + . . . + x 1 + x 0 + (x -α i )(kl + r) + x l l + x l-1 (l -1) + . . . + x 1 .
Since x -α i ≥ 2 and x l , . . . , x 0 ∈ N, it follows that
m(β i + i -1) + β i l + i t i ≥ 2km + 2(kl + r). (3.1.12)
On the other hand, Since β i l + i t i < kl + r (by definition) and
β i + i -1 ≤ 2k (Proposition 3.0.2), it follows that m(β i + i -1) + β i l + i t i < 2km + kl + r. (3.1.13)
From (3.1.12) and (3.1.13) we get a contradiction. Consequently, λ i -m / ∈ S. Hence, w(i) = λ i = m(kα i + β i + i ) + i. Thus, the proof is complete.
Frobenius number of S
Definition 3.2.1. Let the notations be as above. Let
q = r -1 l .
Thus, ql ≤ r -1 and r -1 = ql + t with q, t ∈ N and t < l. Let t ∈ N defines as in 3.0.
1 t = 1 if t ≥ 1, 0 if t = 0.
Proposition 3.2.2 and 3.2.3 give some properties that will be used in this Chapter using the notations used in Definition 3.2.1.
Proposition 3.2.2. Under the above notations, we have
β i ≤ k + q.
Proof. By definition, we have
β i l + i t i ≤ kl + r -1 = (k + q)l + t with t < l.
Consequently,
β i l + i t i < (k + q + 1)l.
Suppose by the way of contradiction that β i > k + q. Hence, β i l + i t i ≥ (k + q + 1)l, which is impossible. Consequently, β i ≤ k + q. Thus, the proof is complete.
Proposition 3.2.3. Under the above notations, we have
q + t ≤ k + 1.
Proof. By definition, we have r -1 ≤ (k +1)l and r -1 = ql+t with q, t ∈ N and t < l. Thus, ql+t ≤ (k +1)l.
Case 1. If t = 0, then t = 0. Hence, ql ≤ (k + 1)l, which implies that q ≤ k + 1. Thus, q + t ≤ k + 1.
Case 2. If t = 1, then t ≥ 1. Hence, ql + t ≤ (k + 1)l with t ≥ 1, it follows that q ≤ k. Therefore, q + t ≤ k + 1. Thus, the proof is complete.
Next, we shall focus on the determination of the Frobenius number of S. We shall start with the following proposition that will enable us to determine the Frobenius number and will help us later in determining the Pseudo frobenius number of S. For all 1 ≤ i < j ≤ m -1 where i and j are written as in Definition 3.0.1, we have :
• If α i = α j -2, β j = j = 0 and β i + i = 2k + 1, then w(i) -w(j) > 0. • If α i = α j -1, β i + i > k + β j + j and β j + j ≤ k, then w(i) -w(j) > 0.
• Otherwise, w(i) -w(j) < 0.
Proof. Let 1 ≤ i < j ≤ m -1, where i = α i (kl + r) + β i l + i t i and j = α j (kl + r) + β j l + j t j be as defined in Definition 3.0.1. By Theorem 3.1.1, we have
w(i) = m(kα i + β i + i ) + i and w(j) = m(kα j + β j + j ) + j.
We claim that α i ≤ α j . In fact, since i < j, then i kl+r < j kl+r , which implies that i kl+r ≤ j kl+r . Hence, α i ≤ α j .
Case 1.
If α i = α j . We aim to show that β i ≤ β j . Indeed, suppose by the way of contradiction that β i > β j , then i = α i (kl + r) + β i l + i t i = α j (kl + r) + β i l + i t i ≥ α j (kl + r) + β i l ≥ α j (kl + r) + (β j + 1)l. Since j t j < l, we get that i > α j (kl + r) + β j l + j t j = j which is a contradiction with i < j. Hence,
β i ≤ β j . • If β i < β j . Then, w(j) -w(i) = m(kα j + β j + j ) + j -m(kα i + β i + i ) -i = m((α j -α i )k + (β j -β i ) + ( j -i )) + j -i = m((β j -β i ) + ( j -i )) + j -i.
Since β i < β j , i < j and i , j ∈ {0, 1}, it follows that w(j) -w(i) > 0.
• If β i = β j . We aim to show that i t i < j t j , in particular i ≤ j . Suppose by the way of contradiction that i t i ≥ j t j , then
i = α i (kl + r) + β i l + i t i = α j (kl + r) + β j l + i t i ≥ α j (kl + r) + β j l + j t j = j,
which is a contradiction with i < j. Hence, i t i < j t j . As i , j ∈ {0, 1}, we get that i ≤ j . Therefore,
w(j) -w(i) = m((α j -α i )k + (β j -β i ) + ( j -i )) + j -i = m(( j -i )) + j -i.
Since i ≤ j and i < j, we obtain w(j) -w(i) > 0.
Consequently, if i < j and α i = α j , then w(i) -w(j) < 0.
Case 2. If α i < α j .
• If α i ≤ α j -3. By Proposition 3.0.2, we have i + β i ≤ 2k + 1. Then,
w(j) -w(i) = m((α j -α i )k + (β j -β i ) + ( j -i )) + j -i ≥ m(3k + (β j -β i ) + ( j -i )) + j -i ≥ m(3k -2k -1 + β j + j ) + j -i.
Since k ≥ 1 and i < j, it follows that w(j) -w(i) > 0. Consequently, if i < j and α i ≤ α j -3, then w(i) -w(j) < 0.
• If α i = α j -2.
• If β j + j > 0. By Proposition 3.0.2, we have i + β i ≤ 2k + 1. Then,
w(j) -w(i) = m((α j -α i )k + (β j -β i ) + ( j -i )) + j -i = m(2k + (β j -β i ) + ( j -i )) + j -i ≥ m(2k + β j + j -2k -1) + j -i ≥ m(2k + 1 -2k -1) + j -i.
Since i < j, we get w(j) -w(i) > 0.
• If β j = j = 0. Then,
w(j) -w(i) = m((α j -α i )k + (β j -β i ) + ( j -i )) + j -i = m(2k -β i -i ) + j -i.
By Proposition 3.0.2, we have i + β i ≤ 2k + 1.
• If β i + i ≤ 2k. Since i < j, we obtain w(j) -w(i) > 0. • If β i + i = 2k + 1. Since j ≤ m -1, it follows that w(j) -w(i) < 0.
Consequently, if i < j and α i = α j -2, then w(i) -w(j) < 0 unless in the case where β j = j = 0 and β i + i = 2k + 1.
• If α i = α j -1.
• If β i + i ≤ k + β j + j . Then, w(j) -w(i) = m((α j -α i )k + (β j -β i ) + ( j -i )) + j -i = m(k + (β j -β i ) + ( j -i )) + j -i ≥ m(k -k) + j -i.
Since i < j, we get w(j) -w(i) > 0.
• If β i + i > k + β j + j . Then, w(j) -w(i) = m((α j -α i )k + (β j -β i ) + ( j -i )) + j -i = m(k + (β j -β i ) + ( j -i )) + j -i ≤ m(k -k -1) + j -i.
Since j < m, it follows that w(j) -w(i) < 0. Note that if
β i + i > k + β j + j , then β j + j ≤ k it follows from β i + i ≤ 2k + 1 (Proposition 3.0.2).
Consequently, if i < j and α i = α j -1, then w(i) -w(j) < 0 unless in the case where β i + i > k + β j + j and β j + j ≤ k.
In conclusion, if i < j, we have w(i) -w(j) > 0, in the case α i = α j -2, β j = j = 0 and
β i + i = 2k + 1, or α i = α j -1, β i + i > k + β j + j and β j + j ≤ k,
and in the other cases w(i) -w(j) < 0.
Thus, the proof is complete.
The following theorem gives a formula for the Frobenius f (S). The Frobenius number f (S) of S is given by :
f (S) = m(kα m-1 + q + t -1) + α m-1 (kl + r) -1 if S satisfies condition (H), m(kα m-1 + β m-1 + m-1 ) -1 otherwise, where (H) : m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 , with α m-1 ≥ 1, β m-1 + m-1 ≤ k, q + t > β m-1 + m-1 and r > 1.
Proof. By Lemma 1.0.21, we have f (S) = max(Ap(S, m)) -m. We are going to show that max(Ap(S, m)) = w((α m-1 -1)(kl + r) + kl + r -1) if S satisfies condition (H) w(m -1) otherwise.
By applying Proposition 3.2.4, we get
max(Ap(S, m)) = max w (α m-1 -2)(kl + r) + kl + r -1 ,w (α m-1 -1)(kl + r) + kl + r -1 , w(m -1) .
Recall that r -1 = ql + t with t < l, t = 0 if t = 0 and t = 1 if t = 0. We have
w (α m-1 -1)(kl + r) + kl + r -1 -w (α m-1 -2)(kl + r) + kl + r -1 = m k(α m-1 -1) + k + q + t + (α m-1 -1)(kl + r) + kl + r -1 -m k(α m-1 -2) + k + q + t -(α m-1 -2)(kl + r) + kl + r -1 = m(k) + kl + r > 0.
Consequently,
max(Ap(S, m)) = max w (α m-1 -1)(kl + r) + kl + r -1 , w(m -1) . Case 1. If α m-1 = 0, then i = (α m-1 -1)(kl + r) + kl + r -1 < 0. Hence, max(Ap(S, m)) = w(m -1) = m(kα m-1 + β m-1 + m-1 ) + m -1. Case 2. If α m-1 ≥ 1.
• If r = 1. Thus, r -1 = 0 and so are q and t . Therefore,
w m -1 -w (α m-1 -1)(kl + r) + kl + r -1 = w(m -1) -w (α m-1 -1)(kl + 1) + kl = m kα m-1 + β m-1 + m-1 + α m-1 (kl + 1) + β m-1 l + m-1 t m-1 -m k(α m-1 -1) + k -(α m-1 -1)(kl + 1) -kl = m(β m-1 + m-1 ) + β m-1 l + m-1 t m-1 + 1 > 0. Consequently, max(Ap(S, m)) = w(m -1) = m(kα m-1 + β m-1 + m-1 ) + m -1. • If r > 1 (r -1 = ql + t where t < l, t = 1 if t ≥ 1 and t = 0 if t = 0). Then, w(m -1) -w (α m-1 -1)(kl + r) + kl + r -1 = m(kα m-1 + β m-1 + m-1 ) + α m-1 (kl + r) + β m-1 l + m-1 t m-1 -m k(α m-1 -1) + k + q + t -(α m-1 -1)(kl + r) + kl + r -1 = m(β m-1 + m-1 -q -t ) + β m-1 l + m-1 t m-1 + 1. • If q + t > β m-1 + m-1 .
We have β m-1 + m-1 ≤ k in this case this follows from Proposition 3.2.3. Since α m-1 ≥ 1 in this case, we get
β m-1 l + m-1 t m-1 + 1 ≤ α m-1 (kl + r) + β m-1 l + m-1 t m-1 = m -1. Consequently, w(m -1) -w (α m-1 -1)(kl + r) + kl + r -1 = m(β m-1 + m-1 -q -t ) + β m-1 l + m-1 t m-1 + 1 ≤ m(-1) + m -1 < 0. Hence, max(Ap(S, m)) = w (α m-1 -1)(kl + r) + kl + r -1 = m(kα m-1 + q + t ) + α m-1 (kl + r) -1. • If q + t ≤ β m-1 + m-1 . Then, w(m -1) -w (α m-1 -1)(kl + r) + kl + r -1 = m(β m-1 + m-1 -q -t ) + β m-1 l + m-1 t m-1 + 1 > 0.
Therefore,
max(Ap(S, m)) = w(m -1) = m(kα m-1 + β m-1 + m-1 ) + m -1.
Hence, if S satisfies condition (H), we get
max(Ap(S, m)) = w((α m-1 -1)(kl + r) + kl + r -1) = m(kα m-1 + q + t ) + α m-1 (kl + r) -1.
Otherwise, we obain
max(Ap(S, m)) = w(m -1) = m(kα m-1 + β m-1 + m-1 ) + m -1.
By applying Lemma 1.0.21, if S satisfies condition (H), we obtain
f (S) = m(kα m-1 + q + t -1) + α m-1 (kl + r) -1, otherwise, we obtain f (S) = m(kα m-1 + β m-1 + m-1 ) -1.
Thus, the proof is complete.
Example 3.2.6. Consider the following numerical semigroups.
• S =< 19, 20, 21, 22, 52 >. By using GAP [8], we get that f (S) = 89. Note that k = 2, l = 3 and r = 8.
In addition, m -1 = 18 = 1(14
) + 1(3) + 1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 with α m-1 = 1, β m-1 = 1, m-1 = 1, t m-1 = 1 and r -1 = 7 = 2(3) + 1 = ql + t with q = 2, t = 1 , t = 1.
We have S verifies condition (H) and verifies the formula given in Theorem 3.2.5 as 89=19(2(1)+2+1-1)+1( 14)-1.
• S =< 11, 12, 13, 14, 36 >. By using GAP [8], we obtain that f (S) = 43. Note that k = 2, l = 3 and r = 8. In addition, m -1 = 10 = 0(14
) + 3(3) + 1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 with α m-1 = 0, β m-1 = 3, m-1 = 1, t m-1 = 1 and r -1 = 7 = 2(3) + 1 = ql + t with q = 2, t = 1 , t = 1.
We have S does not verify condition (H) as α m-1 = 0 and verifies the formula given in Theorem 3.2.5 as 43 = 11(2(0) + 3 + 1) -1.
Genus of S
The following theorem gives a formula for the genus g(S).
Theorem 3.3.1. Let S be a numerical semigroup minimally generated by m, m + 1, . . . , m + l, k(m + l) + r with r ≤ (k + 1)l + 1. The genus g(S) of S is given by :
g(S) = kα m-1 + β m-1 + 1 l(kα m-1 + β m-1 ) 2 + m-1 t m-1 + kα m-1 (α m-1 + 1)r 2 + (q + 1)(r -1 + t)α m-1 2 .
Proof. By using Lemma 1.0.21 and Theorem 3.1.1, we get
g(S) = 1 m w∈Ap(S,m) w - m -1 2 = 1 m m-1 i=0 (m(kα i + β i + i ) + i) - m -1 2 = m-1 i=0 (kα i + β i + i ) + 1 m m-1 i=0 i - m -1 2 = m-1 i=0 (kα i + β i + i ).
Now, we are going to divide the set {0 ≤ i ≤ m -1} into subsets and calculate the value of kα i + β i + i in each subset. We have
{0 ≤ i ≤ m -1} = {j(kl + r) ≤ i ≤ j(kl + r) + kl + (r -1); 0 ≤ j ≤ α m-1 -1} ∪ {α m-1 (kl + r) ≤ i ≤ α m-1 + β m-1 l + m-1 t m-1 } = ∪ 7 =1 A , where A 1 = {j(kl + r); 0 ≤ j ≤ α m-1 -1}, A 2 = {j(kl + r) + yl + 1 ≤ i ≤ j(kl + r) + (y + 1)l; 0 ≤ j ≤ α m-1 -1 and 0 ≤ y ≤ k -1}, A 3 = {j(kl + r) + (k + y)l + 1 ≤ i ≤ j(kl + r) + (k + y + 1)l; 0 ≤ j ≤ α m-1 -1 and 0 ≤ y ≤ q -1}, A 4 = {j(kl + r) + (k + q)l + 1 ≤ i ≤ j(kl + r) + (k + q)l + t; 0 ≤ j ≤ α m-1 -1}, A 5 = {α m-1 (kl + r)}, A 6 = {α m-1 (kl + r) + yl + 1 ≤ i ≤ α m-1 (kl + r) + (y + 1)l; 0 ≤ y ≤ β m-1 -1}, A 7 = {α m-1 (kl + r) + β m-1 l + 1 ≤ i ≤ α m-1 (kl + r) + β m-1 l + m-1 t m-1 }.
Next, we will calculate the value of kα i + β i + i on each subset A .
If i ∈ A 1 , then kα i + β i + i = kj. If i ∈ A 2 , then kα i + β i + i = kj + y + 1. If i ∈ A 3 , then kα i + β i + i = kj + k + y + 1. If i ∈ A 4 , then kα i + β i + i = kj + k + q + 1. If i ∈ A 5 , then kα i + β i + i = kα m-1 . If i ∈ A 6 , then kα i + β i + i = kα m-1 + y + 1. If i ∈ A 7 , then kα i + β i + i = kα m-1 + β m-1 + 1. Therefore, g(S) = m-1 i=0 (kα i + β i + i ) = 7 =1 i∈A (kα i + β i + i ) = α m-1 -1 j=0 kα j(kl+r) + β j(kl+r) + j(kl+r) + k-1 y=0 j(kl+r)+(y+1)l i=j(kl+r)+yl+1 (kα i + β i + i ) + q-1 y=0 j(kl+r)+(k+y+1)l i=j(kl+r)+(k+y)l+1 (kα i + β i + i ) + j(kl+r)+(k+q)l+t i=j(kl+r)+(k+q)l+1 (kα i + β i + i ) + kα α m-1 (kl+r) + β α m-1 (kl+r) + α m-1 (kl+r) + β m-1 -1 y=0 α m-1 (kl+r)+(y+1)l i=α m-1 (kl+r)+yl+1 (kα i + β i + i ) + α m-1 (kl+r)+β m-1 l+ m-1 t m-1 i=α m-1 (kl+r)+β m-1 l+1 (kα i + β i + i ).
Equivalently,
g(S) = α m-1 -1 j=0 kj + k-1 y=0 j(kl+r)+(y+1)l i=j(kl+r)+yl+1 (kj + y + 1) + q-1 y=0 j(kl+r)+(y+1)l i=j(kl+r)+yl+1 (kj + k + y + 1) + j(kl+r)+(k+q)l+t i=j(kl+r)+(k+q)l+1 (kj + k + q + 1) +kα m-1 + β m-1 -1 y=0 α m-1 (kl+r)+(y+1)l i=α m-1 (kl+r)+yl+1 (kα m-1 + y + 1) + α m-1 (kl+r)+β m-1 l+ m-1 t m-1 i=α m-1 (kl+r)+β m-1 l+1 (kα m-1 + β m-1 + 1) = α m-1 -1 j=0 kj + k-1 y=0 (kj + y + 1)l + q-1 y=0 (kj + k + y + 1)l + (kj + k + q + 1)t +kα m-1 + β m-1 -1 y=0 (kα m-1 + y + 1)l+(kα m-1 + β m-1 + 1) m-1 t m-1 = α m-1 -1 j=0 kj + k(kj + 1)l + k(k -1)l 2 + (kj + k + 1)ql + (q -1)ql 2 + (kj + k + q + 1)t +kα m-1 + (kα m-1 + 1)β m-1 l+ β m-1 (β m-1 -1)l 2 + (kα m-1 + β m-1 + 1) m-1 t m-1 = α m-1 -1 j=0 k(kl + 1 + ql + t)j + (k + 1)(kl + 2ql + 2t) + 2qt + q(q -1)l 2 +kα m-1 + (kα m-1 + 1)β m-1 l + β m-1 (β m-1 -1)l 2 +(kα m-1 + β m-1 + 1) m-1 t m-1 = k(kl + 1 + ql + t)α m-1 (α m-1 -1) 2 + kα m-1 + (kα m-1 + 1)β m-1 l + (k + 1)(kl + 2ql + 2t) + 2qt + q(q -1)l α m-1 2 + β m-1 (β m-1 -1)l 2 +(kα m-1 + β m-1 + 1) m-1 t m-1 = (kα m-1 -k)(klα m-1 ) 2 + kα m-1 (α m-1 -1) 2 + (ql + t)(kα m-1 -k)α m-1 2 + (k + 1)klα m-1 2 + (ql + t)(2k + 2)α m-1 2 + 2qt + q(q -1)l α m-1 2 +kα m-1 + (kα m-1 + 1)β m-1 l 2 + β m-1 (klα m-1 ) 2 + β m-1 (β m-1 l) 2 +(kα m-1 + β m-1 + 1) m-1 t m-1 = (kα m-1 + β m-1 + 1) l(kα m-1 + β m-1 ) 2 + m-1 t m-1 + kα m-1 (α m-1 + 1) 2 + (ql + t)(kα m-1 + k + 2)α m-1 2 + 2qt + q(q -1)l α m-1 2 .
Finally,
g(S) = (kα m-1 + β m-1 + 1) l(kα m-1 + β m-1 ) 2 + m-1 t m-1 + kα m-1 (α m-1 + 1) 2 + (k + kα m-1 + 2)α m-1 (r -1) + 2qt + q(q -1)l α m-1 2 = (kα m-1 + β m-1 + 1) l(kα m-1 + β m-1 ) 2 + m-1 t m-1 + kα m-1 (α m-1 + 1)r 2 + (q + 1)(r -1 + t)α m-1 2 .
Thus, the proof is complete. • S =< 19, 20, 21, 22, 52 >. By using GAP [8], we get that g(S) = 50. Note that k = 2, l = 3 and r = 8.
In addition, m -1 = 18 = 1(14
) + 1(3) + 1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 with α m-1 = 1, β m-1 = 1, m-1 = 1, t m-1 = 1 and r -1 = 7 = 2(3) + 1 = ql + t with q = 2, t = 1. We have 50 = 2(1) + 1 + 1 3(2(1)+1) 2 + 1 + 2(1)(1+1)8 2 + (2+1)(8-1+1)(1)
2
. Hence, S verifies the formula given in Theorem 3.3.1.
• S =< 11, 12, 13, 14, 36 >. By using GAP [8], we get that g(S) = 22. Note that k = 2, l = 3 and r = 8.
In addition, m -1 = 10 = 0(14
) + 3(3) + 1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 with α m-1 = 0, β m-1 = 3, m-1 = 1, t m-1 = 1 and r -1 = 7 = 2(3) + 1 = ql + t with q = 2, t = 1. We have 22 = 2(0) + 3 + 1 3(2(0)+3) 2 + 1 + 2(0)(0+1)8 2 + (2+1)(8-1+1)(0)
2
. Hence, S verifies the formula given in Theorem 3.3.1.
Determination of symmetric and pseudo-symmetric numerical semigroups
Next, we shall focus on the determination of symmetric and pseudo-symmetric numerical semigroups. We shall start with a technical Lemma. Lemma 3.4.1. Let the notation be as defined in Definition 3.0.1 and in Definition 3.2.1, we have the following :
2g(S) -f (S) + 1 = F 1 if S satisfies condition (H), F 2 otherwise,
where
F 1 = α m-1 (1 -t )(kl + r) + kt + (q + 1)(t -1) +β m-1 l kα m-1 + β m-1 + 1 -q + 1 -t + m-1 t m-1 kα m-1 + 2β m-1 + 2 -q + 1 -t + 1 -t -q, F 2 = α m-1 -m-1 (kl + r) + (k + q -β m-1 )r + k(l -1) + r + (q + 1)(t -1) +β m-1 l 1 -m-1 + m-1 t m-1 kα m-1 + β m-1 + 2 -m-1 -m-1 -β m-1 and (H) : m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 , with α m-1 ≥ 1, β m-1 + m-1 ≤ k, q + t > β m-1 + m-1 and r > 1.
Proof. By Theorem 3.2.5, we have f (S) = m(kα m-1 + q + t -1) + α m-1 (kl + r) -1 if S satisfies condition (H) and f (S) = m(kα m-1 + β m-1 + m-1 ) -1 otherwise. Now, we use the formulas in Theorem 3.2.5 and Theorem 3.3.1.
Case 1. If f (S) = m(kα m-1 + q + t -1) + α m-1 (kl + r) -1. Then, 2g(S) -f (S) + 1 = (kα m-1 + β m-1 + 1) l(kα m-1 + β m-1 ) + 2 m-1 t m-1 + k(α m-1 + 1)α m-1 r +(q + 1)(r -1)α m-1 + (q + 1)tα m-1 -m(kα m-1 + q + t -1) -α m-1 (kl + r) = α m-1 k 2 lα m-1 + klβ m-1 + 2k m-1 t m-1 + klβ m-1 + kl + k(α m-1 + 1)r +(q + 1)(r -1) + (q + 1)t -km -kl -r + β m-1 l β m-1 + 1 + m-1 t m-1 2β m-1 + 2 -m(q + t -1) = α m-1 kql + kt + qr + (q + 1)(t -1) + β m-1 l kα m-1 + β m-1 + 1 + m-1 t m-1 kα m-1 + 2β m-1 + 2 -α m-1 (kl + r)(q + t -1) -β m-1 l(q + t -1) -m-1 t m-1 (q + t -1) -(q + t -1) = α m-1 (1 -t )(kl + r) + kt + (q + 1)(t -1) + β m-1 l kα m-1 + β m-1 + 1 -q + 1 -t + m-1 t m-1 kα m-1 + 2β m-1 + 2 -q + 1 -t + 1 -t -q. Case 2. If f (S) = m(kα m-1 + β m-1 + m-1 ) -1. Then, 2g(S) -f (S) + 1 = (kα m-1 + β m-1 + 1) l(kα m-1 + β m-1 ) + 2 m-1 t m-1 + k(α m-1 + 1)α m-1 r +(q + 1)(r -1)α m-1 + (q + 1)tα m-1 -m(kα m-1 + β m-1 + m-1 ) = α m-1 k 2 lα m-1 + klβ m-1 + 2k m-1 t m-1 + klβ m-1 + kl + k(α m-1 + 1)r +(q + 1)(r -1) + (q + 1)t -km + β m-1 l β m-1 + 1 + m-1 t m-1 2β m-1 + 2 -m(β m-1 + m-1 ). Therefore, 2g(S) -f (S) + 1 = α m-1 klβ m-1 + kl + kr + (q + 1)(r -1) + (q + 1)t -k + β m-1 l β m-1 + 1 + m-1 t m-1 kα m-1 + 2β m-1 + 2 -α m-1 (kl + r)(β m-1 + m-1 ) -β m-1 l(β m-1 + m-1 ) -m-1 t m-1 (β m-1 + m-1 ) -(β m-1 + m-1 ) = α m-1 -m-1 (kl + r) + (k + q -β m-1 )r + k(l -1) + r + (q + 1)(t -1) +β m-1 l 1 -m-1 + m-1 t m-1 kα m-1 + β m-1 + 2 -m-1 -m-1 -β m-1 .
Thus, the proof is complete.
Determination of symmetric numerical semigroup
The following theorem gives the set of symmetric numerical semigroups. Then, S is symmetric if and only if it satisfies one of the following :
1. S =< 2k + 3, 2k + 4, k(2k + 4) + k + 2 > .
2. S =< 2kl + 3, 2kl + 4, . . . , (2k + 1)l + 3, k((2k + 1)l + 3) + kl + 2 > with l ≥ 2.
S =< β
m-1 + 1, β m-1 + 2, k(β m-1 + 2) + r > with β m-1 ≥ 1.
4. S =< (α m-1 + 1)(k + q + 1), (α m-1 + 1)(k + q + 1) + 1, k((α m-1 + 1)(k + q + 1) + 1) + q + 1 > .
S =< β
m-1 l + 2, . . . , (β m-1 + 1)l + 2, k((β m-1 + 1)l + 2) + ql + t + 1 > with t ≥ 1 and l ≥ 2. 6. S =< (α m-1 + 1)((k + q)l + 2), . . . , (α m-1 + 1)((k + q)l + 2) + l, k((α m-1 + 1)((k + q)l + 2) + l) + ql + 2 > with l ≥ 2. 7. S =< β m-1 l + 2, . . . , (β m-1 + 1)l + 2, k((β m-1 + 1)l + 2) + ql + 1 > with l ≥ 2. 8. S =< α m-1 (kl + 1) + (k -1)l + 2, . . . , α m-1 (kl + 1) + kl + 2, k(α m-1 (kl + 1) + kl + 2) + 1 > with l ≥ 2.
Proof. By Lemma 1.0.34, we have S is symmetric if and only if 2g(S) -f (S) + 1 = 0.
Case 1. If S satisfies condition (H). By Theorem 3.2.5,
f (S) = m(kα m-1 + q + t -1) + α m-1 (kl + r) -1 with α m-1 > 0, β m-1 + m-1 ≤ k, q + t > β m-1 + m-1 and r > 1.
By using Lemma (3.4.1), S is symmetric if and only if
α m-1 (1 -t )(kl + r) + kt + (q + 1)(t -1) +β m-1 l kα m-1 + β m-1 + 1 -q + 1 -t + m-1 t m-1 kα m-1 + 2β m-1 + 2 -q + 1 -t + 1 -t = q.
• If t = 0 (t = 0 and r = ql + 1 in this case). Then, S is symmetric if and only if
α m-1 kl + q(l -1) + β m-1 l kα m-1 + β m-1 + 2 -q + m-1 t m-1 kα m-1 + 2β m-1 + 3 -q + 1 = q. (3.4.1) Since l ≥ 1, α m-1 ≥ 1 and q ≤ k + 1 (Proposition 3.2.3), it follows that kl + q(l -1) ≥ kl, kα m-1 + β m-1 + 2 -q ≥ 1 and kα m-1 + 2β m-1 + 3 -q ≥ 2. Consequently, (3.4.1) implies that α m-1 kl + β m-1 l + m-1 t m-1 (2) + 1 ≤ q. (3.4.2) As α m-1 ≥ 1, l ≥ 1 and q ≤ k + 1 (Proposition 3.2.3), then (3.4.2) implies that l = 1, α m-1 = 1, β m-1 = 0, m-1 t m-1 = 0 and q = k + 1. As r = ql + 1 in this case (as t = t = 0), we get r = k + 2.
Therefore, S is symmetric in this case if and only if
α m-1 = 1, β m-1 = 0, t m-1 = 0, l = 1, r = k + 2. (3.4.3) We have m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1
. By substituting (3.4.3) in S, we obtain
S =< 2k + 3, 2k + 4, k(2k + 4) + k + 2 > . • If t = 1 (t ≥ 1)
. Then, S is symmetric if and only if
α m-1 kt + (q + 1)(t -1) + β m-1 l kα m-1 + β m-1 + 1 -q + m-1 t m-1 kα m-1 + 2β m-1 + 2 -q = q.
(3.4.4)
Since t ≥ 1, α m-1 ≥ 1 and q ≤ k (by Proposition 3.2.3 as t = 1), it follows that kt
+ (q + 1)(t -1) ≥ kt, kα m-1 + β m-1 + 1 -q ≥ β m-1 + 1 and kα m-1 + 2β m-1 + 2 -q ≥ 2β m-1 + 2. Consequently, (3.4.4) implies that α m-1 (kt) + β m-1 l(β m-1 + 1) + m-1 t m-1 (2β m-1 + 2) ≤ q. (3.4.5)
As α m-1 ≥ 1, t ≥ 1 and q ≤ k (by Proposition 3.2.3 as t = 1), then (3.4.5) implies that α m-1 = 1, t = 1, β m-1 = 0, m-1 t m-1 = 0 and q = k. As r = ql + t + 1, we get r = kl + 2. Since t ≥ 1, then l ≥ 2. Therefore, S is symmetric in this case if and only if
α m-1 = 1, β m-1 = 0, t m-1 = 0, r = kl + 2, l ≥ 2. ( 3.4.6)
We have m -
1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1
. By substituting (3.4.6) in S, we obtain
S =< 2kl + 3, 2kl + 4, . . . , (2k + 1)l + 3, k((2k + 1)l + 3) + kl + 2 > with l ≥ 2.
Case 2. If S does not satisfy condition (H). By Theorem 3.2.5,
f (S) = m(kα m-1 + β m-1 + m-1 ) -1 with α m-1 = 0 or β m-1 + m-1 > k or β m-1 + m-1 ≥ q + t or r = 1. (3.4.7)
By using Lemma 3.4.1, S is symmetric if and only if
α m-1 -m-1 (kl + r) + (k + q -β m-1 )r + k(l -1) + r + (q + 1)(t -1) +β m-1 l 1 -m-1 + m-1 t m-1 kα m-1 + β m-1 + 2 -m-1 = β m-1 + m-1 . • If m-1 = 0 (t m-1 = 0). We have r = ql + t + 1, then S is symmetric if and only if α m-1 (k + q -β m-1 )r + k(l -1) + q(l -1) + (q + 2)t + β m-1 (l -1) = 0. (3.4.8)
• If β m-1 = 0. Since m-1 = 0 and m = 1 (S = N), it follows that α m-1 ≥ 1. Since β m-1 = 0, m-1 = 0, and α m-1 ≥ 1, by using (3.4.7), we get r = 1 in particular q = 0 and t = t = 0. In this case by using (3.4.8), we obtain S is symmetric if and only if α m-1 (k + k(l -1)) = 0. As α m-1 ≥ 1, it follows that kl = 0 which is impossible (as k > 0 and l > 0).
• If β m-1 ≥ 1. Since l ≥ 1 and β m-1 ≤ k + q (Proposition 3.2.
2), it follows that α m-1 (k + qβ m-1 )r + k(l -1) + q(l -1) + (q + 2)t ≥ 0 and β m-1 (l -1) ≥ 0. By using (3.4.8), we get
β m-1 (l -1) = 0 (3.4.9) and α m-1 (k + q -β m-1 )r + k(l -1) + q(l -1) + (q + 2)t = 0. (3.4.10)
Since β m-1 ≥ 1, then (3.4.9) implies that l = 1. By substituting l = 1 in (3.4.10), we get
α m-1 (k + q -β m-1 )r + (q + 2)t = 0. (3.4.11)
Now, (3.4.11) implies that α m-1 = 0 (3.4.12)
or (k + q -β m-1 )r + (q + 2)t = 0. (3.4.13)
Since β m-1 ≤ k + q (Proposition 3.2.2), r > 0 and q + 2 > 0, then (3.4.13) implies that β m-1 = k + q and t = 0. As r = ql + t + 1 (with l = 1 proved above), then r = ql + 1 = q + 1 in this case. Therefore, S is symmetric in this case if and only if
α m-1 = 0, β m-1 ≥ 1, m-1 = 0, l = 1 (3.4.14) or β m-1 = k + q, m-1 = 0, l = 1, r = q + 1. (3.4.15) We have m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 .
S =< β m-1 + 1, β m-1 + 2, k(β m-1 + 2) + r > with β m-1 ≥ 1 or S =< (α m-1 + 1)(k + q + 1), (α m-1 + 1)(k + q + 1) + 1, k((α m-1 + 1)(k + q + 1) + 1) + q + 1 > . • If m-1 = 1 (t m-1 ≥ 1)
. Then, S is symmetric if and only if
α m-1 (k + q -β m-1 )r + (q + 1)(t -1) +(t m-1 -1) kα m-1 + β m-1 + 1 = 0.
(3.4.16)
• If t ≥ 1. Since β m-1 ≤ k + q (Proposition 3.2.2) and t ≥ 1, it follows that α m-1 (k + q -β m-1 )r + (q + 1)(t -1) ≥ 0.
We have kα m-1 + β m-1 + 1 > 0 and t m-1 ≥ 1. By using (3.4.16), we get
t m-1 = 1 (3.4.17) and α m-1 (k + q -β m-1 )r + (q + 1)(t -1) = 0. (3.4.18)
Since
t m-1 = 1, it follows that l ≥ 2 ( m-1 t m-1 < l). Now, (3.4.18) gives α m-1 = 0 or (k + q - β m-1 )r +(q +1)(t-1) = 0. If (k +q -β m-1 )r +(q +1)(t-1) = 0, since β m-1 ≤ k +q (Proposition 3.2.
2), r > 0, q + 1 > 0 and t ≥ 1, it follows that t = 1 and β m-1 = k + q. Consequently, S is symmetric if and only if t m-1 = 1 with α m-1 = 0, l ≥ 2 or t m-1 = 1 with t = 1, β m-1 = k + q, l ≥ 2. Since r = ql + t + 1 and t ≥ 1 in this case, it follows that S is symmetric if and only if
α m-1 = 0, t m-1 = 1, t ≥ 1 with l ≥ 2 (3.4.19) or β m-1 = k + q, t m-1 = 1, r = ql + 2 with l ≥ 2. ( 3.4.20)
We have m -
1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 .
S =< β m-1 l + 2, . . . , (β m-1 + 1)l + 2, k((β m-1 + 1)l + 2) + ql + t + 1 > with t ≥ 1 and l ≥ 2 or S =< (α m-1 + 1)((k + q)l + 2), . . . , (α m-1 + 1)((k + q)l + 2) + l, k((α m-1 + 1)((k + q)l + 2) + l) + ql + 2 > with l ≥ 2.
• If t = 0. We have r = ql + 1, then (3.4.16) implies that S is symmetric if and only if
α m-1 (k + q -1 -β m-1 )r + q(l -1) +(t m-1 -1) kα m-1 + β m-1 + 1 = 0. (3.4.21)
We have r = ql+1 and 1 ≤ m-1 t m-1 ≤ l-1 in this case. On the other hand,
β m-1 l+ m-1 t m-1 ≤ kl + r -1 = kl + ql. Hence, β m-1 ≤ k + q -1 (as m-1 t m-1 ≥ 1). Since l ≥ 1 and β m-1 ≤ k + q -1 in this case, it follows that α m-1 (k + q -1 -β m-1 )r + q(l -1) ≥ 0.
We have t m-1 ≥ 1 and kα m-1 + β m-1 + 1 > 0. By using (3.4.21), we get
t m-1 = 1 (3.4.22) and α m-1 (k + q -1 -β m-1 )r + q(l -1) = 0. (3.4.23) Since t m-1 = 1, it follows that l ≥ 2 ( m-1 t m-1 < l). Now, (3.4.23) gives α m-1 = 0 or (k + q - 1 -β m-1 )r + q(l -1) = 0. If (k + q -1 -β m-1 )r + q(l -1) = 0, since β m-1 ≤ k + q -1
in this case (proved above), r > 0 and l ≥ 1, it follows that β m-1 = k +q -1 and q(l -1) = 0. Since t m-1 = 1, it follows that l ≥ 2, in particular q(l -1) = 0 gives q = 0. Thus, in the second case, we have β m-1 = k + q -1 with q = 0 (r = 1 in this case as t = 0). Consequently, S is symmetric if and only if t m-1 = 1 with α m-1 = 0, l ≥ 2 or t m-1 = 1 with q = 0 (r = 1),
β m-1 = k + q -1 = k -1, l ≥ 2.
As r = ql + 1 in this case, it follows that S is symmetric if and only if
α m-1 = 0, t m-1 = 1, r = ql + 1 with l ≥ 2 (3.4.24) or β m-1 = k -1, t m-1 = 1, r = 1 with l ≥ 2. (3.4.25) We have m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1
. By using (3.4.24) and (3.4.25), we get S is symmetric if and only if
S =< β m-1 l + 2, . . . , (β m-1 + 1)l + 2, k((β m-1 + 1)l + 2) + ql + 1 > with l ≥ 2 or S =< α m-1 (kl + 1) + (k -1)l + 2, . . . , α m-1 (kl + 1) + kl + 2, k(α m-1 (kl + 1) + kl + 2) + 1 > with l ≥ 2.
Thus, the proof is complete.
Example 3.4.3. Consider the following numerical semigroups.
1. S =< 9, 10, 35 >. By using GAP [8], we get that S is symmetric. Note that m = 9 = 2k + 3 with k = 3. In addition, l = 1 and r = 5 = k + 2. Hence, S verifies the formula in Theorem 3.4.2.
2. S =< 15, 16, 17, 18, 44 >. By using GAP [8], we get that S is symmetric. Note that l = 3 and m = 15 = 2kl + 3 with k = 2. Moreover, r = 8 = kl + 2. Hence, S verifies the formula in Theorem 3.4.2.
3. S =< 8, 9, 48 >. By using GAP [8], we get that S is symmetric. Note that l = 1. Moreover, m = 8 = 7(1) + 1 where α m-1 = 0, β m-1 = 7 ≥ 1 and t m-1 = 0. In addition, k = 5 and r = 3. Hence, S verifies the formula in Theorem 3.4.2.
4. S =< 10, 11, 35 >. By using GAP [8], we get that S is symmetric. Note that l = 1. In addition, m = 10 = 1(3 + 1 + 1) + 3 + 1 + 1 = (1 + 1)(3 + 1 + 1) where α m-1 = 1, β m-1 = 3 + 1 = k + q such that k = 3 and q = 1, t m-1 = 0 and r = 2 = q + 1. Hence, S verifies the formula in Theorem 3.4.2.
5. S =< 18, 19, 20, 21, 22, 54 >. By using GAP [8], we get that S is symmetric. Note that l = 4 ≥ 2 and m = 18 = 4(4) + 1 + 1 where α m-1 = 0, β m-1 = 4 and t m-1 = 1. In addition k = 2 and r = 10 = 2(4) + 1 + 1 = ql + t + 1 such that t = 1 ≥ 1. Hence, S verifies the formula in Theorem 3.4.2.
6. S =< 16, 17, 18, 40 >. By using GAP [8], we get that S is symmetric. Note that l = 2 ≥ 2, k = 2, q = 1 and r = 4 = ql + 2. In addition,
m = 16 = 1((2 + 1)2 + 2) + (2 + 1)2 + 2 = (1 + 1)((2 + 1)2 + 2) = α m-1 ((k + q)l + 2) + (k + q)l + 2 where α m-1 = 1, β m-1 = k + q = 3 and t m-1 = 1.
Hence, S verifies the formula in Theorem 3.4.2.
7. S =< 18, 19, 20, 21, 22, 75 >. By using GAP [8], we get that S is symmetric. Note that l = 4 ≥ 2 and m = 18 = 4(4) + 1 + 1 where α m-1 = 0, β m-1 = 4 and t m-1 = 1. In addition k = 3 and r = 9 = 2(4) + 1 = ql + 1. Hence, S verifies the formula in Theorem 3.4.2.
8. S =< 89, 90, 91, 92, 93, 94, 565 >. By using GAP [8], we get that S is symmetric. Note that l = 5 ≥ 2, k = 6 and r = 1. In addition, m = 2(5(6) + 1) + (6 -1)(5) + 2, where α m-1 = 2, β m-1 = 5 = k -1 and t m-1 = 1. Hence, S verifies the formula in Theorem 3.4.2.
Determination of pseudo-symmetric numerical semigroup
Now, we shall characterize the set of pseudo-symmetric numerical semigroups.
Theorem 3.4.4. Let S be a numerical semigroup minimally generated by m, m + 1, . . . , m + l, k(m + l) + r with r ≤ (k + 1)l + 1.
Then, S is pseudo-symmetric if and only if it satisfies one of the following :
1. S =< 9, 10, 13 > .
2. S =< 2k + 2, 2k + 3, k(2k + 3) + k + 1 > . 3. S =< (2k -1)l + 3, . . . , 2kl + 3, k(2kl + 3) + (k -1)l + 2 > with l ≥ 2. 4. S =< 2(2l + 2) + 1, . . . , 2(2l + 2) + 1 + l, (2(2l + 2) + 1 + l) + l + 2 > with l ≥ 2. 5. S =< 2k + 1, 2k + 2, k(2k + 2) + 1 > with k ≥ 2.
6. S =< 3, 4, 5 > .
f (S) = m(kα m-1 + q + t -1) + α m-1 (kl + r) -1 with α m-1 > 0, β m-1 + m-1 ≤ k, q + t > β m-1 + m-1 and r > 1.
By using Lemma 3.4.1, S is pseudo-symmetric if and only if
α m-1 (1 -t )(kl + r) + kt + (q + 1)(t -1) +β m-1 l kα m-1 + β m-1 + 1 -q + 1 -t + m-1 t m-1 kα m-1 + 2β m-1 + 2 -q + 1 -t + 1 -t = q + 1.
• If t = 0 (t = 0, r = ql + 1). Since r = ql + 1, it follows that S is pseudo-symmetric if and only if
α m-1 kl + q(l -1) + β m-1 l kα m-1 + β m-1 + 2 -q + m-1 t m-1 kα m-1 + 2β m-1 + 3 -q = q. (3.4.26) Since α m-1 ≥ 1 and q ≤ k + 1 (Proposition 3.2.3), it follows that kα m-1 + β m-1 + 2 -q ≥ β m-1 + 1 and kα m-1 + 2β m-1 + 3 -q ≥ 2β m-1 + 2. Consequently, (3.4.26) implies that α m-1 (kl + q(l -1)) + β m-1 l(β m-1 + 1) + m-1 t m-1 (2β m-1 + 2) ≤ q. As α m-1 ≥ 1, k ≥ 1, l ≥ 1 and q ≤ k + 1 (Proposition 3.2.3), it follows that l = 1, β m-1 = 0, m-1 t m-1 = 0, α m-1 ∈ {1, 2}. (3.4.27)
• If α m-1 = 2. By substituting (3.4.27) in (3.4.26), it follows that S is pseudo-symmetric if and only if 2k = q. As q ≤ k + 1 (Proposition 3.2.3), we get k = 1 and q = 2k = 2. As r = ql + 1 with l = 1 in this case, we obtain r = 3. Thus, we have • If α m-1 = 1. By substituting (3.4.27) in (3.4.26), it follows that k = q. As r = ql + 1 with l = 1 in this case, we obtain r = k + 1. Thus, we have
α m-1 = 2, β m-1 = 0, t m-1 = 0, l = 1, r = 3, k = 1. ( 3
α m-1 = 1, β m-1 = 0, t m-1 = 0, l = 1, r = k + 1. (3.4.29)
We have m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1 . By substituting (3.4.29) in S, we get
S =< 2k + 2, 2k + 3, k(2k + 3) + k + 1 > . • If t = 1 (t ≥ 1)
. Then, S is pseudo-symmetric if and only if
α m-1 kt + (q + 1)(t -1) + β m-1 l kα m-1 + β m-1 + 1 -q + m-1 t m-1 kα m-1 + 2β m-1 + 2 -q = q + 1.
(3.4.30)
Since α m-1 ≥ 1 and q ≤ k (by Proposition 3.2.3 as t = 1), it follows that kα m-1
+ β m-1 + 1 -q ≥ β m-1 + 1 and kα m-1 + 2β m-1 + 2 -q ≥ 2β m-1 + 2. Consequently, (3.4.30) implies that α m-1 (kt + (q + 1)(t -1)) + β m-1 l(β m-1 + 1) + m-1 t m-1 (2β m-1 + 2) ≤ q + 1. ( 3
α m-1 ∈ {1, 2}, β m-1 = 0, m-1 t m-1 = 0, t = 1, l ≥ 2. ( 3.4.32)
• If α m-1 = 1. By substituting (3.4.32) in (3.4.30), it follows that k = q + 1. As r = ql + t + 1 and t = 1, then r = (k -1)l + 2 in this case. Thus, we have • If α m-1 = 2. By substituting (3.4.32), in (3.4.30), it follows that 2k = q + 1. As q ≤ k (by Proposition 3.2.3 as t = 1) in this case, we obtain k = 1 and q = 1. Since r = ql + t + 1 with t = 1 and q = 1 we get r = l + 2. Therefore, we have
α m-1 = 1, β m-1 = 0, t m-1 = 0, r = (k -1)l + 2, l ≥ 2. ( 3
α m-1 = 2, β m-1 = 0, t m-1 = 0, k = 1, r = l + 2, l ≥ 2. (3.4.34) We have m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1
. By substituting (3.4.34) in S, we obtain
S =< 2(2l + 2) + 1, . . . , 2(2l + 2) + 1 + l, (2(2l + 2) + 1 + l) + l + 2 > with l ≥ 2.
Case 2. If S does not satisfy condition (H). By Theorem 3.2.5,
f (S) = m(kα m-1 + β m-1 + m-1 ) -1 with α m-1 = 0 or β m-1 + m-1 > k or β m-1 + m-1 ≥ q + t or r = 1. ( 3
α m-1 -m-1 (kl + r) + (k + q -β m-1 )r + k(l -1) + r + (q + 1)(t -1) +β m-1 l 1 -m-1 + m-1 t m-1 kα m-1 + β m-1 + 2 -m-1 = β m-1 + m-1 + 1.
• If m-1 = 0. We have r = ql + t + 1, then S is pseudo-symmetric if and only if
α m-1 (k + q -β m-1 )r + k(l -1) + q(l -1) + (q + 2)t + β m-1 (l -1) = 1. (3.4.36) • If β m-1 ≥ 1. Since β m-1 ≤ k + q (Proposition 3.2.
2) and l ≥ 1, it follows that α m-1 (k + qβ m-1 )r + k(l -1) + q(l -1) + (q + 2)t ≥ 0 and β m-1 (l -1) ≥ 0. By using (3.4.36), we get that l ≤ 2.
• If l = 2. From (3.4.36), as β m-1 ≥ 1 and β m-1 ≤ k + q (Proposition 3.2.2), it follows that α m-1 = 0 and β m-1 = 1. In this case as m-1 = 0, we get that m = 3. Since l = 2, then ν = 4 > m which is impossible (ν ≤ m). • If l = 1. By using (3.4.36), we get S is pseudo-symmetric if and only if α m-1 (k + qβ m-1 )r + (q + 2)t = 1. Hence, α m-1 = 1 and (k + q -β m-1 )r + (q + 2)t = 1. Since β m-1 ≤ k + q (Proposition 3.2.2), r > 0 and q + 2 > 1, we get that t = 0, β m-1 = k + q -1 and r = 1 (in particular q = 0). Thus, β m-1 = k -1. We have β m-1 ≥ 1, then k ≥ 2. Thus, we have
α m-1 = 1, β m-1 = k -1, t m-1 = 0, r = 1, l = 1 (3.4.37) with k ≥ 2. We have m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1
. By substituting (3.4.37) in S, we obtain
S =< 2k + 1, 2k + 2, k(2k + 2) + 1 > with k ≥ 2.
• If β m-1 = 0. Since m-1 = 0 and m = 1 (S = N), it follows that α m-1 ≥ 1. As α m-1 ≥ 1, β m-1 = 0 and m-1 = 0, then (3.4.35) implies that r = 1 in particular q = 0 and t = t = 0. Then, (3.4.36) implies that S is pseudo-symmetric if and only if α m-1 k+k(l-1) = 1. Therefore, α m-1 = 1 and kl = 1. Thus, we have
α m-1 = 1, β m-1 = 0, t m-1 = 0, k = 1, l = 1, r = 1. (3.4.38) We have m -1 = α m-1 (kl + r) + β m-1 l + m-1 t m-1
. By substituting (3.4.38) in S, we get S =< 3, 4, 5 > .
• If m-1 = 1 (t m-1 ≥ 1)
. Then, S is pseudo-symmetric if and only if α m-1 (k + q -β m-1 )r + (q + 1)(t -1) + (t m-1 -1) kα m-1 + β m-1 + 1 = 1.
(3.4.39)
• If t ≥ 1. Since t m-1 ≥ 1, β m-1 ≤ k + q (Proposition 3.2.
2) and t ≥ 1, it follows that α m-1 (k + q -β m-1 )r + (q + 1)(t -1) ≥ 0 and (t m-1 -1) kα m-1 + β m-1 + 1 ≥ 0. From (3.4.39), it follows that
α m-1 (k + q -β m-1 )r + (q + 1)(t -1) = 0, (t m-1 -1) kα m-1 + β m-1 + 1 = 1 (3.4.40) or α m-1 (k + q -β m-1 )r + (q + 1)(t -1) = 1, (t m-1 -1) kα m-1 + β m-1 + 1 = 0.
(3.4.41) From (3.4.40) as (t m-1 -1) kα m-1 + β m-1 + 1 = 1, it follows that α m-1 = 0, β m-1 = 0 and t m-1 = 2. Thus, m = 3 but ν ≤ m = 3, then l = 1 for must. On the other hand, t < l = 1 which implies that t = 0 which is impossible as t ≥ 1 in this case. Thus, we do not have case (3.4.40). Now, consider (3.4.41). As kα m-1 + β m-1 + 1 > 0, then (3.4.41) implies that t m-1 = 1, α m-1 = 1 and (k + q -β m-1 )r + (q + 1)(t -1) = 1.
(3.4.42)
Since t ≥ 1, r > 0, q + 1 > 0 and β m-1 ≤ k + q (Proposition 3.2.2), then (3.4.42) implies that q = 0, t = 2 and β m-1 = k + q = k (the case where β m-1 = k + q -1, r = 1 and t = 1 is impossible as r = 1 implies that t = 0 and we get a contradiction). As r = ql + t + 1, we get r = 3. In addition, as t = 2, it follows that l ≥ 3. Thus, we have • If t = 0. We have r = ql + 1, from (3.4.39), it follows that S is pseudo-symmetric if and only if
α m-1 = 1, β m-1 = k, t m-1 = 1, r = 3, l ≥ 3. ( 3
α m-1 (k + q -1 -β m-1 )r + q(l -1) +(t m-1 -1) kα m-1 + β m-1 + 1 = 1. (3.4.44)
We have r = ql+1 and 1 ≤ m-1 t m-1 ≤ l-1. On the other hand,
β m-1 l+ m-1 t m-1 ≤ kl+r-1 = kl + ql. Hence, β m-1 ≤ k + q -1 (as m-1 t m-1 ≥ 1). Since l ≥ 1, t m-1 ≥ 1 and β m-1 ≤ k + q -1, it follows that α m-1 (k + q -1 -β m-1 )r + q(l -1) ≥ 0 and (t m-1 -1) kα m-1 + β m-1 + 1 ≥ 0.
From (3.4.44), it follows that
α m-1 (k + q -1 -β m-1 )r + q(l -1) = 0, (t m-1 -1) kα m-1 + β m-1 + 1 = 1, (3.4.45) or α m-1 (k + q -1 -β m-1 )r + q(l -1) = 1, (t m-1 -1) kα m-1 + β m-1 + 1 = 0.
+ β m-1 + 1 ≥ 1, it follows that t m-1 = 1. Since t m-1 = 1 (t m-1 ≤ l -1)
, we get l ≥ 2. In addition, (3.4.46) implies that α m-1 = 1, and
(k + q -1 -β m-1 )r + q(l -1) = 1. (3.4.47)
Since β m-1 ≤ k + q -1 in this case as stated above, r ≥ 1 and l ≥ 1, then (3.4.47) implies that Thus, the proof is complete.
r = 1 (q = 0), β m-1 = k + q -2 = k -2 (3.4.48) or l = 2, q = 1, β m-1 = k + q -1 = k. ( 3
Example 3.4.5. Consider the following numerical semigroups.
1. S =< 9, 10, 13 > . By using GAP [8], we get that S is pseudo-symmetric. Moreover, S verifies the formula in Theorem 3.4.4.
2. S =< 32, 33, 511 >. By using GAP [8], we get that S is pseudo-symmetric. Note that l = 1, k = 15 and r = 16 = k + 1. In addition, m = 32 = 2(15) + 2. Hence, S verifies the formula in Theorem 3.4.4.
3. S =< 15, 16, 17, 18, 19, 44 >. By using GAP [8], we get that S is pseudo-symmetric. Note that l = 4, k = 2 and r = 6 = (k -1)l + 2. In addition, m = 15 = (2(2) -1)4 + 3 = (2k -1)l + 3. Hence, S verifies the formula in Theorem 3.4.4.
4. S =< 17, 18, 19, 20, 25 >. By using GAP [8], we get that S is pseudo-symmetric. Note that l = 3, k = 1, r = 5 = l + 2. Moreover, m = 17 = 2(2(3) + 2) + 1 = 2(2l + 2) + 1. Hence, S verifies the formula in Theorem 3.4.4.
5. S =< 9, 10, 41 >, By using GAP [8], we get that S is pseudo-symmetric. Note that l = 1, r = 1 and k = 4 ≥ 2. Moreover, m = 9 = 2(4) + 1 = 2k + 1. Hence, S verifies the formula in Theorem 3.4.4.
6. S =< 3, 4, 5 >. By using GAP [8], we get that S is pseudo-symmetric. Moreover, S verifies the formula in Theorem 3.4.4. 9. S =< 9, 10, 11, 14 >. By using GAP [8], we get that S is pseudo-symmetric. Note that l = 2, k = 1 and r = 3. Moreover, m = 9 = 4(1) + 5 = 4k + 5. Hence, S verifies the formula in Theorem 3.4.4.
Pseudo-Frobenius Numbers
The aim of this Section, is to determine the set of pseudo-Frobenius Numbers of S. We are going to introduce some Lemmas that will help us in determining the set of pseudo-Frobenius Numbers of S.
Lemma 3.5.1. (see [START_REF][END_REF]) Let S be a numerical semigroup and n ∈ S * . Let Ap(S, n) = {w(i); w(i) ≡ i mod n, 0 ≤ i ≤ n -1} be the Apéry set of S with respect to n and let P F (S) be the set of pseudo-Frobenius numbers of S. Then, w(x) -n ∈ P F (S) if and only if w(x + y) + n ≤ w(x) + w(y) for all 1 ≤ y ≤ n -1 where x + y = x + y mod n.
Proof. Let w(x)-n ∈ P F (S). By definition of P F (S), we have w(x)-n+S * ⊆ S. Then, w(x)+w(y)-n ∈ S (as w(y) ∈ S * ). On the other hand, both w(x) + w(y) -n and w(x + y) are congruent to x + y mod n, then by definition of the elements of the Apéry set of S, we get
w(x + y) ≤ w(x) + w(y) -n, ∀ 1 ≤ y ≤ n -1.
Conversely, suppose that w(x + y) + n ≤ w(x) + w(y), ∀ 1 ≤ y ≤ n -1. By definition of the elements of the Apéry set of S, we have w(x) -n / ∈ S, then it is left to show that w(x) -n + S * ⊆ S to get that w(x)-n ∈ P F (S). Indeed, we have w(x)-n+w(y) ≥ w(x + y) for all 1 ≤ y ≤ n-1 and both w(x)-n+w(y) and w(x + y) are congruent to x + y mod n. Consequently, from the definition of the elements of the Apéry set of S, it follows thatw(
x) -n + w(y) ∈ S, ∀ 1 ≤ y ≤ n -1 which implies that w(x) -n + S * ⊆ S.
The later follows from the fact that for all s ∈ S, there exists (k, w) ∈ N × Ap(S, n) such that s = kn + w and that n ∈ S (Proposition 1.0.13). Hence, our assertion holds. Thus, the proof is complete.
By applying Lemma 3.5.1 on our numerical semigroup, we get Proposition 3.5.2. Proposition 3.5.2, mainly equation (3.5.1), will be used later in determining P F (S). Proposition 3.5.2. Let S be a numerical semigroup minimally generated by m, m + 1, . . . , m + l, k(m + l) + r with r ≤ (k + 1)l + 1. Then, for all 1 ≤ y ≤ m -1. Thus, the proof is complete.
w(x) -m ∈ P F (S) if and only if ∀ 1 ≤ y ≤ m -1, m(kα x+y + β x+y + x+y + 1) + x + y ≤ m(k(α x + α y ) + β x + β y + x + y ) + x + y. ( 3
In Lemma 3.5.3, we give cases where (3.5.1) does not hold. This will allow us to determine some elements that are not in P F (S).
(k + 1)l + 1. Let x = α x (kl + r) + β x l + x t x , y 1 = kl + r, y 2 = 1 and y 3 = l + 1 -x t x .
We have the following :
1. Suppose that x + y 1 ≤ m -1. For all r ∈ N, x does not satisfy (3.5.1) for y 1 .
2. Suppose that x + y 2 ≤ m -1 and x t x = 0. If one of the following conditions holds :
• r -1 = ql + t with t > 0;
• r -1 = ql with q > 0 and β x = k + q;
• r = 1 and β x = k, then x does not satisfy (3.5.1) for y 2 .
3. Suppose x + y 3 ≤ m -1. If one of the following conditions holds :
• r -1 = ql + t with t > 0 and β x = k + q ;
• r -1 = ql with q > 0, β x = k + q -1 and β x = k + q;
• r = 1, β x = k -1 and β x = k, then x does not satisfy (3.5.1) for y 3 .
Proof.
1. We have m(kα
y 1 + β y 1 + y 1 ) = m(k). Thus, m(k(α x + α y 1 ) + β x + β y 1 + x + y 1 ) + x + y 1 = m k(α x + 1) + β x + x + (α x + 1)(kl + r) + β x l + x t x . (3.5.2) Since x + y 1 ≤ m -1, it follows that x + y 1 = x + y 1 . For all r ∈ N, we have x + y 1 = (α x + 1)(kl + r) + β x l + x t x . Hence, m(kα x+y 1 + β x+y 1 + x+y 1 + 1) + x + y 1 = m k(α x + 1) + β x + x + 1 + (α x + 1)(kl + r) + β x l + x t x .
(3.5.3) By using (3.5.2) and (3.5.3), it follows that x does not satisfy (3.5.1) for y 1 .
2. If x = 0, then x = α x (kl + r) + β x l. We have m(kα
y 2 + β y 2 + y 2 ) = m. Therefore, m(k(α x + α y 2 ) + β x + β y 2 + x + y 2 ) + x + y 2 = m kα x + β x + 1 + α x (kl + r) + β x l + 1. (3.5.4) Since x + y 2 ≤ m -1, it follows that x + y 2 = x + y 2 .
If one of the following conditions holds :
In addition, if x = α m-1 (kl +r)+(β m-1 -1)l, then x = 0 and x+y 2 ≤ m-1. Since x does not satisfy (3.5.1) for y 2 if x + y 2 ≤ m -1 and x = 0, it follows that if x = α m-1 (kl + r) + (β m-1 -1)l, then w(x) -m / ∈ P F (S) (Lemma 3.5.3). By the same argument we have if x = (α m-1 -1)(kl + r) + (k + q)l + x t x such that x = 0, then w(x) -m / ∈ P F (S). By using (3.5.12) we deduce that
P F (S) ⊆ {w(x) -m; α m-1 (kl + r) + (β m-1 -1)l + 1 ≤ x ≤ m -1} ∪ {w(x) -m; x = (α m-1 -1)(kl + r) + (k + q)l + x t x
with x = 1 and (k + q)l + x t x > β m-1 l}.
(3.5.13)
Case 1.3. If m-1 = α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}, i.e., ε_{m-1} = 1. We have (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} + (kl+r) ≤ α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} and (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} + 1 + (kl+r) > α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}. Consequently, x + y_1 ≤ m-1 iff x ≤ (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}.
Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1, we deduce that if x ≤ (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}, then w(x) - m ∉ PF(S) (Lemma 3.5.3). In particular,

PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} + 1 ≤ x ≤ m-1}. (3.5.14)

Moreover, α_{m-1}(kl+r) + (β_{m-1}-1)l + l-1 + l+1-(l-1) ≤ α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} and α_{m-1}(kl+r) + β_{m-1}l + l+1 > α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}. Consequently, x + y_3 ≤ m-1 iff x ≤ α_{m-1}(kl+r) + (β_{m-1}-1)l + l-1.
Since x does not satisfy (3.5.1) for y_3 if x + y_3 ≤ m-1 and β_x ≠ k+q (Lemma 3.5.3), by using (3.5.14) we deduce that if x ≤ α_{m-1}(kl+r) + (β_{m-1}-1)l + l-1 and x ≠ (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with (k+q)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}, then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + β_{m-1}l ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with (k+q)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}}. (3.5.15)

In addition, if x = α_{m-1}(kl+r) + β_{m-1}l, then ε_x = 0 and x + y_2 ≤ m-1. Since x does not satisfy (3.5.1) for y_2 if x + y_2 ≤ m-1 and ε_x = 0, we deduce that if x = α_{m-1}(kl+r) + β_{m-1}l, then w(x) - m ∉ PF(S) (Lemma 3.5.3). By the same argument, if x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with ε_x = 0, then w(x) - m ∉ PF(S). By using (3.5.15), we deduce that

PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + β_{m-1}l + 1 ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with ε_x = 1 and (k+q)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}}. (3.5.16)

Case 2. If r-1 = ql for some q ∈ N*.
Case 2.1. If m-1 = α_{m-1}(kl+r), i.e., β_{m-1} = ε_{m-1} = 0. We have (α_{m-1}-1)(kl+r) + (kl+r) ≤ α_{m-1}(kl+r) and (α_{m-1}-1)(kl+r) + 1 + (kl+r) > α_{m-1}(kl+r). Consequently, x + y_1 ≤ m-1 iff x ≤ (α_{m-1}-1)(kl+r).
Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1 (Lemma 3.5.3), we deduce that if x ≤ (α_{m-1}-1)(kl+r), then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+r) + 1 ≤ x ≤ m-1}. (3.5.17)

Moreover, (α_{m-1}-1)(kl+r) + (k+q-1)l + l-1 + l+1-(l-1) ≤ α_{m-1}(kl+r) and (α_{m-1}-1)(kl+r) + (k+q)l + l+1 > α_{m-1}(kl+r). Consequently, x + y_3 ≤ m-1 iff x ≤ (α_{m-1}-1)(kl+r) + (k+q-1)l + l-1.
Since x does not satisfy (3.5.1) for y_3 in the case x + y_3 ≤ m-1, β_x ≠ k+q-1 and β_x ≠ k+q (Lemma 3.5.3), by using (3.5.17) we deduce that if x ≤ (α_{m-1}-1)(kl+r) + (k+q-2)l + l-1, then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+r) + (k+q-1)l ≤ x ≤ m-1}. (3.5.18)

In addition, if x = (α_{m-1}-1)(kl+r) + (k+q-1)l, then ε_x = 0, β_x ≠ k+q and x + y_2 ≤ m-1. Since x does not satisfy (3.5.1) for y_2 in the case x + y_2 ≤ m-1, ε_x = 0 and β_x ≠ k+q (Lemma 3.5.3), we deduce that if x = (α_{m-1}-1)(kl+r) + (k+q-1)l, then w(x) - m ∉ PF(S). By using (3.5.18), we obtain

PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+r) + (k+q-1)l + 1 ≤ x ≤ m-1}. (3.5.19)

Case 2.2. If m-1 = α_{m-1}(kl+r) + β_{m-1}l, i.e., β_{m-1} > 0 and ε_{m-1} = 0. We have (α_{m-1}-1)(kl+r) + β_{m-1}l + (kl+r) ≤ α_{m-1}(kl+r) + β_{m-1}l and (α_{m-1}-1)(kl+r) + β_{m-1}l + 1 + (kl+r) > α_{m-1}(kl+r) + β_{m-1}l. Consequently, x + y_1 ≤ m-1 iff x ≤ (α_{m-1}-1)(kl+r) + β_{m-1}l.
Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1 (Lemma 3.5.3), we deduce that if x ≤ (α_{m-1}-1)(kl+r) + β_{m-1}l, then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+r) + β_{m-1}l + 1 ≤ x ≤ m-1}. (3.5.20)

Moreover, α_{m-1}(kl+r) + (β_{m-1}-2)l + l-1 + l+1-(l-1) ≤ α_{m-1}(kl+r) + β_{m-1}l and α_{m-1}(kl+r) + (β_{m-1}-1)l + l+1 > α_{m-1}(kl+r) + β_{m-1}l. Consequently, x + y_3 ≤ m-1 iff x ≤ α_{m-1}(kl+r) + (β_{m-1}-2)l + l-1.
Since x does not satisfy (3.5.1) for y_3 in the case x + y_3 ≤ m-1, β_x ≠ k+q-1 and β_x ≠ k+q (Lemma 3.5.3), by using (3.5.20) we deduce that if x ≤ α_{m-1}(kl+r) + (β_{m-1}-2)l + l-1 such that x ≠ (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with (k+q-1)l + ε_xt_x > β_{m-1}l and x ≠ (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l, then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + (β_{m-1}-1)l ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with (k+q-1)l + ε_xt_x > β_{m-1}l} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l}. (3.5.21)

If x = α_{m-1}(kl+r) + (β_{m-1}-1)l, then ε_x = 0, β_{m-1}-1 ≠ k+q (as β_{m-1} ≤ k+q) and x + y_2 ≤ m-1.
Since x does not satisfy (3.5.1) for y_2 in the case x + y_2 ≤ m-1, ε_x = 0 and β_x ≠ k+q (Lemma 3.5.3), we deduce that if x = α_{m-1}(kl+r) + (β_{m-1}-1)l, then w(x) - m ∉ PF(S). By the same argument, if x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 0, then w(x) - m ∉ PF(S). By using (3.5.21), we get

PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + (β_{m-1}-1)l + 1 ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l}.

Consequently, x + y_3 ≤ m-1 iff x ≤ α_{m-1}(kl+r) + (β_{m-1}-1)l + l-1.
Since x does not satisfy (3.5.1) for y_3 in the case x + y_3 ≤ m-1, β_x ≠ k+q-1 and β_x ≠ k+q (Lemma 3.5.3), by using (3.5.23) we deduce that if x ≤ α_{m-1}(kl+r) + (β_{m-1}-1)l + l-1 such that x ≠ (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1} and x ≠ (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}, then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + β_{m-1}l ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1}} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}}.

Proof. Suppose, by way of contradiction, that x does not satisfy (3.5.1) for y. Consequently, w(\overline{x+y}) + m > w(x) + w(y).
We have x + y > m-1 and 1 ≤ x, y ≤ m-1; thus x + y = \overline{x+y} + m, \overline{x+y} < x and \overline{x+y} < y. Since w(x) + w(y) and w(\overline{x+y}) are both elements of S that are congruent to x + y mod m, it follows from the definition of the elements of the Apéry set of S that w(x) + w(y) = w(\overline{x+y}) + x_0m for some x_0 ∈ N. On the other hand, w(\overline{x+y}) + m > w(x) + w(y). Thus, x_0 = 0 and w(\overline{x+y}) = w(x) + w(y).
(3.5.39)
As x ≥ 1 and y ≥ 1, it follows that w(y) > 0 and w(x) > 0. Then, (3.5.39) implies that w(\overline{x+y}) > w(x) with \overline{x+y} < x, and w(\overline{x+y}) > w(y) with \overline{x+y} < y. By Proposition 3.2.4, if i < j, then w(i) > w(j) if and only if one of the following holds:
1. α_i = α_j - 2, β_j = ε_j = 0 and β_i + ε_i = 2k+1.
2. α_i = α_j - 1, β_i + ε_i > k + β_j + ε_j and β_j + ε_j ≤ k.
In particular, if i < j and w(i) > w(j), then α_i ≤ α_j - 1.
∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with ε_x = 1 and (k+q)l + ε_xt_x > β_{m-1}l}.
In addition, if x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}, or x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1}, then x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. In fact, write y = α_y(kl+r) + β_yl + ε_yt_y. Since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. Since x + y ≤ m-1, it follows that y = β_yl + ε_yt_y (as (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1} or (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1}). Thus, m(kα_y + β_y + ε_y) = m(β_y + ε_y). Since x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 1, or x = (α_{m-1}-1)(kl+r) + (k+q)l, we get that m(kα_x + β_x + ε_x) = m(kα_{m-1} + q). Consequently, m(k(α_x + α_y) + β_x + β_y + ε_x + ε_y) = m(kα_{m-1} + q + β_y + ε_y).
(3.5.70)
• If x = (α_{m-1}-1)(kl+r) + (k+q)l such that (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1}. We have \overline{x+y} = (α_{m-1}-1)(kl+r) + (k+q)l + 1 + β_yl + ε_yt_y - 1 = α_{m-1}(kl+r) + β_yl + ε_yt_y - 1, with β_yl + ε_yt_y - 1 ≥ 0 as y = β_yl + ε_yt_y ≥ 1. Then,
• If x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x such that ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}.
• If β y ≥ 1. We have
\overline{x+y} = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x + β_yl + ε_yt_y = α_{m-1}(kl+r) + (β_y-1)l + ε_yt_y + ε_xt_x - 1.
Since 1 ≤ ε_xt_x ≤ l-1 and ε_yt_y ≤ l-1, it follows that 0 ≤ ε_yt_y + ε_xt_x - 1 ≤ 2l-2. Hence, m(kα_{\overline{x+y}} + β_{\overline{x+y}} + ε_{\overline{x+y}}) ≤ m(kα_{m-1} + β_y + ε_y).
• If β_y = 0. Then, y = ε_yt_y with ε_y = 1 (as y ≥ 1). Since 1 ≤ ε_xt_x ≤ l-1 and 1 ≤ ε_yt_y ≤ l-1, it follows that 2 ≤ ε_yt_y + ε_xt_x ≤ 2l-2. If 2 ≤ ε_yt_y + ε_xt_x ≤ l, then \overline{x+y} = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x + ε_yt_y with ε_xt_x + ε_yt_y ≤ l, and if ε_yt_y + ε_xt_x ≥ l+1, then \overline{x+y} = α_{m-1}(kl+r) + (ε_xt_x + ε_yt_y - (l+1)) with 0 ≤ ε_xt_x + ε_yt_y - (l+1) ≤ l-3. Since q ∈ N*, then m(kα_{\overline{x+y}} + β_{\overline{x+y}} + ε_{\overline{x+y}}) ≤ m(kα_{m-1} + q - 1 + 1).
Since q ∈ N* and β_y + ε_y ≥ 1 (as y ≥ 1), it follows that m(kα_{\overline{x+y}} + β_{\overline{x+y}} + ε_{\overline{x+y}}) ≤ m(kα_{m-1} + β_y + q - 1 + ε_y). Hence, if x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}, or x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1}, then x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. Therefore,

{w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1}} ⊆ PF(S).

∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1}}.
Remark 1.0.2. All semigroups considered in this thesis are submonoids of (N, +), hence commutative, that is, a + b = b + a for all a, b ∈ S.

Example 1.0.3. Consider the following examples:
η_j = |{k ∈ N; n_k = j}|. Let 0 ≤ k ≤ q-2. If s ∈ S ∩ I_k, then s + m ∈ S ∩ I_{k+1}. This implies that n_k ≤ n_{k+1}. Let for example S = < 4, 6, 13 >. We have c(S) = c = 16 = 4·4, hence I_k = [km, (k+1)m[ for all k ≥ 0. Moreover, n_0 = 1, n_1 = 2, n_2 = 2, n_3 = 3, and n_k = 4 for all k ≥ 4. We also have η_1 = 1, η_2 = 2, η_3 = 1.
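These values are easy to verify computationally. A minimal sketch (ours), assuming I_k = [km, (k+1)m[ with q = c/m, and reading η_j as counting the indices 0 ≤ k ≤ q-1 with n_k = j (the reading that matches the listed values):

```python
gens, m = [4, 6, 13], 4

bound = 100
in_s = [False] * (bound + 1)
in_s[0] = True
for n in range(1, bound + 1):
    in_s[n] = any(n >= g and in_s[n - g] for g in gens)

c = max(n for n in range(bound) if not in_s[n]) + 1        # conductor c(S)
q = c // m                                                  # here c = 16, q = 4
n_k = [sum(in_s[s] for s in range(k * m, (k + 1) * m)) for k in range(q + 2)]
eta = {j: sum(1 for k in range(q) if n_k[k] == j) for j in (1, 2, 3)}
print(c, n_k, eta)   # expect: 16 [1, 2, 2, 3, 4, 4] {1: 1, 2: 2, 3: 1}
```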
By definition, we have w_i < w_j for all 0 ≤ i < j ≤ m-1; thus ⌊(w_i+ρ)/m⌋ ≤ ⌊(w_j+ρ)/m⌋ for i < j. By Lemma 1.0.21, we have f = max(Ap(S, m)) - m = w_{m-1} - m; hence ⌊(w_{m-1}+ρ)/m⌋ = max_{0≤i≤m-1} ⌊(w_i+ρ)/m⌋.
In this section, we will show that if S is a numerical semigroup such that w_{m-1} - m ≥ w_x + w_y and (2 + (⌊w_x/m⌋(y-x-1) + (y-2) + ⌊w_y/m⌋(x-1))/(⌊w_x/m⌋ + ⌊w_y/m⌋ + 2))ν ≥ m, then S satisfies Wilf's conjecture.

Theorem 2.5.1. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. Let w_0 = 0 < w_1 < ... < w_{m-1} be the elements of Ap(S, m). Suppose that w_{m-1} - m ≥ w_x + w_y for some 0 < x < y < m-1. If

(2 + (⌊w_x/m⌋(y-x-1) + (y-2) + ⌊w_y/m⌋(x-1)) / (⌊w_x/m⌋ + ⌊w_y/m⌋ + 2)) ν ≥ m,

then S satisfies Wilf's conjecture.
then (2.5.5) gives […].
Case 4. If ⌊(w_x+ρ)/m⌋ = ⌊w_x/m⌋ and ⌊(w_y+ρ)/m⌋ = ⌊w_y/m⌋, then (2.5.5) gives […].
If (2 + α)ν ≥ m, then S satisfies Wilf's conjecture.
Example 2.5.4. Consider the following numerical semigroup S = < 19, 21, 23, 25, 27, 28 >. Note that 2ν < m. We have w_4 = 27, w_5 = 28 and w_{m-1} - m = 64, i.e., w_{m-1} - m ≥ w_4 + w_5. Moreover,

(2 + (⌊w_4/m⌋(5-4-1) + (5-2) + ⌊w_5/m⌋(4-1)) / (⌊w_4/m⌋ + ⌊w_5/m⌋ + 2)) ν = (2 + 6/4) · 6 ≥ 19 = m.
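The numbers in Example 2.5.4 can be re-derived by brute force. The sketch below (ours) computes the ordered Apéry set w_0 < ... < w_{m-1} and evaluates the hypothesis of Theorem 2.5.1 for x = 4, y = 5, reading the ratios as the integer parts ⌊w_x/m⌋ and ⌊w_y/m⌋ (our reading of the garbled fraction above):

```python
gens, m = [19, 21, 23, 25, 27, 28], 19
nu = len(gens)                               # embedding dimension

bound = min(gens) * max(gens)
in_s = [False] * (bound + 1)
in_s[0] = True
for n in range(1, bound + 1):
    in_s[n] = any(n >= g and in_s[n - g] for g in gens)

ap = {}
for n in range(bound + 1):
    if in_s[n] and n % m not in ap:
        ap[n % m] = n
w = sorted(ap.values())                      # w[0] = 0 < w[1] < ... < w[m-1]

x, y = 4, 5
print(w[x], w[y], w[m - 1] - m)              # expect 27, 28, 64; 64 >= 27 + 28
lhs = (2 + (w[x] // m * (y - x - 1) + (y - 2) + w[y] // m * (x - 1))
           / (w[x] // m + w[y] // m + 2)) * nu
print(lhs, ">=", m)                          # expect 21.0 >= 19
```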
Theorem 3.1.1. Let S be a numerical semigroup minimally generated by m, m+1, ..., m+l, k(m+l)+r with r ≤ (k+1)l+1. Then, for all 1 ≤ x ≤ m-1, writing x = α_x(kl+r) + β_xl + ε_xt_x as in Definition 3.0.1, we have w(x) = m(kα_x + β_x + ε_x) + x.
(3.1.4) From (3.1.2) and (3.1.4), it follows that […]
(3.1.6) Using (3.1.6) and (3.1.5), it follows that […]
Example 3.1.2. Consider the following numerical semigroup S = < 19, 20, 21, 22, 52 >. Note that m = 19, l = 3, k = 2 and r = 8. Let Ap(S, m) = {0, w(1), ..., w(m-1)} be the Apéry basis of S. By using GAP [8], we obtain Ap(S, m) = {0, 20, 21, 22, 42, 43, 44, 64, 65, 66, 86, 87, 88, 108, 52, 72, 73, 74, 94}, and these values verify the formula given in Theorem 3.1.1.
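This example can also be checked mechanically. In the sketch below (ours), the Apéry set is computed by brute force, and the closed formula of Theorem 3.1.1 is evaluated using a greedy decomposition x = α_x(kl+r) + β_xl + ε_xt_x (our reading of Definition 3.0.1, which is not reproduced in this excerpt); the two computations agree on this example:

```python
gens = [19, 20, 21, 22, 52]
m, l, k, r = 19, 3, 2, 8

bound = min(gens) * max(gens)
in_s = [False] * (bound + 1)
in_s[0] = True
for n in range(1, bound + 1):
    in_s[n] = any(n >= g and in_s[n - g] for g in gens)
ap = {}
for n in range(bound + 1):
    if in_s[n] and n % m not in ap:
        ap[n % m] = n                    # ap[x] = w(x), by minimality

def w_formula(x):
    # greedy decomposition: x = a*(kl+r) + b*l + e*t with 0 <= t <= l-1
    a, rest = divmod(x, k * l + r)
    b, t = divmod(rest, l)
    e = 1 if t > 0 else 0
    return m * (k * a + b + e) + x       # Theorem 3.1.1

print(all(ap[x] == w_formula(x) for x in range(1, m)))   # expect True
print(sorted(ap[x] for x in range(m)))                    # the set quoted above
```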
Proposition 3.2.4. Let S be a numerical semigroup minimally generated by m, m+1, ..., m+l, k(m+l)+r with r ≤ (k+1)l+1. Then, for i < j, w(i) > w(j) if and only if one of the following holds:
1. α_i = α_j - 2, β_j = ε_j = 0 and β_i + ε_i = 2k+1;
2. α_i = α_j - 1, β_i + ε_i > k + β_j + ε_j and β_j + ε_j ≤ k.

Theorem 3.2.5. Let S be a numerical semigroup minimally generated by m, m+1, ..., m+l, k(m+l)+r with r ≤ (k+1)l+1. […]
Example 3.3.2. Consider the examples in Example 3.2.6.
Theorem 3.4.2. Let S be a numerical semigroup minimally generated by m, m+1, ..., m+l, k(m+l)+r with r ≤ (k+1)l+1. […]
(3.4.28) We have m-1 = α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}. By substituting (3.4.28) in S, we get S = < 9, 10, 13 >.
(3.4.33) We have m-1 = α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}. By substituting (3.4.33) in S, we get S = < (2k-1)l+3, ..., 2kl+3, k(2kl+3)+(k-1)l+2 > with l ≥ 2.
(3.4.35) By using Lemma 3.4.1, S is pseudo-symmetric if and only if […]
(3.4.43) We have m-1 = α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}. By substituting (3.4.43) in S, we obtain S = < 2kl+5, ..., (2k+1)l+5, k((2k+1)l+5)+3 > with l ≥ 3.
(3.4.46) From (3.4.45), as kα_{m-1} + β_{m-1} + 1 ≥ 1, it follows that α_{m-1} = β_{m-1} = 0 and t_{m-1} = 2. In this case, we have m = 3. As ν ≤ m = 3, it follows that l = 1, which contradicts t_{m-1} = 2 ≤ l-1. Thus, case (3.4.45) cannot occur. Now, from (3.4.46), as kα_{m-1} […]
(3.4.49) Since r = ql+1 in this case, from α_{m-1} = 1, t_{m-1} = 1, l ≥ 2, (3.4.48) and (3.4.49), it follows that

α_{m-1} = 1, β_{m-1} = k-2, t_{m-1} = 1, r = 1, l ≥ 2 (3.4.50)

or

α_{m-1} = 1, β_{m-1} = k, t_{m-1} = 1, l = 2, r = 3. (3.4.51)

We have m-1 = α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}. By (3.4.50) and (3.4.51), we get S = < (2k-2)l+3, ..., (2k-1)l+3, k((2k-1)l+3)+1 > with l ≥ 2, or S = < 4k+5, 4k+6, 4k+7, k(4k+7)+3 >.
7. S = < 45, 46, 47, 48, 49, 50, 203 >. By using GAP [8], we get that S is pseudo-symmetric. Note that l = 5 ≥ 2, r = 3 and k = 4. Moreover, m = 45 = 2(4·5) + 5 = 2kl + 5. Hence, S verifies the formula in Theorem 3.4.4.
8. S = < 19, 20, 21, 22, 23, 70 >. By using GAP [8], we get that S is pseudo-symmetric. Note that l = 4 ≥ 2, r = 1 and k = 3. Moreover, m = 19 = (2·3-2)·4 + 3 = (2k-2)l + 3. Hence, S verifies the formula in Theorem 3.4.4.
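Both GAP computations are easy to reproduce using the standard characterization: S is pseudo-symmetric if and only if its Frobenius number F is even and every gap x ≠ F/2 satisfies F - x ∈ S. A sketch (ours):

```python
def is_pseudo_symmetric(gens):
    """F(S) even, and F - x in S for every gap x with x != F/2."""
    bound = min(gens) * max(gens)
    in_s = [False] * (bound + 1)
    in_s[0] = True
    for n in range(1, bound + 1):
        in_s[n] = any(n >= g and in_s[n - g] for g in gens)
    F = max(n for n in range(bound) if not in_s[n])   # Frobenius number
    return F % 2 == 0 and all(in_s[F - x]
                              for x in range(F + 1)
                              if not in_s[x] and 2 * x != F)

print(is_pseudo_symmetric([45, 46, 47, 48, 49, 50, 203]))   # expect True
print(is_pseudo_symmetric([19, 20, 21, 22, 23, 70]))        # expect True
```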
(3.5.22)
Case 2.3. If m-1 = α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}, i.e., ε_{m-1} = 1. We have (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} + (kl+r) ≤ α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} and (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} + 1 + (kl+r) > α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}. Consequently, x + y_1 ≤ m-1 iff x ≤ (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}. Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1 (Lemma 3.5.3), we deduce that if x ≤ (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}, then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} + 1 ≤ x ≤ m-1}. (3.5.23)

Moreover, α_{m-1}(kl+r) + (β_{m-1}-1)l + l-1 + l+1-(l-1) ≤ α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} and α_{m-1}(kl+r) + β_{m-1}l + l+1 > α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1}.
(3.5.24) Hence, x does not satisfy (3.5.1) for y′ if l ≥ 2. Consequently, if x = (α_{m-1}-1)(kl+1) + kl and l ≥ 2, then w(x) - m ∉ PF(S). By using (3.5.32), we get

PF(S) ⊆ {w(x) - m; α_{m-1}(kl+1) + (β_{m-1}-1)l + 1 ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+1) + (k-1)l + 1 with (k-1)l + 1 > β_{m-1}l} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+1) + kl with l = 1 and kl > β_{m-1}l}.

Case 3.3. If m-1 = α_{m-1}(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1}, i.e., ε_{m-1} = 1. We have (α_{m-1}-1)(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1} + (kl+1) ≤ α_{m-1}(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1} and (α_{m-1}-1)(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1} + 1 + (kl+1) > α_{m-1}(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1}. Consequently, x + y_1 ≤ m-1 iff x ≤ (α_{m-1}-1)(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1}. Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1 (Lemma 3.5.3), we deduce that if x ≤ (α_{m-1}-1)(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1}, then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1} + 1 ≤ x ≤ m-1}. (3.5.34)

Moreover, α_{m-1}(kl+1) + (β_{m-1}-1)l + l-1 + l+1-(l-1) ≤ α_{m-1}(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1} and α_{m-1}(kl+1) + β_{m-1}l + l+1 > α_{m-1}(kl+1) + β_{m-1}l + ε_{m-1}t_{m-1}. Consequently, x + y_3 ≤ m-1 iff x ≤ α_{m-1}(kl+1) + (β_{m-1}-1)l + l-1. Since x does not satisfy (3.5.1) for y_3 in the case x + y_3 ≤ m-1, β_x ≠ k-1 and β_x ≠ k (Lemma 3.5.3), by using (3.5.34) we deduce that if x ≤ α_{m-1}(kl+1) + (β_{m-1}-1)l + l-1 such that x ≠ (α_{m-1}-1)(kl+1) + (k-1)l + ε_xt_x with (k-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1} and x ≠ (α_{m-1}-1)(kl+1) + kl with kl > β_{m-1}l + ε_{m-1}t_{m-1}, then w(x) - m ∉ PF(S). In particular,

PF(S) ⊆ {w(x) - m; α_{m-1}(kl+1) + β_{m-1}l ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+1) + (k-1)l + ε_xt_x with (k-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+1) + kl with kl > β_{m-1}l + ε_{m-1}t_{m-1}}. (3.5.35)

If x = α_{m-1}(kl+1) + β_{m-1}l, then ε_x = 0, x + y_2 ≤ m-1 and β_x = β_{m-1} ≠ k (if β_{m-1} = k, then m-1 = α_{m-1}(kl+1) + kl + ε_{m-1}t_{m-1}; as ε_{m-1} = 1 and r = 1, we get a contradiction). Since x does not satisfy (3.5.1) for y_2 in the case x + y_2 ≤ m-1, ε_x = 0 and β_x ≠ k (Lemma 3.5.3), we deduce that if x = α_{m-1}(kl+1) + β_{m-1}l, then w(x) - m ∉ PF(S). By the same argument, if x = (α_{m-1}-1)(kl+1) + (k-1)l + ε_xt_x with ε_x = 0, then w(x) - m ∉ PF(S). By using (3.5.35) we deduce that

PF(S) ⊆ {w(x) - m; α_{m-1}(kl+1) + β_{m-1}l + 1 ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+1) + (k-1)l + ε_xt_x with ε_x = 1 and (k-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+1) + kl with kl > β_{m-1}l + ε_{m-1}t_{m-1}}. (3.5.36)

Lemma 3.5.6. Let S be a numerical semigroup minimally generated by m, m+1, ..., m+l, k(m+l)+r with r ≤ (k+1)l+1. Let 1 ≤ x, y ≤ m-1 such that x + y > m-1 and x = α_x(kl+r) + β_xl + ε_xt_x with β_x + ε_x > 0. Then, x satisfies (3.5.1) for y.
Write \overline{x+y} = α_{\overline{x+y}}(kl+r) + β_{\overline{x+y}}l + ε_{\overline{x+y}}t_{\overline{x+y}} and y = α_y(kl+r) + β_yl + ε_yt_y. Since w(\overline{x+y}) > w(x) with \overline{x+y} < x and w(\overline{x+y}) > w(y) with \overline{x+y} < y, it follows that α_{\overline{x+y}} ≤ α_x - 1 and α_{\overline{x+y}} ≤ α_y - 1. By Proposition 3.0.2, we have β_{\overline{x+y}} + ε_{\overline{x+y}} ≤ 2k+1. Hence, since \overline{x+y} = x + y - m,

w(\overline{x+y}) = m(kα_{\overline{x+y}} + β_{\overline{x+y}} + ε_{\overline{x+y}}) + \overline{x+y} ≤ m(kα_{\overline{x+y}} + 2k+1) + \overline{x+y} = m(kα_{\overline{x+y}} + 2k) + x + y. (3.5.40)

On the other hand, w(y) = m(kα_y + β_y + ε_y) + y ≥ m(k(α_{\overline{x+y}} + 1)) + y. By using β_x + ε_x > 0 (hypothesis) and α_x ≥ α_{\overline{x+y}} + 1, we get w(x) = m(kα_x + β_x + ε_x) + x ≥ m(k(α_{\overline{x+y}} + 1) + 1) + x. Consequently,

w(x) + w(y) ≥ m(kα_{\overline{x+y}} + 2k+1) + x + y. (3.5.41)

But (3.5.41) and (3.5.40) contradict (3.5.39). Therefore, x satisfies (3.5.1) for y.

Now, we are ready to determine the set of pseudo-Frobenius numbers of S.

Theorem 3.5.7. Let S be a numerical semigroup minimally generated by m, m+1, ..., m+l, k(m+l)+r with r ≤ (k+1)l+1. For all 1 ≤ x ≤ m-1, write x = α_x(kl+r) + β_xl + ε_xt_x as in Definition 3.0.1 and w(x) = m(kα_x + β_x + ε_x) + x as in Theorem 3.1.1. We have the following:
Case 1. If r-1 = ql + t for some q, t ∈ N with t < l and t ≠ 0.
Case 1.1. If m-1 = α_{m-1}(kl+r) (i.e., β_{m-1} = ε_{m-1} = 0), then PF(S) = {w(x) - m; (α_{m-1}-1)(kl+r) + (k+q)l + 1 ≤ x ≤ m-1}.
Case 1.2. If m-1 = α_{m-1}(kl+r) + β_{m-1}l (i.e., β_{m-1} > 0, ε_{m-1} = 0), then PF(S) = {w(x) - m; α_{m-1}(kl+r) + (β_{m-1}-1)l + 1 ≤ x ≤ m-1}
m(kα_{\overline{x+y}} + β_{\overline{x+y}} + ε_{\overline{x+y}}) ≤ m(kα_{m-1} + β_y + ε_y). (3.5.71)
By using (3.5.70), (3.5.71) and q ∈ N*, we get that x satisfies (3.5.1) for all 1 ≤ y ≤ m-1.
[…] (3.5.72)
By using (3.5.70) and (3.5.72), we get that x satisfies (3.5.1) for 1 ≤ y ≤ m-1.
[…] (3.5.73)
By using (3.5.25), (3.5.73) and (3.5.69), we get PF(S) = {w(x) - m; α_{m-1}(kl+r) + β_{m-1}l + 1 ≤ x ≤ m-1}
Table of contents

Acknowledgements
Introduction
1 Basics and notations
2 Wilf's conjecture
2.1 Equivalent form of Wilf's conjecture
2.2 Technical results
2.3 Numerical semigroups with w_{m-1} ≥ w_1 + w_α and (2 + (α-3)/q)ν ≥ m
2.4 Numerical semigroups with w_{m-1} ≥ w_{α-1} + w_α and ((α+3)/3)ν ≥ m
2.5 Numerical semigroups with (2 + (⌊w_x/m⌋(y-x-1) + (y-2) + ⌊w_y/m⌋(x-1))/(⌊w_x/m⌋ + ⌊w_y/m⌋ + 2))ν ≥ m
[…] (by hypothesis).
• If x ≥ 0. By Remark 2.2.1 (ii), we have ⌊(w_1+ρ)/m⌋ ≥ 1. From (2.3.1), it follows that
∑_{j=1}^{m-1} […] (by hypothesis).
Using Proposition 2.1.3, we get that S satisfies Wilf's conjecture. Thus, the proof is complete.
Example 2.3.2. Consider the following numerical semigroup
S = < 19, 21, 23, 25, 27, 28 > .
thus the conditions of Theorem 2.4.1 are valid. The following Corollary 2.4.3 is an extension of Corollary 2.3.4, using Theorems 2.3.1 and 2.4.1.

Corollary 2.4.3. Let S be a numerical semigroup with multiplicity m and embedding dimension ν. Suppose that […]
Using Proposition 2.1.3, we get that S satisfies Wilf's conjecture. Thus, the proof is complete.

Example 2.4.2. Consider the following numerical semigroup S = < 22, 23, 25, 27, 29, 31, 33 >. Note that 3ν < m. We have w_6 = 33, w_7 = 46 and w_{m-1} = 87, i.e., w_{m-1} ≥ w_6 + w_7. Moreover, ((α+3)/3)ν = ((7+3)/3)·7 = 70/3 ≥ 22 = m,
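The numbers quoted in Example 2.4.2 — and Wilf's inequality itself, in the form c(S) ≤ ν·|S ∩ [0, c(S))| — can be verified by brute force; a sketch (ours):

```python
gens, m = [22, 23, 25, 27, 29, 31, 33], 22
nu = len(gens)

bound = min(gens) * max(gens)
in_s = [False] * (bound + 1)
in_s[0] = True
for n in range(1, bound + 1):
    in_s[n] = any(n >= g and in_s[n - g] for g in gens)

ap = {}
for n in range(bound + 1):
    if in_s[n] and n % m not in ap:
        ap[n % m] = n
w = sorted(ap.values())
print(w[6], w[7], w[m - 1])                  # expect 33, 46, 87

F = w[m - 1] - m                             # Frobenius number
n_left = sum(in_s[s] for s in range(F + 1))  # |S intersect [0, c)| with c = F + 1
print(F + 1 <= nu * n_left)                  # Wilf's inequality; expect True
```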
By substituting (3.4.14) and (3.4.15) in S, we get that S is symmetric if and only if […]
(3.4.31) Since t ≥ 1, it follows that l ≥ 2. As α_{m-1} ≥ 1, t ≥ 1, l ≥ 1 and q ≤ k (by Proposition 3.2.3, as t ≥ 1), then (3.4.31) implies that
Proof. By Lemma 3.5.1, we have w(x) - m ∈ PF(S) if and only if w(\overline{x+y}) + m ≤ w(x) + w(y) for all 1 ≤ y ≤ m-1, where \overline{x+y} = x + y mod m. By applying Theorem 3.1.1, we get that w(x) - m ∈ PF(S) if and only if

m(kα_{\overline{x+y}} + β_{\overline{x+y}} + ε_{\overline{x+y}} + 1) + \overline{x+y} ≤ m(k(α_x + α_y) + β_x + β_y + ε_x + ε_y) + x + y (3.5.1)

for all 1 ≤ y ≤ m-1.
for some 1 < α < m-1. If (2 + (α-3)/q)ν ≥ m, then S satisfies Wilf's conjecture.
• r-1 = ql + t with t > 0;
• r-1 = ql with q > 0 and β_x ≠ k+q;
• r = 1 and β_x ≠ k,
then \overline{x+y_2} = α_x(kl+r) + β_xl + 1. We have

m(kα_{\overline{x+y_2}} + β_{\overline{x+y_2}} + ε_{\overline{x+y_2}} + 1) + \overline{x+y_2} = m(kα_x + β_x + 2) + α_x(kl+r) + β_xl + 1. (3.5.5)

By using (3.5.4) and (3.5.5), it follows that x does not satisfy (3.5.1) for y_2.
3. We have m(kα_{y_3} + β_{y_3} + ε_{y_3}) = 2m if ε_x = 0, and m(kα_{y_3} + β_{y_3} + ε_{y_3}) = m if ε_x = 1. Therefore,

m(k(α_x + α_{y_3}) + β_x + β_{y_3} + ε_x + ε_{y_3}) + x + y_3 = m(kα_x + β_x + 2) + α_x(kl+r) + (β_x+1)l + 1. (3.5.6)

Since x + y_3 ≤ m-1, it follows that \overline{x+y_3} = x + y_3. If one of the following conditions holds:
• r-1 = ql + t with t > 0 and β_x ≠ k+q;
• r-1 = ql with q > 0, β_x ≠ k+q-1 and β_x ≠ k+q;
• r = 1, β_x ≠ k-1 and β_x ≠ k,
then \overline{x+y_3} = α_x(kl+r) + (β_x+1)l + 1. We have

m(kα_{\overline{x+y_3}} + β_{\overline{x+y_3}} + ε_{\overline{x+y_3}} + 1) + \overline{x+y_3} = m(kα_x + β_x + 3) + α_x(kl+r) + (β_x+1)l + 1. (3.5.7)

By using (3.5.6) and (3.5.7), it follows that x does not satisfy (3.5.1) for y_3.
Thus, the proof is complete.

Theorem 3.5.4 will determine elements that do not belong to PF(S).

Theorem 3.5.4. Let S be a numerical semigroup minimally generated by m, m+1, ..., m+l, k(m+l)+r with r ≤ (k+1)l+1. For all 1 ≤ x ≤ m-1, write x = α_x(kl+r) + β_xl + ε_xt_x as in Definition 3.0.1 and w(x) = m(kα_x + β_x + ε_x) + x as in Theorem 3.1.1. We have the following:
Case 1. If r-1 = ql + t for some q, t ∈ N with t < l and t ≠ 0.
Case 1.1. If m-1 = α_{m-1}(kl+r) (i.e., β_{m-1} = ε_{m-1} = 0), then
PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+r) + (k+q)l + 1 ≤ x ≤ m-1}.
Case 1.2. If m-1 = α_{m-1}(kl+r) + β_{m-1}l (i.e., β_{m-1} > 0, ε_{m-1} = 0), then
PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + (β_{m-1}-1)l + 1 ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with ε_x = 1 and (k+q)l + ε_xt_x > β_{m-1}l}.
Case 1.3. If m-1 = α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} (i.e., ε_{m-1} = 1), then
PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + β_{m-1}l + 1 ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with ε_x = 1 and (k+q)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}}.
Case 2. If r-1 = ql for some q ∈ N*.
Case 2.1. If m-1 = α_{m-1}(kl+r) (i.e., β_{m-1} = ε_{m-1} = 0), then
PF(S) ⊆ {w(x) - m; (α_{m-1}-1)(kl+r) + (k+q-1)l + 1 ≤ x ≤ m-1}.
Case 2.2. If m-1 = α_{m-1}(kl+r) + β_{m-1}l (i.e., β_{m-1} > 0, ε_{m-1} = 0), then
PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + (β_{m-1}-1)l + 1 ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l}.
Case 2.3. If m-1 = α_{m-1}(kl+r) + β_{m-1}l + ε_{m-1}t_{m-1} (i.e., ε_{m-1} = 1), then
PF(S) ⊆ {w(x) - m; α_{m-1}(kl+r) + β_{m-1}l + 1 ≤ x ≤ m-1} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}} ∪ {w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l + ε_{m-1}t_{m-1}}.
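To see Case 1.1 in action, take m = 12, l = 3, k = 2, r = 5, i.e., S = < 12, 13, 14, 15, 35 > (the semigroup that reappears in Example 1 at the end of this section): then kl+r = 11, m-1 = 1·(kl+r), q = 1 and t = 1, so the theorem predicts PF(S) ⊆ {w(x) - m; 10 ≤ x ≤ 11}. A brute-force check (ours; here the containment turns out to be an equality, as Theorem 3.5.7 below asserts):

```python
gens, m = [12, 13, 14, 15, 35], 12

bound = min(gens) * max(gens)
in_s = [False] * (bound + 1)
in_s[0] = True
for n in range(1, bound + 1):
    in_s[n] = any(n >= g and in_s[n - g] for g in gens)

ap = {}
for n in range(bound + 1):
    if in_s[n] and n % m not in ap:
        ap[n % m] = n                        # ap[x] = w(x)

pf = [f for f in range(bound - max(gens))
      if not in_s[f] and all(in_s[f + g] for g in gens)]
predicted = sorted(ap[x] - m for x in (10, 11))
print(pf, predicted)   # the two lists should coincide
```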
Proof. Case 1. If r-1 = ql + t for some q, t ∈ N with t < l and t ≠ 0.
Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1 (Lemma 3.5.3), we deduce that if x ≤ (α_{m-1}-1)(kl+r), then w(x) - m ∉ PF(S). In particular,
Consequently,
Since x does not satisfy (3.5.1) for y_3 if x + y_3 ≤ m-1 and β_x ≠ k+q (Lemma 3.5.3), by using (3.5.8), we get that if x ≤ (α_{m-1}-1)(kl+r) + (k+q-1)l + l-1, then w(x) - m ∉ PF(S). In particular,
In addition, if x = (α_{m-1}-1)(kl+r) + (k+q)l, then ε_x = 0 and x + y_2 ≤ m-1. Since x does not satisfy (3.5.1) for y_2 if x + y_2 ≤ m-1 and ε_x = 0, we deduce that if x = (α_{m-1}-1)(kl+r) + (k+q)l, then w(x) - m ∉ PF(S) (Lemma 3.5.3). By using (3.5.9), we obtain
Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1 (Lemma 3.5.3), then w(x) - m ∉ PF(S). In particular,
Since x does not satisfy (3.5.1) for y_3 if x + y_3 ≤ m-1 and β_x ≠ k+q (Lemma 3.5.3), by using (3.5.11) we
(3.5.12)
as ε_{m-1} = 1 and r = ql+1, we get a contradiction) and x + y_2 ≤ m-1. Since x does not satisfy (3.5.1) for y_2 in the case x + y_2 ≤ m-1, ε_x = 0 and β_x ≠ k+q (Lemma 3.5.3), we deduce that if x = α_{m-1}(kl+r) + β_{m-1}l, then w(x) - m ∉ PF(S). By the same argument, if x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x with ε_x = 0, then w(x) - m ∉ PF(S). By using (3.5.24), we obtain
(3.5.25)
and (α m-1 -1)(kl + 1) + 1 + (kl + 1) > α m-1 (kl + 1).
Consequently,
Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1 (Lemma 3.5.3), we deduce that if x ≤ (α_{m-1}-1)(kl+1), then w(x) - m ∉ PF(S). In particular,
and (α m-1 -1)(kl + 1) + kl + l + 1 > α m-1 (kl + 1).
Consequently,
Since x does not satisfy (3.5.1) for y_3 in the case x + y_3 ≤ m-1, β_x ≠ k-1 and β_x ≠ k (Lemma 3.5.3), by using (3.5.26) we deduce that if x ≤ (α_{m-1}-1)(kl+1) + (k-2)l + l-1, then w(x) - m ∉ PF(S). In particular,
Since x does not satisfy (3.5.1) for y_2 in the case x + y_2 ≤ m-1, ε_x = 0 and β_x ≠ k (Lemma 3.5.3), we deduce that if x = (α_{m-1}-1)(kl+1) + (k-1)l, then w(x) - m ∉ PF(S). Hence,
Since x does not satisfy (3.5.1) for y_1 if x + y_1 ≤ m-1 (Lemma 3.5.3), we deduce that if x ≤ (α_{m-1}-1)(kl+1) + β_{m-1}l, then w(x) - m ∉ PF(S). In particular,
Since x does not satisfy (3.5.1) for y_3 in the case x + y_3 ≤ m-1, β_x ≠ k-1 and β_x ≠ k (Lemma 3.5.3), by using (3.5.29) we deduce that if
. By using (3.5.30), we get
If we take y′ = l, then x does not satisfy (3.5.1) for y′. Indeed, we have
. By using (3.5.31), we get
with kl > β m-1 l}.
(3.5.32)
Next, let x = (α_{m-1}-1)(kl+1) + kl. Suppose that l ≥ 2. If we take y′ = 2, then x does not satisfy (3.5.1) for y′. Indeed, \overline{x+y′} = α_{m-1}(kl+1) + 1. We have x + y′ ≤ m-1, which gives \overline{x+y′} = x + y′. In addition, m(kα
If we take y′ = l, then x does not satisfy (3.5.1) for y′. Indeed, we have \overline{x+y′} = α_{m-1}(kl+1) + (ε_xt_x - 1) with ε_xt_x - 1 ≥ 1. We have x + y′ ≤ m-1, which gives \overline{x+y′} = x + y′. In addition,
On the other hand, m(k(α_x + α_{y′}) +
. By using (3.5.36), we get
(3.5.37)
Next, let x = (α_{m-1}-1)(kl+1) + kl. Suppose that l ≥ 2. If we take y′ = 2, then x does not satisfy (3.5.1) for y′. Indeed, \overline{x+y′} = α_{m-1}(kl+1) + 1. We have x + y′ ≤ m-1, which gives \overline{x+y′} = x + y′. In addition,
On the other hand,
Hence, x does not satisfy (3.5.1) for y′. Consequently, if x = (α_{m-1}-1)(kl+1) + kl and l ≥ 2, then w(x) - m ∉ PF(S). By using (3.5.37), we get
Thus, the proof is complete.
Lemmas 3.5.5 and 3.5.6 give cases where (3.5.1) holds. This will allow us to determine later some numbers that belong to PF(S).

Lemma 3.5.5. Let S be a numerical semigroup minimally generated by m, m+1, ..., m+l, k(m+l)+r with r ≤ (k+1)l+1. Then x = m-1 satisfies (3.5.1) for all 1 ≤ y ≤ m-1.
On the other hand, m(kα […]. Therefore,
Hence, x = m -1 satisfies (3.5.1) for all 1 ≤ y ≤ m -1. Thus, the proof is complete.
Proof. Case 1. If r-1 = ql + t for some q, t ∈ N with t < l and t ≠ 0.
Case 1.1. If m-1 = α_{m-1}(kl+r) (i.e., β_{m-1} = ε_{m-1} = 0). We claim that if (α_{m-1}-1)(kl+r) + (k+q)l + 1 ≤ x ≤ m-1, then x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. In fact,
• If x = m -1, then by using Lemma 3.5.5 x satisfies (3.5.1) for all 1 ≤ y ≤ m -1.
then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. In particular, (α_{m-1}-1)(kl+r)
and m(kα
In addition, we have m(kα Therefore,
By using (3.5.10) and (3.5.44), we get
In fact, write y = α_y(kl+r) + β_yl + ε_yt_y. Since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. Since x + y ≤ m-1 and (k+q)l + ε_xt_x > β_{m-1}l, it follows that y = β_yl + ε_yt_y. Thus, m(kα_y + β_y + ε_y) = m(β_y + ε_y). […] (3.5.48)
We have 1 ≤ ε_xt_x ≤ t (as […]). By using (3.5.48) and (3.5.49), we get that x satisfies (3.5.1) for 1 ≤ y ≤ m-1. Therefore,

{w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with ε_x = 1 and (k+q)l + ε_xt_x > β_{m-1}l} ⊆ PF(S).
(3.5.50) By using (3.5.13), (3.5.50) and (3.5.47), we obtain
x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. Indeed, since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. In particular,
(3.5.52) By using (3.5.52) and (3.5.51), we get that x satisfies (3.5.1) for all y. Consequently,
Furthermore, if x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with ε_x = 1 and (k+q)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}, then x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. In fact, write y = α_y(kl+r) + β_yl + ε_yt_y. Since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. Since x + y ≤ m-1 and (k+q)l + ε_xt_x > β_{m-1}l + ε_{m-1}t_{m-1}, it follows that y = β_yl + ε_yt_y. Thus, m(kα_y + β_y + ε_y) = m(β_y + ε_y). Since x = (α_{m-1}-1)(kl+r) + (k+q)l + ε_xt_x with ε_x = 1, we get that m(kα_x + β_x + ε_x) = m(kα_{m-1} + q + 1). […]
(3.5.56) By using (3.5.16), (3.5.56) and (3.5.53), we get
In fact,
• If x = m -1, then by using Lemma 3.5.5, it follows that x satisfies (3.5.1) for all 1 ≤ y ≤ m -1.
• If (α_{m-1}-1)(kl+r) + (k+q-1)l + 1 ≤ x ≤ m-2 = (α_{m-1}-1)(kl+r) + (k+q)l. Indeed, since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. In particular, […] Therefore,
x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. Indeed, since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. In particular,
In fact, write y = α_y(kl+r) + β_yl + ε_yt_y. Since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that
By using (3.5.63), (3.5.64) and q ∈ N * , we get that x satisfies (3.5.1) for all 1 ≤ y ≤ m -1.
• If x = (α_{m-1}-1)(kl+r) + (k+q-1)l + ε_xt_x such that ε_x = 1 and (k+q-1)l + ε_xt_x > β_{m-1}l.
• If β y ≥ 1. We have
• If β_y = 0. Then, y = ε_yt_y with ε_y = 1 (as y ≥ 1). Since 1 […]
Since q ∈ N* and β_y + ε_y ≥ 1 (as y ≥ 1), it follows that […]. Hence,
{w(x) - m; x = (α_{m-1}-1)(kl+r) + (k+q)l with (k+q)l > β_{m-1}l} ⊆ PF(S).
(3.5.66) By using (3.5.22), (3.5.66) and (3.5.62), we get
x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. Indeed, since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. In particular,
By using (3.5.68) and (3.5.67), we get that x satisfies (3.5.1) for all y. Consequently,
x ≤ m -1, then x satisfies (3.5.1) for all 1 ≤ y ≤ m -1. In fact,
• If x = m -1, then by using Lemma 3.5.5 x satisfies (3.5.1) for 1 ≤ y ≤ m -1.
• If (α_{m-1}-1)(kl+1) + (k-1)l + 1 ≤ x ≤ m-2 = (α_{m-1}-1)(kl+1) + kl. Indeed, since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. In particular, (α_{m-1}-1)(kl […]
• If x = (α_{m-1}-1)(kl+1) + (k-1)l + 1 such that (k-1)l + 1 > β_{m-1}l.
• If β_y ≥ 1. We have […]
• If β_y = 0. Then, y = ε_yt_y with ε_y = 1 (as y ≥ 1). We have x + y = (α_{m-1}-1)(kl+1) + (k-1)l + 1 + ε_yt_y with 1 + ε_yt_y ≤ l (as ε_yt_y < l). Hence,
x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. In fact, since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. In particular,
[…], then x satisfies (3.5.1) for all 1 ≤ y ≤ m-1. In fact, write y = α_y(kl+1) + β_yl + ε_yt_y. Since β_x + ε_x > 0, then from Lemma 3.5.6 we may assume that x + y ≤ m-1, and this implies that \overline{x+y} = x + y. Since x + y ≤ m-1, it follows that y = β_yl + ε_yt_y (as (k- […]).
Thus, the proof is complete.

1. S = < 12, 13, 14, 15, 35 >. Note that m = 12, k = 2, l = 3 and r = 5 (α_{m-1} = 1, β_{m-1} = 0, ε_{m-1}t_{m-1} = 0, q = 1, t = 1). S verifies the formula in Theorem 3.5.7.
2. S = < 18, 19, 20, 21, 47 >. By using GAP [8], we get that PF(S) = {64, 69, 70, 71} = {w(10)-18, w(15)-18, w(16)-18, w(17)-18}. Note that m = 18, k = 2, l = 3 and r = 5 (α_{m-1} = 1, β_{m-1} = 2, ε_{m-1}t_{m-1} = 0, q = 1, t = 1). S verifies the formula in Theorem 3.5.7.
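The GAP computation in item 2 is easy to reproduce; the sketch below (ours) recomputes PF(S) by brute force and compares it with {w(10), w(15), w(16), w(17)} - 18 read off the Apéry set:

```python
gens, m = [18, 19, 20, 21, 47], 18

bound = min(gens) * max(gens)
in_s = [False] * (bound + 1)
in_s[0] = True
for n in range(1, bound + 1):
    in_s[n] = any(n >= g and in_s[n - g] for g in gens)

ap = {}
for n in range(bound + 1):
    if in_s[n] and n % m not in ap:
        ap[n % m] = n

pf = [f for f in range(bound - max(gens))
      if not in_s[f] and all(in_s[f + g] for g in gens)]
print(pf)                                     # expect [64, 69, 70, 71]
print([ap[x] - m for x in (10, 15, 16, 17)])  # expect the same values
```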
Doctoral Thesis
Mariam DHAYNI
Abstract
The thesis is made up of two parts. In the first part, we study Wilf's conjecture for numerical semigroups. We give an equivalent form of Wilf's conjecture in terms of the Apéry set, the embedding dimension and the multiplicity of a numerical semigroup. We also give an affirmative answer to the conjecture in certain cases.
In the second part, we consider a class of almost arithmetic numerical semigroups and give, for this class of semigroups, explicit formulas for the Apéry set, the Frobenius number, the genus and the pseudo-Frobenius numbers. We also characterize the symmetric (resp. pseudo-symmetric) numerical semigroups within this class.
Key Words
Numerical semigroups, Wilf's conjecture, almost arithmetic numerical semigroups, Frobenius number, money-changing problem.